writable.write(chunk[, encoding][, callback])


  • chunk <string> | <Buffer> | <Uint8Array> | <any> Optional data to write. For streams not operating in object mode, chunk must be a string, Buffer or Uint8Array. For object mode streams, chunk may be any JavaScript value other than null.
  • encoding <string> The encoding, if chunk is a string.
  • callback <Function> Callback for when this chunk of data is flushed.
  • Returns: <boolean> false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback may or may not be called with the error as its first argument. To reliably detect write errors, add a listener for the 'error' event.
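
As a minimal sketch of that pattern (the fs.createWriteStream destination and the './example.txt' path are only illustrative assumptions), the 'error' listener is the reliable place to detect failures, while the per-chunk callback is best treated as advisory:

const fs = require('fs');

// Illustrative destination; any Writable behaves the same way.
const file = fs.createWriteStream('./example.txt');

file.on('error', (err) => {
  // The 'error' event is the reliable place to detect write failures.
  console.error('write failed:', err);
});

file.write('some data', 'utf8', (err) => {
  // The callback may or may not receive the error; do not rely on it alone.
  if (err) console.error('callback saw an error:', err);
  else console.log('chunk flushed');
});

file.end();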

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. It is recommended that once write() returns false, no more chunks be written until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.
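
One common way to respect this, sketched below with illustrative fs streams (the './input.txt' and './output.txt' paths are assumptions), is to pause the source when write() returns false and resume it on 'drain', which is roughly what pipe() does internally:

const fs = require('fs');

// Illustrative source and destination; any Readable/Writable pair behaves the same.
const source = fs.createReadStream('./input.txt');
const destination = fs.createWriteStream('./output.txt');

source.on('data', (chunk) => {
  if (!destination.write(chunk)) {
    // Backpressure: stop reading until the destination's buffer drains.
    source.pause();
    destination.once('drain', () => source.resume());
  }
});
source.on('end', () => destination.end());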

Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.
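
For example, in this small sketch (the uppercasing Transform is an illustrative assumption), attaching a 'data' handler is what lets the written chunks drain; without it, or without piping the stream somewhere, they would simply accumulate:

const { Transform } = require('stream');

const upper = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  }
});

// Without a consumer, the transformed chunks accumulate on the readable side
// and the writable side eventually stops draining.
upper.on('data', (chunk) => process.stdout.write(chunk));

upper.write('hello\n');
upper.end('world\n');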

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use stream.pipe(). However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

// `stream` is assumed to be a Writable already in scope
// (for example a net.Socket or an fs.WriteStream).
function write(data, cb) {
  if (!stream.write(data)) {
    // The internal buffer is full: wait for 'drain' before continuing.
    stream.once('drain', cb);
  } else {
    // Buffer is still below highWaterMark: continue on the next tick.
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
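
For comparison, here is a sketch of the Readable-plus-pipe() approach recommended above, in which backpressure is handled automatically (the generator used as the data source and the './output.txt' path are illustrative assumptions):

const { Readable } = require('stream');
const fs = require('fs');

// Illustrative on-demand data source.
function* generate() {
  for (let i = 0; i < 1000; i++) {
    yield `line ${i}\n`;
  }
}

// Readable.from() wraps the generator; pipe() applies backpressure automatically.
Readable.from(generate()).pipe(fs.createWriteStream('./output.txt'));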

A Writable stream in object mode will always ignore the encoding argument.
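
A brief sketch of that behavior, using an illustrative object-mode Writable that only logs what it receives:

const { Writable } = require('stream');

const logger = new Writable({
  objectMode: true,
  write(chunk, encoding, callback) {
    // In object mode the chunk is passed through untouched; the encoding
    // argument has no effect on it.
    console.log(chunk);
    callback();
  }
});

logger.write({ id: 1 }, 'utf8');
logger.write('plain string', 'hex'); // the string is not decoded
logger.end();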