readable.read([size])
- size <number> Optional argument to specify how much data to read.
- Returns: <string> | <Buffer> | <null> | <any>
The readable.read()
method pulls some data out of the internal buffer and
returns it. If no data is available to be read, null
is returned. By default,
the data will be returned as a Buffer
object unless an encoding has been
specified using the readable.setEncoding()
method or the stream is operating
in object mode.
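As an illustration of how the encoding affects the return type, the following sketch builds a small byte stream with stream.Readable.from() (the source here is purely illustrative) and reads it back as strings after calling readable.setEncoding():

const { Readable } = require('node:stream');

// Purely illustrative source: a single Buffer wrapped in a byte-mode stream.
const readable = Readable.from([Buffer.from('hello')], { objectMode: false });
readable.setEncoding('utf8');

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    // With an encoding set, each chunk is a string instead of a Buffer.
    console.log(typeof chunk, chunk);
  }
});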
The optional size
argument specifies a specific number of bytes to read. If
size
bytes are not available to be read, null
will be returned unless
the stream has ended, in which case all of the data remaining in the internal
buffer will be returned.
If the size
argument is not specified, all of the data contained in the
internal buffer will be returned.
The size
argument must be less than or equal to 1 GB.
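A small sketch of this behavior, using an illustrative 10-byte source: fixed-size reads succeed while at least that many bytes are buffered, return null when they are not, and return the remainder once the stream has ended.

const { Readable } = require('node:stream');

// Illustrative 10-byte source.
const readable = Readable.from([Buffer.from('0123456789')], { objectMode: false });

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read(4))) {
    // Typically logs 4, 4 and finally 2 once the end of the stream is known.
    console.log(`Read ${chunk.length} bytes`);
  }
});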
The readable.read()
method should only be called on Readable
streams
operating in paused mode. In flowing mode, readable.read()
is called
automatically until the internal buffer is fully drained.
const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
Each call to readable.read()
returns a chunk of data, or null
. The chunks
are not concatenated. A while
loop is necessary to consume all data
currently in the buffer. When reading a large file, .read()
may return null
,
having consumed all buffered content so far, but there is still more data to
come not yet buffered. In this case a new 'readable'
event will be emitted
when there is more data in the buffer. Finally the 'end'
event will be
emitted when there is no more data to come.
Therefore, to read a file's whole contents from a readable
, it is necessary
to collect chunks across multiple 'readable'
events:
const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});
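When no encoding has been set, the collected chunks are Buffer objects, and joining them as strings may not be what is wanted. A variant of the 'end' handler in the example above (a sketch, not part of the original example) can keep the data binary with Buffer.concat():

readable.on('end', () => {
  // Buffer.concat keeps the result as a single Buffer rather than coercing
  // each chunk to a string as join('') would.
  const content = Buffer.concat(chunks);
});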
A Readable
stream in object mode will always return a single item from
a call to readable.read(size)
, regardless of the value of the
size
argument.
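For example (a minimal object-mode sketch; the two queued objects are illustrative), a size of 100 still yields one item per call:

const { Readable } = require('node:stream');

// Readable.from() with non-Buffer items produces an object-mode stream.
const objectStream = Readable.from([{ id: 1 }, { id: 2 }]);

objectStream.on('readable', () => {
  let item;
  // The size argument (100) is ignored in object mode; each call returns
  // exactly one queued object.
  while (null !== (item = objectStream.read(100))) {
    console.log(item);
  }
});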
If the readable.read()
method returns a chunk of data, a 'data'
event will
also be emitted.
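A sketch of this interaction, assuming current Node.js behavior in which an attached 'readable' handler keeps the stream paused: each chunk returned by read() is also handed to the 'data' listener.

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    console.log(`read() returned ${chunk.length} bytes`);
  }
});

// Emitted as a side effect of each successful read() call above.
readable.on('data', (chunk) => {
  console.log(`'data' emitted with ${chunk.length} bytes`);
});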
Calling stream.read([size])
after the 'end'
event has
been emitted will return null
. No runtime error will be raised.
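A short sketch of this (the stream source is illustrative): once 'end' has fired, further calls return null and nothing throws.

const { Readable } = require('node:stream');

const readable = Readable.from([Buffer.from('done')], { objectMode: false });

readable.on('readable', () => {
  // Drain whatever is currently buffered.
  while (readable.read() !== null);
});

readable.on('end', () => {
  // After 'end', read() simply returns null; no runtime error is raised.
  console.log(readable.read());
});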