stream.pipeline(streams, callback)

- streams <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]> | <ReadableStream[]> | <WritableStream[]> | <TransformStream[]>
- source <Stream> | <Iterable> | <AsyncIterable> | <Function> | <ReadableStream>
  - Returns: <Iterable> | <AsyncIterable>
- ...transforms <Stream> | <Function> | <TransformStream>
  - source <AsyncIterable>
  - Returns: <AsyncIterable>
- destination <Stream> | <Function> | <WritableStream>
  - source <AsyncIterable>
  - Returns: <AsyncIterable> | <Promise>
- callback <Function> Called when the pipeline is fully done.
  - err <Error>
  - val Resolved value of Promise returned by destination.
- Returns: <Stream>
A module method to pipe between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline is complete.
```js
const { pipeline } = require('node:stream');
const fs = require('node:fs');
const zlib = require('node:zlib');

// Use the pipeline API to easily pipe a series of streams
// together and get notified when the pipeline is fully done.

// A pipeline to gzip a potentially huge tar file efficiently:

pipeline(
  fs.createReadStream('archive.tar'),
  zlib.createGzip(),
  fs.createWriteStream('archive.tar.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed.', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  },
);
```
The pipeline API provides a promise version.
stream.pipeline() will call stream.destroy(err) on all streams except:

- Readable streams which have emitted 'end' or 'close'.
- Writable streams which have emitted 'finish' or 'close'.
stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.
stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline can lead to unexpected behavior, because pipeline destroys the socket without sending the expected response. See the example below:
```js
const fs = require('node:fs');
const http = require('node:http');
const { pipeline } = require('node:stream');

const server = http.createServer((req, res) => {
  const fileStream = fs.createReadStream('./fileNotExist.txt');
  pipeline(fileStream, res, (err) => {
    if (err) {
      console.log(err); // No such file
      // this message can't be sent once `pipeline` already destroyed the socket
      return res.end('error!!!');
    }
  });
});
```