pipeTo(source[, ...transforms], writer[, options])
- `source` {AsyncIterable} | {Iterable} The data source.
- `...transforms` {Function} | {Object} Zero or more transforms to apply.
- `writer` {Object} A destination with a `write(chunk)` method.
- `options` {Object}
  - `signal` {AbortSignal} Aborts the pipeline.
  - `preventClose` {boolean} If `true`, do not call `writer.end()` when the source ends. **Default:** `false`.
  - `preventFail` {boolean} If `true`, do not call `writer.fail()` on error. **Default:** `false`.
- Returns: {Promise} Fulfills with the total number of bytes written.
Pipe a source through zero or more transforms into a writer. If the writer has a `writev(chunks)` method, entire batches are passed in a single call (enabling scatter/gather I/O).
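To illustrate the batching contract, here is a minimal self-contained sketch of a `writev`-capable writer. The writer object and the batch shown are hypothetical stand-ins; the point is only that a pipeline can hand an entire array of pending chunks to `writev(chunks)` at once instead of calling `write(chunk)` per chunk:

```javascript
// Hypothetical writer used for illustration. When writev() is present,
// a pipeline can flush a whole batch of pending chunks in one call
// (e.g. backed by a single fs.writev() syscall) rather than looping
// over write(chunk).
const chunks = [];
const batchingWriter = {
  // Per-chunk path, used when no writev() is available.
  write(chunk) { chunks.push(chunk); },
  // Batch path: receives an array of chunks in one call.
  writev(batch) { chunks.push(...batch); },
  end() {},
};

// Simulate a pipeline handing over a batch of pending chunks.
batchingWriter.writev([Buffer.from('Hello, '), Buffer.from('world!')]);
batchingWriter.end();
console.log(Buffer.concat(chunks).toString()); // 'Hello, world!'
```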
If the writer implements the optional `*Sync` methods (`writeSync`, `writevSync`, `endSync`), `pipeTo()` will attempt the synchronous methods first as a fast path, and fall back to the async versions only when the sync methods indicate they cannot complete (e.g., backpressure or waiting for the next tick). `fail()` is always called synchronously.
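The fast path can be sketched as a drain loop that tries the sync method first. Note this is an illustrative assumption, not the documented protocol: here `writeSync()` returns `true` when the chunk was handled synchronously and `false` when the caller must fall back to the async `write()` (the real signaling mechanism may differ):

```javascript
// Sketch only: assumes writeSync() returns false to signal "cannot
// complete synchronously" (e.g. backpressure), forcing the async path.
const written = [];
let budget = 2; // pretend the destination absorbs 2 chunks per tick

const writer = {
  writeSync(chunk) {
    if (budget === 0) return false; // backpressure: use async path
    budget--;
    written.push(chunk);
    return true;
  },
  async write(chunk) {
    // Slow path: wait for the next tick, then the destination drains.
    await new Promise((resolve) => setImmediate(resolve));
    budget = 2;
    budget--;
    written.push(chunk);
  },
};

// A minimal drain loop mimicking what a sync-first pipeline might do.
async function drain(input) {
  for (const chunk of input) {
    if (!writer.writeSync(chunk)) {
      await writer.write(chunk);
    }
  }
  return written.join('');
}

const result = drain(['a', 'b', 'c', 'd']);
result.then((s) => console.log(s)); // logs 'abcd'
```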
```mjs
import { from, pipeTo } from 'node:stream/iter';
import { compressGzip } from 'node:zlib/iter';
import { open } from 'node:fs/promises';

const fh = await open('output.gz', 'w');
const totalBytes = await pipeTo(
  from('Hello, world!'),
  compressGzip(),
  fh.writer({ autoClose: true }),
);
```

```cjs
const { from, pipeTo } = require('node:stream/iter');
const { compressGzip } = require('node:zlib/iter');
const { open } = require('node:fs/promises');

async function run() {
  const fh = await open('output.gz', 'w');
  const totalBytes = await pipeTo(
    from('Hello, world!'),
    compressGzip(),
    fh.writer({ autoClose: true }),
  );
}
run().catch(console.error);
```