Node.js v16.20.0 Documentation
Cluster#
Stability: 2 - Stable
Source Code: lib/cluster.js
Clusters of Node.js processes can be used to run multiple instances of Node.js that can distribute workloads among their application threads. When process isolation is not needed, use the worker_threads module instead, which allows running multiple application threads within a single Node.js instance.
The cluster module allows easy creation of child processes that all share server ports.
import cluster from 'node:cluster';
import http from 'node:http';
import { cpus } from 'node:os';
import process from 'node:process';

const numCPUs = cpus().length;

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  // In this case it is an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}
const cluster = require('node:cluster');
const http = require('node:http');
const numCPUs = require('node:os').cpus().length;
const process = require('node:process');

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  // In this case it is an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}
Running Node.js will now share port 8000 between the workers:
$ node server.js
Primary 3596 is running
Worker 4324 started
Worker 4520 started
Worker 6056 started
Worker 5644 started
On Windows, it is not yet possible to set up a named pipe server in a worker.
How it works#
The worker processes are spawned using the child_process.fork() method, so that they can communicate with the parent via IPC and pass server handles back and forth.
The cluster module supports two methods of distributing incoming connections.
The first one (and the default one on all platforms except Windows) is the round-robin approach, where the primary process listens on a port, accepts new connections and distributes them across the workers in a round-robin fashion, with some built-in smarts to avoid overloading a worker process.
The second approach is where the primary process creates the listen socket and sends it to interested workers. The workers then accept incoming connections directly.
The second approach should, in theory, give the best performance. In practice however, distribution tends to be very unbalanced due to operating system scheduler vagaries. Loads have been observed where over 70% of all connections ended up in just two processes, out of a total of eight.
Because server.listen() hands off most of the work to the primary process, there are three cases where the behavior between a normal Node.js process and a cluster worker differs:
- server.listen({fd: 7}) Because the message is passed to the primary, file descriptor 7 in the parent will be listened on, and the handle passed to the worker, rather than listening to the worker's idea of what the number 7 file descriptor references.
- server.listen(handle) Listening on handles explicitly will cause the worker to use the supplied handle, rather than talk to the primary process.
- server.listen(0) Normally, this will cause servers to listen on a random port. However, in a cluster, each worker will receive the same "random" port each time they do listen(0). In essence, the port is random the first time, but predictable thereafter. To listen on a unique port, generate a port number based on the cluster worker ID (a minimal sketch follows this list).
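Below is a minimal sketch of that approach; the base port 8000 and the two forks are arbitrary choices used only for illustration:

import cluster from 'node:cluster';
import http from 'node:http';

const BASE_PORT = 8000; // arbitrary base port for this sketch

if (cluster.isPrimary) {
  cluster.fork();
  cluster.fork();
} else {
  // Worker IDs start at 1, so each worker listens on a distinct port.
  const port = BASE_PORT + cluster.worker.id;
  http.createServer((req, res) => {
    res.end(`worker ${cluster.worker.id} on port ${port}\n`);
  }).listen(port);
}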
Node.js does not provide routing logic. It is therefore important to design an application such that it does not rely too heavily on in-memory data objects for things like sessions and login.
Because workers are all separate processes, they can be killed or re-spawned depending on a program's needs, without affecting other workers. As long as there are some workers still alive, the server will continue to accept connections. If no workers are alive, existing connections will be dropped and new connections will be refused. Node.js does not automatically manage the number of workers, however. It is the application's responsibility to manage the worker pool based on its own needs.
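As a minimal sketch of such pool management (not part of the original examples), a primary could respawn a replacement whenever a worker dies unexpectedly:

import cluster from 'node:cluster';
import { cpus } from 'node:os';

if (cluster.isPrimary) {
  for (let i = 0; i < cpus().length; i++) {
    cluster.fork();
  }

  // Respawn only when the exit was not caused by .kill() or .disconnect().
  cluster.on('exit', (worker, code, signal) => {
    if (!worker.exitedAfterDisconnect) {
      cluster.fork();
    }
  });
}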
Although a primary use case for the node:cluster module is networking, it can also be used for other use cases requiring worker processes.
Class: Worker#
- Extends: <EventEmitter>
A Worker object contains all public information and methods about a worker. In the primary it can be obtained using cluster.workers. In a worker it can be obtained using cluster.worker.
Event: 'disconnect'#
Similar to the cluster.on('disconnect') event, but specific to this worker.
cluster.fork().on('disconnect', () => {
  // Worker has disconnected
});
Event: 'error'#
This event is the same as the one provided by child_process.fork().
Within a worker, process.on('error') may also be used.
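A minimal sketch of attaching a handler in the primary (the logging is illustrative only):

import cluster from 'node:cluster';

if (cluster.isPrimary) {
  const worker = cluster.fork();

  // Forwarded from the underlying child process, e.g. if it could not be spawned.
  worker.on('error', (err) => {
    console.error(`worker ${worker.id} failed:`, err);
  });
}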
Event: 'exit'#
- code <number> The exit code, if it exited normally.
- signal <string> The name of the signal (e.g. 'SIGHUP') that caused the process to be killed.
Similar to the cluster.on('exit') event, but specific to this worker.
import cluster from 'node:cluster';

if (cluster.isPrimary) {
  const worker = cluster.fork();
  worker.on('exit', (code, signal) => {
    if (signal) {
      console.log(`worker was killed by signal: ${signal}`);
    } else if (code !== 0) {
      console.log(`worker exited with error code: ${code}`);
    } else {
      console.log('worker success!');
    }
  });
}
const cluster = require('node:cluster');

if (cluster.isPrimary) {
  const worker = cluster.fork();
  worker.on('exit', (code, signal) => {
    if (signal) {
      console.log(`worker was killed by signal: ${signal}`);
    } else if (code !== 0) {
      console.log(`worker exited with error code: ${code}`);
    } else {
      console.log('worker success!');
    }
  });
}
Event: 'listening'#
- address <Object>
Similar to the cluster.on('listening') event, but specific to this worker.
cluster.fork().on('listening', (address) => {
  // Worker is listening
});
It is not emitted in the worker.
Event: 'message'#
- message <Object>
- handle <undefined> | <Object>
Similar to the 'message' event of cluster, but specific to this worker.
Within a worker, process.on('message') may also be used.
See process event: 'message'.
Here is an example using the message system. It keeps a count in the primary process of the number of HTTP requests received by the workers:
import cluster from 'node:cluster';
import http from 'node:http';
import { cpus } from 'node:os';
import process from 'node:process';

if (cluster.isPrimary) {

  // Keep track of http requests
  let numReqs = 0;
  setInterval(() => {
    console.log(`numReqs = ${numReqs}`);
  }, 1000);

  // Count requests
  function messageHandler(msg) {
    if (msg.cmd && msg.cmd === 'notifyRequest') {
      numReqs += 1;
    }
  }

  // Start workers and listen for messages containing notifyRequest
  const numCPUs = cpus().length;
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  for (const id in cluster.workers) {
    cluster.workers[id].on('message', messageHandler);
  }

} else {

  // Worker processes have a http server.
  http.Server((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');

    // Notify primary about the request
    process.send({ cmd: 'notifyRequest' });
  }).listen(8000);
}
const cluster = require('node:cluster');
const http = require('node:http');
const process = require('node:process');

if (cluster.isPrimary) {

  // Keep track of http requests
  let numReqs = 0;
  setInterval(() => {
    console.log(`numReqs = ${numReqs}`);
  }, 1000);

  // Count requests
  function messageHandler(msg) {
    if (msg.cmd && msg.cmd === 'notifyRequest') {
      numReqs += 1;
    }
  }

  // Start workers and listen for messages containing notifyRequest
  const numCPUs = require('node:os').cpus().length;
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  for (const id in cluster.workers) {
    cluster.workers[id].on('message', messageHandler);
  }

} else {

  // Worker processes have a http server.
  http.Server((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');

    // Notify primary about the request
    process.send({ cmd: 'notifyRequest' });
  }).listen(8000);
}
Event: 'online'#
Similar to the cluster.on('online') event, but specific to this worker.
cluster.fork().on('online', () => {
  // Worker is online
});
It is not emitted in the worker.
worker.disconnect()#
- Returns: <cluster.Worker> A reference to worker.
In a worker, this function will close all servers, wait for the 'close' event on those servers, and then disconnect the IPC channel.
In the primary, an internal message is sent to the worker causing it to call .disconnect() on itself.
Causes .exitedAfterDisconnect to be set.
After a server is closed, it will no longer accept new connections, but connections may be accepted by any other listening worker. Existing connections will be allowed to close as usual. When no more connections exist (see server.close()), the IPC channel to the worker will close, allowing it to die gracefully.
The above applies only to server connections; client connections are not automatically closed by workers, and disconnect does not wait for them to close before exiting.
In a worker, process.disconnect exists, but it is not this function; it is disconnect().
Because long living server connections may block workers from disconnecting, it may be useful to send a message, so application specific actions may be taken to close them. It also may be useful to implement a timeout, killing a worker if the 'disconnect' event has not been emitted after some time.
if (cluster.isPrimary) {
  const worker = cluster.fork();
  let timeout;

  worker.on('listening', (address) => {
    worker.send('shutdown');
    worker.disconnect();
    timeout = setTimeout(() => {
      worker.kill();
    }, 2000);
  });

  worker.on('disconnect', () => {
    clearTimeout(timeout);
  });

} else if (cluster.isWorker) {
  const net = require('node:net');
  const server = net.createServer((socket) => {
    // Connections never end
  });

  server.listen(8000);

  process.on('message', (msg) => {
    if (msg === 'shutdown') {
      // Initiate graceful close of any connections to server
    }
  });
}
worker.exitedAfterDisconnect#
This property is true if the worker exited due to .kill() or .disconnect(). If the worker exited any other way, it is false. If the worker has not exited, it is undefined.
The boolean worker.exitedAfterDisconnect allows distinguishing between voluntary and accidental exit; the primary may choose not to respawn a worker based on this value.
cluster.on('exit', (worker, code, signal) => {
  if (worker.exitedAfterDisconnect === true) {
    console.log('Oh, it was just voluntary – no need to worry');
  }
});

// kill worker
worker.kill();
worker.id#
Each new worker is given its own unique id; this id is stored in the id.
While a worker is alive, this is the key that indexes it in cluster.workers.
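A short sketch showing that relationship:

import cluster from 'node:cluster';

if (cluster.isPrimary) {
  const worker = cluster.fork();

  // The id doubles as the key of this worker in cluster.workers.
  console.log(worker.id, cluster.workers[worker.id] === worker); // e.g. 1 true
}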
worker.isConnected()#
This function returns true if the worker is connected to its primary via its IPC channel, false otherwise. A worker is connected to its primary after it has been created. It is disconnected after the 'disconnect' event is emitted.
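A minimal sketch of observing the connection state around a manual disconnect:

import cluster from 'node:cluster';

if (cluster.isPrimary) {
  const worker = cluster.fork();
  console.log(worker.isConnected()); // true, the IPC channel is open

  worker.on('disconnect', () => {
    console.log(worker.isConnected()); // false once 'disconnect' has been emitted
  });

  worker.disconnect();
}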
worker.isDead()#
This function returns true if the worker's process has terminated (either because of exiting or being signaled). Otherwise, it returns false.
import cluster from 'node:cluster';
import http from 'node:http';
import { cpus } from 'node:os';
import process from 'node:process';

const numCPUs = cpus().length;

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('fork', (worker) => {
    console.log('worker is dead:', worker.isDead());
  });

  cluster.on('exit', (worker, code, signal) => {
    console.log('worker is dead:', worker.isDead());
  });
} else {
  // Workers can share any TCP connection. In this case, it is an HTTP server.
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end(`Current process\n ${process.pid}`);
    process.kill(process.pid);
  }).listen(8000);
}
const cluster = require('node:cluster');
const http = require('node:http');
const numCPUs = require('node:os').cpus().length;
const process = require('node:process');

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('fork', (worker) => {
    console.log('worker is dead:', worker.isDead());
  });

  cluster.on('exit', (worker, code, signal) => {
    console.log('worker is dead:', worker.isDead());
  });
} else {
  // Workers can share any TCP connection. In this case, it is an HTTP server.
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end(`Current process\n ${process.pid}`);
    process.kill(process.pid);
  }).listen(8000);
}
worker.kill([signal])#
- signal <string> Name of the kill signal to send to the worker process. Default: 'SIGTERM'.
This function will kill the worker. In the primary, it does this by disconnecting the worker.process, and once disconnected, killing with signal. In the worker, it does it by disconnecting the channel, and then exiting with code 0.
Because kill() attempts to gracefully disconnect the worker process, it is susceptible to waiting indefinitely for the disconnect to complete. For example, if the worker enters an infinite loop, a graceful disconnect will never occur. If the graceful disconnect behavior is not needed, use worker.process.kill().
Causes .exitedAfterDisconnect to be set.
This method is aliased as worker.destroy() for backward compatibility.
In a worker, process.kill() exists, but it is not this function; it is kill().
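A minimal sketch of the call; 'SIGTERM' is the default and is passed explicitly here only for clarity:

import cluster from 'node:cluster';

if (cluster.isPrimary) {
  const worker = cluster.fork();

  worker.on('exit', (code, signal) => {
    console.log(`worker ended with code ${code}, signal ${signal}`);
  });

  // Disconnect gracefully, then terminate with SIGTERM.
  worker.kill('SIGTERM');
}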
worker.process#
All workers are created using child_process.fork(); the returned object from this function is stored as .process. In a worker, the global process is stored.
See: Child Process module.
Workers will call process.exit(0) if the 'disconnect' event occurs on process and .exitedAfterDisconnect is not true. This protects against accidental disconnection.
worker.send(message[, sendHandle[, options]][, callback])#
- message <Object>
- sendHandle <Handle>
- options <Object> The options argument, if present, is an object used to parameterize the sending of certain types of handles. options supports the following properties:
  - keepOpen <boolean> A value that can be used when passing instances of net.Socket. When true, the socket is kept open in the sending process. Default: false.
- callback <Function>
- Returns: <boolean>
Send a message to a worker or primary, optionally with a handle.
In the primary, this sends a message to a specific worker. It is identical to ChildProcess.send().
In a worker, this sends a message to the primary. It is identical to process.send().
This example will echo back all messages from the primary:
if (cluster.isPrimary) {
  const worker = cluster.fork();
  worker.send('hi there');

} else if (cluster.isWorker) {
  process.on('message', (msg) => {
    process.send(msg);
  });
}
Event: 'disconnect'#
- worker <cluster.Worker>
Emitted after the worker IPC channel has disconnected. This can occur when a worker exits gracefully, is killed, or is disconnected manually (such as with worker.disconnect()).
There may be a delay between the 'disconnect' and 'exit' events. These events can be used to detect if the process is stuck in a cleanup or if there are long-living connections.
cluster.on('disconnect', (worker) => {
  console.log(`The worker #${worker.id} has disconnected`);
});
Event: 'exit'#
- worker <cluster.Worker>
- code <number> The exit code, if it exited normally.
- signal <string> The name of the signal (e.g. 'SIGHUP') that caused the process to be killed.
When any of the workers die, the cluster module will emit the 'exit' event.
This can be used to restart the worker by calling .fork() again.
cluster.on('exit', (worker, code, signal) => {
  console.log('worker %d died (%s). restarting...',
              worker.process.pid, signal || code);
  cluster.fork();
});
Event: 'fork'#
- worker <cluster.Worker>
When a new worker is forked, the cluster module will emit a 'fork' event. This can be used to log worker activity, and create a custom timeout.
const timeouts = [];
function errorMsg() {
  console.error('Something must be wrong with the connection ...');
}

cluster.on('fork', (worker) => {
  timeouts[worker.id] = setTimeout(errorMsg, 2000);
});
cluster.on('listening', (worker, address) => {
  clearTimeout(timeouts[worker.id]);
});
cluster.on('exit', (worker, code, signal) => {
  clearTimeout(timeouts[worker.id]);
  errorMsg();
});
Event: 'listening'#
- worker <cluster.Worker>
- address <Object>
After calling listen() from a worker, when the 'listening' event is emitted on the server, a 'listening' event will also be emitted on cluster in the primary.
The event handler is executed with two arguments: the worker contains the worker object and the address object contains the following connection properties: address, port, and addressType. This is very useful if the worker is listening on more than one address.
cluster.on('listening', (worker, address) => {
  console.log(
    `A worker is now connected to ${address.address}:${address.port}`);
});
The addressType is one of:
- 4 (TCPv4)
- 6 (TCPv6)
- -1 (Unix domain socket)
- 'udp4' or 'udp6' (UDPv4 or UDPv6)
Event: 'message'#
- worker <cluster.Worker>
- message <Object>
- handle <undefined> | <Object>
Emitted when the cluster primary receives a message from any worker.
See child_process event: 'message'.
Event: 'online'#
- worker <cluster.Worker>
After forking a new worker, the worker should respond with an online message. When the primary receives an online message it will emit this event. The difference between 'fork' and 'online' is that 'fork' is emitted when the primary forks a worker, and 'online' is emitted when the worker is running.
cluster.on('online', (worker) => {
  console.log('Yay, the worker responded after it was forked');
});
Event: 'setup'#
- settings <Object>
Emitted every time .setupPrimary() is called.
The settings object is the cluster.settings object at the time .setupPrimary() was called and is advisory only, since multiple calls to .setupPrimary() can be made in a single tick.
If accuracy is important, use cluster.settings.
cluster.disconnect([callback])#
- callback <Function> Called when all workers are disconnected and handles are closed.
Calls .disconnect() on each worker in cluster.workers.
When they are disconnected, all internal handles will be closed, allowing the primary process to die gracefully if no other event is waiting.
The method takes an optional callback argument which will be called when finished.
This can only be called from the primary process.
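A minimal sketch of a primary shutting its workers down and waiting for the callback:

import cluster from 'node:cluster';

if (cluster.isPrimary) {
  cluster.fork();
  cluster.fork();

  // Runs once every worker has disconnected and all handles are closed.
  cluster.disconnect(() => {
    console.log('all workers disconnected');
  });
}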
cluster.fork([env])#
- env <Object> Key/value pairs to add to worker process environment.
- Returns: <cluster.Worker>
Spawn a new worker process.
This can only be called from the primary process.
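A minimal sketch of passing extra environment variables to a worker; WORKER_ROLE is a hypothetical variable name used only for illustration:

import cluster from 'node:cluster';
import process from 'node:process';

if (cluster.isPrimary) {
  // The key/value pairs are added to the worker's process.env.
  cluster.fork({ WORKER_ROLE: 'http' }); // WORKER_ROLE is hypothetical
} else {
  console.log(`worker role: ${process.env.WORKER_ROLE}`);
}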
cluster.isMaster#
Deprecated alias for cluster.isPrimary.
cluster.isPrimary#
True if the process is a primary. This is determined by process.env.NODE_UNIQUE_ID. If process.env.NODE_UNIQUE_ID is undefined, then isPrimary is true.
cluster.isWorker#
True if the process is not a primary (it is the negation of cluster.isPrimary).
cluster.schedulingPolicy#
The scheduling policy, either cluster.SCHED_RR for round-robin or cluster.SCHED_NONE to leave it to the operating system. This is a global setting and effectively frozen once either the first worker is spawned, or .setupPrimary() is called, whichever comes first.
SCHED_RR is the default on all operating systems except Windows. Windows will change to SCHED_RR once libuv is able to effectively distribute IOCP handles without incurring a large performance hit.
cluster.schedulingPolicy can also be set through the NODE_CLUSTER_SCHED_POLICY environment variable. Valid values are 'rr' and 'none'.
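A minimal sketch; the assignment has to happen before the first worker is forked (or .setupPrimary() is called):

import cluster from 'node:cluster';

// Leave connection distribution to the operating system instead of round-robin.
cluster.schedulingPolicy = cluster.SCHED_NONE;

if (cluster.isPrimary) {
  cluster.fork();
}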
cluster.settings#
- execArgv <string[]> List of string arguments passed to the Node.js executable. Default: process.execArgv.
- exec <string> File path to worker file. Default: process.argv[1].
- args <string[]> String arguments passed to worker. Default: process.argv.slice(2).
- cwd <string> Current working directory of the worker process. Default: undefined (inherits from parent process).
- serialization <string> Specify the kind of serialization used for sending messages between processes. Possible values are 'json' and 'advanced'. See Advanced serialization for child_process for more details. Default: false.
- silent <boolean> Whether or not to send output to parent's stdio. Default: false.
- stdio <Array> Configures the stdio of forked processes. Because the cluster module relies on IPC to function, this configuration must contain an 'ipc' entry. When this option is provided, it overrides silent.
- uid <number> Sets the user identity of the process. (See setuid(2).)
- gid <number> Sets the group identity of the process. (See setgid(2).)
- inspectPort <number> | <Function> Sets inspector port of worker. This can be a number, or a function that takes no arguments and returns a number. By default each worker gets its own port, incremented from the primary's process.debugPort.
- windowsHide <boolean> Hide the forked processes console window that would normally be created on Windows systems. Default: false.
After calling .setupPrimary() (or .fork()) this settings object will contain the settings, including the default values.
This object is not intended to be changed or set manually.
cluster.setupMaster([settings])#
Deprecated alias for .setupPrimary().
cluster.setupPrimary([settings])#
- settings <Object> See cluster.settings.
setupPrimary is used to change the default 'fork' behavior. Once called, the settings will be present in cluster.settings.
Any settings changes only affect future calls to .fork() and have no effect on workers that are already running.
The only attribute of a worker that cannot be set via .setupPrimary() is the env passed to .fork().
The defaults above apply to the first call only; the defaults for later calls are the current values at the time cluster.setupPrimary() is called.
import cluster from 'node:cluster';

cluster.setupPrimary({
  exec: 'worker.js',
  args: ['--use', 'https'],
  silent: true
});
cluster.fork(); // https worker
cluster.setupPrimary({
  exec: 'worker.js',
  args: ['--use', 'http']
});
cluster.fork(); // http worker
const cluster = require('node:cluster');

cluster.setupPrimary({
  exec: 'worker.js',
  args: ['--use', 'https'],
  silent: true
});
cluster.fork(); // https worker
cluster.setupPrimary({
  exec: 'worker.js',
  args: ['--use', 'http']
});
cluster.fork(); // http worker
This can only be called from the primary process.
cluster.worker#
A reference to the current worker object. Not available in the primary process.
import cluster from 'node:cluster';

if (cluster.isPrimary) {
  console.log('I am primary');
  cluster.fork();
  cluster.fork();
} else if (cluster.isWorker) {
  console.log(`I am worker #${cluster.worker.id}`);
}
const cluster = require('node:cluster');

if (cluster.isPrimary) {
  console.log('I am primary');
  cluster.fork();
  cluster.fork();
} else if (cluster.isWorker) {
  console.log(`I am worker #${cluster.worker.id}`);
}
cluster.workers#
A hash that stores the active worker objects, keyed by id field. This makes it easy to loop through all the workers. It is only available in the primary process.
A worker is removed from cluster.workers after the worker has disconnected and exited. The order between these two events cannot be determined in advance. However, it is guaranteed that the removal from the cluster.workers list happens before the last 'disconnect' or 'exit' event is emitted.
import cluster from 'node:cluster';

for (const worker of Object.values(cluster.workers)) {
  worker.send('big announcement to all workers');
}
const cluster = require('node:cluster');

for (const worker of Object.values(cluster.workers)) {
  worker.send('big announcement to all workers');
}