For most requests, this should be ParamsDictionary, but if you're using this in a route handler for a route that uses a RegExp or a wildcard string path (e.g. '/user/*'), then req.params will be an array, in which case you should use ParamsArray instead.
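For illustration, a minimal sketch of both cases (assuming an Express app instance named app; the routes and handlers are hypothetical):
// Named route parameters: req.params is a dictionary keyed by parameter name.
app.get('/user/:id', (req, res) => {
  res.send(`User id: ${req.params.id}`);
});
// Wildcard string path: req.params is array-like; index 0 holds the matched remainder.
app.get('/files/*', (req, res) => {
  res.send(`Matched segment: ${req.params[0]}`);
});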
The message.aborted property will be true if the request has been aborted.
Return an array of Accepted media types ordered from highest quality to lowest.
Readonly
closed: Is true after 'close' has been emitted.
v18.0.0
The message.complete property will be true if a complete HTTP message has been received and successfully parsed.
This property is particularly useful as a means of determining if a client or server fully transmitted a message before a connection was terminated:
const req = http.request({
  host: '127.0.0.1',
  port: 8080,
  method: 'POST',
}, (res) => {
  res.resume();
  res.on('end', () => {
    if (!res.complete)
      console.error(
        'The connection was terminated while the message was still being sent');
  });
});
v0.3.0
Alias for message.socket.
v0.1.90
Since v16.0.0 - Use socket.
Is true after readable.destroy() has been called.
v8.0.0
Readonly
errored: Returns error if the stream has been destroyed with an error.
v18.0.0
Readonly
fresh: Check if the request is fresh, aka Last-Modified and/or the ETag still match.
The request/response headers object.
Key-value pairs of header names and values. Header names are lower-cased.
// Prints something like:
//
// { 'user-agent': 'curl/7.22.0',
// host: '127.0.0.1:8000',
// accept: '*' }
console.log(request.headers);
Duplicates in raw headers are handled in the following ways, depending on the header name:
Duplicates of age, authorization, content-length, content-type, etag, expires, from, host, if-modified-since, if-unmodified-since, last-modified, location, max-forwards, proxy-authorization, referer, retry-after, server, or user-agent are discarded. To allow duplicate values of the headers listed above to be joined, use the option joinDuplicateHeaders in request and createServer. See RFC 9110 Section 5.3 for more information.
set-cookie is always an array. Duplicates are added to the array.
For duplicate cookie headers, the values are joined together with '; '.
For all other headers, the values are joined together with ', '.
v0.1.5
Similar to message.headers, but there is no join logic and the values are always arrays of strings, even for headers received just once.
// Prints something like:
//
// { 'user-agent': ['curl/7.22.0'],
// host: ['127.0.0.1:8000'],
// accept: ['*'] }
console.log(request.headersDistinct);
v18.3.0, v16.17.0
Readonly
host: Use hostname instead.
Readonly
hostname: Parse the "Host" header field hostname.
In the case of a server request, the HTTP version sent by the client. In the case of a client response, the HTTP version of the connected-to server. Probably either '1.1' or '1.0'.
Also message.httpVersionMajor is the first integer and message.httpVersionMinor is the second.
v0.1.1
Readonly
ip: Return the remote address, or when "trust proxy" is true, return the upstream address.
Value may be undefined if the req.socket is destroyed (for example, if the client disconnected).
Readonly
ips: When "trust proxy" is true, parse the "X-Forwarded-For" IP address list.
For example, if the value were "client, proxy1, proxy2", you would receive the array ["client", "proxy1", "proxy2"], where "proxy2" is the furthest downstream.
Only valid for request obtained from Server.
The request method as a string. Read only. Examples: 'GET', 'DELETE'.
v0.1.1
Optional
next
Readonly
path: Short-hand for url.parse(req.url).pathname.
Readonly
protocol: Return the protocol string "http" or "https" when requested with TLS. When the "trust proxy" setting is enabled, the "X-Forwarded-Proto" header field will be trusted. If you're running behind a reverse proxy that supplies https for you, this may be enabled.
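A minimal sketch of checking the protocol behind a TLS-terminating reverse proxy (the app instance is hypothetical, and the proxy is assumed to set X-Forwarded-Proto):
app.set('trust proxy', true);
app.use((req, res, next) => {
  console.log(req.protocol); // 'https' when the proxy forwarded an HTTPS request
  console.log(req.secure);   // true; short-hand for req.protocol === 'https'
  next();
});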
The raw request/response headers list exactly as they were received.
The keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values.
Header names are not lowercased, and duplicates are not merged.
// Prints something like:
//
// [ 'user-agent',
// 'this is invalid because there can be only one',
// 'User-Agent',
// 'curl/7.22.0',
// 'Host',
// '127.0.0.1:8000',
// 'ACCEPT',
// '*' ]
console.log(request.rawHeaders);
v0.11.6
The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.
v0.11.6
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
v11.4.0
Readonly
Experimental
readableAborted: Returns whether the stream was destroyed or errored before emitting 'end'.
v16.8.0
Readonly
Experimental
readableDidRead: Returns whether 'data' has been emitted.
v16.7.0, v14.18.0
Readonly
readableEncoding: Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
v12.7.0
Readonly
readableEnded: Becomes true when the 'end' event is emitted.
v12.9.0
Readonly
readableFlowing: This property reflects the current state of a Readable stream as described in the Three states section.
v9.4.0
Readonly
readableHighWaterMark: Returns the value of highWaterMark passed when creating this Readable.
v9.3.0
Readonly
readableLength: This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
v9.4.0
Readonly
readableObjectMode: Getter for the property objectMode of a given Readable stream.
v12.3.0
Optional
res: After middleware.init has executed, Request will contain res and next properties. See: express/lib/middleware/init.js
Readonly
secure: Short-hand for:
req.protocol == 'https'
The net.Socket object associated with the connection.
With HTTPS support, use request.socket.getPeerCertificate() to obtain the client's authentication details.
This property is guaranteed to be an instance of the net.Socket class, a subclass of stream.Duplex, unless the user specified a socket type other than net.Socket or internally nulled.
v0.3.0
Readonly
stale: Check if the request is stale, aka "Last-Modified" and/or the "ETag" for the resource has changed.
Optional
statusCode: Only valid for response obtained from ClientRequest.
The 3-digit HTTP response status code. E.g. 404.
v0.1.1
Optional
statusMessage: Only valid for response obtained from ClientRequest.
The HTTP response status message (reason phrase). E.g. OK or Internal Server Error.
v0.11.10
Readonly
subdomains: Return subdomains as an array.
Subdomains are the dot-separated parts of the host before the main domain of the app. By default, the domain of the app is assumed to be the last two parts of the host. This can be changed by setting "subdomain offset".
For example, if the domain is "tobi.ferrets.example.com":
If "subdomain offset" is not set, req.subdomains is ["ferrets", "tobi"]
.
If "subdomain offset" is 3, req.subdomains is ["tobi"]
.
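The same example in code form, as a small sketch (the app instance is hypothetical and the host is assumed to be tobi.ferrets.example.com):
// With the default "subdomain offset" of 2:
console.log(req.subdomains); // ['ferrets', 'tobi']
// After app.set('subdomain offset', 3), subsequent requests yield:
// ['tobi']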
The request/response trailers object. Only populated at the 'end' event.
v0.3.0
Similar to message.trailers, but there is no join logic and the values are always arrays of strings, even for headers received just once. Only populated at the 'end' event.
v18.3.0, v16.17.0
Only valid for request obtained from Server.
Request URL string. This contains only the URL that is present in the actual HTTP request. Take the following request:
GET /status?name=ryan HTTP/1.1
Accept: text/plain
To parse the URL into its parts:
new URL(`http://${process.env.HOST ?? 'localhost'}${request.url}`);
When request.url is '/status?name=ryan' and process.env.HOST is undefined:
$ node
> new URL(`http://${process.env.HOST ?? 'localhost'}${request.url}`);
URL {
  href: 'http://localhost/status?name=ryan',
  origin: 'http://localhost',
  protocol: 'http:',
  username: '',
  password: '',
  host: 'localhost',
  hostname: 'localhost',
  port: '',
  pathname: '/status',
  search: '?name=ryan',
  searchParams: URLSearchParams { 'name' => 'ryan' },
  hash: ''
}
Ensure that you set process.env.HOST to the server's host name, or consider replacing this part entirely. If using req.headers.host, ensure proper validation is used, as clients may specify a custom Host header.
v0.1.90
Readonly
xhr: Check if the request was an XMLHttpRequest.
Optional
[capture
Check if the given type(s) is acceptable, returning the best match when true, otherwise undefined, in which case you should respond with 406 "Not Acceptable".
The type value may be a single MIME type string such as "application/json", the extension name such as "json", a comma-delimited list such as "json, html, text/plain", or an array ["json", "html", "text/plain"]. When a list or array is given, the best match, if any, is returned.
Examples:
// Accept: text/html
req.accepts('html');
// => "html"
// Accept: text/*, application/json
req.accepts('html');
// => "html"
req.accepts('text/html');
// => "text/html"
req.accepts('json, text');
// => "json"
req.accepts('application/json');
// => "application/json"
// Accept: text/*, application/json
req.accepts('image/png');
req.accepts('png');
// => false
// Accept: text/*;q=.5, application/json
req.accepts(['html', 'json']);
req.accepts('html, json');
// => "json"
Rest
...type: string[]
Returns the first accepted charset of the specified character sets, based on the request's Accept-Charset HTTP header field. If none of the specified charsets is accepted, returns false.
For more information, or if you have issues or concerns, see accepts.
Rest
...charset: string[]
Returns the first accepted encoding of the specified encodings, based on the request's Accept-Encoding HTTP header field. If none of the specified encodings is accepted, returns false.
For more information, or if you have issues or concerns, see accepts.
Rest
...encoding: string[]
Returns the first accepted language of the specified languages, based on the request's Accept-Language HTTP header field. If none of the specified languages is accepted, returns false.
For more information, or if you have issues or concerns, see accepts.
Rest
...lang: string[]
Event emitter. The defined events on documents including:
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
Optional
options: Pick<ArrayOptions, "signal">
Returns a stream of indexed pairs.
v17.5.0
Optional
options: Object
Calls destroy() on the socket that received the IncomingMessage. If error is provided, an 'error' event is emitted on the socket and error is passed as an argument to any listeners on the event.
Optional
error: Error
v0.3.0
This method returns a new stream with the first limit chunks dropped from the start.
the number of chunks to drop from the readable.
Optional
options: Pick<ArrayOptions, "signal">
Returns a stream with limit chunks dropped from the start.
v17.5.0
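A minimal sketch of drop(), using a stream built with Readable.from (these stream helper methods are experimental and require a recent Node.js release):
import { Readable } from 'node:stream';
const remaining = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(remaining); // [3, 4]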
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.
Returns true if the event had listeners, false otherwise.
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [
// [Function: firstListener],
// [Function: secondListener],
// [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
v0.1.26
Rest
...args: any[]
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
v6.0.0
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false.
If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
a function to call on each chunk of the stream. Async or not.
Optional
options: ArrayOptions
Returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
v17.5.0
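A minimal sketch of every() over an in-memory stream (experimental stream helper, recent Node.js assumed):
import { Readable } from 'node:stream';
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
console.log(allPositive); // true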
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.
a function to filter chunks from the stream. Async or not.
Optional
options: ArrayOptions
Returns a stream filtered with the predicate fn.
v17.4.0, v16.14.0
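A minimal sketch of filter() combined with toArray() (experimental stream helpers, recent Node.js assumed):
import { Readable } from 'node:stream';
const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();
console.log(evens); // [2, 4]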
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value.
If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
a function to call on each chunk of the stream. Async or not.
Optional
options: ArrayOptions
Returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
v17.5.0
Optional
options: ArrayOptions
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
a function to map over every chunk in the stream. May be async. May be a stream or generator.
Optional
options: ArrayOptions
Returns a stream flat-mapped with the function fn.
v17.5.0
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
a function to call on each chunk of the stream. Async or not.
Optional
options: ArrayOptions
Returns a promise for when the stream has finished.
v17.5.0
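A minimal sketch of forEach() (experimental stream helper, recent Node.js assumed):
import { Readable } from 'node:stream';
await Readable.from([1, 2, 3]).forEach((n) => console.log(n));
// Prints 1, 2 and 3, one value per line.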
Return request header.
The Referrer header field is special-cased, both Referrer and Referer are interchangeable.
Examples:
req.get('Content-Type');
// => "text/plain"
req.get('content-type');
// => "text/plain"
req.get('Something');
// => undefined
Aliased as req.header().
Check if the incoming request contains the "Content-Type" header field, and it contains the given MIME type.
Examples:
// With Content-Type: text/html; charset=utf-8
req.is('html');
req.is('text/html');
req.is('text/*');
// => true
// When Content-Type is application/json
req.is('json');
req.is('application/json');
req.is('application/*');
// => true
req.is('html');
// => false
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.
const readable = new stream.Readable();
readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
v0.11.14
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
Optional
options: Object
v16.3.0
Returns the number of listeners listening for the event named eventName.
If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
The name of the event being listened for
Optional
listener: Function
The event handler function
v3.2.0
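A small sketch of listenerCount(), including the optional listener argument (the emitter and handlers are hypothetical; counting a specific listener requires a Node.js release that supports the second argument):
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
const handler = () => {};
ee.on('ping', handler);
ee.on('ping', () => {});
console.log(ee.listenerCount('ping'));          // 2
console.log(ee.listenerCount('ping', handler)); // 1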
Returns a copy of the array of listeners for the event named eventName.
server.on('connection', (stream) => {
console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
v0.1.26
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.
a function to map over every chunk in the stream. Async or not.
Optional
options: ArrayOptions
Returns a stream mapped with the function fn.
v17.4.0, v16.14.0
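A minimal sketch of map() (experimental stream helper, recent Node.js assumed):
import { Readable } from 'node:stream';
const doubled = await Readable.from([1, 2, 3]).map((n) => n * 2).toArray();
console.log(doubled); // [2, 4, 6]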
Alias for emitter.removeListener().
v10.0.0
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
server.on('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
The name of the event.
The callback function
v0.1.101
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.
server.once('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
The name of the event.
The callback function
v0.3.0
Optional
defaultValue: any
Deprecated since 4.11. Use either req.params, req.body or req.query, as applicable.
Return the value of param name when present, or defaultValue.
To utilize request bodies, req.body should be an object. This can be done by using the connect.bodyParser() middleware.
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
The readable.pause() method has no effect if there is a 'readable' event listener.
v0.9.4
Optional
options: Object
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
server.prependListener('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
The name of the event.
The callback function
v6.0.0
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
The name of the event.
The callback function
v6.0.0
Optional
encoding: BufferEncoding
Parse Range header field, capping to the given size.
Unspecified ranges such as "0-" require knowledge of your resource length. In
the case of a byte range this is of course the total number of bytes.
If the Range header field is not given undefined
is returned.
If the Range header field is given, return value is a result of range-parser.
See more ./types/range-parser/index.d.ts
NOTE: remember that ranges are inclusive, so for example "Range: users=0-3" should respond with 4 users when available, not 3.
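A small sketch of handling the possible return values of req.range (the incoming Range header shown in the comment is hypothetical):
// Assuming the request carries "Range: bytes=0-499" against a 1000-byte resource:
const ranges = req.range(1000);
if (ranges === undefined) {
  // No Range header was sent.
} else if (ranges === -1 || ranges === -2) {
  // -1: unsatisfiable range, -2: malformed header (see range-parser).
} else if (ranges.type === 'bytes') {
  const { start, end } = ranges[0]; // 0 and 499; both bounds are inclusive.
}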
Optional
options: Options
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));
// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];
// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();
// Logs "log once" to the console and removes the listener
logFnWrapper();
emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');
// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
v9.4.0
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.
const readable = getReadableStreamSomehow();
// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});
// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:
const chunks = [];
readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});
readable.on('end', () => {
  const content = chunks.join('');
});
A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.
Optional
size: number
Optional argument to specify how much data to read.
v0.9.4
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
a reducer function to call over every chunk in the stream. Async or not.
Optional
initial: undefined
The initial value to use in the reduction.
Optional
options: Pick<ArrayOptions, "signal">
Returns a promise for the final value of the reduction.
v17.5.0
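A minimal sketch of reduce() summing the chunks of an in-memory stream (experimental stream helper, recent Node.js assumed):
import { Readable } from 'node:stream';
const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
console.log(total); // 10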
Optional
options: Pick<ArrayOptions, "signal">
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained.
Optional
eventName: string | symbol
v0.1.26
Removes the specified listener from the listener array for the event named eventName.
const callback = (stream) => {
console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};
const callbackB = () => {
  console.log('B');
};
myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);
// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
// A
// B
// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
// A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
function pong() {
console.log('pong');
}
ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);
ee.emit('ping');
ee.emit('ping');
Returns a reference to the EventEmitter, so that calls can be chained.
v0.1.26
The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.
The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:
getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });
The readable.resume() method has no effect if there is a 'readable' event listener.
v0.9.4
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});
The encoding to use.
v0.9.4
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained.
v0.3.5
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true.
If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
a function to call on each chunk of the stream. Async or not.
Optional
options: ArrayOptions
Returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
v17.5.0
This method returns a new stream with the first limit chunks.
the number of chunks to take from the readable.
Optional
options: Pick<ArrayOptions, "signal">
Returns a stream with limit chunks taken.
v17.5.0
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
Optional
options: Pick<ArrayOptions, "signal">
Returns a promise containing an array with the contents of the stream.
v17.5.0
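A minimal sketch of toArray() (experimental stream helper, recent Node.js assumed; note the caveat above about reading the entire stream into memory):
import { Readable } from 'node:stream';
const contents = await Readable.from(['a', 'b', 'c']).toArray();
console.log(contents); // ['a', 'b', 'c']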
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then the method does nothing.
import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
Optional
destination: WritableStream
Optional specific stream to unpipe
v0.9.4
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.
Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
Optional
encoding: BufferEncoding
Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
v0.9.11
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)
When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.
It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);
myReader.on('readable', () => {
myReader.read(); // etc.
});
An "old style" readable stream
v0.9.4
See https://expressjs.com/en/api.html#req.params
Example