Stream is a very important module in Node.js and is widely used. A stream is a readable, writable, or both readable and writable interface through which we can interact with disk files, sockets, and HTTP requests and responses, and move data from one place to another.
All streams implement the EventEmitter interface and emit events to report the state of the stream. For example, an 'error' event is emitted when an error occurs, and a 'data' event is emitted when there is data to be read. This lets us register listeners to handle the events we care about.
Node.js defines four kinds of streams: Readable, Writable, Duplex, and Transform. Many Node.js modules implement these streams; we will pick a few and look at their usage. Of course, we can also implement our own streams, referring either to the stream documentation or to the implementations in the Node.js modules we are about to discuss.
Readable
The readable stream provides a mechanism for reading data from an external source (such as a file or a socket) into the application.
A readable stream has two modes: flowing mode and paused mode. In flowing mode, data flows automatically from the source, like a fountain, until the source is exhausted. In paused mode, you have to fetch data explicitly by calling stream.read(); the stream reads from the source only when you ask it to, and otherwise waits.
A readable stream is in paused mode when it is created. Paused mode and flowing mode can be converted into each other.
There are three ways to switch from paused mode to flowing mode:
- Attach a handler to the 'data' event
- Explicitly call resume()
- Call pipe() to connect the readable stream to a writable stream
To switch from flowing mode back to paused mode, there are two options:
- If the readable stream is not piped to any writable stream, call pause() directly
- If the readable stream is piped to one or more writable streams, remove all handlers attached to the 'data' event and call the unpipe() method to disconnect every pipe
Note that, for backward-compatibility reasons, removing the 'data' event handlers does not automatically switch a readable stream from flowing mode to paused mode, and calling pause() on a readable stream that is part of a pipeline does not guarantee that it will stay paused.
Some common examples of readable streams are as follows:
- HTTP responses on the client
- HTTP requests on the server
- fs read streams
- zlib streams
- crypto streams
- TCP sockets
- stdout and stderr of child processes
- process.stdin
The readable stream provides the following events:
- readable: emitted when a chunk of data can be read from the stream. The corresponding handler takes no parameters; you can read the data by calling the read([size]) method inside the handler.
- data: emitted when there is data to be read. The handler takes one parameter, the data itself. If you just want to consume a stream's content quickly, attaching a 'data' handler is the most convenient way. The parameter is a Buffer object, or a string if you have called the stream's setEncoding(encoding) method.
- end: emitted when all the data has been read. The handler takes no parameters.
- close: emitted when the underlying resource, such as a file, has been closed. Not all readable streams emit this event. The handler takes no parameters.
- error: emitted when an error occurs while receiving data. The handler's parameter is an Error instance; its message property describes the cause of the failure, and its stack property holds the stack trace at the time of the error.
Readable also provides functions that we can use to read or otherwise manipulate the stream:
- read([size]): if you pass a size, read() returns the specified amount of data, or null if not enough data is available. If you do not pass size, it returns all the data in the internal buffer, or null if there is none, which may indicate that the end of the stream has been reached. The data returned may be a Buffer object or a string.
- setEncoding(encoding): sets an encoding for the stream, used to decode the data that is read. After this method is called, read([size]) returns a string instead of a Buffer.
- pause(): pauses the readable stream so that it no longer emits 'data' events.
- resume(): resumes the readable stream so that it continues to emit 'data' events.
- pipe(destination[, options]): sends this readable stream's output to the writable stream specified by destination; the two streams form a pipeline. options is an object with a boolean end property, true by default; when end is true, the writable stream ends automatically when the readable stream ends. Note that one readable stream can be connected to several writable streams, forming multiple pipelines, with every writable stream receiving the same data. This method returns destination, so if destination is itself readable, you can chain pipe() calls (we will do exactly that below when compressing and decompressing with gzip).
- unpipe([destination]): disconnects the pipe to the specified destination. When destination is omitted, all pipes attached to this readable stream are disconnected.
Well, that's about it. Let's give a simple example of using a readable stream, taking the fs module as an example.
fs.ReadStream implements stream.Readable and additionally provides an 'open' event; the handler you attach to it receives the file descriptor (an integer) as its parameter.
fs.createReadStream(path[, options]) opens a readable file stream and returns an fs.ReadStream object. The path parameter specifies the path of the file, and the optional options parameter is an object, like this:
{ flags: 'r', encoding: 'utf8', fd: null, mode: 0666, autoClose: true }
The flags property of options specifies the mode used to open the file: 'w' stands for write, 'r' for read, and there are also 'r+', 'w+', 'a', and so on, much like the modes accepted by the open() function on Linux. encoding specifies the encoding used to decode the file's contents; by default it is null, so raw Buffer objects are returned, and you can set it to 'utf8', 'ascii', or 'base64'. The fd property defaults to null; when you specify it, createReadStream builds the stream on the given file descriptor and ignores path. In addition, if you want to read only a specific region of a file, you can set the start and end properties to the byte offsets of the first and last byte to read (both inclusive). When the autoClose property is true (the default), the file descriptor is closed automatically when an error occurs or when reading ends.
OK, that's enough background; on to the code. The readable.js file reads as follows:
var fs = require('fs');
var readable = fs.createReadStream('readable.js', {
  flags: 'r',
  encoding: 'utf8',
  autoClose: true,
  mode: 0666,
});
readable.on('open', function(fd){
  console.log('file was opened, fd - ', fd);
});
readable.on('readable', function(){
  console.log('received readable');
});
readable.on('data', function(chunk){
  console.log('read %d bytes: %s', chunk.length, chunk);
});
readable.on('end', function(){
  console.log('read end');
});
readable.on('close', function(){
  console.log('file was closed.');
});
readable.on('error', function(err){
  console.log('error occurred: %s', err.message);
});
The sample code reads the contents of readable.js itself, attaches handlers for the various events, and demonstrates the general pattern for reading a file.
Writable
The writable stream provides an interface for writing data to a destination device (or to memory). Some common examples of writable streams are:
- HTTP requests on the client
- HTTP responses on the server
- fs write streams
- zlib streams
- crypto streams
- TCP sockets
- stdin of child processes
- process.stdout and process.stderr
A writable stream's write(chunk[, encoding][, callback]) method writes data into the stream. chunk is the data to write, a Buffer or string object; it is required, while the other parameters are optional. If chunk is a string, encoding specifies its character encoding, and write() decodes chunk into bytes accordingly. callback is a callback function executed when the data has been fully flushed into the stream. write() returns a boolean: true means the data has been handled completely (not necessarily written all the way to the device); false means the internal buffer is full, and you should wait for the 'drain' event before writing more.
The writable stream's end([chunk][, encoding][, callback]) method can be used to end the stream. All three parameters are optional; chunk and encoding have the same meaning as in write(). If you provide callback, it is attached as a listener for the 'finish' event, so it is called when 'finish' is emitted.
Writable streams also have setDefaultEncoding() and other methods; see the online documentation for details.
Now let's look at the public events of writable streams:
- finish: emitted after end() has been called and all data has been flushed to the underlying system. The corresponding handler takes no parameters.
- pipe: emitted by the writable stream when you call pipe() on a readable stream; the handler takes one parameter, the readable stream that has been connected to it.
- unpipe: emitted by the writable stream when you call unpipe() on a readable stream; the handler takes one parameter, the readable stream that has just been disconnected from it.
- error: emitted when an error occurs; the handler's parameter is an Error object.
OK, let's give two small examples: one with fs, one with a socket.
fs.createWriteStream(path[, options]) creates a writable file stream and returns an fs.WriteStream object. The first parameter is the path; the second, options, is an optional object specifying options for creating the file, similar to:
{ flags: 'w', defaultEncoding: 'utf8', fd: null, mode: 0666 }
defaultEncoding specifies the default text encoding; the other properties were covered earlier under fs.createReadStream.
The content of writefile.js is as follows:
var fs = require('fs');
var writable = fs.createWriteStream('example.txt', {
  flags: 'w',
  defaultEncoding: 'utf8',
  mode: 0666,
});
writable.on('finish', function(){
  console.log('write finished');
  process.exit(0);
});
writable.on('error', function(err){
  console.log('write error - %s', err.message);
});
writable.write('My name is 火云邪神', 'utf8');
writable.end();
A very simple example; note that the writefile.js file itself should be saved in UTF-8 encoding.
Here is an example using a TCP socket; the content of echoServer2.js is as follows:
var net = require('net');
var server = net.createServer(function(sock){
  sock.setEncoding('utf8');
  sock.on('pipe', function(src){
    console.log('piped');
  });
  sock.on('error', function(err){
    console.log('error - %s', err.message);
  });
  sock.pipe(sock);
});
server.maxConnections = 10;
server.listen(7, function(){
  console.log('echo server bound at port - 7');
});
The echo server above provides the same functionality as the one we wrote earlier in "Getting Started with node.js Development - SOCKET (socket) programming". The difference is that this one uses the pipe() method, while that version listened for the 'data' event and called write() to send the received data back to the client.
net.Socket is a duplex stream that implements both the readable and writable interfaces, so sock.pipe(sock) is a valid call.
Common examples of duplex streams are:
- TCP sockets
- zlib streams
- crypto streams
A Duplex stream is simply a combination of a readable stream and a writable stream.
Transform
Transform extends the duplex stream; it modifies the data you write through the writable interface, so that when you read it back through the readable interface, the data has changed.
The more common transform streams are:
- zlib streams
- crypto streams
OK, let's take a simple example: compressing and decompressing with the zlib module. The sample file is zlibfile.js and reads as follows:
var zlib = require('zlib');
var gzip = zlib.createGzip();
var fs = require('fs');
var inFile = fs.createReadStream('readable.js');
var outGzip = fs.createWriteStream('readable.gz');
// inFile - Readable
// gzip - Transform (Readable && Writable)
// outGzip - Writable
inFile.pipe(gzip).pipe(outGzip);
setTimeout(function(){
  var gunzip = zlib.createUnzip({flush: zlib.Z_FULL_FLUSH});
  var inGzip = fs.createReadStream('readable.gz');
  var outFile = fs.createWriteStream('readable.unzipped');
  inGzip.pipe(gunzip).pipe(outFile);
}, 5000);
The example above is simple and uses the zlib module; its documentation is here: https://nodejs.org/api/zlib.html.
Next, let's implement a transform stream that converts lowercase letters in the input into uppercase. The code is in uppertransform.js, with the following content:
var fs = require('fs');
var util = require('util');
var stream = require('stream');

util.inherits(UpperTransform, stream.Transform);

function UpperTransform(opt){
  stream.Transform.call(this, opt);
}

UpperTransform.prototype._transform = function(chunk, encoding, callback){
  var data = new Buffer(chunk.length);
  var str = chunk.toString('utf8');
  for (var i = 0, offset = 0; i < str.length; i++) {
    if (/^[a-z]+$/.test(str[i])) {
      offset += data.write(str[i].toUpperCase(), offset);
    } else {
      offset += data.write(str[i], offset);
    }
  }
  this.push(data);
  callback();
};

UpperTransform.prototype._flush = function(cb){
  cb();
};

var upper = new UpperTransform();
var inFile = fs.createReadStream('example.txt');
inFile.setEncoding('utf8');
var outFile = fs.createWriteStream('exampleUpper.txt', {defaultEncoding: 'utf8'});
inFile.pipe(upper).pipe(outFile);
To implement a custom transform stream, you first need to inherit from Transform. The simplest way is to use the util module's inherits() method and then, in your constructor, apply the parent constructor to the current object via call(). That part of the code looks like this:
util.inherits(UpperTransform, stream.Transform);
function UpperTransform(opt){
  stream.Transform.call(this, opt);
}
After inheriting from stream.Transform, you implement _transform() and _flush(). In _transform() we first create a buffer, then convert the incoming chunk to a string (hard-coded as UTF-8), traverse the string, convert every lowercase letter we encounter to uppercase, and write the result into the buffer we created. When the conversion is complete, we call push() to append the converted data to the internal data queue.
The rest is relatively simple. Note that, as an example, this only handles UTF-8 encoded text files.
Copyright notice: this is an original article by Foruok and may not be reproduced without the author's permission.
Node.js Development Primer - Stream Usage