Node.js development - Stream usage

Source: Internet
Author: User


Stream is a very important and widely used module in Node.js. A stream is an interface for reading, writing, or both. Through these interfaces we can interact with disk files, sockets, and HTTP requests, letting data flow from one place to another.

All streams implement the EventEmitter interface, so they can emit events to report their status. For example, an "error" event is emitted when an error occurs, and a "data" event is emitted when data can be read. We can register listeners for these events to handle them and achieve our goal.

Node.js defines four kinds of streams: Readable, Writable, Duplex, and Transform. Various Node.js modules implement these streams; let's pick some and look at their usage. Of course, we can also implement our own streams - refer to the Stream documentation, or to the implementations in Node.js mentioned below.

Readable

A Readable stream provides a mechanism for reading data from an external source (such as a file or a socket) into the application.

A readable stream has two modes: flowing mode and paused mode. In flowing mode, data flows out of the source automatically, like a spring, until the source is exhausted. In paused mode, you must explicitly request data with stream.read(); the stream reads from the source only when you ask it to.

A readable stream starts out in paused mode. The two modes can be switched back and forth.

There are three ways to switch from paused mode to flowing mode:

- Attach a handler to the "data" event.
- Call resume() explicitly.
- Call pipe() to connect the readable stream to a writable stream.

There are two ways to switch from flowing mode back to paused mode:

- If the readable stream has no pipe destinations, call pause() directly.
- If the readable stream is piped to one or more writable streams, remove all handlers attached to the "data" event and call unpipe() to disconnect all pipes.

Note that, for backward compatibility, removing the "data" event handlers does not automatically switch the readable stream from flowing mode to paused mode. Likewise, for a readable stream that is part of a pipeline, pause() alone does not guarantee that the stream stays paused.

Some common examples of Readable streams:

- HTTP responses, on the client
- HTTP requests, on the server
- fs read streams
- zlib streams
- crypto streams
- TCP sockets
- stdout and stderr of a child process
- process.stdin

A Readable stream emits the following events:

- readable: emitted when a chunk of data can be read from the stream. The handler takes no parameters; you can call read([size]) inside it to fetch the data.
- data: emitted when data is available. The handler receives one parameter holding the data. If you just want to read data from a stream quickly, attaching a "data" handler is the most convenient way. The parameter is a Buffer object, or a String object if you have called the stream's setEncoding(encoding) method.
- end: emitted when all data has been read. The handler takes no parameters.
- close: emitted when the underlying resource, such as a file, has been closed. Not all Readable streams emit this event. The handler takes no parameters.
- error: emitted when an error occurs while receiving data. The handler receives an Error instance; its message property describes the cause, and its stack property holds the stack trace at the time the error occurred.

Readable also provides methods for reading from or controlling the stream:

- read([size]): if you pass a size, it returns that amount of data, or null if not enough is buffered. Without a size, it returns everything in the internal buffer, or null if the buffer is empty (which may indicate the end of the file). The returned data is a Buffer object, or a String object after setEncoding has been called.
- setEncoding(encoding): sets an encoding used to decode the data that is read. After this call, read([size]) returns String objects.
- pause(): pauses the readable stream, stopping "data" events.
- resume(): resumes the readable stream, so "data" events are emitted again.
- pipe(destination, [options]): sends the readable stream's output into the Writable stream specified by destination, forming a pipe. options is an object with a Boolean end property, which defaults to true; when it is true, the Writable is ended automatically when the Readable ends. Note that one Readable can be connected to several Writables to form multiple pipes, and each Writable receives the same data. The method returns destination, so if destination is itself readable you can chain pipe() calls (as happens with gzip compression and decompression, mentioned below).
- unpipe([destination]): disconnects the pipe to the specified destination. When destination is omitted, all pipes attached to this readable are disconnected.

Okay, that's the background. Let's look at a simple example of using Readable, taking the fs module as an example.

fs.ReadStream implements stream.Readable and additionally emits an "open" event. You can attach a handler to this event; its parameter is the file descriptor (an integer).

fs.createReadStream(path[, options]) opens a readable file stream and returns an fs.ReadStream object. The path parameter specifies the file path. The optional options parameter is an object that can specify settings such as:

```javascript
{
  flags: 'r',
  encoding: 'utf8',
  fd: null,
  mode: 0666,
  autoClose: true
}
```

The flags property of options specifies the mode in which the file is opened: 'w' means writing and 'r' means reading; 'r+', 'w+', and 'a' also exist, similar to the modes accepted by the open function on Linux. encoding specifies the encoding used to read the file; the default is 'utf8', and 'ascii' or 'base64' can also be specified. The fd property defaults to null; if you specify it, createReadStream builds the stream on that file descriptor and ignores path. If you want to read a specific region of a file, set the start and end properties to the starting and ending byte offsets (inclusive). When autoClose is true (the default), the file descriptor is closed automatically when an error occurs or reading ends.

OK, that's enough background; on to the code. The content of readable.js is as follows:

```javascript
var fs = require('fs');

var readable = fs.createReadStream('readable.js', {
  flags: 'r',
  encoding: 'utf8',
  autoClose: true,
  mode: 0666,
});

readable.on('open', function (fd) {
  console.log('file was opened, fd - ', fd);
});
readable.on('readable', function () {
  console.log('received readable');
});
readable.on('data', function (chunk) {
  console.log('read %d bytes: %s', chunk.length, chunk);
});
readable.on('end', function () {
  console.log('read end');
});
readable.on('close', function () {
  console.log('file was closed.');
});
readable.on('error', function (err) {
  console.log('error occurred: %s', err.message);
});
```

The sample code reads the content of readable.js itself and attaches handlers to the various events to demonstrate the general usage of reading a file.

Writable

A Writable stream provides an interface for writing data to a target device (or to memory). Some common examples of Writable streams:

- HTTP requests, on the client
- HTTP responses, on the server
- fs write streams
- zlib streams
- crypto streams
- TCP sockets
- stdin of a child process
- process.stdout and process.stderr

The write(chunk[, encoding][, callback]) method of a Writable stream writes data into the stream. chunk is the data to write, a Buffer or String object; it is required, while the other parameters are optional. If chunk is a String, encoding specifies its encoding; write decodes the chunk into bytes before writing it. callback is invoked once the data has been fully flushed into the stream. write returns a Boolean: true means you may keep writing, while false means the internal buffer has reached its high-water mark and you should wait for the "drain" event before writing more. In neither case does the return value mean the data has actually been written to the device.

The end([chunk][, encoding][, callback]) method of a Writable stream ends the stream. All three parameters are optional; chunk and encoding have the same meaning as for write. If callback is provided, it is attached as a listener for the "finish" event, so it is called when that event is emitted.

Writable also provides methods such as setDefaultEncoding(); see the online documentation for details.

Now let's look at Writable's public events:

- finish: emitted after end() has been called and all data has been written to the underlying device. The handler takes no parameters.
- pipe: emitted when pipe() is called on a Readable stream piping into this Writable. The handler receives one parameter of type Readable, pointing to the Readable stream connected to it.
- unpipe: emitted when unpipe() is called on a Readable stream that was piping into this Writable. The handler receives one parameter of type Readable, pointing to the Readable stream just disconnected from it.
- error: emitted when an error occurs. The handler receives the error object.

OK, let's look at two small examples: one with fs and the other with a socket.

fs.createWriteStream(path[, options]) creates a writable file stream and returns an fs.WriteStream object. The first parameter, path, is the file path; the optional second parameter, options, is an object specifying creation options, such as:

```javascript
{
  flags: 'w',
  defaultEncoding: 'utf8',
  fd: null,
  mode: 0666
}
```

defaultEncoding specifies the default text encoding; the other options were described earlier for fs.createReadStream.

The content of writeFile.js is as follows:

```javascript
var fs = require('fs');

var writable = fs.createWriteStream('example.txt', {
  flags: 'w',
  defaultEncoding: 'utf8',
  mode: 0666,
});

writable.on('finish', function () {
  console.log('write finished');
  process.exit(0);
});
writable.on('error', function (err) {
  console.log('write error - %s', err.message);
});

writable.write('My name is a Fire Cloud Evil God', 'utf8');
writable.end();
```

A simple example. Note that the file written by writeFile.js is UTF-8 encoded.

The following example uses a TCP socket. The content of echoServer2.js is as follows:

```javascript
var net = require('net');

var server = net.createServer(function (sock) {
  sock.setEncoding('utf8');
  sock.on('pipe', function (src) {
    console.log('piped');
  });
  sock.on('error', function (err) {
    console.log('error - %s', err.message);
  });
  sock.pipe(sock);
});

server.maxConnections = 10;
server.listen(7, function () {
  console.log('echo server bound at port - 7');
});
```

This echoServer behaves the same as the echoServer from our earlier introduction to socket programming with Node.js. The difference is that this version uses pipe(), while that version listened for "data" events and called write() to send the received data back to the client.

net.Socket is a Duplex stream, implementing both Readable and Writable, so sock.pipe(sock) is a valid call.

Common Duplex streams include:

- TCP sockets
- zlib streams
- crypto streams

A Duplex stream is simply the combination of a Readable and a Writable.

Transform

Transform extends the Duplex stream. It modifies the data you write via the Writable interface, so that when you read it back through the Readable interface the data has been changed.

Typical Transform streams include:

- zlib streams
- crypto streams

Let's look at a simple example that uses the zlib module to compress and then decompress a file. The sample file is zlibFile.js, with the following content:

```javascript
var zlib = require('zlib');
var fs = require('fs');

var gzip = zlib.createGzip();
var inFile = fs.createReadStream('readable.js');
var outGzip = fs.createWriteStream('readable.gz');

// inFile  - Readable
// gzip    - Transform (Readable && Writable)
// outGzip - Writable
inFile.pipe(gzip).pipe(outGzip);

// Give the compression a moment to finish, then decompress.
setTimeout(function () {
  var gunzip = zlib.createUnzip({ flush: zlib.Z_FULL_FLUSH });
  var inGzip = fs.createReadStream('readable.gz');
  var outFile = fs.createWriteStream('readable.unzipped');
  inGzip.pipe(gunzip).pipe(outFile);
}, 5000);
```

The example is straightforward. The zlib module documentation is here: https://nodejs.org/api/zlib.html.

Next, we implement a Transform stream that converts lowercase letters in the input to uppercase. The code is in upperTransform.js, with the following content:

```javascript
var fs = require('fs');
var util = require('util');
var stream = require('stream');

util.inherits(UpperTransform, stream.Transform);
function UpperTransform(opt) {
  stream.Transform.call(this, opt);
}

UpperTransform.prototype._transform = function (chunk, encoding, callback) {
  var data = new Buffer(chunk.length);
  var str = chunk.toString('utf8');
  for (var i = 0, offset = 0; i < str.length; i++) {
    if (/^[a-z]+$/.test(str[i])) {
      offset += data.write(str[i].toUpperCase(), offset);
    } else {
      offset += data.write(str[i], offset);
    }
  }
  this.push(data);
  callback();
};

UpperTransform.prototype._flush = function (cb) {
  cb();
};

var upper = new UpperTransform();
var inFile = fs.createReadStream('example.txt');
inFile.setEncoding('utf8');
var outFile = fs.createWriteStream('exampleUpper.txt', { defaultEncoding: 'utf8' });
inFile.pipe(upper).pipe(outFile);
```

To implement a custom Transform, first inherit from the Transform stream. The simplest way is to use the inherits() method of the util module, and then call the parent constructor from your own constructor to apply it to the current object. The code looks like this:

```javascript
util.inherits(UpperTransform, stream.Transform);
function UpperTransform(opt) {
  stream.Transform.call(this, opt);
}
```

After inheriting from stream.Transform, implement _transform and _flush. In _transform, we first create a buffer, convert the input chunk into a string (treated as UTF-8), and then traverse the string, converting lowercase letters to uppercase while writing into the new buffer. Finally, we call push() to append the converted data to the internal data queue.

The rest is straightforward. Note that, to keep the example simple, it only handles UTF-8 encoded text files.

 
