Node.js Getting Started Tutorial: A Complete Node.js Web Application Development Example


The status of the book

What you are reading is the final edition of this book. It will therefore only receive updates to correct errors or to reflect changes in new versions of Node.js.

The code examples in this book have been tested with Node.js version 0.6.11 and work correctly with that version.

Intended Audience

This book is best suited for readers with a technical background similar to mine: at least some experience with an object-oriented language such as Ruby, Python, PHP, or Java, little experience with JavaScript, and no experience at all with Node.js.

It is aimed at developers who already have some experience with other programming languages, which means the book does not cover truly basic concepts such as data types, variables, and control structures; I assume you are familiar with them already.

However, the book does describe JavaScript functions and objects in detail, because they are very different from their counterparts in most other programming languages.

The structure of the book

After reading this book, you will have built a complete web application that allows users to browse pages and upload files.

Of course, the application itself is nothing spectacular. Rather than the code that implements its features, what we really care about is creating a framework that cleanly separates the different concerns of our application into distinct modules. Sound abstract? You'll see what I mean later.

This book starts by looking at the differences between developing JavaScript in a Node.js environment and developing JavaScript in a browser environment.

Next, it guides you through the most traditional "Hello World" application, which is also the most basic Node.js application.

Finally, we will discuss how to design a "real" complete application, dissect the different modules that need to be implemented, and explain step by step how to implement them.

Along the way, you will learn some of JavaScript's advanced concepts, how to use them, and why the same concepts could not be implemented in many other programming languages.

All of the application's source code is available through the book's GitHub repository: https://github.com/ManuelKiessling/NodeBeginnerBook/tree/master/code/application.

JavaScript and Node.js

JavaScript and You

Before we talk about any technical details, let's talk for a moment about you and your relationship with JavaScript. The main goal of this chapter is to help you decide whether reading the following chapters makes sense for you.

If you're like me, you started "developing" with HTML early on, which is how you came across this interesting thing called JavaScript, though you only used it for basic tasks: adding interactivity to web pages.

What you really wanted was "the real thing": you wanted to know how to build complex websites. So you learned a programming language like PHP, Ruby, or Java and started writing "back-end" code.

All the while, you kept an eye on JavaScript. With the introduction of technologies like jQuery and Prototype, you picked up quite a few of JavaScript's advanced techniques and realized that there is more to JavaScript than window.open().

These were all front-end technologies, however. Although it always made sense to reach for jQuery when you wanted to enhance a page, in the end you were at best a JavaScript user, not a JavaScript developer.

And then came Node.js: JavaScript on the server. How cool is that?

So you decide it's time to pick up this familiar yet strange language again. But careful: writing Node.js applications is one thing; understanding why they are written the way they are means you have to understand JavaScript. This time for real.

Here's the problem: because JavaScript really exists in two or even three forms (from its mid-90s beginnings as a little DHTML toy, to a serious front-end technology of the jQuery era, to today's server-side technology), it's hard to find a "right" way to learn JavaScript, one that prepares you to write Node.js applications and leaves you feeling that you are actually developing with the language rather than merely using it.

Because that's the key point: you're already an experienced developer, and you don't want to learn a new technology by scraping together solutions that may well be wrong; you want to be sure you are learning it the right way.

Of course, there are excellent articles on learning JavaScript out there. But sometimes articles alone are not enough. What you need is guidance.

The goal of this book is to provide you with guidance.

A Brief Disclaimer

There are some very good JavaScript programmers in this industry. I'm not one of them.

I'm the person I described in the previous section. I know how to develop back-end web applications, but I'm still new to "real" JavaScript and to Node.js. I learned some of JavaScript's advanced concepts only recently and have little practical experience with them.

Therefore, this is not a "from beginner to expert" book; it is more like a "from beginner to advanced beginner" book.

If I succeed, this book will be the tutorial I wish I had when I first started learning Node.js.

Server-Side JavaScript

JavaScript first ran in browsers, but a browser merely provides a context: it defines what you can do with JavaScript, but says little about what the JavaScript language itself can do. In fact, JavaScript is a "complete" language: it can be used in different contexts and is as capable as any comparable language.

Node.js is simply another context: it allows JavaScript code to run on the back end, outside the browser.

To execute JavaScript on the back end, the code needs to be interpreted and executed. That is what Node.js does: it uses Google's V8 engine (the JavaScript execution environment used by Google's Chrome browser) to interpret and execute JavaScript code.

In addition, Node.js ships with many useful modules that save you from rewriting repetitive basics yourself, such as outputting strings to the terminal.

Therefore, Node.js is really both a runtime environment and a library.

To use Node.js, you first need to install it. I won't repeat the installation steps here; you can refer directly to the official installation guide. Once installation is complete, come back and continue with the rest of the book.

"Hello World"

OK, enough preamble. Let's jump right in and write our first Node.js application: "Hello World".

Open your favorite editor and create a file called helloworld.js. All we want to do is write "Hello World" to stdout, and this is the code that does it:

console.log("Hello World");

Save the file and execute it with Node.js:

node helloworld.js

Normally, this will print Hello World to your terminal.

OK, I admit the application is a bit boring. So let's move on to the real thing.

A complete Web application based on Node.js

The Use Cases

Let's keep the goal simple, but make it practical enough to be interesting:

1. Users should be able to use our application with a web browser.
2. When a user requests http://domain/start, they should see a welcome page with a file-upload form.
3. The user should be able to choose an image file and submit the form, after which the file is uploaded to http://domain/upload, and that page displays the image once the upload is finished.

Sure, you could just search the web and cobble together something that gets this done. But that's not what we're going to do here.

Furthermore, in reaching this goal we want more than just the basic code, however elegant it might be. We also want to abstract it, to find a way of building more complex Node.js applications.

Analyzing the Application's Modules

Let's dissect our application. Which parts need to be implemented in order to fulfill the use cases above?

1. We want to serve web pages, so we need an HTTP server.
2. Our server will need to answer requests differently depending on the requested URL, so we need a router to map requests to request handlers.
3. Requests that arrive at the server and are passed along by the router need to be processed, so we need actual request handlers.
4. The router should also be able to handle POST data and hand it to the request handlers in a convenient form, so we need request-data handling functionality.
5. We don't just want to handle requests for URLs, we also want to display content, which means we need some kind of view logic the request handlers can use to send content to the user's browser.
6. Last but not least, the user will upload images, so we need upload handling functionality to take care of the details.

Let's first think about how we would build this stack with PHP. Typically we would use an Apache HTTP server with the mod_php5 module installed.
From that perspective, the whole "receive HTTP requests and serve web pages" requirement doesn't need to be handled by PHP at all.

With Node.js, however, things are conceptually completely different. With Node.js, we not only implement an application, we also implement the whole HTTP server. In fact, our web application and its web server are basically the same thing.

That may sound like a lot of work, but we'll soon see that with Node.js it isn't.

Let's start down the implementation path with the first part: the HTTP server.

Building the Application's Modules

A Basic HTTP Server

When I was ready to start writing my first "real" Node.js application, I didn't know how to write Node.js code, nor how to organize it.
Should I put everything into one file? Many tutorials on the web teach you to put all your logic into one basic HTTP server written in Node.js. But what if I want to add more functionality while keeping things readable?

Keeping concerns separate is actually fairly straightforward: you put the code for different functionality into different modules.

This approach allows you to have a clean main file, which you execute with Node.js, and clean modules that can be used by the main file and by each other.

So, let's create a main file that we use to start our application, and a module file where our HTTP server code lives.

In my impression, calling the main file index.js is more or less standard. It makes sense to put our server module into a file named server.js.

Let's start with the server module. Create a file called server.js in the root directory of your project, and write the following code:

var http = require("http");

http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello World");
  response.end();
}).listen(8888);
That's it! You have just written a working HTTP server. To prove it, let's run and test the code. First, execute your script with Node.js:

node server.js

Next, open your browser and visit http://localhost:8888/. You should see a web page that says "Hello World".

Interesting, isn't it? How about we talk about what's going on here first and worry about organizing the project later? I promise we'll get back to it.

Analyzing the HTTP Server

So next, let's analyze the composition of this HTTP server.

The first line requires the http module that ships with Node.js and assigns it to the variable http.

Next, we call one of the functions the http module provides: createServer. This function returns an object, and that object has a method named listen, which takes a numeric parameter specifying the port number our HTTP server will listen on.

Let's ignore the function definition inside the createServer() parentheses for a moment.

We could have used this code to start a server listening at port 8888:

var http = require("http");

var server = http.createServer();
server.listen(8888);

That code would start a server that listens at port 8888 and does nothing else; it wouldn't even answer incoming requests.

The most interesting part (and the oddest, if you are used to a more conservative language such as PHP) is the first parameter of createServer(): a function definition.

In fact, this function definition is the first and only parameter of the createServer() call. Because in JavaScript, functions can be passed around just like any other value.

Passing Functions Around

For example, you can do this:

function say(word) {
  console.log(word);
}

function execute(someFunction, value) {
  someFunction(value);
}

execute(say, "Hello");

Read this code carefully! Here, we pass the function say as the first parameter to the execute function. Not say's return value, but say itself!

Thus, say becomes the local variable someFunction inside execute, and execute can use the function stored in someFunction by calling it with someFunction() (adding parentheses).

Of course, because say takes one parameter, execute can pass such a parameter along when calling someFunction.

We can, as we just did, pass a function by its name as a parameter to another function. But we don't have to take this "define it first, then pass it" detour; we can define and pass a function directly within another function's parentheses:

function execute(someFunction, value) {
  someFunction(value);
}

execute(function (word) { console.log(word); }, "Hello");

Here, we define the function we want to pass to execute right where execute expects its first parameter.

This way, we don't even need to give the function a name, which is why it's called an anonymous function.

This is our first close encounter with "advanced" JavaScript, but one step at a time. For now, let's simply accept this: in JavaScript, a function can receive another function as a parameter. We can define a function first and then pass it, or we can define it right in the place where it is expected as a parameter.

How Function Passing Makes Our HTTP Server Work

With this knowledge, let's get back to our minimalistic HTTP server:

var http = require("http");

http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello World");
  response.end();
}).listen(8888);

By now it should be much clearer: we pass an anonymous function to the createServer function.

The same could be achieved with this code:

var http = require("http");

function onRequest(request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello World");
  response.end();
}

http.createServer(onRequest).listen(8888);

Maybe now is the right moment to ask: why are we doing it this way?

Event-driven callbacks

It's a difficult question to answer (at least for me), but this is simply how Node.js works. Node.js is event-driven, and that's also why it's fast.

You might want to take some time to read Felix Geisendörfer's excellent post Understanding Node.js, which provides some background information.

It all boils down to the fact that "Node.js is event-driven". OK, I don't exactly know what that means either. So let me try to explain why it makes sense for writing web-based applications with Node.js.

When we call the http.createServer method, we of course don't just want a server that listens on some port; we also want it to do something when an HTTP request arrives.

The problem is that this happens asynchronously: requests may arrive at any time, yet our server runs in a single process.

When writing PHP applications, we aren't bothered by this at all: whenever a request comes in, the web server (usually Apache) creates a new process for it and executes the corresponding PHP script from start to finish.

So in our Node.js program, how do we stay in control of our process when a new request arrives at port 8888?

Well, this is where Node.js/JavaScript's event-driven design actually helps, although we have to learn some new concepts to master it. Let's look at how these concepts are applied in our server code.

We create the server and pass a function to the method that creates it. Whenever our server receives a request, this function is called.

We don't know when that will happen, but we now have a place where we can handle incoming requests: the function we passed along. It doesn't matter whether it's a predefined function or an anonymous one.

This is the legendary callback. We pass a function to some method, and the method calls this function back when a corresponding event occurs.
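The callback idea can be seen outside of any server code, too. The following sketch is my own (it is not from the book's application): we hand a function to another function, and that function calls it back once its "event", here simply the completion of a small computation, has occurred.

```javascript
// A function that accepts a callback and invokes it when its "event"
// occurs. Here the event is trivial: the greeting has been computed.
function fetchGreeting(name, callback) {
  var greeting = "Hello " + name;
  // The event has occurred: call the callback with the result.
  callback(greeting);
}

// Pass an anonymous function as the callback, just like we pass one
// to http.createServer().
fetchGreeting("World", function (greeting) {
  console.log(greeting); // prints "Hello World"
});
```

The shape is exactly the same as in our server: the code that should run later is packaged as a function and handed over, to be invoked when the time comes.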

At least for me, it took some effort to understand this. If you're still unsure, read Felix's blog post again.

Let's play with this new concept a bit more. Can we prove that our code keeps running after the server is created, even if no HTTP request comes in and our callback is never called? Let's try this:

var http = require("http");

function onRequest(request, response) {
  console.log("Request received.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello World");
  response.end();
}

http.createServer(onRequest).listen(8888);

console.log("Server has started.");

Note that I use console.log to output a piece of text at the point where onRequest (our callback) is triggered, and another piece of text right after starting the HTTP server.

When we run this as usual with node server.js, it immediately outputs "Server has started." on the command line. Whenever we request the server (by visiting http://localhost:8888/ in the browser), the message "Request received." appears on the command line.

This is event-driven, asynchronous server-side JavaScript with its callbacks!

(Note that our server will probably output "Request received." twice when we open the page in a browser. That's because most browsers try to read http://localhost:8888/favicon.ico whenever you visit http://localhost:8888/.)

How the server handles requests

OK, let's quickly look at the rest of our server code, that is, the body of our callback function onRequest().

When the callback fires, that is, when our onRequest() function is triggered, two parameters are passed in: request and response.

They are objects, and you can use their methods to handle the details of the HTTP request and to respond to the request (i.e., to send something back to the browser that made the request).

So our code does this: whenever a request is received, it uses the response.writeHead() function to send an HTTP status 200 and the content type (Content-Type) in the HTTP response header, and the response.write() function to send the text "Hello World" in the HTTP response body.

Finally, we call response.end() to finish the response.

For now, we don't care about the details of the request, which is why we don't use the request object at all.
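Because request and response are just objects, we can even exercise a handler by hand, using simple stand-ins that record what the handler does with them. This is a sketch of my own for illustration, not the book's code:

```javascript
// The same handler as in our server code.
function onRequest(request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello World");
  response.end();
}

// Fake request/response objects that simply record each call.
var calls = [];
var fakeRequest = { method: "GET", url: "/start" };
var fakeResponse = {
  writeHead: function (status, headers) { calls.push(["writeHead", status]); },
  write: function (body) { calls.push(["write", body]); },
  end: function () { calls.push(["end"]); }
};

onRequest(fakeRequest, fakeResponse);
console.log(calls.length); // 3 calls were recorded
```

With real requests, Node.js passes in an http.IncomingMessage and an http.ServerResponse, but from the handler's point of view they are simply objects with methods and properties.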

Finding a Place for Our Server Module

OK, as promised, we can now get back to how to organize our application. We currently have a very basic HTTP server in the file server.js, and I mentioned that it's common to have a main file called index.js that bootstraps and starts the application by making use of the application's other modules (such as the HTTP server module that lives in server.js).

Let's talk about how to turn server.js into a real Node.js module so that it can be used by our main index.js file.

You may have noticed that we have already used modules in our code, like this:

var http = require("http");

...

http.createServer(...);

Node.js ships with a module called "http", which we request in our code and whose return value we assign to a local variable.

This turns our local variable into an object that carries all the public methods the http module provides.

It's common practice to choose the module's name for this local variable, but you are free to pick whatever you like:

var foo = require("http");

...

foo.createServer(...);

Fine, so it's clear how to make use of Node.js's internal modules. But how do we create our own modules, and how do we use them?

We'll figure that out as we turn server.js into a real module.

It turns out we don't have to change much. Turning some code into a module means we need to export the parts of its functionality that we want to provide to scripts that require our module.

For now, the functionality our HTTP server needs to export is simple: scripts requiring our server module simply need to start the server.

To make this possible, we'll put our server code into a function named start, and we'll export this function:

var http = require("http");

function start() {
  function onRequest(request, response) {
    console.log("Request received.");
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write("Hello World");
    response.end();
  }

  http.createServer(onRequest).listen(8888);
  console.log("Server has started.");
}

exports.start = start;

This way, we can now create our main file index.js and start our HTTP server there, even though the server's code still lives in server.js.

Create a file index.js with the following content:

var server = require("./server");

server.start();

As you can see, we can use our server module just like any internal module: we require its file and assign it to a variable, and its exported functions become available to us.

Fine. We can now start our app via our main script, and it still does exactly the same:

node index.js

Great. We can now put the different parts of our application into different files and tie them together by making them modules.

We still have only the very first part of our application in place: we can receive HTTP requests. But we need to do something with them; depending on which URL the browser requested, our server should react differently.

For a very simple application, you could do this directly within the callback function onRequest(). But as I said, we should add some abstraction to keep our example interesting.

Making the code that handles different HTTP requests a different part of our code is called "routing", so let's create a module called router.

How to Route Requests

We need to be able to feed the requested URL and any GET and POST parameters into our router, and based on these the router must decide which code to execute (that "code to execute" is the third part of our application: a collection of handlers that do the actual work when a request is received).

So we need to look into the HTTP requests and extract the requested URL as well as the GET/POST parameters from them. Whether this functionality belongs in the router or in the server (or should even be a module of its own) is certainly debatable, but for now let's simply make it part of our HTTP server.

All the data we need is contained in the request object, which is passed as the first parameter to our onRequest() callback. But to interpret this data, we need some additional Node.js modules, namely url and querystring.

                               url.parse(string).query
                                           |
           url.parse(string).pathname      |
                       |                   |
                       |                   |
                     ------ -------------------
http://localhost:8888/start?foo=bar&hello=world
                                ---       -----
                                 |          |
                                 |          |
              querystring(string)["foo"]    |
                                            |
                 querystring(string)["hello"]
Of course, we can also use the querystring module to parse the body of a POST request for parameters, as will be demonstrated later.

Now let's add some logic to our onRequest() function that finds out which URL path the browser requested:

var http = require("http");
var url = require("url");

function start() {
  function onRequest(request, response) {
    var pathname = url.parse(request.url).pathname;
    console.log("Request for " + pathname + " received.");
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write("Hello World");
    response.end();
  }

  http.createServer(onRequest).listen(8888);
  console.log("Server has started.");
}

exports.start = start;

Fine. Our application can now distinguish requests by their URL path, which allows us to map requests to our request handlers, with the URL path as the criterion, using the (yet to be written) router.

For the application we want to build, this means that requests for /start and /upload can be handled by different parts of our code. We'll see later how everything fits together.

Now we can write the router. Create a file called router.js with the following content:

function route(pathname) {
  console.log("About to route a request for " + pathname);
}

exports.route = route;

As you can see, this code does basically nothing, which is fine for now. Before adding more logic, let's look at how to wire the router and the server together.

Our server should be aware of the router's existence and make use of it. We could hard-wire this dependency into the server, but experience with other programming languages tells us that this would be painful, so we'll add the router module loosely by using dependency injection (you can read Martin Fowler's article on dependency injection for background).

First, let's extend the server's start() function to allow the routing function to be passed in as a parameter:

var http = require("http");
var url = require("url");

function start(route) {
  function onRequest(request, response) {
    var pathname = url.parse(request.url).pathname;
    console.log("Request for " + pathname + " received.");

    route(pathname);

    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write("Hello World");
    response.end();
  }

  http.createServer(onRequest).listen(8888);
  console.log("Server has started.");
}

exports.start = start;

And let's extend index.js accordingly, injecting the routing function into the server:


var server = require("./server");
var router = require("./router");

server.start(router.route);

Again, the function we pass along still does nothing useful.

If you start the application now (node index.js, always remember this command line) and then request a URL, you will see the application output the corresponding information, which shows that our HTTP server is already making use of the router module and passes the requested path on to it:

bash$ node index.js
Request for /foo received.
About to route a request for /foo

(The output above omits the rather annoying /favicon.ico-related lines.)

Behavior-Driven Execution

Allow me to digress once more and talk about functional programming here.

Passing functions around is not just a technical consideration. With respect to software design, it's almost philosophical. Think about it: in our index file, we could have passed the router object in, and the server could then have called this object's route function.

That way, we would pass a thing, and the server would use that thing to get something done. "Hey, thing called router, could you route this for me?"

But the server doesn't really need such a thing. It only needs to get something done, and to get something done, you don't need things at all, you need actions. In other words, you don't need a noun, you need a verb.

After understanding the core, fundamental idea behind this concept, functional programming suddenly made sense to me.

It clicked for me when reading Steve Yegge's masterpiece Execution in the Kingdom of Nouns. You really should read it, too. It's one of the pieces of writing about software that gave me genuine reading pleasure.

Routing to Real Request Handlers

Back to business. Our HTTP server and our request-routing module are now talking to each other as expected, like a pair of close siblings.

Of course, that's far from enough. "Routing", as the name implies, means that we want to handle requests to different URLs differently. For example, the "business logic" for handling /start should be different from that for handling /upload.

In the current implementation, the routing process "ends" in the router module, and the router is not the module that really "takes action" on a request; otherwise our application would scale badly once it becomes more complex.

For now, let's call the functions that the router routes to "request handlers". And let's not rush into developing the router module just yet, because there isn't much point in polishing the router while the request handlers are not ready.

Our application needs new parts, so adding new modules should no longer come as a surprise. Let's create a module called requestHandlers, add a placeholder function for every request handler, and export these functions as methods of the module:

function start() {
  console.log("Request handler 'start' was called.");
}

function upload() {
  console.log("Request handler 'upload' was called.");
}

exports.start = start;
exports.upload = upload;

This allows us to wire the request handlers into the router, giving our router something to route to.

At this point we have to make a decision: do we hard-code the requestHandlers module into the router, or do we add a bit more dependency injection? Although dependency injection, like any other pattern, shouldn't be used just for the sake of using it, in this case it makes sense to keep the router loosely coupled to the request handlers and thus keep the router reusable.

This means we need to pass the request handlers from the server into the router. But that feels even further off: we would have to pass this bunch of request handlers from our main file into the server, and from the server into the router.

So how do we pass them around? Right now we have only two handlers, but in a real application their number will grow, and we certainly don't want to fiddle around with the request-to-handler mapping in the router every time a new URL or request handler is added. And a big pile of "if request == x then call handler y" in the router would make the whole thing ugly.

A growing collection of things, each of which is mapped to a string (the requested URL)? Sounds like an associative array would be a perfect fit.

The result is a bit disappointing at first: JavaScript doesn't provide associative arrays. Or does it? As it turns out, in JavaScript it's objects that provide this kind of functionality.

There's a nice introduction to this at http://msdn.microsoft.com/en-us/magazine/cc163419.aspx; let me quote a passage here:

In C++ or C#, when we're talking about objects, we're referring to instances of classes or structs. Objects have different properties and methods, depending on which templates (that is, classes) they are instantiated from. That's not the case with JavaScript objects. In JavaScript, an object is simply a collection of key/value pairs: you can think of a JavaScript object as a dictionary with string keys.

But if a JavaScript object is just a collection of key/value pairs, how can it have methods? Well, the values here can be strings, numbers, or... functions!
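Here is a minimal sketch of that idea (the example keys and values are mine, chosen purely for illustration): a JavaScript object used as a dictionary whose values can be plain data or functions.

```javascript
// An object as a dictionary: string keys, arbitrary values.
var dictionary = {
  greeting: "Hello",
  shout: function (word) { return word.toUpperCase(); }
};

// Values are looked up by key; function values can be called.
console.log(dictionary["greeting"]);     // "Hello"
console.log(dictionary["shout"]("hi"));  // "HI"
console.log(typeof dictionary["shout"]); // "function"
```

Notice that dictionary["shout"] and dictionary.shout are two notations for the same lookup.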

OK, now finally back to the code. We've decided to pass the collection of request handlers around in an object, and we need to inject this object into the route() function in a loosely coupled way.

Let's first introduce this object in our main file index.js:

The code is as follows:

var server = require("./server");
var router = require("./router");
var requestHandlers = require("./requestHandlers");

var handle = {};
handle["/"] = requestHandlers.start;
handle["/start"] = requestHandlers.start;
handle["/upload"] = requestHandlers.upload;

server.start(router.route, handle);




Although handle is not just a "thing" (it is a collection of request handlers), I still suggest naming it with a verb, because that lets us use a fluent expression in the router, as we'll see later.

As you can see, mapping different URLs to the same request handler is easy: we just add a key/value pair with the key "/" pointing at requestHandlers.start, so that requests for / and for /start are both handled by the start handler.

After defining the object, we pass it to the server as an additional parameter. Modify server.js as follows:

The code is as follows:

var http = require("http");
var url = require("url");

function start(route, handle) {
  function onRequest(request, response) {
    var pathname = url.parse(request.url).pathname;
    console.log("Request for " + pathname + " received.");

    route(handle, pathname);

    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write("Hello World");
    response.end();
  }

  http.createServer(onRequest).listen(8888);
  console.log("Server has started.");
}

exports.start = start;




This adds the handle parameter to the start() function and passes the handle object on as the first argument to the route() callback.

We then modify the route() function in router.js accordingly:

The code is as follows:

function route(handle, pathname) {
  console.log("About to route a request for " + pathname);
  if (typeof handle[pathname] === "function") {
    handle[pathname]();
  } else {
    console.log("No request handler found for " + pathname);
  }
}

exports.route = route;




With this code we first check whether a request handler for the given path exists, and if so, call the corresponding function directly. We can retrieve the handler function from the passed-in object exactly as we would retrieve an element from an associative array, which gives us the concise, fluent handle[pathname]() expression mentioned earlier: "Hey, please handle this path for me."

With that, the server, the router, and the request handlers are wired together. Now we can start the application and open http://localhost:8888/start in the browser; the following log output shows that the correct request handler was called:

The output is as follows:

Server has started.
Request for /start received.
About to route a request for /start
Request handler 'start' was called.



And opening http://localhost:8888/ in the browser shows that this request is also handled by the start handler:


The output is as follows:

Request for / received.
About to route a request for /
Request handler 'start' was called.

Let the request handler respond

Very good. But it would be even better if the request handlers could return some meaningful information to the browser instead of always "Hello World".

Keep in mind that the "Hello World" message the browser receives and displays still comes from the onRequest function in our server.js file.

After all, "handling a request" basically means "responding to a request", so we need to enable the request handlers to "talk" to the browser the way the onRequest function does.

A bad implementation

For developers with a PHP or Ruby background like us, the most straightforward implementation is actually not very reliable: it seems to work, but doesn't really.

By "straightforward implementation" I mean having the request handlers return() the information they want to show the user, and having the onRequest function relay it.

Let's implement it this way first, then look at why it's not a good approach.

We start by having the request handlers return the information to be displayed in the browser. For this we modify requestHandlers.js into the following form:

The code is as follows:

function start() {
  console.log("Request handler 'start' was called.");
  return "Hello Start";
}

function upload() {
  console.log("Request handler 'upload' was called.");
  return "Hello Upload";
}

exports.start = start;
exports.upload = upload;




Good. Similarly, the router needs to return to the server whatever the request handlers return to it. We therefore modify router.js into the following form:


The code is as follows:

function route(handle, pathname) {
  console.log("About to route a request for " + pathname);
  if (typeof handle[pathname] === "function") {
    return handle[pathname]();
  } else {
    console.log("No request handler found for " + pathname);
    return "404 Not Found";
  }
}

exports.route = route;




As the code shows, we also return an error message when a request cannot be routed.

Finally, we refactor server.js so that it responds to the browser with the content the router returns, as follows:

The code is as follows:

var http = require("http");
var url = require("url");

function start(route, handle) {
  function onRequest(request, response) {
    var pathname = url.parse(request.url).pathname;
    console.log("Request for " + pathname + " received.");

    response.writeHead(200, {"Content-Type": "text/plain"});
    var content = route(handle, pathname);
    response.write(content);
    response.end();
  }

  http.createServer(onRequest).listen(8888);
  console.log("Server has started.");
}

exports.start = start;




If we run the refactored application, everything works fine: requesting http://localhost:8888/start makes the browser output "Hello Start", requesting http://localhost:8888/upload outputs "Hello Upload", and requesting http://localhost:8888/foo outputs "404 Not Found".

OK, so where's the problem? In short: the application "breaks" as soon as a request handler needs to perform a non-blocking operation in the future.

Don't follow? No problem -- let's go through it in detail.

Blocking and non-blocking

As mentioned, the problem appears when a request handler contains a non-blocking operation. But before we get to that, let's look at what a blocking operation is.

Rather than explaining "blocking" and "non-blocking" in the abstract, let's see what happens when we add a blocking operation to a request handler.

We'll modify the start request handler so that it waits 10 seconds before returning "Hello Start". Since there is no sleep() in JavaScript, we'll use a little hack to simulate it.

Modify requestHandlers.js into the following form:

The code is as follows:

function start() {
  console.log("Request handler 'start' was called.");

  function sleep(milliseconds) {
    var startTime = new Date().getTime();
    while (new Date().getTime() < startTime + milliseconds);
  }

  sleep(10000);
  return "Hello Start";
}

function upload() {
  console.log("Request handler 'upload' was called.");
  return "Hello Upload";
}

exports.start = start;
exports.upload = upload;




In the code above, when start() is called, Node.js waits 10 seconds before returning "Hello Start". When upload() is called, it returns immediately, as before.

(Of course, this only simulates sleeping for 10 seconds; in real scenarios there are plenty of such blocking operations, for example long-running computations.)

Let's see what our change does.

As usual, we restart the server first. To see the effect we'll perform a slightly involved procedure (follow along): first, open two browser windows or tabs. In the address bar of the first window, enter http://localhost:8888/start, but do not press Enter yet!

In the address bar of the second window, enter http://localhost:8888/upload -- again, do not press Enter yet!

Now do the following: press Enter in the first window ("/start"), then quickly switch to the second window ("/upload") and press Enter there.

Notice what happens: the /start URL takes 10 seconds to load, as we expected. But the /upload URL also takes 10 seconds -- even though there is no sleep() in its request handler!

Why is that? The reason is that start() contains a blocking operation. Figuratively speaking, "it blocks all other processing".

That is obviously a problem, because Node has always advertised itself with: "In Node, everything runs in parallel except your code."

What that means is that Node.js can handle tasks concurrently without spawning new threads -- Node.js is single-threaded. It achieves concurrency through an event loop, and we should take full advantage of that: avoid blocking operations wherever possible and use non-blocking operations instead.

To use non-blocking operations, however, we need callbacks: we pass a function as a parameter to another function that takes time to do its work (say, sleeping for 10 seconds, querying a database, or doing an expensive calculation).

For Node.js this works like: "Hey, probablyExpensiveFunction() (translator's note: a function that takes time to run), please carry on with your work, but I (the Node.js thread) won't wait for you; I'll continue executing the code after you. Here, take this callbackFunction(), and please call it when you're done. Thanks!"

(If you want to learn more about the event loop, read Mixu's blog post "Understanding the node.js event loop".)
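A minimal sketch of this idea, with setTimeout standing in for any slow operation (probablyExpensiveFunction and the 10 ms delay are made up for illustration):

```javascript
var order = [];

// setTimeout stands in for an expensive operation (disk I/O, a database
// query, a shell command, ...). Its callback runs later, via the event loop.
function probablyExpensiveFunction(callback) {
  setTimeout(function () {
    order.push("expensive work finished");
    callback();
  }, 10);
}

probablyExpensiveFunction(function () {
  order.push("callback called");
  console.log(order.join(" -> "));
});

// This line runs immediately -- the single Node.js thread was not blocked.
order.push("code after the call");
// final log: "code after the call -> expensive work finished -> callback called"
```

The code after the call runs first; the callback only fires once the "expensive" work is done.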

Next, let's look at a wrong way of using non-blocking operations.

As last time, we'll expose the problem by modifying our application.

This time we'll take the axe to the start request handler again. Modify it into the following form:

The code is as follows:

var exec = require("child_process").exec;

function start() {
  console.log("Request handler 'start' was called.");
  var content = "empty";

  exec("ls -lah", function (error, stdout, stderr) {
    content = stdout;
  });

  return content;
}

function upload() {
  console.log("Request handler 'upload' was called.");
  return "Hello Upload";
}

exports.start = start;
exports.upload = upload;




In the code above, we introduce a new Node.js module, child_process. It gives us access to a simple and practical non-blocking operation: exec().

What does exec() do? It executes a shell command from within Node.js. In the example above we use it to list all files in the current directory ("ls -lah"), and output that file information to the browser when the /start URL is requested.

The code is intuitive: create a new variable content (initialized to "empty"), execute "ls -lah", assign the result to content, and finally return content.

As always, we start the server and visit http://localhost:8888/start.

It loads a beautiful web page whose content is "empty". What's going on?

At this point you may have roughly guessed it: exec() does its magic in a non-blocking way. That's actually a good thing -- it lets us run very expensive shell operations without forcing our application to a standstill while it waits for them.

(If you want to verify this, replace "ls -lah" with a more expensive operation such as "find /".)

However, as far as the browser output goes, we're not exactly happy with our non-blocking operation, are we?

OK then, let's fix it. And along the way, let's understand why the current approach doesn't work.

The problem is that, in order to work non-blockingly, exec() uses a callback function.

In our example, that callback is the anonymous function passed to exec() as its second parameter:

The code is as follows:

function (error, stdout, stderr) {
  content = stdout;
}



And here lies the root of the problem: our code runs synchronously, which means that immediately after calling exec(), Node.js executes return content; -- at that point content is still "empty", because the callback passed to exec() has not run yet: exec() works asynchronously.

Our "ls -lah" operation is actually very fast (unless the current directory contains millions of files). That's why the callback also runs very quickly -- but it is still asynchronous.

To make this more obvious, imagine a more expensive command: "find /", which takes about a minute on my machine. Yet even if I change "ls -lah" to "find /" in the request handler, opening the /start URL still gets an HTTP response instantly -- clearly, while exec() works in the background, Node.js itself carries on executing the code that follows. And we can assume the callback we passed to exec() is invoked only once the "find /" command has finished running.

So how do we get the file listing of the current directory shown to the user?

Well, now that we understand this bad implementation, let's look at how to make request handlers respond to browser requests the right way.

Responding to requests with non-blocking operations

I just used the phrase "the right way". As usual, the "right way" is generally not the simple one.

With Node.js, however, there is a solution: function passing. Let's look concretely at how to implement it.

So far, our application has been able to transport the content the request handlers want displayed to the user up to the HTTP server by passing return values through the layers (request handler -> router -> server).

Now we take a new approach: instead of carrying the content up to the server, we carry the server down to the content. Practically speaking, we pass the response object (obtained in the server's onRequest() callback) through the router to the request handlers, which can then use this object's functions to respond to the request.

That's the principle; let's implement it step by step.

Start with server.js:

The code is as follows:

var http = require("http");
var url = require("url");

function start(route, handle) {
  function onRequest(request, response) {
    var pathname = url.parse(request.url).pathname;
    console.log("Request for " + pathname + " received.");

    route(handle, pathname, response);
  }

  http.createServer(onRequest).listen(8888);
  console.log("Server has started.");
}

exports.start = start;




This time, instead of taking a return value from the route() function, we pass the response object to it as a third parameter. Furthermore, we removed all response-related calls from the onRequest() handler, because we expect that work to be done by route() now.

Next, router.js:

The code is as follows:

function route(handle, pathname, response) {
  console.log("About to route a request for " + pathname);
  if (typeof handle[pathname] === "function") {
    handle[pathname](response);
  } else {
    console.log("No request handler found for " + pathname);
    response.writeHead(404, {"Content-Type": "text/plain"});
    response.write("404 Not Found");
    response.end();
  }
}

exports.route = route;




The same pattern: instead of taking a return value from the request handler, we pass the response object straight on.

If no matching request handler exists, we respond with a 404 error directly.

Finally, we change requestHandlers.js to the following form:

The code is as follows:

var exec = require("child_process").exec;

function start(response) {
  console.log("Request handler 'start' was called.");

  exec("ls -lah", function (error, stdout, stderr) {
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write(stdout);
    response.end();
  });
}

function upload(response) {
  console.log("Request handler 'upload' was called.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello Upload");
  response.end();
}

exports.start = start;
exports.upload = upload;




Our handler functions now accept a response parameter, so they can respond to the request directly.

The start handler responds from inside the anonymous exec() callback, and the upload handler still simply replies "Hello Upload" -- only now using the response object.

Then we start the application again (node index.js), and everything should work fine.

If you want to prove that an expensive operation in the /start handler does not block an immediate response to /upload requests, modify requestHandlers.js to the following form:

The code is as follows:

var exec = require("child_process").exec;

function start(response) {
  console.log("Request handler 'start' was called.");

  exec("find /",
    { timeout: 10000, maxBuffer: 20000 * 1024 },
    function (error, stdout, stderr) {
      response.writeHead(200, {"Content-Type": "text/plain"});
      response.write(stdout);
      response.end();
    });
}

function upload(response) {
  console.log("Request handler 'upload' was called.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello Upload");
  response.end();
}

exports.start = start;
exports.upload = upload;




This way, requesting http://localhost:8888/start takes 10 seconds to load, while http://localhost:8888/upload responds immediately -- even while the start response is still being worked on.

A more useful scenario

So far we've done very well, but our application has no practical use yet.

The server, the router, and the request handlers are in place, so let's add interactivity along our earlier use case: the user picks a file, uploads it, and then sees the uploaded file in the browser. To keep things simple, we'll assume only images are uploaded, and that our application displays the image in the browser.

OK, we'll implement this step by step. After the extensive introduction to JavaScript's underlying principles earlier, we'll pick up the pace a bit this time.

There are two steps to this feature: first, we look at how to handle POST requests (but not file uploads); then we use an external Node.js module to handle the file upload itself. There are two reasons for this approach.

First, although handling basic POST requests in Node.js is relatively simple, there is still a lot to learn along the way.
Second, handling file uploads (multipart POST requests) in plain Node.js is rather involved and beyond the scope of this book -- but how to use an external module for it is very much within scope.

Handling POST requests

Consider a simple example: we show the user a text area (textarea) to type into, which is then submitted to the server in a POST request. Finally, the server receives the request and a handler displays the submitted text in the browser.

The /start request handler will generate the form with the text area, so we modify requestHandlers.js to the following form:

function start(response) {
  console.log("Request handler 'start' was called.");

  var body = '<html>'+
    '<head>'+
    '<meta http-equiv="Content-Type" content="text/html; '+
    'charset=UTF-8" />'+
    '</head>'+
    '<body>'+
    '<form action="/upload" method="post">'+
    '<textarea name="text" rows="20" cols="60"></textarea>'+
    '<input type="submit" value="Submit text" />'+
    '</form>'+
    '</body>'+
    '</html>';

  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(body);
  response.end();
}

function upload(response) {
  console.log("Request handler 'upload' was called.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello Upload");
  response.end();
}

exports.start = start;
exports.upload = upload;
OK, now our application is so perfected that it could win a Webby Award, haha. (Translator's note: the Webby Awards are prizes awarded by the International Academy of Digital Arts and Sciences to the world's best websites.) Open http://localhost:8888/start in the browser and you'll see the simple form -- remember to restart the server first!

You might object that placing visual elements directly in the request handler like this is ugly. You're right, but I don't want to introduce patterns like MVC in this book, because they have little to do with JavaScript or the Node.js environment.

For the remaining space, let's explore a more interesting question: handling the POST request that reaches the /upload request handler when the user submits the form.

Now that we're novice experts, we naturally think of asynchronous callbacks for processing the POST data in a non-blocking way.

Handling this non-blockingly is advisable, because POST requests can be "heavy" -- users may submit large amounts of content. Processing a large request body in a blocking way would inevitably block all other requests.

To keep the whole process non-blocking, Node.js delivers POST data in small chunks, passing them to callback functions through specific events: the data event (a new chunk of data has arrived) and the end event (all chunks have been received).

We need to tell Node.js which functions to call back when these events occur. We do this by registering listeners on the request object -- the object passed to our onRequest callback each time an HTTP request is received, like this:

The code is as follows:

request.addListener("data", function (chunk) {
  // called when a new chunk of data was received
});

request.addListener("end", function () {
  // called when all chunks of data have been received
});




The question is where to put this logic. So far we only get hold of the request object in the server -- we haven't been passing it on to the router and the request handlers, the way we did with the response object.

In my opinion, collecting all the data from the request and handing that data to the application layer is something an HTTP server should do. Therefore, I suggest we process the POST data directly in the server and pass the final data on to the router and the request handlers.

So the idea is: put the data and end event callbacks in the server, collect all POST data chunks in the data callback, and when the end event fires, call the router and hand the collected data to it; the router in turn passes the data to the request handler.

What are we waiting for -- let's implement it. Start with server.js:

The code is as follows:

var http = require("http");
var url = require("url");

function start(route, handle) {
  function onRequest(request, response) {
    var postData = "";
    var pathname = url.parse(request.url).pathname;
    console.log("Request for " + pathname + " received.");

    request.setEncoding("utf8");

    request.addListener("data", function (postDataChunk) {
      postData += postDataChunk;
      console.log("Received POST data chunk '" +
        postDataChunk + "'.");
    });

    request.addListener("end", function () {
      route(handle, pathname, response, postData);
    });
  }

  http.createServer(onRequest).listen(8888);
  console.log("Server has started.");
}

exports.start = start;




The code above does three things: first, we set the encoding of the received data to UTF-8; then we register a listener for the data event that appends each newly received chunk to the postData variable; finally, we move the call to the router into the end event handler so that it is triggered only once all data has been received -- and only once. We also pass the POST data on to the router, because the request handlers will need it.

The code logs each arriving chunk, which is bad for production (the amounts of data could be huge, remember?) but useful during development, helping us see what's going on.

I suggest you play with it: submit a small piece of text, then a large one; with large inputs you'll see the data event fire multiple times.

Let's make it even cooler. Next we'll show on the /upload page what the user entered. To do that, we need to pass postData on to the request handlers; modify router.js to the following form:

The code is as follows:

function route(handle, pathname, response, postData) {
  console.log("About to route a request for " + pathname);
  if (typeof handle[pathname] === "function") {
    handle[pathname](response, postData);
  } else {
    console.log("No request handler found for " + pathname);
    response.writeHead(404, {"Content-Type": "text/plain"});
    response.write("404 Not Found");
    response.end();
  }
}

exports.route = route;




Then, in requestHandlers.js, we include the data in the response to the upload request:


The code is as follows:

function start(response, postData) {
  console.log("Request handler 'start' was called.");

  var body = '<html>'+
    '<head>'+
    '<meta http-equiv="Content-Type" content="text/html; '+
    'charset=UTF-8" />'+
    '</head>'+
    '<body>'+
    '<form action="/upload" method="post">'+
    '<textarea name="text" rows="20" cols="60"></textarea>'+
    '<input type="submit" value="Submit text" />'+
    '</form>'+
    '</body>'+
    '</html>';

  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(body);
  response.end();
}

function upload(response, postData) {
  console.log("Request handler 'upload' was called.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("You've sent: " + postData);
  response.end();
}

exports.start = start;
exports.upload = upload;




OK, we can now receive POST data and process it in the request handlers.

One last thing: we currently pass the entire message body of the request to the router and the request handlers. We should only pass along the part of the POST data we're interested in -- in our case, that is really only the text field.

We can use the querystring module described earlier for this:

The code is as follows:

var querystring = require("querystring");

function start(response, postData) {
  console.log("Request handler 'start' was called.");

  var body = '<html>'+
    '<head>'+
    '<meta http-equiv="Content-Type" content="text/html; '+
    'charset=UTF-8" />'+
    '</head>'+
    '<body>'+
    '<form action="/upload" method="post">'+
    '<textarea name="text" rows="20" cols="60"></textarea>'+
    '<input type="submit" value="Submit text" />'+
    '</form>'+
    '</body>'+
    '</html>';

  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(body);
  response.end();
}

function upload(response, postData) {
  console.log("Request handler 'upload' was called.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("You've sent the text: " +
    querystring.parse(postData).text);
  response.end();
}

exports.start = start;
exports.upload = upload;




Well, that's all there is to handling POST data.

Handling file uploads

Finally, let's implement our last use case: allowing users to upload an image and displaying that image in the browser.

Back in the '90s, this use case alone would have been enough of a business model for an IPO; today it can teach us two things: how to install an external Node.js module, and how to use it in our application.

The external module we'll use here is node-formidable by Felix Geisendörfer. It nicely abstracts away the parsing of uploaded file data. At the end of the day, handling file uploads "is just" handling POST data -- but the devil is in the details, and using a ready-made solution is more appropriate here.

To use the module we first need to install it. Node.js ships with its own package manager, called NPM, which makes installing external Node.js modules very convenient. The module can be installed with a single command:

The command is as follows:

npm install formidable



If the terminal outputs something like the following:


The output is as follows:

npm info build Success: formidable@1.0.9
npm ok



then the module was installed successfully.

Now we can use the formidable module -- using an external module works just like using an internal one: we introduce it with a require statement:

The code is as follows:

var formidable = require("formidable");



What this module does is make forms submitted via HTTP POST requests parseable in Node.js. All we have to do is create a new IncomingForm, an abstraction of the submitted form, which we can then use to parse the request object and obtain the data fields submitted in the form.

node-formidable's official example shows how the pieces play together:

The code is as follows:

var formidable = require("formidable"),
    http = require("http"),
    sys = require("sys");

http.createServer(function (req, res) {
  if (req.url == "/upload" && req.method.toLowerCase() == "post") {
    // parse a file upload
    var form = new formidable.IncomingForm();
    form.parse(req, function (err, fields, files) {
      res.writeHead(200, {"content-type": "text/plain"});
      res.write("received upload:\n\n");
      res.end(sys.inspect({fields: fields, files: files}));
    });
    return;
  }

  // show a file upload form
  res.writeHead(200, {"content-type": "text/html"});
  res.end(
    '<form action="/upload" enctype="multipart/form-data" '+
    'method="post">'+
    '<input type="text" name="title"><br>'+
    '<input type="file" name="upload" multiple="multiple"><br>'+
    '<input type="submit" value="Upload">'+
    '</form>'
  );
}).listen(8888);




If we save this code to a file and run it with node, we can submit a simple form, including a file upload. We will then see the contents of the files object that is passed to the callback defined in the form.parse call, something like this:


The output is as follows:

received upload:

{ fields: { title: 'Hello World' },
  files:
   { upload:
      { size: 1558,
        path: '/tmp/1c747974a27a6292743669e91f29350b',
        name: 'us-flag.png',
        type: 'image/png',
        lastModifiedDate: Tue, 21 Jun 2011 07:02:41 GMT,
        _writeStream: [Object],
        length: [Getter],
        filename: [Getter],
        mime: [Getter] } } }




To implement our functionality we need to work this code into our application, and we also have to consider how to display the content of an uploaded file (saved to the /tmp directory) in the browser.

Let's tackle the latter question first: for a file saved on the local disk, how do we show it in the browser?

Obviously, we need to read the file into our server; for this we use a module called fs.

Let's add a /show request handler, hard-coded for now to display the contents of the file /tmp/test.png in the browser. Of course, you first need to save an image at that location.

Modify requestHandlers.js to the following form:

The code is as follows:

var querystring = require("querystring"),
    fs = require("fs");

function start(response, postData) {
  console.log("Request handler 'start' was called.");

  var body = '<html>'+
    '<head>'+
    '<meta http-equiv="Content-Type" '+
    'content="text/html; charset=UTF-8" />'+
    '</head>'+
    '<body>'+
    '<form action="/upload" method="post">'+
    '<textarea name="text" rows="20" cols="60"></textarea>'+
    '<input type="submit" value="Submit text" />'+
    '</form>'+
    '</body>'+
    '</html>';

  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(body);
  response.end();
}

function upload(response, postData) {
  console.log("Request handler 'upload' was called.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("You've sent the text: " +
    querystring.parse(postData).text);
  response.end();
}

function show(response, postData) {
  console.log("Request handler 'show' was called.");
  fs.readFile("/tmp/test.png", "binary", function (error, file) {
    if (error) {
      response.writeHead(500, {"Content-Type": "text/plain"});
      response.write(error + "\n");
      response.end();
    } else {
      response.writeHead(200, {"Content-Type": "image/png"});
      response.write(file, "binary");
      response.end();
    }
  });
}

exports.start = start;
exports.upload = upload;
exports.show = show;




We also need to add this new request handler to the routing table in index.js:



var server = require("./server");
var router = require("./router");
var requestHandlers = require("./requestHandlers");

var handle = {};
handle["/"] = requestHandlers.start;
handle["/start"] = requestHandlers.start;
handle["/upload"] = requestHandlers.upload;
handle["/show"] = requestHandlers.show;

server.start(router.route, handle);




After restarting the server, you can see the image saved at /tmp/test.png by opening http://localhost:8888/show.

Well, finally, let's implement what we actually want:

Add a file upload element to the form served at /start
Integrate node-formidable into our upload request handler, saving the uploaded image to /tmp/test.png
Embed the uploaded image into the HTML output of the /upload URL

The first step is simple: we add a multipart/form-data encoding type to the HTML form, remove the previous textarea, add a file upload input, and change the caption of the submit button to "Upload file". The modified requestHandlers.js looks as follows:




var querystring = require("querystring"),
    fs = require("fs");

function start(response, postData) {
  console.log("Request handler 'start' was called.");

  var body = '<html>' +
    '<head>' +
    '<meta http-equiv="Content-Type" content="text/html; ' +
    'charset=UTF-8" />' +
    '</head>' +
    '<body>' +
    '<form action="/upload" enctype="multipart/form-data" ' +
    'method="post">' +
    '<input type="file" name="upload">' +
    '<input type="submit" value="Upload file" />' +
    '</form>' +
    '</body>' +
    '</html>';

  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(body);
  response.end();
}

function upload(response, postData) {
  console.log("Request handler 'upload' was called.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("You've sent the text: " +
    querystring.parse(postData).text);
  response.end();
}

function show(response, postData) {
  console.log("Request handler 'show' was called.");
  fs.readFile("/tmp/test.png", "binary", function(error, file) {
    if (error) {
      response.writeHead(500, {"Content-Type": "text/plain"});
      response.write(error + "\n");
      response.end();
    } else {
      response.writeHead(200, {"Content-Type": "image/png"});
      response.write(file, "binary");
      response.end();
    }
  });
}

exports.start = start;
exports.upload = upload;
exports.show = show;




Very good. The next step is a bit more complex. Here's the problem: we need to process the uploaded file in the upload handler, which means we need to pass the request object to node-formidable's form.parse function.

However, all we have is the response object and the postData array. It looks like we have no choice but to pass the request object all the way from the server to the router, and from there to the request handler. There may be more elegant solutions, but for now this approach satisfies our needs.

While we are at it, we can remove the whole postData handling from the server and the request handlers: on the one hand, we no longer need it for processing the file, and on the other hand, it may even cause a problem: we would already have "consumed" the data in the request object, which means there would be nothing left for form.parse to read when it tries to get at the data (because Node.js does not buffer it).

We start with server.js: we remove the postData handling and the request.setEncoding line (node-formidable handles this itself), and instead pass the request object on to the router:




var http = require("http"),
    url = require("url");

function start(route, handle) {
  function onRequest(request, response) {
    var pathname = url.parse(request.url).pathname;
    console.log("Request for " + pathname + " received.");
    route(handle, pathname, response, request);
  }

  http.createServer(onRequest).listen(8888);
  console.log("Server has started.");
}

exports.start = start;
Next comes router.js: we no longer pass postData along; instead, we pass the request object:

function route(handle, pathname, response, request) {
  console.log("About to route a request for " + pathname);
  if (typeof handle[pathname] === 'function') {
    handle[pathname](response, request);
  } else {
    console.log("No request handler found for " + pathname);
    response.writeHead(404, {"Content-Type": "text/html"});
    response.write("404 Not found");
    response.end();
  }
}

exports.route = route;




The request object can now be used in our upload request handler. node-formidable will save the uploaded file to the local /tmp directory, and what we need to do is make sure it ends up saved as /tmp/test.png. Yes, we keep things simple and assume that only PNG images will be uploaded.

We accomplish the renaming with fs.renameSync(path1, path2). Note that, as its name suggests, this method works synchronously, which means that if the rename operation takes a long time, it will block the server. We won't worry about that for now.

Next, we put the handling of the file upload and the renaming together, as shown in the following requestHandlers.js:




var querystring = require("querystring"),
    fs = require("fs"),
    formidable = require("formidable");

function start(response) {
  console.log("Request handler 'start' was called.");

  var body = '<html>' +
    '<head>' +
    '<meta http-equiv="Content-Type" content="text/html; ' +
    'charset=UTF-8" />' +
    '</head>' +
    '<body>' +
    '<form action="/upload" enctype="multipart/form-data" ' +
    'method="post">' +
    '<input type="file" name="upload" multiple="multiple">' +
    '<input type="submit" value="Upload file" />' +
    '</form>' +
    '</body>' +
    '</html>';

  response.writeHead(200, {"Content-Type": "text/html"});
  response.write(body);
  response.end();
}

function upload(response, request) {
  console.log("Request handler 'upload' was called.");

  var form = new formidable.IncomingForm();
  console.log("about to parse");
  form.parse(request, function(error, fields, files) {
    console.log("parsing done");
    fs.renameSync(files.upload.path, "/tmp/test.png");
    response.writeHead(200, {"Content-Type": "text/html"});
    response.write("received image:<br/>");
    response.write("<img src='/show' />");
    response.end();
  });
}

function show(response) {
  console.log("Request handler 'show' was called.");
  fs.readFile("/tmp/test.png", "binary", function(error, file) {
    if (error) {
      response.writeHead(500, {"Content-Type": "text/plain"});
      response.write(error + "\n");
      response.end();
    } else {
      response.writeHead(200, {"Content-Type": "image/png"});
      response.write(file, "binary");
      response.end();
    }
  });
}

exports.start = start;
exports.upload = upload;
exports.show = show;




OK, restart the server, and we can use the complete functionality of our application. Select a local image, upload it to the server, and the browser will display the uploaded image.

Summary and Prospect

Congratulations, our mission is complete! We have developed a small but full-fledged Node.js web application. Along the way, we covered many technical topics: server-side JavaScript, functional programming, blocking and non-blocking operations, callbacks, events, internal and external modules, and more.

Of course, there is a lot this book does not cover: how to talk to a database, how to write unit tests, how to create external modules for Node.js, or even simple things such as how to handle GET requests.

But this book is, after all, just a tutorial for beginners; it cannot possibly cover everything.

Fortunately, the Node.js community is extremely active (to use an imperfect metaphor: put a bunch of hyperactive kids in one room, how could it not be?), which means there are plenty of resources about Node.js, and plenty of places to get your questions answered. The Node.js community wiki and NodeCloud are among the best resources.
