1.8 Extending the Web application
1.8.1 Performance
Developers focus primarily on two aspects of an application: response time and scalability.
Response time is one of the metrics used to measure an application's efficiency. If the time from request to response exceeds a reasonable range, we can assume the application performs poorly. In general, a page should be returned within about two seconds.
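As a rough illustration of measuring response time from the client side, the sketch below times a single HTTP request with Java's built-in HttpClient; the URL is only a placeholder, and the two-second threshold is the rule of thumb mentioned above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/index"))  // placeholder URL
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Status: " + response.statusCode() + ", response time: " + elapsedMs + " ms");
        if (elapsedMs > 2000) {
            // Two-second guideline from the text used as a simple threshold.
            System.out.println("Response time exceeds the 2-second guideline.");
        }
    }
}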
Scalability means that by adding more hardware, the application can handle a linearly growing number of requests. There are two ways to add hardware:
- Scale up (vertical scaling): add more CPUs, or faster CPUs, to a single box (that is, a single machine).
- Scale out (horizontal scaling): add more boxes (machines).
If the application can handle more requests as resources are added, without degrading response time, we say that it scales well. However, response time and scalability do not always go hand in hand: an application may achieve a satisfactory response time yet be unable to handle more than a certain number of requests, or it may handle a large number of requests while responding poorly to each one. So, with limited resources, we have to strike a balance between response time and the number of concurrent requests served.
Capacity planning. Capacity planning is the practice of determining how much hardware is required to handle the expected load in a production environment. Typically, it works out how many machines the application needs to achieve the desired results, and verifies this with load testing at the target level of concurrency.
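As a minimal sketch of the arithmetic behind capacity planning (the throughput and utilization numbers below are assumptions for illustration, not figures from the text), the following estimates how many machines a target load requires:

public class CapacityPlan {
    // Estimate how many machines are needed for a target request rate,
    // keeping headroom so no machine runs at full utilization.
    static int machinesNeeded(double targetRequestsPerSec,
                              double perMachineRequestsPerSec,
                              double targetUtilization) {
        double effective = perMachineRequestsPerSec * targetUtilization;
        return (int) Math.ceil(targetRequestsPerSec / effective);
    }

    public static void main(String[] args) {
        // Assumed numbers: 5,000 req/s expected at peak, each server measured
        // at 800 req/s, servers kept at 70% utilization.
        int machines = machinesNeeded(5_000, 800, 0.7);
        System.out.println("Machines required: " + machines); // prints 9
    }
}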
1.8.2 Scaling the architecture
If every layer of a multilayer architecture can be scaled, we say that the application's architecture is scalable. The following diagram shows the entire application scaling linearly at every layer.
Scaling load balancing. Load balancing scales horizontally, primarily by mapping a DNS name to multiple IP addresses and using DNS round robin to rotate the IP address returned for each lookup. Another option is to place a load balancer at the front end and have it distribute requests to the next tier of load balancers, although multi-level load balancing is rarely needed: a single machine running a software load balancer such as Nginx or HAProxy can handle on the order of 20,000 concurrent requests, whereas a web application container handles only a few thousand, which is a clear advantage. Therefore, a single load balancer can comfortably front several web application servers.
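To make the round-robin idea concrete, here is a minimal sketch of the same rotation policy that DNS round robin applies to IP addresses; the back-end addresses are placeholders, and a real balancer would also handle health checks and failover.

import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class RoundRobinBalancer {
    private final List<String> backends;
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinBalancer(List<String> backends) {
        this.backends = backends;
    }

    // Pick the next backend in rotation; thread-safe.
    public String next() {
        long n = counter.getAndIncrement();
        return backends.get((int) Math.floorMod(n, backends.size()));
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080")); // placeholder addresses
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + lb.next());
        }
    }
}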
Scaling the database. Scaling the database is one of the biggest problems we face. Adding stored procedures and functions brings extra administrative overhead and application complexity to the data persistence layer. A relational database (RDBMS) can be scaled in master-slave mode: reads and writes go to the master, while the slave nodes are read-only. Moving part of the workload to NoSQL can noticeably improve the performance of the overall system; however, in a highly concurrent environment NoSQL cannot guarantee data consistency, so it is more often used to improve the availability of the application. Popular NoSQL stores include Redis, MongoDB, and so on.
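As a sketch of the read/write splitting described above (the JDBC URLs and credentials are placeholders; a real application would usually delegate this routing to a connection pool or framework), writes can be sent to the master while reads rotate across the slaves:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ReadWriteRouter {
    private static final String MASTER_URL = "jdbc:mysql://master-db:3306/app";   // placeholder
    private static final List<String> SLAVE_URLS = List.of(
            "jdbc:mysql://slave-db-1:3306/app",                                   // placeholder
            "jdbc:mysql://slave-db-2:3306/app");                                  // placeholder

    private final AtomicInteger readCounter = new AtomicInteger();

    // Writes always go to the master.
    public Connection writeConnection() throws SQLException {
        return DriverManager.getConnection(MASTER_URL, "app", "secret");
    }

    // Reads rotate across the read-only slaves.
    public Connection readConnection() throws SQLException {
        int idx = Math.floorMod(readCounter.getAndIncrement(), SLAVE_URLS.size());
        return DriverManager.getConnection(SLAVE_URLS.get(idx), "app", "secret");
    }
}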
Splitting the database. A database can be split into vertical partitions (separating tables or features onto different nodes) or horizontal shards (spreading rows of the same table across nodes), as sketched below.
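A minimal sketch of horizontal sharding, where the shard count and the use of a user ID as the shard key are assumptions for illustration: each row is assigned to a shard by hashing its key, so the same key always lands on the same shard.

public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    // Same key always maps to the same shard.
    public int shardFor(long userId) {
        return (int) Math.floorMod(userId, shardCount);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(4); // assume 4 shards
        System.out.println("user 10007 -> shard " + router.shardFor(10007L)); // shard 3
    }
}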
(To be continued.)