How do we build web systems that cope with high concurrency, heavy traffic, high availability requirements, massive data, a diverse user base, and complex network conditions?
Stage one: a single server, and nothing more.
1. Separate the application from the data services
Putting the application, database, files, and all other resources on one server means it will simply collapse under a massive user load.
So we split them. The application server, the database server, and the file server each have different hardware requirements: the application server needs powerful CPUs to handle complex business logic; the database server needs fast disk retrieval and large memory for data caching; the file server needs larger, faster disks (put it on SSDs).
2. Bring on the server cluster
This is where load balancing comes in: a load balancer distributes each user request to one of the servers in the cluster. As users increase, just add more servers, and the load pressure on any single machine stays manageable. Think of Google's server farms: as the user base keeps growing, more application servers are added, so the application tier never becomes the bottleneck of the system.
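The idea can be sketched with a minimal round-robin load balancer. This is an illustrative toy, not a real balancer; the server names are made up.

```python
# Minimal sketch of round-robin load balancing across an application
# server cluster. Server names here are hypothetical.
from itertools import cycle

class RoundRobinBalancer:
    """Hands each incoming request to the next server in rotation."""
    def __init__(self, servers):
        self._pool = cycle(servers)

    def pick(self):
        # Every call advances the rotation, spreading requests evenly.
        return next(self._pool)

balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
assigned = [balancer.pick() for _ in range(6)]
# Requests alternate evenly: app-1, app-2, app-3, app-1, app-2, app-3
```

Real balancers (nginx, HAProxy, hardware LBs) add health checks and weighting, but the core dispatch loop is this simple.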
Stage two: hardware is expensive, so let's apply some technology instead.
1. Add a cache
Most user requests concentrate on a small fraction of the data, so we can keep that hot data in memory and serve it from there. The pressure on the database drops sharply, and the system responds much faster.
Caches come in two flavors: local caches and remote distributed caches. A local cache is certainly fast, but the application server's memory is limited, so the amount of data it can hold is limited too; think twice before relying on it alone. A remote distributed cache runs on dedicated large-memory servers deployed just for caching, so however many users show up, memory is not the constraint.
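The usual pattern for either kind of cache is cache-aside: check the cache first, fall back to the database on a miss, then populate the cache. A minimal sketch, with a plain dict standing in for the database and the key/value shapes being assumptions:

```python
# Hypothetical cache-aside sketch. "db" stands in for a real database;
# key and record formats are made up for illustration.
cache = {}
db = {"user:1": {"name": "alice"}}

def get_user(key):
    if key in cache:            # cache hit: no database round trip
        return cache[key]
    value = db.get(key)         # cache miss: query the database
    if value is not None:
        cache[key] = value      # populate the cache for next time
    return value
```

In production the `cache` dict would be a local LRU cache or a remote store such as Memcached or Redis, and entries would carry an expiry so stale data eventually falls out.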
With data access sorted out, a single server can still only handle a limited number of concurrent connections, which puts it under strain during traffic peaks. Deploy a server cluster, and that stops being a problem.
2. Separate database reads from writes
Configure a master-slave relationship between two databases: one dedicated to reads, the other to writes. Keeping the data synchronized between them, however, is still the tricky part.
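Routing queries to the right database can be sketched as follows. This is a naive illustration, not a production router; the connection objects are plain strings, and classifying statements by their first keyword is an assumption that ignores edge cases.

```python
# Hypothetical read/write splitting sketch: writes go to the master,
# reads are spread round-robin over the replicas.
class Router:
    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas
        self._i = 0

    def route(self, sql):
        # Naive rule: SELECT statements are reads, everything else writes.
        if sql.lstrip().upper().startswith("SELECT"):
            conn = self.replicas[self._i % len(self.replicas)]
            self._i += 1
            return conn
        return self.master
```

A real setup must also handle replication lag, e.g. by sending a user's reads to the master right after that user has written.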
3. Reverse Proxy Server
At heart this is also a cache: a request arrives, and if the reverse proxy already has the response, it returns it directly without ever touching the application servers.
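The caching behavior of a reverse proxy can be sketched like this. It is a toy model under the assumption that responses are cacheable forever; real proxies honor cache-control headers and expiry.

```python
# Hypothetical sketch of a caching reverse proxy: repeated requests for
# the same URL never reach the backend.
class CachingProxy:
    def __init__(self, backend):
        self.backend = backend      # callable: url -> response body
        self.cache = {}
        self.backend_calls = 0      # counts how often the backend is hit

    def handle(self, url):
        if url in self.cache:
            return self.cache[url]  # served by the proxy alone
        self.backend_calls += 1
        body = self.backend(url)    # miss: forward to the backend
        self.cache[url] = body
        return body
```

In practice this role is played by software such as nginx or Varnish sitting in front of the application servers.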
4. All kinds of distributed systems
Split the database, split the file system, and so on...
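One common way to split a database is horizontal sharding: route each key to one of several database instances by hashing. A minimal sketch, with made-up shard names:

```python
# Hypothetical sharding sketch: a stable hash maps each key to a shard.
import hashlib

SHARDS = ["db-0", "db-1", "db-2"]

def shard_for(key):
    # md5 gives a stable hash across runs (Python's built-in hash()
    # is randomized per process, so it would not route consistently).
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

The modulo scheme shown here reshuffles most keys when a shard is added; real systems often use consistent hashing instead, so that adding a shard moves only a small fraction of the keys.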
(Part 1 of a series on technology for large websites: high concurrency, big data, high availability, distributed systems, ...)