Project background:
Reads outnumber writes, at a ratio of roughly 4:1. There are over a million users, with around 4,000 concurrent connections (it fluctuates: up to 10K at peak, down to 1K at off-peak).
The servers are similar in performance, and a load balancer can distribute traffic evenly across them.
Should I put all of them behind the load balancer to face users directly (i.e., A, B, C, and D can each be accessed directly),
or should each server take on a dedicated role (say A and B are in-memory cache servers, C is the database server, and D is the image processing server), handling user requests layer by layer?
Please give me a suggestion.
Reply content:
Better to separate them.
If you don't separate them, it may seem like you are making full use of each machine's resources, but in practice you aren't. Deploying different types of services on the same server complicates the environment, makes problems more likely and harder to diagnose, and gets in the way of performance tuning.
Different types of services demand different resources: a memory/cache server cares most about RAM, while an image server cares most about disk. Deployed together, a memory bottleneck in the cache will also drag down the image service, wasting its disk capacity.
So deploy them separately: each service's environment stays simple, and resources can be allocated on demand.
"Distributed cluster" covers several different parts of a system. With the first scheme, which is pure load balancing, taken literally every machine would have to install the same application and database services, and the resulting data and file synchronization problems would drive you crazy. So the classic functional partitioning architecture (your second scheme) is the more appropriate approach.
Access layer: receives and distributes user requests; runs the load-balancing service. Server configured with priority on network, memory, and CPU.
Application cache layer: caches application data according to business needs and access frequency, mostly page caches. Priority on network and I/O.
Application layer: receives requests dispatched by the access layer and processes them; runs the actual project applications. Priority on memory, CPU, and I/O. (A service-oriented system may further subdivide this into a logic layer, service layer, persistence layer, etc., depending on the project.)
Data cache layer: depending on your project, you can place database indexes here, or cache some data-dictionary tables according to business requirements and the hardware available. Priority on hard disk and I/O.
Data layer: runs the database service and the file storage service. Given the read-heavy workload you describe, the database can be configured for read/write separation; if the data volume is large, you also need to consider sharding (splitting databases and tables). Priority on hard disk and I/O.
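The read/write separation mentioned for the data layer can be sketched as a small router that sends writes to the primary and spreads reads across replicas. This is a minimal illustration, not a production driver; the host names are placeholders I made up:

```python
import itertools


class ReadWriteRouter:
    """Route writes to the primary, reads round-robin across replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def pick(self, sql):
        # A read-heavy (roughly 4:1) workload benefits most from
        # spreading SELECTs across replicas.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary


router = ReadWriteRouter("db-primary", ["db-replica-1", "db-replica-2"])
print(router.pick("SELECT * FROM users"))    # db-replica-1
print(router.pick("INSERT INTO users ..."))  # db-primary
```

In a real deployment this decision is usually made by a database proxy or the application's data-access layer, but the principle is the same: the more replicas you add, the more read traffic the tier can absorb.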
On top of all of the above, each tier of servers should have disaster tolerance following the master/standby principle.
Of course, any architecture is empty talk without the hardware behind it. I don't know exactly what servers you have, but since they sound roughly identical in configuration, adjust each one to the needs of its layer. If machines run short, consider carving out virtual machines, though virtualization itself consumes a share of server resources. The exact configuration that yields maximum performance can only be found through load testing and subsequent tuning.
I recommend the second option.
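The application and data cache layers described in the scheme above typically follow a cache-aside pattern: check the cache first, fall back to the slower data layer on a miss, and fill the cache for next time. A minimal sketch, where an in-memory dict stands in for a real cache server and the loader stands in for a database query:

```python
class CacheAside:
    """Cache-aside reads: well suited to a read-heavy (4:1) workload."""

    def __init__(self, loader):
        self._cache = {}
        self._loader = loader  # slow path, e.g. a database query
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._cache:
            self.hits += 1
            return self._cache[key]
        self.misses += 1
        value = self._loader(key)
        self._cache[key] = value  # fill cache for subsequent reads
        return value

    def invalidate(self, key):
        # Writes must evict (or update) the cached copy to stay consistent.
        self._cache.pop(key, None)


cache = CacheAside(loader=lambda k: f"row-for-{k}")
cache.get("user:1")  # miss: loads from the data layer
cache.get("user:1")  # hit: served from memory
print(cache.hits, cache.misses)  # 1 1
```

With a 4:1 read/write ratio, most requests are absorbed by the cache tier and never reach the data layer, which is exactly why the layered scheme gives those machines network and I/O priority.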
My feeling is that separating is bad. Once you separate, if any one server goes down, your whole application goes down with it, and a program's first duty is to stay stable. Running multiple programs on one server does add complexity, but that is a deployment question; a single service per machine doesn't really matter either way, and when something goes wrong you can just check the logs. If you use all four machines for a distributed, load-balanced setup, scaling out and migrating services stays relatively easy. If instead you specialize each machine, you lose the scalability of a distributed system, and when you hit a performance bottleneck you will have to restructure into a distributed setup anyway. Overall, I think the first option is much more flexible than the second.
At the scale you describe, the services should be separated.
There are two main considerations:
Different types of services have different requirements for, and consumption of, machine resources, so you can customize each machine on demand. Other answers above have covered this.
Furthermore, services deployed together can affect each other in unexpected ways, which raises maintenance costs and destabilizes the system. As an extreme example: would you consider deploying the test environment and the production environment together? A test-environment parameter change affecting production is the last thing that should happen.
It has also been suggested that deploying across multiple machines means having to cope with multi-point failure scenarios. I think it's actually the opposite:
First, the business layer can never assume the service layer is safe and stable; it needs fault tolerance and even a circuit-breaker (fuse) mechanism for service-layer failures.
Second, if all services sit on a single machine and that machine fails (disk full, memory exhausted, CPU pegged, etc.), every service goes down at once. Spread out, only the individual affected service is damaged.
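The "fuse mechanism" mentioned above is what is usually called a circuit breaker: after enough consecutive failures, the caller stops hitting the broken downstream service and fails fast instead. A minimal sketch, with an illustrative threshold (real breakers also add a half-open retry state after a timeout, omitted here for brevity):

```python
class CircuitBreaker:
    """Open after `threshold` consecutive failures; fail fast while open."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.opened = False

    def call(self, func, *args):
        if self.opened:
            # Don't even try the downstream service: fail fast.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened = True  # stop forwarding requests downstream
            raise
        self.failures = 0  # any success resets the count
        return result


breaker = CircuitBreaker(threshold=3)
# breaker.call(some_remote_call) raises fast once 3 calls in a row fail
```

Failing fast protects the business layer from piling up requests on a dead service, which is precisely the tolerance the answer above argues for.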
There is also an operational cost consideration: in the early days, the web server and back-end services can be put together on one box. Once traffic and load pick up, moving toward division of labor and specialization is certainly the right direction.
The above is mainly my personal understanding; take it as a reference~
Separation is necessary. Each server should have its own duties, and the deployment should be distributed so you can scale horizontally under traffic pressure. You may even need disaster-recovery servers: when one server goes down, others step in to fill the gap.
The essence of this problem is horizontal scalability: if every one of A, B, C, and D runs everything and is simply accessed through the load balancer, you give up that scalability.
So I suggest splitting A, B, C, and D into independent services,
then analyzing which piece is the system's bottleneck, and applying load balancing, extra machines, and distribution to that piece alone.
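Once the bottleneck tier is identified, it alone gets more machines behind its own balancer. A round-robin sketch of that idea; the tier and server names are placeholders:

```python
class Pool:
    """A per-tier server pool; scale out by adding machines to just this tier."""

    def __init__(self, name, servers):
        self.name = name
        self.servers = list(servers)
        self._i = 0

    def add(self, server):
        # Scaling out the bottleneck tier means adding a machine only here,
        # leaving the other tiers untouched.
        self.servers.append(server)

    def pick(self):
        server = self.servers[self._i % len(self.servers)]
        self._i += 1
        return server


images = Pool("image-processing", ["img-1"])
images.add("img-2")  # the image tier turned out to be the bottleneck
print([images.pick() for _ in range(4)])  # ['img-1', 'img-2', 'img-1', 'img-2']
```

This is the payoff of splitting the services: each tier can grow independently, whereas in the all-in-one scheme you can only add whole copies of everything.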
Better to separate.
You're a growing application,
and this is just the current stage.
Even if you don't separate now,
you will have to eventually.