In a cluster architecture, is it necessary for servers to perform their respective duties (database servers, memory/cache servers, image servers, and so on)?

Project background:
The read-to-write ratio is about …; there are more than one million users, and concurrency is around 4,000 (it fluctuates, peaking near 10K and dropping to about 1K).
The servers all have similar performance, and the load balancer can distribute traffic evenly across them.

Option 1: have them face users directly through the load balancer (that is, each of A, B, C, and D can be accessed directly).

Option 2: have them perform their respective duties (say A and B are memory/cache servers, C is the database server, and D is the image-processing server), so that user requests pass through the layers one by one.

Any suggestions?

Replies:

It is better to separate them.
Keeping everything together looks like it makes full use of each machine's resources, but deploying different types of services on the same box increases the complexity of the environment, makes problems more likely and harder to locate, and gets in the way of performance tuning.
Different types of services also stress different resources: a memory/cache server cares mostly about RAM, while an image server cares mostly about disk. Deployed together, a memory bottleneck will drag the image service down while its disk capacity sits idle and wasted.
So deploy the services separately: each one lives in a simple environment, and resources can be allocated to match what it actually needs.

A distributed cluster involves different parts of the system. Your first option is pure load balancing: taken literally, every machine would have to run both the application and the database service, and keeping data and files synchronized across all of them would quickly become overwhelming. So the classic approach of splitting the hardware by function into layers (that is, your second option) is more appropriate:
Access layer: accepts and distributes user requests, and runs the load-balancing service. For these servers, network, memory, and CPU come first.
Application cache layer: caches application data according to business needs and access frequency; most of this is page caching. Network and I/O come first.
Application layer: receives the requests dispatched by the access layer and handles the business logic; runs the actual project applications. Memory, CPU, and I/O come first. (A service-oriented system may further split this into logic, service, and persistence layers, depending on the specifics of the project.)
Data cache layer: given your project background, you can keep database indexes here, or cache data-dictionary tables depending on business needs and the hardware you have. Disk and I/O come first.
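
As an illustration of how such a cache layer is typically used, here is a minimal cache-aside sketch in Python. It is only a sketch of the pattern described above: the TTLCache class and the get_dictionary_row helper are hypothetical stand-ins for a dedicated cache server (Redis, Memcached, or similar) and a real database query.

```python
import time

class TTLCache:
    """Tiny in-process stand-in for a dedicated cache server (hypothetical)."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:       # expired entry counts as a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl=60):
        self._store[key] = (value, time.time() + ttl)

def get_dictionary_row(cache, load_from_db, key):
    """Cache-aside read: check the cache first, fall back to the database on a miss."""
    value = cache.get(key)
    if value is None:
        value = load_from_db(key)          # e.g. a SELECT against the data layer
        cache.set(key, value, ttl=300)     # keep hot dictionary data for 5 minutes
    return value

# Usage with a stand-in loader:
cache = TTLCache()
row = get_dictionary_row(cache, lambda k: {"id": k, "name": "demo"}, 42)
```
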
Data layer: runs the database service and the file storage service. Based on your description, the database can be configured for read/write splitting; if the data volume is large, also consider sharding. Disk and I/O come first.
On top of all of the above, each layer should have master-slave redundancy for disaster recovery.
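
For the read/write splitting mentioned for the data layer, the routing decision could look roughly like the sketch below. The ReadWriteRouter class and the host strings are invented for the example; in a real deployment they would be actual driver connections to the primary and its replicas.

```python
import itertools

class ReadWriteRouter:
    """Hypothetical helper: send writes to the primary, spread reads over replicas."""
    def __init__(self, primary, replicas):
        self.primary = primary                      # connection used for all writes
        self._replicas = itertools.cycle(replicas)  # round-robin over read replicas

    def connection_for(self, sql):
        # Rough heuristic: SELECT goes to a replica, everything else to the primary.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary

# Usage with placeholder host strings:
router = ReadWriteRouter(primary="primary-db:5432",
                         replicas=["replica-1:5432", "replica-2:5432"])
print(router.connection_for("SELECT * FROM users"))    # -> replica-1:5432
print(router.connection_for("UPDATE users SET name"))  # -> primary-db:5432
```
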
Of course, any architecture outside the limits of the hardware is empty talk. I don't know exactly what servers you are using, but since the configurations sound similar, adjust them as far as possible to fit the needs of each layer. If there are not enough machines, you can carve out virtual machines, though virtualization itself consumes part of a server's resources. How to get maximum performance out of all this comes down to stress testing and subsequent tuning.

The second solution is recommended.

I feel it is not good to separate them. If one dedicated server crashes, that part of your program crashes with it, and the first thing you need to ensure is stability. As for the concern that running multiple services on one server adds complexity, that depends on how you deploy them: a service is not tied to a particular server, and with proper logging, troubleshooting is manageable. If you run the full stack on all four servers in a distributed, load-balanced setup, expanding or migrating your services is relatively easy. With single-purpose servers you lose the elasticity of a distributed system, and once performance hits a bottleneck you have to rework the architecture into a distributed one anyway. Overall, I think the first solution is much more flexible than the second.

The services the original poster describes should be separated.

There are two main considerations:

  • Different types of services demand and consume machine resources differently, so each machine can be provisioned to match its service; others have already made this point.

  • Further, deploying different services together, especially ones that can affect each other, increases maintenance cost and instability. For example, would you deploy the test environment and the production environment on the same box? Having a parameter tweak in the test environment knock over production is about the last thing that should ever happen.

It was also argued above that spreading the deployment across machines means dealing with multiple points of failure; I think it is the opposite.

  • First, the business layer can never simply assume that the service layer is secure and stable; it must have a tolerance or even a circuit-breaking (fusing) mechanism for service-layer failures (see the sketch after this list).

  • Second, if all services sit on one machine and that single machine goes down (say the disk fills up, or memory and CPU are exhausted), every service stops. If they are spread out, only some services are hurt.

  • There is also the operations cost to consider. In the early stage it is fine to put the web server and the backend services together; once traffic and load grow, division of labor and specialization become necessary.
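
The tolerance or circuit-breaking mechanism mentioned in the first point could, as a rough sketch, look like the Python snippet below. The class name, thresholds, and error handling are illustrative assumptions; the thread itself does not prescribe a particular implementation.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: fail fast once the service layer keeps erroring."""
    def __init__(self, max_failures=5, reset_timeout=30.0):
        self.max_failures = max_failures    # trip after this many consecutive failures
        self.reset_timeout = reset_timeout  # seconds to wait before trying again
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        # While the breaker is open, refuse immediately instead of piling onto a broken service.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: service layer marked unavailable")
            self.opened_at = None           # half-open: let one trial call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # trip the breaker
            raise
        self.failures = 0                     # any success closes the breaker again
        return result

# Usage: breaker = CircuitBreaker(); breaker.call(fetch_user_profile, user_id=1)
```
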

For more information, see ~

It is necessary. Of course the servers should perform their respective duties, and they should be deployed in a distributed way so traffic pressure can be scaled out horizontally. You even need disaster-recovery servers: if one server goes down, another can be brought in to replace it.

The essence of this problem is horizontal scalability. If A, B, C, and D all face users directly through the load balancer, you lose the room to scale each part out independently.

So we recommend separating A, B, C, and D into independent services.

Then analyze which part is the system's bottleneck, and deal with that part on its own: load-balance it, add machines, distribute it, and so on.
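
As a rough illustration of scaling only the bottleneck, here is a sketch of keeping a separate backend pool per service and growing just one of them. The service names, node names, and pool sizes are made up for the example.

```python
import itertools

# One pool per service, so each can be scaled independently of the others.
pools = {
    "cache":  ["cache-1", "cache-2"],   # A and B in the question
    "db":     ["db-1"],                 # C
    "images": ["img-1"],                # D
}
_cursors = {name: itertools.cycle(nodes) for name, nodes in pools.items()}

def pick_backend(service):
    """Round-robin within one service's pool; other pools are unaffected."""
    return next(_cursors[service])

def scale_out(service, new_node):
    """Add capacity only where the bottleneck is, e.g. scale_out('images', 'img-2')."""
    pools[service].append(new_node)
    _cursors[service] = itertools.cycle(pools[service])
```
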

It is best to separate them.
Your application keeps growing,
and it has already reached this stage;
even if you do not separate them now,
they will end up separated eventually.
