Part 2: Middle-Layer Load Balancing with WCF
In the first part of this article, I briefly introduced how to load balance the web layer, mainly using nginx. So why introduce the concept of a middle layer here?
The simplest deployment is web layer -> DB: the web layer connects to the database directly, which is a two-tier architecture. The web layer and the DB can sit on different servers, but when the number of users and the level of concurrency grow, both come under heavy pressure and the setup lacks scalability. Large architectures therefore adopt a three-tier approach.
Three-tier deployment: web layer -> middle layer -> DB layer. The web layer no longer connects to the database directly, and the web layer, middle layer, and DB can each be deployed on their own servers. Introducing a middle layer relieves pressure on both the web layer and the DB: the middle layer concentrates on business logic, and security improves as well, because even if a web-layer server is compromised, the attacker cannot reach the database accounts and data directly. The three tiers have the following responsibilities:
Web layer: focuses only on rendering the interface and obtains data by calling the middle layer; it never queries the database directly.
Middle layer: focuses only on business logic and on fetching data from the database.
DB layer: hosts only the databases.
On the .NET platform, WebService and WCF are the usual choices for the middle layer. For security reasons, WCF is a good choice.
WCF: if you are not familiar with it, you can look it up online. It is similar to WebService, although development and debugging can be more troublesome.
The following figure shows the three-tier architecture on the .NET platform with WCF as the middle layer.
How can load balancing be achieved with WCF?
For example, the middle layer can be split into three major business services: the order service (port 10001), the product service (port 10002), and the user service (port 10003). Each is deployed as a Windows service, one process listening on one port.
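As a rough sketch, assuming net.tcp bindings and made-up names (IOrderService and OrderService are placeholders, not from the original project), each middle-layer process could self-host one service on its assigned port; in production it would typically run inside a Windows service:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrder(int orderId);
}

public class OrderService : IOrderService
{
    public string GetOrder(int orderId) { return "order " + orderId; }
}

class OrderServiceHost
{
    static void Main()
    {
        // One process, one port: 10001 is reserved for the order service.
        var baseAddress = new Uri("net.tcp://localhost:10001/OrderService");
        using (var host = new ServiceHost(typeof(OrderService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(), "");
            host.Open();
            Console.WriteLine("Order service listening on port 10001...");
            Console.ReadLine(); // keep the process alive
        }
    }
}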
Method 1: the middle layer is deployed in step with the distributed web layer, and each web server is statically paired with one middle-layer server. This is the simplest approach, but it is not true load balancing.
1. Middle-layer deployment
192.168.1.11: this server hosts three WCF services on ports 10001, 10002, and 10003
192.168.1.12: this server hosts three WCF services on ports 10001, 10002, and 10003
192.168.1.13 .....
.....
2. Web layer call
Web server A: configure three client endpoints pointing to 192.168.1.11 (a code-based sketch follows this list)
Web server B: configure three client endpoints pointing to 192.168.1.12
Web server C: .....
.....
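In practice the three endpoints would normally be declared in the web.config <client> section; the sketch below expresses the same idea in code for web server A, reusing the assumed IOrderService contract from the hosting sketch above:

using System.ServiceModel;

class OrderServiceClientA
{
    public static string GetOrder(int orderId)
    {
        // Web server A is statically paired with middle-layer server 192.168.1.11;
        // web server B would hard-code 192.168.1.12 instead, so the spreading of
        // load is fixed at deployment time rather than decided per call.
        var factory = new ChannelFactory<IOrderService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://192.168.1.11:10001/OrderService"));
        IOrderService proxy = factory.CreateChannel();
        try
        {
            return proxy.GetOrder(orderId);
        }
        finally
        {
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}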
Method 2: the web layer loads the endpoint list dynamically and picks one at call time, which achieves load balancing. I have not implemented this method myself, but it is theoretically feasible.
1. Middle-layer deployment
192.168.1.11: this server hosts three WCF services on ports 10001, 10002, and 10003
192.168.1.12: this server hosts three WCF services on ports 10001, 10002, and 10003
192.168.1.13 .....
2. Every web server reads the same configuration file, which lists the contract name plus the IP address and port of each service instance. After loading, an array is generated for each service, for example:
string[] orderService = new string[3];   // one entry per middle-layer server
orderService[0] = "192.168.1.11:10001";
orderService[1] = "192.168.1.12:10001";
orderService[2] = "192.168.1.13:10001";
....
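A minimal sketch of how such a shared configuration file could be loaded (the file name services.config and its line format are assumptions for illustration):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class ServiceEndpoints
{
    // Assumed line format, one service per line:
    // OrderService=192.168.1.11:10001,192.168.1.12:10001,192.168.1.13:10001
    public static Dictionary<string, string[]> Load(string path)
    {
        var map = new Dictionary<string, string[]>();
        foreach (var line in File.ReadAllLines(path))
        {
            if (string.IsNullOrWhiteSpace(line)) continue;
            var parts = line.Split('=');
            map[parts[0].Trim()] = parts[1].Split(',').Select(s => s.Trim()).ToArray();
        }
        return map;
    }
}

// Usage on every web server (they all read the same file):
// string[] orderService = ServiceEndpoints.Load("services.config")["OrderService"];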
3. Web layer call
Use a random algorithm to pick the IP address and port to call:
int index = new Random().Next(0, orderService.Length);
string target = orderService[index];
// Call the service at the selected IP address and port
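Putting steps 2 and 3 together, a hedged sketch of one dynamic call; the shared Random instance and the IOrderService contract are assumptions carried over from the earlier sketches:

using System;
using System.ServiceModel;

class OrderServiceCaller
{
    // A single shared Random avoids repeated picks when many calls arrive quickly
    // (note: Random is not thread-safe; a real site would need to synchronize it).
    private static readonly Random Rng = new Random();

    public static string GetOrder(string[] orderService, int orderId)
    {
        // Pick one middle-layer instance at random, e.g. "192.168.1.12:10001".
        int index = Rng.Next(0, orderService.Length);
        string target = orderService[index];

        var factory = new ChannelFactory<IOrderService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://" + target + "/OrderService"));
        IOrderService proxy = factory.CreateChannel();
        try
        {
            return proxy.GetOrder(orderId);
        }
        finally
        {
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}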