Common solutions for high concurrency in Java
1. What is high concurrency?
In the Internet era, high concurrency usually means that a large number of requests arrive at the system at the same point in time.
High concurrency: which system and business metrics do we usually care about?
QPS: the number of queries per second. In a broad sense, it usually refers to the number of requests per second.
Response time: the time from sending a request to receiving the response. For example, if the system takes 100 ms to process an HTTP request, that 100 ms is the system's response time.
Bandwidth: calculated from two metrics, peak traffic and average page size.
PV: Page Views, the total number of pages viewed or clicked. We usually focus on the pages accessed within 24 hours, that is, "Daily PV".
UV: Unique Visitors, the number of distinct users who access the service after deduplication. We usually focus on the users who access the service within 24 hours, that is, "Daily UV".
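These metrics feed into capacity planning. A common back-of-envelope estimate derives peak QPS from daily PV using the 80/20 rule (80% of the traffic arrives in 20% of the day). The sketch below uses purely illustrative numbers, not figures from any real system:

```java
// Back-of-envelope peak QPS estimate from daily PV.
// Assumption (80/20 rule): 80% of requests arrive in 20% of the day.
public class CapacityEstimate {
    public static long peakQps(long dailyPv) {
        long peakSeconds = (long) (24 * 3600 * 0.2); // 20% of a day = 17,280 s
        return (long) (dailyPv * 0.8) / peakSeconds; // 80% of traffic in that window
    }

    public static void main(String[] args) {
        long dailyPv = 10_000_000L; // hypothetical 10M daily page views
        System.out.println("Estimated peak QPS: " + peakQps(dailyPv));
    }
}
```

For a hypothetical 10 million daily PV, this works out to roughly 460 QPS at peak, which then sizes the cache, the connection pools, and the number of servers behind the load balancer.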
2. Three common optimization solutions for high concurrency
[Database cache]
Why is cache used?
Caching data lets the application serve most requests without touching the database at all, which reduces disk IO, increases concurrency, and speeds up data access.
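The usual way to put a cache in front of the database is the cache-aside pattern: check the cache first, query the database only on a miss, then populate the cache for later readers. Below is a minimal in-process sketch using `ConcurrentHashMap`; a production system would typically use Redis or Memcached with expiry, and the `dbLoader` stands in for a real database query:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal cache-aside sketch. On a hit, data is served from memory;
// the loader (a stand-in for the database query) runs only on a miss.
public class CacheAside<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> dbLoader;

    public CacheAside(Function<K, V> dbLoader) {
        this.dbLoader = dbLoader;
    }

    public V get(K key) {
        // computeIfAbsent invokes the loader only when the key is missing,
        // so repeated reads never reach the database.
        return cache.computeIfAbsent(key, dbLoader);
    }
}
```

Note that `computeIfAbsent` cannot cache a `null` result, and this sketch has no eviction or TTL; those are exactly the concerns a dedicated cache like Redis handles for you.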
[CDN acceleration]
What is CDN?
CDN stands for Content Delivery Network. A CDN redirects a user's request in real time to the nearest service node, based on each node's network traffic, connection count, load, distance to the user, and other information.
What are the advantages of using CDN?
A CDN is essentially a cache that serves content from nodes close to the user. It speeds up access to websites (especially those with many images and static pages) and accelerates traffic across carrier networks, so users on different networks all get good access quality.
At the same time, it reduces the bandwidth consumed by remote access, offloads network traffic, and lowers the load on the origin site's web servers.
[Server clustering and load balancing]
What is layer-7 load balancing?
Layer-7 load balancing distributes requests based on application-layer information such as HTTP headers. Nginx is the most commonly used layer-7 load balancer: it automatically removes unhealthy backend servers, uploads files asynchronously, supports multiple allocation policies, and allows weights to be assigned flexibly.
Built-in policies: IP hash and weighted round robin
Extension policies: fair, generic hash, consistent hash
What is a weighted round robin policy?
Requests go preferentially to the machine with the highest weight; each time a machine is selected, its current weight is reduced, so that once it falls below the others the next machine gets a turn. Over time every machine is selected in proportion to its weight, which is what makes it round-robin.
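The idea above can be sketched as the "smooth" weighted round-robin algorithm that nginx uses: on every pick, each server's running score grows by its weight, the highest score wins, and the winner is penalized by the total weight. Heavier servers are chosen more often, but never in one long burst. Server names and weights here are illustrative:

```java
import java.util.List;

// Smooth weighted round robin (the variant nginx implements).
// Each pick: every server's current score grows by its weight; the
// highest score wins and is reduced by the total weight, so selections
// are proportional to weight but interleaved rather than bursty.
public class SmoothWeightedRoundRobin {
    public static final class Server {
        final String name;
        final int weight;
        int current; // running score, starts at 0

        public Server(String name, int weight) {
            this.name = name;
            this.weight = weight;
        }
    }

    private final List<Server> servers;
    private final int totalWeight;

    public SmoothWeightedRoundRobin(List<Server> servers) {
        this.servers = servers;
        this.totalWeight = servers.stream().mapToInt(s -> s.weight).sum();
    }

    public String next() {
        Server best = null;
        for (Server s : servers) {
            s.current += s.weight;                       // everyone's score grows
            if (best == null || s.current > best.current) {
                best = s;                                // highest score wins
            }
        }
        best.current -= totalWeight;                     // penalize the winner
        return best.name;
    }
}
```

With weights 5/1/1 for servers a/b/c, seven consecutive picks yield a, a, b, a, c, a, a: server a gets five of seven slots, matching its weight, but b and c are interleaved instead of waiting for a's turns to run out.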