Optimization directions:
Intercept requests as far upstream in the system as possible
Make full use of caching
Site architecture
1. Client
At the JS level, limit the user to submitting at most one request every x seconds;
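The client-side check is just a timestamp comparison. The real check lives in browser JS (e.g. disabling the submit button); the sketch below shows the same throttle logic in Python, with an optional `now` parameter added for testability:

```python
import time

class SubmitThrottle:
    """Accept at most one submission every `interval` seconds."""

    def __init__(self, interval=5.0):
        self.interval = interval
        self.last_submit = None  # timestamp of the last accepted submission

    def try_submit(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_submit is not None and now - self.last_submit < self.interval:
            return False  # too soon: e.g. keep the submit button disabled
        self.last_submit = now
        return True
```

Note that a client-side limit is only a convenience for honest users; scripted clients bypass it, which is why the site layer repeats the check.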
2. Site Layer
Use the UID: at the site level, count and deduplicate requests per UID. No centralized storage is needed; the counts can live directly in site-level memory. Allowing each UID only 1 request every 5 seconds blocks 99% of for-loop (scripted) requests.
Page caching: for the same UID, limit the access frequency and cache the page, so that all requests reaching the site layer within x seconds are answered with the same page
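The two site-layer measures above can be sketched together. This is a minimal in-process version, with plain dicts standing in for whatever the real site tier uses, and a `now` parameter added so the time window is testable; the class and parameter names are illustrative:

```python
import time

class SiteLayer:
    """Site-level sketch: per-UID rate limiting plus short-TTL page caching."""

    def __init__(self, window=5.0, page_ttl=5.0):
        self.window = window        # seconds between accepted requests per UID
        self.page_ttl = page_ttl    # seconds a cached page stays valid
        self.last_seen = {}         # uid -> timestamp of last accepted request
        self.page_cache = {}        # uid -> (expires_at, rendered page)

    def allow(self, uid, now=None):
        """One request per UID per `window` seconds; drops for-loop spam."""
        now = time.monotonic() if now is None else now
        last = self.last_seen.get(uid)
        if last is not None and now - last < self.window:
            return False            # duplicate within the window: reject
        self.last_seen[uid] = now
        return True

    def get_page(self, uid, render, now=None):
        """Requests within `page_ttl` seconds all get the same cached page."""
        now = time.monotonic() if now is None else now
        entry = self.page_cache.get(uid)
        if entry and entry[0] > now:
            return entry[1]         # served from cache, backend untouched
        page = render()             # at most one render per UID per TTL
        self.page_cache[uid] = (now + self.page_ttl, page)
        return page
```

In a multi-machine site tier each node keeps its own dicts, which is acceptable here: the goal is interception, not exact global counting.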
3. Service Layer
Write requests go into a request queue
Read requests are served from a cache, e.g. memcached or Redis
Optimize business rules, e.g. timed ticket release: release a batch every half hour to spread the traffic evenly
Optimize data granularity: under high traffic, a coarse-grained "tickets available" / "sold out" cache is enough
Make business logic asynchronous, e.g. separate the order-placement flow from the payment flow
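The service-layer measures above fit together as a small pipeline: a bounded write queue that fails fast, a coarse-grained availability flag for reads, and a worker that hands finished orders off to payment asynchronously. A minimal single-process sketch, assuming a stock of 100 and using `queue.Queue` and a dict where production would use a message queue and memcached/Redis:

```python
import queue

STOCK = 100  # tickets on sale; illustrative number

# --- Write path: bounded request queue -----------------------------------
# The queue holds at most as many orders as there is stock, so surplus
# requests are rejected immediately instead of piling up on the database.
order_queue = queue.Queue(maxsize=STOCK)

def submit_order(uid):
    """Queue a write request; fail fast once the queue is full."""
    try:
        order_queue.put_nowait({"uid": uid, "status": "queued"})
        return True
    except queue.Full:
        return False

# --- Read path: coarse-grained cache -------------------------------------
# Most readers only need "tickets available" vs "sold out", so cache a
# single boolean flag rather than the live count.
has_ticket = True

def check_availability():
    return "tickets available" if has_ticket else "sold out"

def mark_sold_out():
    global has_ticket
    has_ticket = False  # flip once stock hits zero; readers see it at once

# --- Async order/payment split -------------------------------------------
# The worker drains the order queue at a database-friendly pace and hands
# each order to payment instead of paying inline on the hot path.
payment_queue = queue.Queue()

def order_worker():
    order = order_queue.get()
    order["status"] = "placed"   # placeholder for the real DB write
    payment_queue.put(order)     # payment happens asynchronously
    return order
```

Sizing the queue to the stock is the key trick: once `STOCK` orders are queued, every later request can be turned away without touching the database at all.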
4. Data Layer
The browser intercepts 80% of requests, the site layer intercepts 99.9% of the rest and serves a page cache, and the service layer adds a write-request queue and data cache, so every request that finally reaches the database layer is controllable
The optimization idea: intercept requests as far upstream in the system as possible, and use caching for data that is read often but written rarely (the cache absorbs read pressure).
That is the optimization of a seckill (flash-sale) business architecture.