Architecture Evolution: Initial Knowledge
Here are some of my own understandings. If anything below is mistaken, corrections are very welcome; thank you.
1. At the initial stage of a website, both users and traffic are very small, so the application service, database, and file service all run on a single server.
2. As users and traffic grow, a single server is no longer enough: the application server runs short of computing power, the file service runs short of storage, and the database needs more memory and disk space.
The root cause is that all services sit on one server, and the fix is to use multiple servers: an application server, a database server, and a file server appear.
The application server needs a CPU with strong computing power, the database server needs larger disks and more memory, the file server needs large disks, and each server now has its resources to itself.
3. To reduce the pressure on the database, a cache is introduced. Data is looked up in the cache before the database is queried, which greatly reduces the database's read load (a cache-aside sketch follows).
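A minimal cache-aside sketch in Java, my own illustration rather than code from the article; the class name, the in-memory map, and loadUserFromDb are placeholders:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside read path: try the cache first, fall back to the database on a miss.
public class CacheAsideDao {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();

    public String getUser(long userId) {
        String user = cache.get(userId);          // 1. try the cache
        if (user == null) {
            user = loadUserFromDb(userId);        // 2. cache miss: hit the database
            if (user != null) {
                cache.put(userId, user);          // 3. populate the cache for later reads
            }
        }
        return user;
    }

    private String loadUserFromDb(long userId) {
        // Stand-in for a real JDBC/MyBatis query.
        return "user-" + userId;
    }
}
```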
4. As traffic keeps growing, a more powerful server could be swapped in, but for websites with high concurrency and massive data volumes, scaling up a single machine cannot solve the problem.
At this point the application servers need to be clustered: the load is shared across the cluster behind a load balancer, which also gives the site good scalability (a minimal load-balancing sketch follows this step).
The architecture at this stage is: application server cluster (behind a load balancer) + cache server + database server + file server.
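One simple strategy the load balancer could use is round robin. The sketch below is only an illustration; the class name and server addresses are made up.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin selection over a cluster of application servers.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    public String nextServer() {
        int index = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("app-01:8080", "app-02:8080", "app-03:8080"));
        for (int i = 0; i < 6; i++) {
            System.out.println("route request " + i + " -> " + lb.nextServer());
        }
    }
}
```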
5. As traffic grows further, the cache absorbs part of the database load, but some requests still have to hit the database directly (cache misses, expired data, and data that needs to be written).
At this point the database can be split into read and write paths, i.e. read/write splitting, which most databases support (a routing sketch follows).
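In a Spring/MyBatis stack like the one mentioned later in this post, read/write splitting is often implemented with a routing DataSource. The sketch below assumes spring-jdbc is on the classpath; AbstractRoutingDataSource is a real Spring class, but the lookup keys and the DbContextHolder helper are illustrative.

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routes reads to a replica and writes to the master. The "master"/"slave" keys
// must match the targetDataSources configured elsewhere in the application.
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.isReadOnly() ? "slave" : "master";
    }
}

// Small helper that remembers, per thread, whether the current operation is read-only.
class DbContextHolder {
    private static final ThreadLocal<Boolean> READ_ONLY = ThreadLocal.withInitial(() -> false);

    static void markReadOnly(boolean readOnly) { READ_ONLY.set(readOnly); }
    static boolean isReadOnly() { return READ_ONLY.get(); }
}
```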
6. Next, a CDN and a reverse proxy are added to speed up the website's response and give users a better experience. Both rely on the same principle: caching.
CDN: deployed in the network providers' data centers, so that during access users fetch data from the provider's data center nearest to them.
Reverse proxy: when a request reaches our own data center it hits the reverse proxy first; if the requested data is already cached there, it is returned to the user directly from the cache (a cache-header sketch follows this step).
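Both the CDN and the reverse proxy can only cache what the origin marks as cacheable. A minimal filter sketch, assuming a Servlet 4.0+ container; the /static/ path prefix and the one-day max-age are example values, not anything from the original article.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Marks static resources as cacheable so a CDN or reverse proxy in front of the
// application may serve them without reaching the origin server.
public class StaticCacheHeaderFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        if (request.getRequestURI().startsWith("/static/")) {
            // Allow shared caches (CDN, reverse proxy) to keep the resource for one day.
            response.setHeader("Cache-Control", "public, max-age=86400");
        }
        chain.doFilter(req, res);
    }
}
```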
7. For time-consuming queries over tables with large amounts of data, it is better to use a search engine or a NoSQL database. At this stage data is stored across search engine + NoSQL + cache + master-slave database,
so it is best to have a unified module through which all data access goes (a sketch of such a facade follows).
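With data spread across a cache, a search engine, a NoSQL store, and a master-slave database, a unified data-access module keeps callers away from the individual stores. One possible shape for such a facade; every type here is a placeholder interface, not a specific product's API.

```java
import java.util.List;

// Unified data-access facade: callers never talk to the cache, search engine
// or database directly; this module decides where each kind of query goes.
public class UnifiedDataService {
    private final KeyValueCache cache;        // e.g. a distributed cache
    private final SearchIndex searchIndex;    // e.g. a full-text search engine
    private final RelationalStore database;   // master-slave relational database

    public UnifiedDataService(KeyValueCache cache, SearchIndex searchIndex, RelationalStore database) {
        this.cache = cache;
        this.searchIndex = searchIndex;
        this.database = database;
    }

    // Point lookups: cache first, then the database.
    public String findById(String id) {
        String value = cache.get(id);
        if (value == null) {
            value = database.queryById(id);
            if (value != null) {
                cache.put(id, value);
            }
        }
        return value;
    }

    // Expensive full-text queries go to the search engine instead of the database.
    public List<String> search(String keywords) {
        return searchIndex.query(keywords);
    }
}

interface KeyValueCache { String get(String key); void put(String key, String value); }
interface SearchIndex { List<String> query(String keywords); }
interface RelationalStore { String queryById(String id); }
```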
8. When all business functions live in one system, the system becomes too complex, so the whole system is split into separate systems along business lines.
These systems can still share the same data storage and thereby remain associated as one complete system, which makes the overall system easier to manage.
Cache Usage:
1. CDN: some static resources are cached at the network provider, so that when users access them the data is returned from a nearby node, which speeds up the website. Video websites are a typical example.
2. Reverse proxy: caches static resources in our own data center; if the requested resource is already in the cache, it is returned directly, reducing the pressure on the application servers.
3. Local cache and distributed cache: reduce the pressure of direct access to the database (a combined read-path sketch follows).
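A sketch of combining the two on the read path: local (in-process) cache first, then the distributed cache, then the database. DistributedCache and loadFromDb stand in for, say, a Redis client and a DAO; they are not a specific library's API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Two-level read path: local cache -> distributed cache -> database.
public class TwoLevelCacheReader {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final DistributedCache remoteCache;

    public TwoLevelCacheReader(DistributedCache remoteCache) {
        this.remoteCache = remoteCache;
    }

    public String get(String key) {
        String value = localCache.get(key);           // cheapest: same JVM
        if (value == null) {
            value = remoteCache.get(key);             // shared across application servers
            if (value == null) {
                value = loadFromDb(key);              // last resort: the database
                if (value != null) {
                    remoteCache.put(key, value);
                }
            }
            if (value != null) {
                localCache.put(key, value);
            }
        }
        return value;
    }

    private String loadFromDb(String key) {
        return "value-for-" + key;                    // placeholder for a real query
    }
}

interface DistributedCache { String get(String key); void put(String key, String value); }
```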