Large Concurrent Server Development Learning Notes 01: Introduction to Large Concurrent Server Architecture

Source: Internet
Author: User
Tags: key/value store

Introduction to large concurrent server architectures

First, server design goals
(1) High performance: fast response to requests;
(2) High availability: able to run 24x7 and to fail over;
(3) Scalability: able to scale out across machines;

Second, distribution
(1) Load balancing
(2) Distributed storage
(3) Distributed computing

Third, C/S structure

Any network system can be abstracted as a C/S (client/server) structure.

Fourth, a typical server architecture

Network I/O + high-performance server programming techniques + database

1. The number of database connections is exceeded:
(1) Problem description:

The database allows 10 concurrent connections, but the application server receives 1,000 concurrent requests, so 990 requests will fail.

(2) Solution:

Add a queue for waiting requests by introducing a middle tier, the DAL (data access layer), which combines a queuing service with a connection pool.
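The queue-plus-connection-pool idea above can be sketched as follows. This is a minimal illustration, not a real DAL: `connect` stands in for whatever function creates a database connection, and the bounded queue makes excess requests wait instead of fail.

```python
import queue

class ConnectionPool:
    """Minimal DAL sketch: at most max_conns connections exist; requests
    beyond that wait in a queue instead of failing outright."""

    def __init__(self, connect, max_conns=10):
        self._pool = queue.Queue(maxsize=max_conns)
        for _ in range(max_conns):
            self._pool.put(connect())  # pre-create all connections

    def acquire(self, timeout=None):
        # Blocks (i.e., queues up) when every connection is in use.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection so the next queued request can proceed.
        self._pool.put(conn)
```

With this in place, 1,000 concurrent requests share 10 connections: 990 of them queue up rather than being rejected.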

2. The time limit is exceeded:

(1) Problem description:

The database allows 10 concurrent connections and can handle 1,000 requests per second; when the application server receives 10,000 concurrent requests, waits of 0-10 seconds appear.

(2) Solution:

Move the main business logic into the application server, let the database do only auxiliary business processing, and make use of a cache.

3. Cache updates (cache synchronization):

(1) If a cache entry fails (times out), query the database again; this approach has poor real-time behavior.

(2) Keep hotspot data in the cache and, as soon as the data in the database is updated, immediately notify the front-end cache to update; this approach has good real-time behavior.
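Both update strategies can be shown in one small sketch (the class and method names are illustrative, not from the notes): strategy (1) is the TTL expiry in `get`, and strategy (2) is the explicit `invalidate` a writer calls as soon as the database row changes.

```python
import time

class Cache:
    """Cache-aside sketch: entries expire after ttl seconds (strategy 1),
    and a writer can push-invalidate an entry immediately (strategy 2)."""

    def __init__(self, ttl=60.0):
        self._ttl = ttl
        self._data = {}  # key -> (value, expiry timestamp)

    def get(self, key, load_from_db):
        entry = self._data.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]  # cache hit, still fresh
        value = load_from_db(key)  # miss or expired: re-query the database
        self._data[key] = (value, time.monotonic() + self._ttl)
        return value

    def invalidate(self, key):
        # Called right after the database is updated, for real-time freshness.
        self._data.pop(key, None)
```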

4. Cache page replacement:

When memory is insufficient, inactive data must be swapped out of memory. Common eviction algorithms are FIFO (first in, first out), LRU (least recently used), and LFU (least frequently used); these are usually introduced in an operating systems course.
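Of the three algorithms, LRU is the most common in practice. A compact sketch using an ordered dictionary (the capacity and interface are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """LRU eviction: when the cache is full, the entry that has gone
    unused the longest is swapped out."""

    def __init__(self, capacity):
        self._capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self._capacity:
            self._data.popitem(last=False)  # evict the least recently used
```

FIFO would simply drop the oldest insertion (no `move_to_end` on access), and LFU would track a use count per entry instead.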

5. NoSQL:

Key/value stores hold non-relational data whose consistency requirements are not very high; such data can live in NoSQL used as a cache, for example open-source distributed caches such as Redis and memcached.

6. Database read/write separation:

  For most applications, the database read operation,> write operations, load balance the database, Master (master) Slave (slave) implementation of the master-slave mechanism, the main library is responsible for writing operations, from the library responsible for reading operations, using most of the database has replication mechanism.
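The routing side of read/write splitting can be sketched as below. This is an assumption-laden toy: `master` and `slaves` stand for connection handles, the SQL-verb check is deliberately naive, and it assumes replication keeps the slaves close enough to the master for the application's needs.

```python
import itertools

class ReadWriteRouter:
    """Send writes to the master; spread reads round-robin over slaves."""

    def __init__(self, master, slaves):
        self._master = master
        self._slaves = itertools.cycle(slaves)

    def route(self, sql):
        # Naive classification by the leading SQL verb.
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE"):
            return self._master
        return next(self._slaves)
```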

7. Load balancing of application servers:

This is implemented by adding a task server that monitors the current load on each application server, such as high CPU usage, high I/O, high concurrency, or heavy memory paging.

(1) Application servers passively receive tasks: after querying this load information, the task server assigns each task to the least-loaded server;
(2) Application servers actively request tasks: this approach is fairer.
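The task server's "pick the least-loaded server" step from scheme (1) might look like this. The metric names and weights are purely illustrative assumptions; a real task server would measure these and tune the weighting.

```python
def pick_least_loaded(servers):
    """Given load reports (CPU, I/O, concurrent connections, paging),
    return the server with the lowest combined load score."""
    def load(s):
        # Illustrative weighting of the four load signals.
        return (0.4 * s["cpu"] + 0.3 * s["io"]
                + 0.2 * s["conns"] + 0.1 * s["paging"])
    return min(servers, key=load)
```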

8. Data partitioning (splitting databases and tables):
(1) Vertical partitioning: divide the database into different databases according to some logic, for example into user tables, business tables, and base tables;
(2) Horizontal partitioning: for example, spread 10 user-table records across 10 databases, one record per database. This method is the more commonly used.
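The horizontal case reduces to a placement function: map each record's key to one of the databases, here by simple modulo (a common choice; consistent hashing is an alternative when shards are added or removed).

```python
def shard_for(user_id, num_shards=10):
    """Horizontal partitioning sketch: assign each user record to one of
    num_shards databases by taking the user id modulo the shard count."""
    return user_id % num_shards
```

Every query for a given user then goes to database `shard_for(user_id)`, so the 10 records in the example land in 10 different databases.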

9. The four big killers of server performance:

(1) Data copying: minimize data copies; some of this can be solved with caches inside the server;

(2) Context switching (create threads rationally): whether single-threaded or multi-threaded is better depends on the situation. On a single-core server, multithreaded programming gains no real concurrency and only adds thread-switching overhead, so single-threaded state-machine programming is the most efficient; on a multi-core server, multithreading can fully exploit the machine's performance;

(3) Memory allocation: use a memory pool;
(4) Lock contention: minimize contention for locks;
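One standard way to reduce lock contention, shown here as a sketch (the class is illustrative, not from the notes): split one hot shared counter into several independently locked stripes, so threads updating it rarely collide on the same lock. The sum is read by combining the stripes.

```python
import threading

class StripedCounter:
    """Lock striping: N stripes, each with its own lock, cut contention
    on a hot counter compared with a single global lock."""

    def __init__(self, stripes=8):
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._counts = [0] * stripes

    def add(self, n=1):
        # Each thread hashes to a stripe; different threads usually
        # take different locks, so they do not serialize on one lock.
        i = threading.get_ident() % len(self._locks)
        with self._locks[i]:
            self._counts[i] += n

    def value(self):
        return sum(self._counts)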

Everyone is welcome to learn and exchange ideas; where this falls short, criticism and corrections are welcome. Please indicate the source when reprinting, and thank you for your support. If you like my blog, keep following me, and let us grow and progress together.

Life is wonderful. Remove the impurities and keep the pure pursuit of exploring the limits of programming.
