Series index: http://www.cnblogs.com/lovecindywang/archive/2012/12/23/2829828.html
System Level:
The so-called high availability means avoiding single points of failure and achieving rapid failover, so that service can be restored quickly when a physical server fails. Two approaches are common. If the business can be load balanced, use a load balancer to build a cluster and monitor each server; when one fails, it is removed from the cluster. If the service has only a single entry point, add a standby machine with a virtual IP mechanism, so that after the active machine fails the virtual IP is transferred to the standby for fast failover. keepalived or heartbeat is commonly used to implement this (hardware solutions also exist, but are not discussed here).
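As a hedged sketch of the virtual-IP failover just described, a minimal keepalived VRRP block might look like the following; the interface name, IP address, and password are placeholders, not values from the original article:

```conf
# /etc/keepalived/keepalived.conf — minimal sketch; adjust to your network
vrrp_instance VI_1 {
    state MASTER            # set to BACKUP on the standby machine
    interface eth0          # NIC that will carry the virtual IP (assumption)
    virtual_router_id 51
    priority 100            # standby uses a lower priority, e.g. 90
    advert_int 1            # VRRP advertisement interval in seconds
    authentication {
        auth_type PASS
        auth_pass example   # shared secret, placeholder
    }
    virtual_ipaddress {
        192.168.0.100       # the virtual IP that moves to the standby on failure
    }
}
```

When the master stops sending VRRP advertisements, the standby (with the lower priority) takes over the virtual IP, which is the fast failover behavior described above.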
The so-called high scalability refers to horizontal scalability: the system's processing capacity grows by adding machines rather than by upgrading machine configurations. Load balancing is a typical horizontally scalable architecture; in addition, services can be split by function across different servers. Load balancing is easy for stateless web services, but horizontal scaling is difficult at the database level, especially for database write operations. LVS or haproxy is commonly used to implement load balancing (hardware solutions also exist, but are not discussed here).
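A minimal haproxy configuration sketch for the stateless-web-service case above; backend names, addresses, and the health-check path are illustrative assumptions:

```conf
# /etc/haproxy/haproxy.cfg — hedged sketch, not a production configuration
frontend web
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin          # distribute requests evenly across servers
    option httpchk GET /health  # health-check path (assumption)
    server web1 192.168.0.11:8080 check
    server web2 192.168.0.12:8080 check
```

Scaling out then means adding another `server` line and reloading haproxy, which is exactly the "expand the number of machines" approach described above.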
For the website front end, a reverse proxy is usually used to implement caching and load balancing for the servers behind it. The cache stores the output HTML pages or HTML fragments in memory or on disk, reducing the load on the web servers. squid or varnish can be used to implement a reverse proxy.
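A minimal varnish sketch for the HTML-caching role described above, assuming a single web server on 127.0.0.1:8080 and a 5-minute fallback TTL (both assumptions):

```conf
# default.vcl — hedged sketch of a varnish reverse-proxy cache
vcl 4.0;

backend default {
    .host = "127.0.0.1";   # the web server being protected (assumption)
    .port = "8080";
}

sub vcl_backend_response {
    # If the backend did not set cacheability, keep generated HTML for 5 minutes
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 300s;
    }
}
```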
To further accelerate page access, a CDN can be used for static resources, images, and even dynamic resources. CDN providers have servers at backbone nodes throughout the country, allowing users everywhere to fetch these resources at high speed. Of course, the first request for a static resource still hits our own static resource server; after that, the resource is cached on the CDN server for a period of time. A CDN not only accelerates client access but also reduces the load on our servers. However, if a page is served through both a CDN and a reverse proxy cache, updating it becomes troublesome: a copy may exist on the client, on the CDN server, and on the reverse proxy. In that case you need tools to determine at which stage the content is cached.
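One practical way to tell which layer served a cached copy is to inspect response headers such as `Age`, `Via`, and `X-Cache`. The sketch below is a hedged illustration: `X-Cache` is a common vendor convention, not a standard, and the matching rules here are assumptions that would need adapting to the specific CDN and proxy in use.

```python
def cache_layer(headers):
    """Guess which layer answered a request, from its response headers.

    headers: dict mapping lower-cased header names to values.
    Returns 'cdn', 'reverse-proxy', or 'origin'.
    """
    x_cache = headers.get("x-cache", "").lower()
    via = headers.get("via", "").lower()

    # Many CDNs mark cache hits in X-Cache, e.g. "HIT from cdn-node-21"
    if "hit" in x_cache and "cdn" in x_cache:
        return "cdn"
    # squid and varnish usually announce themselves in Via or X-Cache
    if "hit" in x_cache or "varnish" in via or "squid" in via:
        return "reverse-proxy"
    return "origin"

# Example: a response served from a varnish reverse-proxy cache
print(cache_layer({"via": "1.1 varnish", "x-cache": "HIT"}))  # reverse-proxy
```

In practice you would feed this the headers of a real response (for example, from an HTTP client), and compare results across repeated requests to see where the cached copy lives.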
- Operating System Parameters
After obtaining a server, the operating system configuration is often at its defaults. Check whether system parameters such as the number of TCP connections and the maximum number of file handles need to be adjusted, to avoid the program becoming unavailable due to operating system restrictions.
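As a hedged sketch, these are kernel parameters commonly tuned for the limits mentioned above; the exact values are illustrative and depend on workload and memory:

```conf
# /etc/sysctl.conf — commonly tuned parameters (values are examples only)
fs.file-max = 655350                        # system-wide maximum number of file handles
net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for outbound connections
net.ipv4.tcp_tw_reuse = 1                   # allow reuse of TIME_WAIT sockets
net.core.somaxconn = 4096                   # larger TCP accept backlog

# /etc/security/limits.conf — raise the per-process open-file limit
# *  soft  nofile  65535
# *  hard  nofile  65535
```

Apply sysctl changes with `sysctl -p` and verify the per-process limit with `ulimit -n` in a new session.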
Web servers such as nginx or Apache, and Java application servers such as Tomcat or JBoss, also have parameters that should be tuned according to the server's hardware and published best practices; the default configuration usually does not suit servers with relatively high specifications. For example, Java is a garbage-collected language, and a very large heap may cause excessively long garbage collection pauses. Therefore, on servers with large memory, multiple 32-bit JVMs are often run side by side instead of a single 64-bit JVM with more than 16 GB of heap. We need to understand the meaning of the relevant server parameters and set them appropriately.
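As a hedged sketch of the multiple-JVM approach above, each instance could be started with a bounded heap; the values below are illustrative, and the GC-logging flags shown are the classic pre-JDK 9 ones:

```conf
# JAVA_OPTS for one of several JVM instances on a large-memory server.
# A 32-bit JVM caps the heap well below 4 GB, so several instances are
# run side by side, each with a fixed, modest heap.
JAVA_OPTS="-Xms2g -Xmx2g -XX:+PrintGCDetails -Xloggc:gc.log"
```

Setting `-Xms` equal to `-Xmx` avoids heap resizing, and the GC log makes it possible to check whether collection pauses stay acceptable at that heap size.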