High-Performance HTTP Accelerator Varnish 3.0.3: Setup, Configuration, and Optimization

Source: Internet
Author: User
Tags: varnish

After a day of hard work, we finally deployed the Varnish cache server on our production server. Let's take a look at it. Varnish is a lightweight cache and reverse proxy; its advanced design concept and well-crafted framework are its main strengths. The following are some of Varnish's features:


  • Memory-based cache; cached data is lost after a restart;

  • Uses virtual memory, giving good I/O performance;

  • Supports setting precise cache times in the range of 0 to 60 seconds;

  • VCL configuration management is flexible;

  • The maximum size of cached files on 32-bit machines is 2 GB;

  • Powerful management functions;

  • Clever state machine design and clear structure;

  • Uses a binary heap to manage cached objects, so expired objects can be deleted proactively;

Before installing Varnish, make sure pcre is installed: if it is missing, compiling Varnish 2.x or later will fail with a message that the pcre library cannot be found. Varnish relies on pcre for Perl-compatible regular expression matching, so install the pcre library first. The pcre installation process is as follows:
First, download the pcre package:
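For example, something along these lines (the mirror URL and pcre version below are only placeholders; use whichever pcre release you prefer):

```
# download the pcre source into a build directory
# (URL and version are placeholders -- substitute the release you actually use)
cd /usr/local/src
wget http://downloads.sourceforge.net/project/pcre/pcre/8.32/pcre-8.32.tar.gz
```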
Unpack the package, then compile and install it:
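A sketch of the build, assuming the pcre-8.32 tarball from the previous step and an installation prefix of /usr/local/pcre:

```
tar zxvf pcre-8.32.tar.gz
cd pcre-8.32
./configure --prefix=/usr/local/pcre   # install pcre under its own prefix
make && make install
```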
At this point the pcre library is installed. Next, create a varnish user and group, and create the Varnish cache and log directories:
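A possible set of commands (the /data/varnish cache and log paths are only examples; pick directories that suit your own layout):

```
groupadd varnish
useradd -g varnish -s /sbin/nologin varnish     # service account, no login shell
mkdir -p /data/varnish/cache /data/varnish/logs
chown -R varnish:varnish /data/varnish
```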
Varnish can be installed now. Here, Varnish is installed under the /usr/local/ directory. The operations are as follows:
Download the latest Varnish-3.0.3 software package:
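For example (the download URL is a placeholder; fetch the varnish-3.0.3 tarball from wherever you normally obtain it):

```
cd /usr/local/src
wget http://repo.varnish-cache.org/source/varnish-3.0.3.tar.gz
```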
Set the installation parameters and then compile and install them:
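A sketch of the build, assuming pcre was installed under /usr/local/pcre as above and Varnish is installed into /usr/local/varnish:

```
tar zxvf varnish-3.0.3.tar.gz
cd varnish-3.0.3
# let configure find the pcre built in the previous step
export PKG_CONFIG_PATH=/usr/local/pcre/lib/pkgconfig
./configure --prefix=/usr/local/varnish
make && make install
```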
Copy the Varnish configuration file and service script into the system:
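Something along these lines; the redhat/ file names below are assumptions based on what the Varnish source tree usually ships, so adjust them to match your tarball:

```
cd /usr/local/src/varnish-3.0.3
cp redhat/varnish.initrc    /etc/init.d/varnish       # service script
cp redhat/varnish.sysconfig /etc/sysconfig/varnish    # daemon options
```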
At this point the Varnish installation is complete. Now configure it. Before configuring Varnish, it helps to understand how Varnish processes a request:

Varnish processes HTTP requests in the following steps:
1> Receive state: the entry point for request processing. Based on the VCL rules, Varnish decides whether the request should Pass, Pipe, or enter Lookup (a local cache query).
2> Lookup state: the requested object is looked up in the hash table. If it is found, the request enters the Hit state; otherwise it enters the Miss state.
3> Fetch state: a request is sent to the backend, and the response data is fetched and stored locally.
4> Deliver state: the data is sent to the client, completing the request.
With that picture of how Varnish works, here is an example configuration. The configuration file syntax varies between versions; this configuration is written for Varnish 3.x.
After Varnish is installed, the default configuration file is /usr/local/varnish/etc/varnish/default.vcl, whose contents are all commented out by default. Here we use that file as a template to create a new file, vcl.conf, and put it in the /usr/local/varnish/etc directory. The vcl.conf file is as follows:
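As a minimal sketch, a Varnish 3.x vcl.conf might look like the following; the backend address, the TTL values, and the X-Cache response header are illustrative assumptions rather than the exact configuration used on the server:

```
# vcl.conf -- minimal Varnish 3.x example (values are illustrative)
backend default {
    .host = "127.0.0.1";   # backend web server
    .port = "8080";
}

sub vcl_recv {
    # only cache simple, cookie-less GET/HEAD requests
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }
    if (req.http.Cookie) {
        return (pass);
    }
    return (lookup);
}

sub vcl_fetch {
    # cache static files longer than dynamic pages
    if (req.url ~ "\.(png|gif|jpg|css|js)$") {
        set beresp.ttl = 1d;
    } else {
        set beresp.ttl = 300s;
    }
    return (deliver);
}

sub vcl_deliver {
    # expose whether the response was served from cache
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);
}
```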
When Varnish was installed, its management scripts were copied to the corresponding directories; they only need slight modification. First, modify the /etc/sysconfig/varnish file. The configured file is as follows:
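As a sketch, the variable names below follow the stock Red Hat sysconfig file for Varnish; the port, admin address, and storage values are examples only:

```
# /etc/sysconfig/varnish -- example values only
VARNISH_VCL_CONF=/usr/local/varnish/etc/vcl.conf      # the vcl.conf created above
VARNISH_LISTEN_PORT=80                                # port Varnish serves traffic on
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=3500
VARNISH_STORAGE_FILE=/data/varnish/cache/varnish_cache.data
VARNISH_STORAGE_SIZE=2G                               # at most 2 GB on a 32-bit system
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
VARNISH_TTL=120
```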
Note that on a 32-bit operating system the cache file varnish_cache.data can be at most 2 GB. If a larger cache file is required, a 64-bit operating system must be installed.
The next file to modify is /etc/init.d/varnish. Find the following section and modify the corresponding paths:
Here, exec specifies the path to varnishd; simply point it at the varnishd binary under the Varnish installation path. config specifies the path of the Varnish daemon configuration file.
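Concretely, the two lines end up looking roughly like this (the paths assume the /usr/local/varnish prefix used above):

```
# /etc/init.d/varnish
exec="/usr/local/varnish/sbin/varnishd"    # varnishd under the install prefix
config="/etc/sysconfig/varnish"            # daemon configuration file
```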
After the two files are modified, make the /etc/init.d/varnish script executable and register it as a service. The steps are as follows:
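For example (chkconfig assumes a Red Hat style system, which is what the init script targets):

```
chmod +x /etc/init.d/varnish      # make the script executable
chkconfig --add varnish           # register it as a system service
chkconfig varnish on              # start it automatically at boot
```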
Start varnish as follows:
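For instance:

```
/etc/init.d/varnish start
# or equivalently, on Red Hat style systems:
service varnish start
```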
View the running status:
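A couple of quick checks (port 80 matches the example sysconfig above):

```
ps -ef | grep varnishd            # the management and child processes should be running
netstat -tnlp | grep :80          # varnishd should be listening on the configured port
```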
As shown above, Varnish has started successfully. Now you can verify that Varnish is doing its job, for example with curl:
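For example (www.example.com stands in for your own site; the X-Cache header only appears if vcl_deliver sets it, as in the sketch above):

```
curl -I http://www.example.com/index.html
# response headers to look for:
#   Via: 1.1 varnish          -- the response passed through Varnish
#   Age: <seconds in cache>   -- greater than 0 means the object came from the cache
#   X-Cache: HIT              -- set by the vcl_deliver sketch above
```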
The URL has been cached. The cache hit rate directly reflects Varnish's running state and effectiveness: a high hit rate means Varnish is running well and the web server's performance improves considerably, while a low hit rate suggests the Varnish configuration has problems and needs to be adjusted. An overall understanding of Varnish's hit rate and cache status is therefore crucial for optimizing and tuning it.
Varnish provides the varnishstat command, from which you can obtain a lot of important information. The cache status of a running Varnish system can be viewed as follows:
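For instance (the -1 flag prints the counters once instead of the full-screen view):

```
/usr/local/varnish/bin/varnishstat        # interactive, continuously refreshing view
/usr/local/varnish/bin/varnishstat -1     # dump all counters once and exit
```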
After varnishstat is executed, it takes over the screen and refreshes automatically. For convenience, the counters of interest are listed below. Note the following points:

  • "Client connections accepted": the total number of client connections successfully accepted by the proxy server.
  • "Client requests received": the total number of HTTP requests that browsers sent to the reverse proxy server. Because persistent connections are used, this value is generally larger than "Client connections accepted".
  • "Cache hits": the number of requests the proxy server found and served from its cache.
  • "Cache misses": the number of requests that had to go directly to the backend host, i.e. cache misses.
  • "N struct object": the number of objects currently cached.
  • "N expired objects": the number of cached objects that have expired.
  • "N LRU moved objects": the number of cached objects evicted by the LRU algorithm.

The installation and configuration of Varnish are now complete. Whether Varnish runs stably and quickly afterwards depends on operating-system tuning and on Varnish's own parameter settings, so after installing and configuring Varnish you should optimize both the operating system and the Varnish configuration parameters to get the most out of it.
Kernel parameters are an interface between the user and the system kernel; through this interface, kernel settings can be updated dynamically while the system is running. These kernel parameters are exposed through the Linux proc file system, so Linux performance can be tuned by adjusting values in the proc file system.
Modify the /etc/sysctl.conf file for this optimization. The specific parameters are as follows:
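The exact values depend on hardware and traffic; a representative sketch of the kind of parameters typically tuned for a busy reverse proxy looks like this (example values only):

```
# /etc/sysctl.conf -- example values; tune for your own hardware and load
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.ip_local_port_range = 1024 65000
net.core.somaxconn = 4096

# apply without rebooting:
#   sysctl -p
```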
Also adjust the ulimit settings. By default, values set with ulimit are only temporary and are lost the next time the machine is restarted. To make them take effect on every boot, put the ulimit settings in the /etc/rc.d/rc.local file. The specific settings are as follows:
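For example (65535 is just a common choice for the open-file limit):

```
# append to /etc/rc.d/rc.local so it is applied on every boot
ulimit -SHn 65535     # raise the soft and hard open-file limits
```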
With the Linux system optimized, now optimize Varnish itself. Open the /etc/sysconfig/varnish startup script; the optimization parameters are as follows:
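A sketch of the kind of tuning options passed to varnishd through DAEMON_OPTS; the parameter names are standard varnishd 3.x run-time parameters, but the values are examples to be adjusted for your workload:

```
# /etc/sysconfig/varnish -- extra varnishd tuning (example values)
DAEMON_OPTS="-p thread_pools=4 \
             -p thread_pool_min=200 \
             -p thread_pool_max=4000 \
             -p thread_pool_add_delay=2 \
             -p listen_depth=4096 \
             -p lru_interval=60"
```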
At this point, the installation, configuration, and optimization of Varnish are complete. I hope the content David has shared is useful to you. If there are any mistakes, please point them out. That's all for today!
