After a day's effort, the Varnish cache server is finally deployed on a production machine, so let me share the experience while it is still fresh. Varnish is a lightweight caching and reverse-proxy server whose main strengths are its advanced design concept and mature framework. Some of Varnish's characteristics:
The cache is held in memory, so cached data disappears after a restart;
It uses virtual memory, so I/O performance is good;
Cache lifetimes can be set precisely, in the range of 0-60 seconds;
VCL makes configuration management very flexible;
On 32-bit machines the cache file is limited to 2GB;
It has powerful management facilities;
The state machine design is ingenious and the structure is clear;
Cached objects are managed with a binary heap, so expired objects can be deleted proactively;
Before installing Varnish, note that it depends on the PCRE library (Perl Compatible Regular Expressions). If the system does not have PCRE installed, compiling Varnish 2.x or later will fail because the PCRE library cannot be found, so install PCRE first. The PCRE installation process is as follows:
First, download the PCRE source package:
Unpack the tarball, then compile and install:
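The commands here are a sketch; the PCRE version number (8.21) and download URL are assumptions, so substitute whatever release you actually downloaded:

```shell
# Download a PCRE source tarball (version and URL are placeholders):
wget http://downloads.sourceforge.net/project/pcre/pcre/8.21/pcre-8.21.tar.gz
# Unpack, compile, and install under /usr/local/pcre:
tar zxvf pcre-8.21.tar.gz
cd pcre-8.21
./configure --prefix=/usr/local/pcre
make && make install
```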
At this point the PCRE library is installed. Next, create the varnish user and group, and create the Varnish cache and log directories.
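A minimal sketch of that step; the /data/varnish paths are an assumption for this article, so use whichever cache and log locations you prefer:

```shell
# Dedicated user and group for the Varnish worker processes:
/usr/sbin/groupadd varnish
/usr/sbin/useradd -g varnish -s /sbin/nologin varnish
# Cache and log directories (paths are illustrative):
mkdir -p /data/varnish/cache /data/varnish/logs
chown -R varnish:varnish /data/varnish
```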
Now you can install Varnish. Here it is installed under the /usr/local directory, as follows:
Download the latest release, the Varnish 3.0.3 source package:
Set the installation parameters, then compile and install:
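Roughly, the build looks like this; the download URL is an assumption (use wherever you obtained the 3.0.3 tarball), and PKG_CONFIG_PATH points configure at the PCRE installed above:

```shell
wget http://repo.varnish-cache.org/source/varnish-3.0.3.tar.gz
tar zxvf varnish-3.0.3.tar.gz
cd varnish-3.0.3
# Let configure find the PCRE built earlier:
export PKG_CONFIG_PATH=/usr/local/pcre/lib/pkgconfig
./configure --prefix=/usr/local/varnish
make && make install
```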
Copy the Varnish configuration file and service script into place on the system:
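One way to do this, assuming the Red Hat-style scripts shipped in the 3.0.3 source tree (the file names inside the tarball may differ by version):

```shell
# From inside the unpacked varnish-3.0.3 source directory:
cp redhat/varnish.initrc /etc/init.d/varnish
cp redhat/varnish.sysconfig /etc/sysconfig/varnish
```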
At this point the Varnish installation is complete. Before configuring Varnish, it is worth understanding how it processes a request:
Varnish handles an HTTP request in roughly the following steps:
1> recv state: the entry state for request processing. Based on the VCL rules, Varnish decides whether the request should be passed (pass) or piped (pipe), or whether to enter lookup (a local cache query).
2> lookup state: in this state Varnish looks the request up in the hash table; if the object is found it enters the hit state, otherwise the miss state.
3> fetch state: the request is forwarded to the backend, the response data is fetched and stored locally.
4> deliver state: the data is sent to the client, completing the request.
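In VCL these states correspond to subroutines you can override; a minimal sketch of the mapping (Varnish 3.x syntax):

```vcl
sub vcl_recv {
    # recv state: choose pass, pipe, or a cache lookup
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }
    return (lookup);
}

sub vcl_fetch {
    # fetch state: backend response received; decide how long to cache it
    set beresp.ttl = 600s;
    return (deliver);
}

sub vcl_deliver {
    # deliver state: response goes back to the client
    return (deliver);
}
```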
Now that the principles of Varnish are clear, let's configure a working instance. The configuration file syntax differs somewhat between versions; the configuration below uses Varnish 3.x as the baseline.
After installation, the default configuration file is /usr/local/varnish/etc/varnish/default.vcl, whose contents are entirely commented out by default. Using that file as a template, create a new file named vcl.conf and place it under the /usr/local/varnish/etc directory. The completed vcl.conf looks like this:
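Since the original listing is not reproduced here, the following is an illustrative example of what such a vcl.conf might contain for Varnish 3.x; the backend address, file extensions, and TTLs are all assumptions to adapt to your site:

```vcl
# Backend web server (host/port are placeholders):
backend webserver {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 4s;
    .first_byte_timeout = 300s;
    .between_bytes_timeout = 60s;
}

sub vcl_recv {
    # Drop cookies on static files so they are cacheable:
    if (req.url ~ "\.(png|gif|jpg|css|js|html)$") {
        unset req.http.Cookie;
    }
    return (lookup);
}

sub vcl_fetch {
    # Cache static files longer than dynamic pages:
    if (req.url ~ "\.(png|gif|jpg|css|js)$") {
        set beresp.ttl = 30d;
    } else {
        set beresp.ttl = 300s;
    }
    return (deliver);
}

sub vcl_deliver {
    # Expose hit/miss so tests with curl can see the cache working:
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);
}
```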
When Varnish was installed, its management scripts were copied to the appropriate directories; they only need minor modification. First modify the /etc/sysconfig/varnish file. The configured file looks like this:
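The listing itself is not reproduced above, so here is an illustrative /etc/sysconfig/varnish; every value (ports, thread counts, the 2G file size) is an assumption to adjust for your environment:

```shell
# /etc/sysconfig/varnish -- illustrative values
NFILES=131072
MEMLOCK=82000
RELOAD_VCL=1
VARNISH_VCL_CONF=/usr/local/varnish/etc/vcl.conf
VARNISH_LISTEN_PORT=80
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=3500
VARNISH_MIN_THREADS=50
VARNISH_MAX_THREADS=1000
VARNISH_STORAGE_FILE=/data/varnish/cache/varnish_cache.data
VARNISH_STORAGE_SIZE=2G
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
```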
Note that on a 32-bit operating system the cache file varnish_cache.data can be at most 2GB; if you need a larger cache file, you need a 64-bit operating system.
The next file to modify is /etc/init.d/varnish; find the following lines and correct the paths:
Here, exec specifies the path to the varnishd binary, so simply point it at the varnishd file under your Varnish installation path; config specifies the path to the Varnish daemon's configuration file.
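Concretely, the two variables would end up looking something like this (assuming the /usr/local/varnish prefix used in this article):

```shell
# In /etc/init.d/varnish:
exec="/usr/local/varnish/sbin/varnishd"
config="/etc/sysconfig/varnish"
```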
After both files have been modified, make the /etc/init.d/varnish script executable and register it as a service. The process is as follows:
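A sketch of that step on a Red Hat-style system:

```shell
chmod 755 /etc/init.d/varnish
chkconfig --add varnish
chkconfig varnish on   # start at boot
```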
Finally, start Varnish:
Then check its running status:
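For example (port 80 is an assumption matching the listen port configured earlier):

```shell
/etc/init.d/varnish start
# Verify the daemon is up and listening:
ps -ef | grep varnishd
netstat -lntp | grep :80
```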
The output above shows that Varnish started successfully. You can now test the effect, for example with curl:
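A quick check is to request the same URL twice; Varnish's X-Varnish response header carries one transaction ID on a miss and two on a hit, and the Age header becomes non-zero once the object is served from cache (www.example.com is a placeholder):

```shell
curl -I http://www.example.com/index.html   # first request: miss
curl -I http://www.example.com/index.html   # second request: should hit
```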
From the output you can see that the URL has been cached. The cache hit rate directly reflects Varnish's running state and effectiveness: a high hit rate means Varnish is running well and web server performance improves considerably, while a low hit rate suggests the Varnish configuration has problems and needs adjustment. An overall picture of Varnish's hit rate and cache state is therefore essential for optimizing and tuning it.
Varnish provides the varnishstat command, from which you can obtain a great deal of important information. The following shows the cache state of one Varnish system:
Because varnishstat switches to a full-screen display when executed, the command itself is not visible in the output; for readability, the command is shown at the bottom of the results. A few fields deserve attention:
"Client connections accepted": the total number of client HTTP connections successfully accepted by the reverse proxy server.
"Client requests received": the cumulative number of HTTP requests browsers have sent to the reverse proxy so far. Because persistent connections are used, this value is generally larger than "Client connections accepted".
"Cache hits": the number of times the reverse proxy found the requested object in its own cache.
"Cache misses": the number of requests that went directly to the backend host, i.e. the number of misses.
"N struct object": the number of objects currently cached.
"N expired objects": the number of cached objects that have expired.
"N LRU moved objects": the number of cached objects evicted by the LRU policy.
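The hit rate itself is simply cache hits divided by total lookups, hits + misses. A small sketch computing it from hypothetical counter values (on a live system you would read the real counters from `varnishstat -1` instead):

```shell
# Hypothetical counters, as if read from `varnishstat -1`:
hits=9999
misses=501
awk -v h="$hits" -v m="$misses" \
    'BEGIN { printf "cache hit ratio: %.2f\n", h / (h + m) }'
# prints "cache hit ratio: 0.95"
```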
The installation and configuration of Varnish are now basically complete. Whether Varnish runs stably and fast after installation depends heavily both on optimization of Linux itself and on the Varnish parameter settings. After installing Varnish, you should therefore optimize the server along two axes, the operating system and the Varnish configuration parameters, to get the maximum performance advantage out of Varnish.
Kernel parameters are an interface between users and the system kernel: through them, users can dynamically update the kernel configuration while the system is running. These kernel parameters are exposed through the Linux proc file system, so Linux performance can be improved by tuning the proc file system.
To optimize, modify the /etc/sysctl.conf file as follows:
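An illustrative set of additions; the exact values depend on the server's memory and traffic, so treat these numbers as starting points, not prescriptions:

```shell
# Appended to /etc/sysctl.conf, then applied with `sysctl -p`:
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.core.somaxconn = 8192
net.core.netdev_max_backlog = 8192
```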
Next, modify the ulimit settings. By default, ulimit settings apply only to the current session and are lost when the machine is restarted. To have them take effect on every boot, place the ulimit settings in the /etc/rc.d/rc.local file. The specific parameters are as follows:
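For example (131072 open files is an assumption; match it to the NFILES setting in /etc/sysconfig/varnish):

```shell
# Appended to /etc/rc.d/rc.local so it applies on every boot:
ulimit -SHn 131072   # max open file descriptors per process
```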
At this point the Linux system optimization is complete. Now optimize Varnish itself: open the /etc/sysconfig/varnish startup script and tune the parameters as follows:
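Illustrative tuning values; the thread-pool parameters below exist in Varnish 3.x, but the numbers themselves are assumptions to benchmark against your own load:

```shell
# In /etc/sysconfig/varnish:
VARNISH_MIN_THREADS=200          # worker threads kept ready
VARNISH_MAX_THREADS=4000         # upper bound across all pools
DAEMON_OPTS="-p thread_pools=4 \
             -p thread_pool_min=50 \
             -p thread_pool_max=1000 \
             -p thread_pool_add_delay=2 \
             -p session_linger=50"
```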
At this point the installation, configuration, and optimization of Varnish are essentially complete. I hope what David has shared here is useful to everyone; if there are any problems, corrections are welcome. That's it for today!