Experience configuring virtual hosts with Squid 2.6 web acceleration

Source: Internet
Author: User

Kanwi.cn

I run a web server with a daily traffic of about 100,000 visits and several virtual hosts on it. Recently I installed Squid 2.6 for web acceleration, with Squid and Apache on the same machine, and the effect is very noticeable. Many people on the forum have asked how to configure Squid 2.6 to support virtual hosts, so I am sharing the installation process here to give newcomers a chance to learn, and I welcome the old hands to criticize and correct me.

Host configuration: CPU: AMD64 Sempron 3100; memory: 2 GB RAM

Download: wget http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE6.tar.bz2

Unpack: tar jxvf squid-2.6.STABLE6.tar.bz2

Install: ./configure --with-maxfd=65536

The --with-maxfd option raises the number of file descriptors available to Squid to 65536.
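For reference, the complete build would look roughly like this (a sketch assuming the default install prefix of /usr/local/squid, which matches the paths used later in this article):

cd squid-2.6.STABLE6
./configure --with-maxfd=65536
make
make install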

After installation, configure /usr/local/squid/etc/squid.conf.
 
visible_hostname www.yoursite.com
http_port xx.xx.xx.xx:80 vhost vport

# xx.xx.xx.xx is the IP address of this server.

icp_port 0

cache_mem 400 MB

# Set the memory cache used by Squid to 400 MB; adjust to your own server.

cache_swap_low 90
cache_swap_high 95

maximum_object_size 20000 KB

# Maximum size of an object that will be cached; anything larger is not cached. Adjust to your own needs.

maximum_object_size_in_memory 4096 KB

# Maximum size of an object kept in the memory cache. This setting has a large impact on Squid's performance: the default is only 8 KB, so anything larger than 8 KB never stays in memory, yet in practice many web pages and images exceed 8 KB. In my view, an object that is only cached on disk performs about the same as Apache reading the file from disk directly; accessing Apache directly might even be faster. With this setting, objects up to 4 MB can be kept in the memory cache.

cache_dir ufs /tmp1 10000 16 256

# Disk cache type, directory, size (in MB), and number of level-1 and level-2 subdirectories. Here the disk cache is 10 GB.

cache_store_log none

# Do not record store.log.

emulate_httpd_log on

# Enable emulate_httpd_log so that Squid logs in Apache's format.

logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

# Define the combined log format.

pid_filename /var/log/squid.pid
cache_log /var/log/squid/cache.log
access_log /var/log/squid/access.log combined

# Locations of the pid file and log files; adjust to your own layout. The access log uses the combined format, so awstats can read and analyze it directly.
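As an illustration (these awstats settings are not part of the original article, and the file location is an assumption), the awstats site configuration file can be pointed at this log like so:

# awstats site configuration: read Squid's access log in combined format
LogFile="/var/log/squid/access.log"
# LogFormat=1 is awstats' code for the NCSA combined (Apache combined) format
LogFormat=1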

acl all src 0.0.0.0/0.0.0.0

acl QUERY urlpath_regex cgi-bin .php .cgi .avi .wmv .rm .ram .mpg .mpeg .zip .exe
cache deny QUERY

# Paths and file types that should not be cached.
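For comparison (the default is not quoted in the original article), the stock squid.conf ships with a narrower version of this rule that only skips CGI paths and URLs containing a query string; the list above extends the same idea to large media and archive files:

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY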


acl picurl url_regex -i .bmp$ .png$ .jpg$ .gif$ .jpeg$
acl mystie1 referer_regex -i aaa
http_access allow mystie1 picurl
acl mystie2 referer_regex -i bbb
http_access allow mystie2 picurl

# Anti-leech (hotlink protection) for images: aaa and bbb are the domain names of the two virtual hosts. The Referer must contain aaa or bbb for an image request to be allowed.

acl nullref referer_regex -i ^$
http_access allow nullref
acl hasref referer_regex -i .+
http_access deny hasref picurl

# Allow requests with an empty Referer to fetch images directly, and deny image requests whose Referer contains neither aaa nor bbb.
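Once Squid is running, the anti-leech rules can be checked from the command line with curl (a sketch only; the image path and the referring pages are placeholders):

# Referer contains an allowed domain: Squid should let the request through (200 if the file exists)
curl -s -o /dev/null -w "%{http_code}\n" -e "http://www.aaa.com/page.html" http://www.aaa.com/images/logo.gif

# Foreign Referer: Squid should deny the request (403)
curl -s -o /dev/null -w "%{http_code}\n" -e "http://other.example.com/page.html" http://www.aaa.com/images/logo.gif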

cache_peer xx.xx.xx.xx parent 81 0 no-query originserver login=PASS

# xx.xx.xx.xx is the IP address of the local server, and 81 is the Apache port. If any of your virtual hosts has a directory protected by a username and password, login=PASS must be set; otherwise authentication will fail.
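Because http_port is declared with vhost and cache_peer points at the origin server, Squid forwards the client's Host header to Apache on port 81, which is what keeps name-based virtual hosts working. A quick way to confirm this once everything is running (curl assumed; xx.xx.xx.xx is the same placeholder IP) is to request the same path with different Host headers:

curl -s -H "Host: www.aaa.com" http://xx.xx.xx.xx/ | head
curl -s -H "Host: www.bbb.com" http://xx.xx.xx.xx/ | head

Each command should return the front page of the corresponding virtual host.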

cache_effective_user nobody
cache_effective_group nobody

# The user and group that Squid runs as.

The Squid configuration is complete!

Create the cache and log directories and change their ownership so that Squid can write to them:
mkdir /tmp1
mkdir /var/log/squid
chown -R nobody:nobody /tmp1
chmod 755 /tmp1
chown -R nobody:nobody /var/log/squid
-----------------------
Changes needed in the Apache configuration
 
Listen 81

# Change the port to 81

NameVirtualHost xx.xx.xx.xx:81
# Host IP address and port

Virtual host configuration:

<VirtualHost xx.xx.xx.xx:81>
    ServerAdmin xxx@yahoo.com
    DocumentRoot /home/aaa/www
    ServerName aaa.com
    ServerAlias www.aaa.com
    ScriptAlias /cgi-bin/ "/home/aaa/cgi-bin/"
    <Directory "/home/aaa/www">
        Options Indexes FollowSymLinks
        AllowOverride All
    </Directory>
</VirtualHost>

If there are other virtual hosts, configure them the same way; see the sketch below for an example.
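For example, a second virtual host for bbb.com could mirror the block above (a sketch only; the document root and other paths are placeholders):

<VirtualHost xx.xx.xx.xx:81>
    ServerAdmin xxx@yahoo.com
    DocumentRoot /home/bbb/www
    ServerName bbb.com
    ServerAlias www.bbb.com
    ScriptAlias /cgi-bin/ "/home/bbb/cgi-bin/"
    <Directory "/home/bbb/www">
        Options Indexes FollowSymLinks
        AllowOverride All
    </Directory>
</VirtualHost>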

---------------------------------------------------------

Restart apache: apachectl restart

----------------------------------------------------------
Before running Squid for the first time, you must create the cache directories:

/usr/local/squid/sbin/squid -z

Start Squid:

Echo "65535" type = "codeph" text = "codeph">/proc/sys/fs/file-max
Ulimit-HSn 65535
/Usr/local/squid/sbin/squid

You had better put these lines in the Squid startup script so that Squid really gets the 65536 file descriptors.
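A minimal startup script along those lines might look like the sketch below; the start/stop wrapper and the file location (for example /etc/init.d/squid) are my own assumptions, and only the three commands above come from the article:

#!/bin/sh
# Minimal Squid start/stop wrapper (sketch; adjust paths to your installation)
SQUID=/usr/local/squid/sbin/squid

case "$1" in
  start)
    # Raise the system-wide and per-process file-descriptor limits
    # before starting Squid, so it really gets the descriptors it was built for.
    echo "65535" > /proc/sys/fs/file-max
    ulimit -HSn 65535
    $SQUID
    ;;
  stop)
    $SQUID -k shutdown
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac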

It is also a good idea to edit the /etc/hosts file and add the following content:

xx.xx.xx.xx aaa.com www.aaa.com bbb.com www.bbb.com

This eliminates the need to query DNS, which is faster.


Now everyone will be eager to open a browser and visit the site to see the effect. At first nothing seems to change: the acceleration only shows once Squid has loaded the hot files into memory. You can watch Squid's memory usage with the top command, or run:

cat /var/log/squid/access.log | grep TCP_MEM_HIT

If you see many TCP_MEM_HIT entries, those objects are being served from the memory cache and Squid has taken effect. Open one of them in a browser; it should load like lightning. Done! There are other hit types as well, such as TCP_HIT, which means the object was read from disk. In my view that kind of hit adds little speed; it mainly relieves the load on Apache.
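A quick way to get an overview of all hit types (assuming the combined logformat defined above, where the Squid status such as TCP_MEM_HIT:NONE is the last field of each line):

awk '{print $NF}' /var/log/squid/access.log | sort | uniq -c | sort -rn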

Note: my server hosts mostly static pages with a daily access volume of about 100,000. Apache was often unable to cope, and the number of httpd processes frequently reached 300 or even 400. After Squid 2.6 was installed to take over most of the requests, the server was relieved: the speed improved, the system load dropped, and the number of processes now stays between about 100 and 120. The server is still standing. However, Squid is rather memory-hungry; if the server could be upgraded to 4 GB of memory, things would be much better.
