Squid: The Definitive Guide (Chinese translation)

From: http://bbs.chinaunix.net/viewthread.php?tid=586242

Preface:
I maintain several Squid servers at work, and I have read Duane Wessels (who is also the founder of Squid) many times. His book, originally titled "Squid: The Definitive Guide", was published by O'Reilly. I have translated it into Chinese in my spare time, hoping it will help Chinese Squid users. For ordinary Internet users, Squid serves as a proxy server; for large sites such as Sina and NetEase, it acts as a Web accelerator. It performs exceptionally well in both roles. The open source world is as beautiful as the stars, and Squid is one of the most dazzling among them.
Please contact me if you have any questions about this translation. My email is yonghua_peng@yahoo.com.cn. Peng Yonghua

--------------------------------------------------------------------------------------

Chapter 8: Advanced Disk Cache Topics

8.1 Is There a Disk I/O Bottleneck?

Web caches such as Squid don't usually come right out and tell you when disk I/O has become a bottleneck. Instead, response time and/or hit ratio degrade as the load increases. Of course, response time and hit ratio can change for other reasons as well, such as increased network latency and changes in client request patterns.

Perhaps the best way to detect a cache performance bottleneck is to run stress tests with a tool such as Web Polygraph. The premise of a stress test is that you have full control over the environment and can eliminate unknown factors. You can repeat the same test with different cache configurations. Unfortunately, stress testing usually takes a lot of time and requires an idle system (which may already be in production use).

If you have the resources to stress-test Squid, start with a standard caching workload. As you increase the load, at some point you will see a significant increase in response latency and/or a drop in hit ratio. Once you see this performance degradation, disable the disk cache and run the test again. You can configure Squid never to cache any response (using the null storage scheme, see Section 8.7). Alternatively, you can configure the workload to consist of 100% uncachable responses. If the average response time is significantly better without caching, you can be confident that disk I/O is the bottleneck at that level of throughput.
If you don't have the time or resources to stress-test Squid, you can examine Squid's runtime statistics to look for a disk I/O bottleneck. The cache manager's general runtime information page (see Chapter 14) shows the median response times for cache hits and cache misses:
Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):   0.39928  0.35832
        Cache Misses:          0.42149  0.39928
        Cache Hits:            0.12783  0.11465
        Near Hits:             0.37825  0.39928
        Not-Modified Replies:  0.07825  0.07409

For a healthy Squid cache, hits are significantly faster than misses. Median hit response times are typically 0.5 seconds or less. I strongly recommend using SNMP or another network monitoring tool to collect regular measurements from your Squid caches. A significant increase in the median hit response time is a good indication that the system has a disk I/O bottleneck.
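
A minimal polling sketch along those lines, assuming squidclient is installed and the cache manager is reachable on localhost port 3128 (adjust the host and port for your installation):

#!/bin/sh
# Record the "Median Service Times" block from the cache manager every
# five minutes, so a rising hit response time can be spotted over time.
while true; do
    date
    squidclient -h localhost -p 3128 mgr:info | grep -A 6 'Median Service Times'
    sleep 300
done

Feeding this output into your existing monitoring system, or a simple plot, makes the trend easy to see.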

If you believe a production cache is suffering from this problem, you can verify your suspicion with the same technique described earlier. Configure Squid not to cache any responses, thus avoiding all disk I/O. Then closely observe the cache miss response time. If it goes down, your suspicion was probably correct.

Once you are sure that disk throughput is Squid's performance bottleneck, there is a lot you can do to improve it. Some of these techniques require recompiling Squid, while others are relatively simple adjustments to the Unix filesystem.

8.2 Filesystem Tuning Options

First, never use RAID for Squid cache directories. In my experience, RAID always degrades the performance of filesystems used by Squid. It is better to have a number of separate filesystems, each dedicated to a single disk drive.

I have found four simple ways to improve Squid's UFS performance. Some of them are specific to particular operating systems, such as BSD and Linux, and may not be available on your platform:

1. Some UFS implementations support a noatime mount option. A filesystem mounted with noatime does not update the corresponding inode access time on reads. The easiest way to use this option is to add a line like the following to /etc/fstab (a remount example follows this list):
#Device         Mountpoint   FStype   Options       Dump   Pass#
/dev/ad1s1c     /cache0      ufs      rw,noatime    0      0

2. Check the async option in the mount(8) manpage. With this option set, certain I/O operations (such as directory updates) are performed asynchronously. Some systems' documentation flags this as a dangerous option: if your system ever crashes, you may lose the entire filesystem. For many Squid installations, the performance improvement is worth the risk. If you don't mind losing the entire contents of the cache, go ahead and use this option. If the cached data is very valuable, async is probably not for you.

3. BSD has a feature called soft updates. Soft updates are BSD's alternative to journaling filesystems. On FreeBSD, you can enable this option on an unmounted filesystem with the tunefs command:
# umount /cache0
# tunefs -n enable /cache0
# mount /cache0

You need to run the tunefs command only once for each filesystem. Soft updates are automatically enabled on that filesystem again when the system reboots.
4. On OpenBSD and NetBSD, you can use the softdep mount option instead:
#Device         Mountpoint   FStype   Options       Dump   Pass#
/dev/sd0f       /usr         ffs      rw,softdep    1      2
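
If you want to try the noatime or async options from items 1 and 2 without editing /etc/fstab first, most systems let you apply them to an already mounted filesystem. A hedged sketch (the mount point is an example, and the exact syntax varies by platform):

# mount -o remount,noatime /cache0     (Linux)
# mount -u -o noatime /cache0          (FreeBSD)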

If you are like me, you may wonder what the difference is between the async option and soft updates. One important difference is that the soft update code is designed to maintain filesystem consistency in the event of a system crash, while async is not. This might lead you to conclude that async performs better than soft updates. However, as I show in Appendix D, the opposite is true.

Earlier I mentioned that UFS performance, especially write performance, depends on the amount of free space on the disk. Writes to an almost empty filesystem are much faster than writes to an almost full one. This is one of the reasons behind UFS's minimum free space parameter and its space/time optimization tradeoff. If your cache disks are full and Squid's performance seems poor, try reducing the cache_dir capacity values so that more free space is available. Of course, reducing the cache size also reduces the hit ratio, but the improvement in response time may make it worthwhile. If you are buying new hardware for a Squid cache, consider using disks much larger than you need and using only half of the space.
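
As a purely illustrative sketch (the mount point and sizes are hypothetical), a cache_dir that had been sized to fill a 16-GB partition could be trimmed back to roughly half of it:

cache_dir ufs /cache0 8000 16 256

The smaller capacity value means Squid starts replacing objects sooner, keeping more free space on the disk for faster writes.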

8.3 Alternative Filesystems

Some operating systems support filesystems other than UFS (or ext2fs). Journaling filesystems are a common choice. The main difference between UFS and a journaling filesystem is the way they handle updates. In UFS, updates are made in place. For example, when you change a file and save it to disk, the new data replaces the old data. When you delete a file, UFS updates the directory directly.

A journaling filesystem, in contrast, writes updates to a separate journal, or log file. Typically you can choose whether to journal file changes, metadata changes, or both. A background process reads the journal during idle moments and applies the actual changes. Journaling filesystems typically recover much faster than UFS after a system crash: the filesystem simply reads the journal and commits any outstanding changes.

The main drawback of journaling filesystems is that they require additional disk writes. Changes are first written to the log before being written to the actual files or directories. This is particularly relevant for a Web cache, because a cache tends to perform more disk writes than reads in the first place.

Journaling filesystems are available for many operating systems. On Linux, you can choose from ext3fs, reiserfs, XFS, and others. XFS is also available on SGI/IRIX, where it was originally developed. Solaris users can use the Veritas filesystem product. The Tru64 (formerly Digital Unix) Advanced File System (advfs) supports journaling.
You can use a journaling filesystem without making any changes to Squid's configuration. Simply create and mount the filesystem as described in your operating system documentation; there is no need to change the cache_dir lines in squid.conf.

Use a command like the following to create a reiserfs filesystem on Linux:
# /sbin/mkreiserfs /dev/sda2

For XFS, use:
# mkfs -t xfs -f /dev/sda2

Note that ext3fs is simply ext2fs with journaling enabled. Use the -j option to mke2fs when creating the filesystem:
# /sbin/mke2fs -j /dev/sda2

Refer to your operating system's documentation for other platforms.

8.4 The aufs Storage Scheme

The aufs storage scheme evolved out of the very first attempt to improve Squid's disk I/O response time. The "a" stands for asynchronous I/O. The only difference between the default ufs scheme and aufs is that I/O is not performed by the main Squid process. The data layout is the same, so you can easily switch between the two without losing any cached data.

aufs uses a number of threads for disk I/O operations. Each time Squid needs to read, write, open, close, or delete a cache file, the I/O request is dispatched to one of these threads. When a thread finishes the I/O, it signals the main Squid process and returns a status code. In Squid 2.5, certain file operations are not executed asynchronously by default. Most notably, disk writes are always performed synchronously. You can change this by editing src/fs/aufs/store_asyncufs.h, setting ASYNC_WRITE to 1, and recompiling Squid.

The aufs code requires a pthreads library. This is the standard threads interface defined by POSIX. Even though many Unix systems support pthreads, I frequently encounter compatibility problems. The aufs storage scheme seems to run well only on Linux and Solaris. On other operating systems, the code may compile, but you may run into serious problems.

To use aufs, add it to the --enable-storeio option when running ./configure:
% ./configure --enable-storeio=aufs,ufs

Strictly speaking, you don't have to specify ufs in the storeio module list. However, if you later decide you don't like aufs, you will need ufs in order to go back to the stable ufs storage scheme.
If you want, you can also use the --with-aio-threads=N option. If you omit it, Squid automatically calculates the number of threads to use based on the number of aufs cache_dirs. Table 8-1 shows the default number of threads for one through six cache directories.
Table 8-1. Default number of threads for up to six cache directories
cache_dirs    Threads
1             16
2             26
3             32
4             36
5             40
6             44
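
For example, to override the calculated default on a two-cache_dir installation and use 32 threads instead of 26, you might run something like the following (the thread count here is only an illustration):

% ./configure --enable-storeio=aufs,ufs --with-aio-threads=32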

After compiling aufs support into Squid, you can specify it on a cache_dir line in squid.conf:
cache_dir aufs /cache0 4096 16 256

After enabling aufs and starting Squid, make sure everything still works. You may want to run tail -f store.log for a while to confirm that objects are being swapped out to disk, and run tail -f cache.log to watch for any new errors or warnings.

8.4.1 How aufs Works

Squid creates a number of threads by calling pthread_create(). All of the threads are created before any disk activity occurs, so you will see all the threads even when Squid is idle.
Whenever Squid wants to perform some disk I/O (such as opening a file for reading), it allocates a couple of data structures and places the I/O request on a queue. The threads loop over this queue, take requests off it, and execute them. Because the request queue is shared by all threads, Squid uses mutex locks to ensure that only one thread updates the queue at a time.

The I/O operations block the thread until they are complete. Then the status of the operation is placed on a done queue. The main Squid process periodically checks the done queue for completed operations; the module that requested the disk I/O is notified that the operation is complete and receives the results.

As you may have guessed, aufs has a clear advantage on systems with multiple CPUs. The only locking occurs on the request and done queues; all other functions execute independently. While the main process runs on one CPU, the other CPUs handle the actual I/O system calls.

8.4.2 aufs Issues
An interesting property of threads is that they all share the same resources, including memory and file descriptors. For example, if one thread opens a file as descriptor 27, all the other threads can access that file with the same descriptor. As you probably know, running out of file descriptors is a common problem for first-time Squid administrators. Unix kernels typically have two file descriptor limits: a per-process limit and a system-wide limit. While you might think 256 file descriptors per process is plenty (because of all the threads), it doesn't work that way: all the threads share that small pool of descriptors. Be sure to increase your system's per-process file descriptor limit to 4096 or higher, especially when using aufs.
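
How you raise the limit depends on the operating system; the following is only a sketch of two common cases (check your own system's documentation for the exact tunables):

Linux:
# sysctl -w fs.file-max=8192
# ulimit -n 4096

FreeBSD:
# sysctl kern.maxfiles=8192
# sysctl kern.maxfilesperproc=4096

Remember that the ulimit setting applies only to the shell that starts Squid, so it belongs in your startup script.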

Tuning the number of threads can be tricky. In some cases, you may see this warning in cache.log:
13:42:47| squidaio_queue_request: WARNING - Disk I/O overloading

This means Squid has a large number of I/O requests queued up, waiting for an available thread. Your first instinct may be to increase the number of threads; I suggest, however, that you decrease it instead.

Increasing the number of threads also increases the queue size. Beyond a certain point, it does not increase aufs's load capacity; it only means that more operations become queued. Longer queues result in longer response times, which is surely something you want to avoid.

Decreasing the number of threads, and thus the queue size, means that Squid detects overload conditions sooner. When a cache_dir is overloaded, it is removed from the selection algorithm (see Section 7.4). Squid then either selects a different cache_dir or simply doesn't store the response on disk. This may be a better situation for your users: although the hit ratio goes down, response times stay relatively low.

8.4.3 Monitoring aufs Operations

The Async IO Counters option in the cache manager menu displays a few statistics relating to aufs. It shows the number of open, close, read, write, stat, and unlink requests received. For example:
% squidclient mgr:squidaio_counts
...
ASYNC IO Counters:
Operation       # Requests
open            15318822
close           15318813
cancel          15318813
write           0
read            19237139
stat            0
unlink          2484325
check_callback  311678364
queue           0

The cancel counter is normally equal to the close counter. This is because the close function always calls the cancel function to make sure that any pending I/O operations are ignored.
The write counter is 0 because this version of Squid performs writes synchronously, even for aufs.

The check_callback counter shows how many times the main Squid process has checked the done queue for completed operations.

The queue value indicates the current length of the request queue. Normally, the queue length should be less than the number of threads times 5. If you repeatedly observe a queue length larger than this, your Squid is probably misconfigured for the load. Adding more threads may help, but only up to a point.
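
A small sketch for keeping an eye on the queue length over time, assuming squidclient can reach your cache with its default settings:

#!/bin/sh
# Print a timestamp and the current aufs request queue length once a minute.
while true; do
    printf '%s ' "$(date '+%H:%M:%S')"
    squidclient mgr:squidaio_counts | grep -i queue
    sleep 60
done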

8.5 The diskd Storage Scheme

diskd (short for disk daemons) is similar to aufs in that disk I/O is performed by external processes. Unlike aufs, however, diskd doesn't use threads. Instead, interprocess communication takes place via message queues and shared memory.

Message queues are a standard feature of modern Unix operating systems. They were invented many years ago in AT&T's Unix System V Release 1. The messages passed between processes on these queues are small: 32-40 bytes. Each diskd process uses one queue to receive requests from Squid and another queue to send replies back.

8.5.1 How diskd Works

Squid creates one diskd process for each cache_dir. This is different from aufs, which uses a single large pool of threads for all cache_dirs. Squid sends a message to the corresponding diskd process for each I/O operation. When that operation is complete, the diskd process sends a status message back to Squid. Squid and the diskd processes preserve the order of messages in the queues, so there is no concern that I/O might be executed out of sequence.

For reads and writes, Squid and the diskd processes use a shared memory area. Both processes can read from and write to this area of memory. For example, when Squid issues a read request, it tells the diskd process where to place the data in memory. diskd passes that memory location to the read() system call and notifies Squid by sending a message on the return queue. Squid then accesses the recently read data from the shared memory area.

Like aufs, diskd essentially gives Squid nonblocking disk I/O. While the diskd processes are blocked on I/O operations, Squid is free to work on other tasks. This works well as long as the diskd processes can keep up with the load. Because the main Squid process is now able to do more work, it may, of course, increase the load on the diskd processes. diskd has two features to help out in this situation.
First, Squid waits for the diskd processes to catch up if the number of outstanding operations in a queue exceeds a certain limit. The default is 64 queued messages. If a diskd process falls this far behind, Squid sleeps a short time and waits for diskd to complete some pending operations. This essentially puts Squid into a blocking I/O mode. It also makes more CPU time available to the diskd processes. You can configure this limit by specifying a value for the Q2 parameter on the cache_dir line:
cache_dir diskd /cache0 7000 16 256 Q2=50

Second, Squid stops asking a diskd process to open files if the number of outstanding operations reaches another limit. The default here is 72 messages. If Squid wants to open a disk file for reading or writing, but the selected cache_dir has too many outstanding operations, the open request fails internally. When opening a file for reading, this results in a cache miss instead of a hit. When opening a file for writing, it prevents Squid from storing a cachable response. In both cases the user still receives a valid response; the only real effect is a decrease in Squid's hit ratio. This limit is configured with the Q1 parameter:
cache_dir diskd /cache0 7000 16 256 Q1=60 Q2=50

Note that in some versions of Squid, the Q1 and Q2 parameters are mixed up in the default configuration file. For optimal performance, Q1 should be greater than Q2.

8.5.2 Compiling and Configuring diskd

To use diskd, you must add it to the --enable-storeio list when running ./configure:
% ./configure --enable-storeio=ufs,diskd

diskd seems to be portable, since shared memory and message queues are widely supported on modern Unix systems. However, you may need to adjust kernel limits relating to both. Kernels typically have the following tunable parameters (a small sizing sketch follows this list):

msgmnb
The maximum number of bytes per message queue. With diskd, the practical limit is about 100 outstanding messages per queue. The messages Squid sends are 32-40 bytes, depending on your CPU architecture. Thus, msgmnb should be 4000 or more. To be safe, I recommend setting it to 8192.

msgmni
The maximum number of message queues for the whole system. Squid uses two queues for each diskd cache_dir. If you have 10 disks, that's 20 queues. You should probably allow even more in case other applications also use message queues. I recommend a value of 40.

msgssz
The size of a message segment, in bytes. Messages larger than this size are split into multiple segments. I usually set this to 64 so that a diskd message is not split into multiple segments.

msgseg
The maximum number of message segments that can exist in a single queue. Squid normally limits the queues to 100 outstanding messages. Remember that on 64-bit architectures, unless you increase msgssz to 64, each message requires more than one segment. To be safe, I recommend setting this to 512.

msgtql
The maximum number of messages that can exist in the whole system. It should be at least 100 multiplied by the number of cache_dirs. I recommend 2048, which should be enough for up to 10 cache directories.

msgmax
The maximum size of a single message. For Squid, 64 bytes is sufficient. However, other applications on your system may need larger messages. On some operating systems, such as BSD, you don't need to set this; BSD automatically sets it to msgssz * msgseg. On other operating systems you may need to increase the default value, in which case you can set it to the same value as msgmnb.

shmseg
The maximum number of shared memory segments per process. Squid uses one shared memory identifier for each cache_dir. I recommend a setting of 16 or higher.

shmmni
The system-wide limit on the number of shared memory segments. A value of 40 is probably enough in most cases.

shmmax
The maximum size of a single shared memory segment. By default, Squid uses about 409,600 bytes for each segment. Just to be safe, I recommend setting this to 2 MB, or 2,097,152.

shmall
The system-wide limit on the total amount of shared memory that can be allocated. On some systems, shmall is expressed as a number of pages rather than bytes. For a system with 10 cache_dirs, setting this to 16 MB (4096 pages) is sufficient and leaves plenty of room for other applications.
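
To make the arithmetic concrete, here is a small sketch that derives rough values from the number of cache_dirs, following the rules of thumb above. It only prints suggestions; it doesn't change any kernel settings:

#!/bin/sh
# Usage: sh diskd-sizing.sh [number-of-cache_dirs]   (defaults to 10)
NDIRS=${1:-10}
echo "diskd processes / shared memory segments: $NDIRS"
echo "message queues (2 per cache_dir):         $(( NDIRS * 2 ))"
echo "minimum msgtql (100 per cache_dir):       $(( NDIRS * 100 ))"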

To configure message queues on BSD, add options like these to your kernel configuration file:

# System V message queues and tunable parameters
options         SYSVMSG         # include support for message queues
options         MSGMNB=8192     # max characters per message queue
options         MSGMNI=40       # max number of message queue identifiers
options         MSGSEG=512      # max number of message segments per queue
options         MSGSSZ=64       # size of a message segment, must be power of 2
options         MSGTQL=2048     # max number of messages in the system
options         SYSVSHM
options         SHMSEG=16       # max shared mem segments per process
options         SHMMNI=32       # max shared mem segments in the system
options         SHMMAX=2097152  # max size of a shared mem segment
options         SHMALL=4096     # max size of all shared memory (pages)

To configure message queues on Linux, add lines like these to /etc/sysctl.conf:

kernel.msgmnb=8192
kernel.msgmni=40
kernel.msgmax=8192
kernel.shmall=2097152
kernel.shmmni=32
kernel.shmmax=16777216
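
These settings normally take effect at boot. On most Linux systems you can also load them into the running kernel right away:

# sysctl -p /etc/sysctl.conf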

Alternatively, if you need more control, you can manually edit include/linux/msg.h and include/linux/shm.h in the kernel source.

On Solaris, add lines like these to /etc/system and then reboot:

set msgsys:msginfo_msgmax=8192
set msgsys:msginfo_msgmnb=8192
set msgsys:msginfo_msgmni=40
set msgsys:msginfo_msgssz=64
set msgsys:msginfo_msgtql=2048
set shmsys:shminfo_shmmax=2097152
set shmsys:shminfo_shmmni=32
set shmsys:shminfo_shmseg=16

On Digital Unix (Tru64), you can add lines to the BSD-style kernel configuration file, as described earlier. Alternatively, you can use the sysconfig command. First, create a file called ipc.stanza like this:

ipc:
        msg-max = 2048
        msg-mni = 40
        msg-tql = 2048
        msg-mnb = 8192
        shm-seg = 16
        shm-mni = 32
        shm-max = 2097152
        shm-max = 4096

Then, run this command and reboot:
# sysconfigdb -a -f ipc.stanza

Once you have configured message queues and shared memory in the operating system, you can add cache_dir lines like these to squid.conf:
cache_dir diskd /cache0 7000 16 256 Q1=72 Q2=64
cache_dir diskd /cache1 7000 16 256 Q1=72 Q2=64
...

8.5.3 Monitoring diskd

The best way to monitor diskd is with the cache manager. Request the diskd page; for example:

% squidclient mgr:diskd
...
sent_count: 755627
recv_count: 755627
max_away: 14
max_shmuse: 14
open_fail_queue_len: 0
block_queue_len: 0

            OPS  SUCCESS     FAIL
open      51534    51530        4
create    67232    67232        0
close    118762   118762        0
unlink    56527    56526        1
read      98157    98153        0
write    363415   363415        0

See Section 14.2.1.6 for a description of this output.
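
If you suspect that the Q1 or Q2 limits described in Section 8.5.1 are being reached, open_fail_queue_len and block_queue_len are the counters to watch. A minimal sketch, assuming squidclient can reach your cache:

#!/bin/sh
# Nonzero values here mean diskd has refused opens (Q1) or blocked Squid (Q2).
squidclient mgr:diskd | grep -E 'open_fail_queue_len|block_queue_len'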

8.6 The coss Storage Scheme

The Cyclic Object Storage Scheme (coss) is an attempt to create a custom filesystem for Squid. With the basic ufs-based schemes, the primary performance bottleneck comes from the need to execute so many open() and unlink() system calls. Because each cached response is stored in a separate disk file, Squid is constantly opening, closing, and deleting files.

coss, in contrast, uses one big file to store all responses. In this sense, it is a small, custom filesystem specifically for Squid. coss implements many of the functions normally handled by the underlying filesystem, such as allocating space for new data and remembering where the free space is.

Unfortunately, coss is still somewhat immature, and its development has been slow over the past couple of years. Even so, I describe it here for the adventurous.

8.6.1 How coss Works

On disk, each coss cache_dir is one big file. The file grows until it reaches its maximum size, at which point Squid starts over at the beginning of the file, overwriting whatever is already stored there. Thus, new objects are always stored at the end of this file.

Squid doesn't actually write new object data to disk right away. Instead, the data is copied into a 1-MB memory buffer, called a stripe. A stripe is written to disk when it becomes full. coss uses asynchronous writes so that the main Squid process doesn't become blocked on disk I/O.

Like other filesystems, coss also uses the concept of a block size. In Section 7.1.4, I talked about file numbers. Each cached object has a file number that Squid uses to locate the data on disk. For coss, the file number is the same as the block number. For example, a cached object with a swap file number equal to 112 starts at the 112th block in the coss file. File numbers aren't allocated sequentially with coss; some file numbers are unavailable, because a cached object usually occupies more than one block in the coss file.

The coss block size is configurable with a cache_dir option. Because Squid's file numbers are only 24 bits, the block size determines the maximum size of a coss cache directory: maximum size = block_size x 2^24. For example, with a 512-byte block size you can store up to 8 GB in a coss cache_dir.
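
A quick way to see this relationship for a few common power-of-two block sizes (the script below just evaluates the formula above; 2^24 blocks times the block size works out to block_size x 16 in megabytes):

#!/bin/sh
# Maximum coss cache_dir size = block-size * 2^24 file numbers.
for bs in 512 1024 2048; do
    echo "block-size=$bs bytes -> maximum cache_dir size: $(( bs * 16 )) MB"
done

For instance, a 512-byte block size yields 8192 MB (8 GB), matching the figure above.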

coss doesn't implement any of Squid's normal cache replacement algorithms (see Section 7.5). Instead, cache hits are "moved" to the end of the cyclic file. This is, essentially, the LRU algorithm. Unfortunately, it does mean that cache hits cause disk writes, albeit indirectly.

With coss, there is no need to unlink or delete cached objects. Squid simply forgets about the space allocated to objects that are removed. The space is reused when the end of the cyclic file reaches that place again.

8.6.2 Compiling and Configuring coss

To use coss, you must add it to the --enable-storeio list when running ./configure:
% ./configure --enable-storeio=ufs,coss ...

coss cache directories require a max-size option. Its value must be less than the stripe size (1 MB by default, although this can be changed with the --enable-coss-membuf-size option). Also note that you must omit the L1 and L2 values used by the ufs-based schemes. Here is an example:

cache_dir coss /cache0/coss 7000 max-size=1000000
cache_dir coss /cache1/coss 7000 max-size=1000000
cache_dir coss /cache2/coss 7000 max-size=1000000
cache_dir coss /cache3/coss 7000 max-size=1000000
cache_dir coss /cache4/coss 7000 max-size=1000000

You can change the default coss block size with the block-size option:
cache_dir coss /cache0/coss 30000 max-size=1000000 block-size=2048

One tricky thing about coss is that the cache_dir directory argument (e.g., /cache0/coss) isn't actually a directory; it is a regular file that Squid opens or creates. Because of this, you can use a raw device as a coss file. If you mistakenly create the coss file as a directory, you'll see an error like this when starting Squid:

18:51:42| /usr/local/squid/var/cache: (21) Is a directory
FATAL: storeCossDirInit: Failed to open a coss file.

Because the cache_dir argument isn't a directory, you must use the cache_swap_log directive (see Section 13.6). Otherwise, Squid attempts to create a swap.state file in the cache_dir directory, and you'll see an error like this:

18:53:38| /usr/local/squid/var/cache/coss/swap.state:
        (2) No such file or directory
FATAL: storeCossDirOpenSwapLog: Failed to open swap log.

coss uses asynchronous I/O for better performance. In particular, it uses the aio_read() and aio_write() system calls. These may not be available on all operating systems. At this time, they are available on FreeBSD, Solaris, and Linux. If the coss code seems to compile okay, but you get a "Function not implemented" error message, you need to enable these system calls in your kernel. On FreeBSD, your kernel configuration file must include this option:
options VFS_AIO
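
On some FreeBSD releases you may be able to load asynchronous I/O support as a kernel module instead of rebuilding the kernel; this is only a hint, so check your release notes:

# kldload aio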

8.6.3 coss Issues

coss is still an experimental feature. The code has not yet proven stable enough for everyday use. If you want to try it, be prepared to lose any data stored in a coss cache_dir. On the other hand, coss's preliminary performance tests are very good. For an example, see Appendix D.

coss doesn't support rebuilding cached data from disk. When you restart Squid, you may find that it fails to read the coss swap.state files, so all the cached data is lost. Furthermore, Squid doesn't remember its place in the cyclic file after a restart; it always starts back at the beginning.

coss uses a nonstandard approach to object replacement. This may result in a lower hit ratio than you'd get with the other storage schemes.

Some operating systems have problems with single files larger than 2 GB. If this happens to you, you can create more, smaller coss areas. For example:
cache_dir coss /cache0/coss0 1900 max-size=1000000 block-size=128
cache_dir coss /cache0/coss1 1900 max-size=1000000 block-size=128
cache_dir coss /cache0/coss2 1900 max-size=1000000 block-size=128
cache_dir coss /cache0/coss3 1900 max-size=1000000 block-size=128

Using a raw disk device (such as /dev/da0s1c) doesn't work very well either. One reason is that disk devices usually require that I/O take place on 512-byte block boundaries. Another reason is that direct disk access bypasses the system's buffer cache, which may degrade performance. Many of today's disk drives, however, have built-in caches.

8.7 The null Storage Scheme
Squid has a fifth storage scheme called null. As the name implies, it does very little: files "written" to a null cache_dir are not actually written to disk.
Most people have no reason to use the null storage scheme. It is useful primarily when you want to entirely disable Squid's disk cache; you can't simply remove all cache_dir lines from squid.conf, because then Squid adds a default ufs cache_dir. The null storage scheme is also sometimes useful for testing and benchmarking Squid. Since the filesystem is typically the performance bottleneck, using the null storage scheme gives you an upper limit on Squid's performance on your hardware.

To use this scheme, you must specify it in the --enable-storeio list when running ./configure:
% ./configure --enable-storeio=ufs,null ...

Then create a cache_dir of type null in squid.conf:
cache_dir null /tmp

It may seem odd that you must specify a directory for the null storage scheme. Squid uses the directory name as a cache_dir identifier; for example, you'll see it in the cache manager output.

8.8 Which Is Best for Me?

Squid's choice of storage schemes may seem a bit confusing and overwhelming. Is aufs better than diskd? Does my system support aufs or coss? Will I lose my data if I use one of these newer schemes? Can I mix and match storage schemes?

First of all, if your Squid is lightly used (say, fewer than five requests per second), the default ufs storage scheme is sufficient. At such low request rates, you are unlikely to observe a significant performance improvement from the other schemes.

If you are trying to decide which scheme is worth trying, your operating system may be the determining factor. For example, aufs runs well on Linux and Solaris but seems to have problems on other systems. Also, the functions needed by the coss code are currently unavailable on certain operating systems (NetBSD, for example).

In my opinion, the higher-performing storage schemes are also more susceptible to data loss in the event of a system crash. That is the tradeoff for getting the best performance. For most people, however, cached data has relatively low value. If a Squid cache is corrupted by a system crash, it's easy to recover: simply newfs the disk partition, mount it, and let Squid fill the cache back up. If you feel it would be difficult or expensive to replace the contents of your Squid cache, you should use a slower but more reliable filesystem and storage scheme.
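
A hedged sketch of that recovery procedure on FreeBSD (the device name and mount point are examples only):

# umount /cache0
# newfs /dev/ad1s1c
# mount /cache0
# squid -z

The squid -z step recreates the cache directory structure before Squid is started again.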

Recent versions of Squid allow you to use a different filesystem and storage scheme for each cache_dir. However, this practice is uncommon. You may have fewer problems if all your cache_dirs are the same size and use the same storage scheme.
