eAccelerator and memcached are two mainstream cache acceleration tools available for PHP.
eAccelerator was developed specifically for PHP, while memcached is not limited to PHP and can be used from many other languages.
Main functions of eAccelerator:
1. Caches the compiled execution code of PHP files: when the cached code is called again, it is read directly from memory, which greatly speeds up PHP execution.
2. Provides shared memory operation functions: you can save frequently used non-resource data to memory and read it back at any time.
Memcached features:
Provides shared memory operation functions to save and read data.
What they have in common:
Both provide shared memory operation functions for storing and reading data.
Differences between the two:
As a PHP extension library, eAccelerator can read and write shared memory only while PHP is running, and in general that shared memory can only be accessed by the PHP programs operating on it.
At the same time, eAccelerator can cache the execution code of PHP programs, improving their load and execution speed.
Memcached is mainly used as a shared memory server; its PHP extension library serves only as the connection layer between PHP and memcached, much like the MySQL extension library. Memcached can therefore run completely independently of PHP, and its shared data can be accessed by different programs.
Given these differences, each should be used where it fits best:
eAccelerator is mainly used to speed up PHP on a single machine and to cache intermediate data. It is very practical when real-time requirements are high but the volume of data is small.
Memcached is used in distributed or clustered systems, where multiple servers can share data. It is very useful in scenarios with high real-time requirements and large volumes of data.
Correct understanding of MemCached
When I first heard about MemCached, I thought it was used to cache data in memory and then operate on that data directly (where "operate" includes both queries and updates). That sounded great: you would not need to touch the database at all for a certain period of time.
Then I kept puzzling over one problem: queries are fine, but how would in-memory updates handle concurrency? Does MemCached really have such a feature? If so, that would be amazing.
However, it does not. That understanding of MemCached is incorrect.
MemCached is the same as any other cache: once the underlying data is updated, the cached items are simply stale, out-of-date copies.
Reading what others had written about MemCached online made this clear.
Therefore, we should not expect to update MemCached directly and bypass the database.
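The pattern just described, where the database stays the source of truth and the cache entry is invalidated on write, can be sketched language-agnostically. This is a minimal Python sketch using plain dicts in place of a real database and a real Memcached client; the names `db`, `cache`, `update_user`, and `get_user` are illustrative, not part of any real API.

```python
# Cache-invalidation-on-write sketch: the database is the source of
# truth; after a write we delete the stale cache entry instead of
# "updating" the cache as if it were the database itself.
db = {}     # stand-in for the real database
cache = {}  # stand-in for a Memcached client

def update_user(user_id, data):
    db[user_id] = data        # 1. the write goes to the database
    cache.pop(user_id, None)  # 2. drop the now-stale cached copy

def get_user(user_id):
    if user_id not in cache:          # cache miss
        cache[user_id] = db[user_id]  # repopulate from the database
    return cache[user_id]

update_user(1, {"name": "alice"})
assert get_user(1) == {"name": "alice"}  # miss, loaded from db
update_user(1, {"name": "bob"})          # write invalidates the cache
assert get_user(1) == {"name": "bob"}    # fresh value, not stale
```

The key point is step 2: the cache is never written to as the primary store, it is only refilled from the database after a miss.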
Previously, I thought that the set method it provides was used to update the database, which confused me.
In fact, this method caches database records in MemCached and specifies how long those records remain valid.
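This read-through use of set can be sketched as follows. It is a minimal Python simulation, assuming a dict-based store in place of a real Memcached client; `fetch_from_db` and the 30-second TTL are illustrative assumptions.

```python
import time

# Read-through cache sketch: cache_set(key, value, ttl) stores a
# database record together with an absolute expiry timestamp,
# mimicking Memcached's per-item validity period.
_store = {}

def cache_set(key, value, ttl):
    _store[key] = (value, time.time() + ttl)  # valid for ttl seconds

def cache_get(key):
    item = _store.get(key)
    if item is None:
        return None
    value, expires_at = item
    if time.time() >= expires_at:  # validity period has elapsed
        del _store[key]
        return None
    return value

def fetch_from_db(key):
    return f"row-for-{key}"  # stand-in for a real database query

def get_record(key):
    value = cache_get(key)
    if value is None:                  # miss or expired entry
        value = fetch_from_db(key)
        cache_set(key, value, ttl=30)  # cache the record for 30 s
    return value

assert get_record("user:1") == "row-for-user:1"
```

So set does not push data toward the database; it copies a database record into the cache for a limited time.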
Now I finally understand why the content in our MemCached never changed, even after I had deleted the record.
When we called set(), we did not specify an expiration time, so it defaulted to 0, which means the item never expires. As long as the MemCached server is not restarted, the item will remain there.
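That expiration rule, where 0 means "never expire", can be demonstrated with a small simulation. This is a sketch with a dict standing in for the Memcached server; only the 0-means-forever semantics is taken from Memcached itself, the function names are illustrative.

```python
import time

# Sketch of Memcached's expiration semantics: expire=0 means the item
# never expires and survives until the server restarts or evicts it.
_store = {}

def cache_set(key, value, expire=0):
    # expire=0 -> no deadline; otherwise expire is a TTL in seconds
    deadline = None if expire == 0 else time.time() + expire
    _store[key] = (value, deadline)

def cache_get(key):
    value, deadline = _store.get(key, (None, None))
    if deadline is not None and time.time() >= deadline:
        del _store[key]
        return None
    return value

cache_set("a", 1)               # expire=0: never expires
cache_set("b", 2, expire=0.05)  # expires after 50 ms
time.sleep(0.1)
assert cache_get("a") == 1      # still present
assert cache_get("b") is None   # validity period elapsed
```

This is exactly the trap described above: a record cached with the default expiration sticks around even after the corresponding database row is deleted.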
So in our RoR project, the cache is used to reduce database reads; we cannot expect MemCached to free us from having to update the database.
If you did not even need to update the database, you would have entered the no-database era, haha. That is probably not possible, unless you could guarantee that users are processed one at a time, in a queue.
And even then, a queue is only another way to reduce the pressure of updates.