In the previous article, I mentioned that the main application scenario for the Nginx memcached module is serving as a file cache. This revealed a problem: when a large file is cached as a byte array, the memcached client compresses the cached data, and since the Nginx memcached module returns the stored bytes verbatim, reads through Nginx fail.
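To see why reads break, here is a minimal sketch (not the article's code) of what a transcoder does to a value above its compression threshold. spymemcached's transcoders gzip the payload, so what ends up in memcached is a gzip stream; Nginx hands those bytes back unchanged, and the client receives gzip data (magic bytes 0x1f 0x8b) instead of the original file:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressedValueDemo {
    // Gzip-compress a value the way a transcoder would once it exceeds the threshold.
    static byte[] compress(byte[] value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(value);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] file = new byte[6 * 1024 * 1024]; // a 6 MB file, above a 5 MB threshold
        byte[] stored = compress(file);
        // The Nginx memcached module serves these bytes verbatim, so the
        // browser gets a gzip stream (0x1f 0x8b) rather than the file itself.
        System.out.println((stored[0] & 0xff) == 0x1f && (stored[1] & 0xff) == 0x8b);
    }
}
```

Raising the compression threshold above the largest cached file, as shown below, keeps the stored bytes identical to the original file.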
(Reprinting is welcome; please respect the author's work and cite the source: http://blog.csdn.net/poechant/article/details/7177603)
The workaround is simply to raise the compression threshold on the memcached client side. If you are using the net.spy.memcached (spymemcached) API, you can set it as follows:
- int expire_seconds = 18000;
- SerializingTranscoder transcoder = new SerializingTranscoder();
- transcoder.setCompressionThreshold(5242880); // 5 MB
- fileCache.set(key, expire_seconds, value, transcoder);
If you are using the net.rubyeye.xmemcached API, you can set it as follows:
- int expire_seconds = 18000;
- BaseSerializingTranscoder transcoder = new BaseSerializingTranscoder();
- transcoder.setCompressionThreshold(5242880); // 5 MB
- client.set(key, expire_seconds, value, transcoder);
If you are using the com.danga.MemCached API, you can set it as follows:
- int expire_seconds = 18000;
- memCachedClient.setCompressThreshold(5242880); // 5 MB
- memCachedClient.set(key, value, new Date(System.currentTimeMillis() + expire_seconds * 1000L));
Configuration and Deployment of the High-Performance Web Server Nginx (12): Problems Caused by the memcached Module Compressing the File Cache