Twemproxy is a proxy server that can be used to reduce the number of connections held open against backend memcached or Redis servers.
What is the purpose of twemproxy? It can:
- Reduce the number of connections to the cache servers by acting as a proxy
- Automatically shard data across multiple cache servers
- Support consistent hashing through different distribution strategies and hash functions
- Be configured to disable failed nodes (see the configuration sketch after this list)
- Run as multiple instances, so a client can connect to the first available proxy server
- Pipeline and batch requests, reducing round-trip overhead
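Concretely, twemproxy (also known as nutcracker) is driven by a YAML configuration file. The sketch below shows a single Redis pool; the pool name, ports, and server addresses are placeholders, while the option names (listen, hash, distribution, redis, servers) come from the twemproxy README.

```yaml
# nutcracker.yml: a minimal sketch of one pool (name, ports, and addresses are placeholders)
alpha:
  listen: 127.0.0.1:22121      # clients connect here instead of to Redis directly
  hash: fnv1a_64               # hash function applied to each key
  distribution: ketama         # consistent hashing across the servers below
  redis: true                  # speak the Redis protocol (false or omitted means memcached)
  servers:                     # backend instances, each with a weight
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1
```

A client connects to port 22121 as if it were a single Redis server; twemproxy hashes each key and forwards the command to the backend that owns it.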
Salvatore Sanfilippo (@antirez), the creator of Redis, wrote an article about how to use Twemproxy to get Redis Cluster-like functionality today, arguing that in most cases not much performance is lost:
Twemproxy's strength is that it can be configured to disable failed nodes, to retry them after a period of time, or to keep a fixed key->server mapping. This means that when Redis is used as a data store, it can partition the Redis data set across nodes (with node ejection disabled); when Redis is used as a cache, it can enable node ejection for simple high availability.
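In the configuration, this trade-off comes down to the auto_eject_hosts switch and its companion options. The fragment below is only a sketch: the option names are from the twemproxy README, the values are illustrative, and the rest of each pool (listen, servers, and so on) would look like the sketch above.

```yaml
# Redis as a data store: keep the key->server mapping fixed, never eject nodes
store_pool:
  auto_eject_hosts: false      # a dead node returns errors for its share of keys

# Redis as a cache: eject failing nodes for simple high availability
cache_pool:
  auto_eject_hosts: true       # reshard around nodes that keep failing
  server_failure_limit: 3      # eject a node after 3 consecutive failures
  server_retry_timeout: 30000  # retry an ejected node after 30 seconds (value in msec)
```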
Twemproxy is fast, really fast; it is almost as fast as talking to Redis directly. I would say that in the worst case you lose about 20% in performance.
My only performance concern is that there is still room for improvement in MGET when the command involves keys living on multiple instances.
Twitter open-sourced Twemproxy at the beginning of this year. It initially supported only memcached and recently added support for Redis. Twitter runs a large number of cache servers and delivers tweets at a rate measured in the thousands per second; see the introduction to Twitter's real-time delivery architecture for more information.
Original article: Twemproxy, a proxy for memcached and Redis
Twemproxy, a Redis proxy from Twitter
Twemproxy: Twitter's open-source Redis proxy