As soon as the project went live, a pile of problems appeared: everything had been fine in local testing, but once a large number of users arrived the system could not keep up. Below is a summary of the problems found over the course of a week.
The stack is Netty 4.0, Redis 2.8, and Nginx. There are currently four proxy servers and one core server. Redis is deployed only on the core server, and all four proxy servers share its data.
When large numbers of users access the proxy server closest to them, the proxies all forward requests to the core server at the same time. Once concurrency rose, requests frequently got stuck and web pages never received a response. We started with Netty's HTTP concurrency handling: following the official Netty documentation we tuned and reconfigured Netty, but it made no difference. After stubbing out the Redis calls, Netty handled the concurrent requests without any problem, so the bottleneck was pinned on Redis.
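The isolation step above, stubbing out the backend call to see whether the request-handling tier alone can sustain the load, is a general technique. A minimal sketch in Python (not the project's actual Netty code; the function names and the 1 ms delay are assumptions for illustration):

```python
import time

def redis_lookup(key):
    """Stand-in for the suspect dependency: a network round-trip to Redis."""
    time.sleep(0.001)  # simulate ~1 ms cross-server round-trip (assumed figure)
    return "value"

def stub_lookup(key):
    """Stub: returns a canned value with no network hop."""
    return "value"

def handle_requests(n, lookup):
    """Serve n requests through the given lookup and return elapsed seconds."""
    start = time.perf_counter()
    for i in range(n):
        lookup(f"key:{i}")
    return time.perf_counter() - start

with_redis = handle_requests(200, redis_lookup)
without_redis = handle_requests(200, stub_lookup)

# If the stubbed run is dramatically faster, the bottleneck is the backend
# call, not the request-handling tier -- the same conclusion reached here
# for Netty vs. Redis.
print(f"with backend: {with_redis:.3f}s, stubbed: {without_redis:.3f}s")
```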
Next we turned to Redis itself: tuning the Redis connection pool and configuration had no effect. We then discovered that local access to Redis at 10,000 concurrent requests was very fast, while access to Redis on another server was extremely slow. The cause: connecting to Redis across servers sends every concurrent request over the network, so the network latency compounds under load. Instead of having the proxy servers call the core server's Redis remotely, we changed Nginx so that all heartbeat requests bypass the proxy servers and go directly to the core server, which makes every Redis access effectively local. Finally, out of concern for the core server's own concurrency, two service instances now run on it to handle the full load, and the Redis problem is solved.
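The routing change can be sketched as an Nginx config. Everything here is an assumption for illustration (server names, the /heartbeat path, ports); the post does not give the real configuration:

```nginx
# Hypothetical upstreams: the four regional proxies and the core server.
upstream proxy_pool {
    server proxy1.example.com:8080;
    server proxy2.example.com:8080;
    server proxy3.example.com:8080;
    server proxy4.example.com:8080;
}

upstream core_server {
    server core.example.com:8080;
}

server {
    listen 80;

    # Heartbeat traffic goes straight to the core server, so its Redis
    # access stays local instead of making a cross-server round-trip.
    location /heartbeat {
        proxy_pass http://core_server;
    }

    # Everything else still goes through the proxy tier.
    location / {
        proxy_pass http://proxy_pool;
    }
}
```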
Conclusion: Redis performance itself is fine, and its concurrency handling is OK. The real problem is that remotely accessing another server's Redis under high concurrency runs into network latency.
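A back-of-envelope calculation shows why the network hop dominates: with blocking request/response commands, each connection completes at most 1/RTT operations per second, so throughput is capped at connections / RTT. The RTT figures and pool size below are assumptions for illustration, not measurements from the post:

```python
def max_ops_per_sec(rtt_seconds, connections):
    """Upper bound on blocking ops/sec: each connection does one
    round-trip per RTT, so throughput scales as connections / RTT."""
    return connections / rtt_seconds

# Assumed figures: ~0.05 ms loopback RTT vs ~1 ms cross-server LAN RTT,
# with a pool of 50 connections.
local = max_ops_per_sec(0.00005, 50)   # Redis on the same machine
remote = max_ops_per_sec(0.001, 50)    # Redis on another server

print(f"local cap:  {local:,.0f} ops/s")   # 1,000,000 ops/s
print(f"remote cap: {remote:,.0f} ops/s")  # 50,000 ops/s
```

A 20x difference in round-trip time becomes a 20x difference in the throughput ceiling, which matches the observation that 10,000 concurrent requests were fast locally but stalled when crossing servers.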