Game servers generally implement a unified "tick" mechanism, and it serves two purposes. First, many business modules need periodic processing: skill buff timing, timed-item expiry checks, granting players benefits on a schedule (meditation restoring HP and MP over time, dancing granting experience over time, and so on). Second, the server itself drives recurring daily events; requirements such as monster movement and player movement also rely on the tick. Put bluntly, most of a game server's business logic involves tick processing in some way.
How is this very important tick mechanism implemented? At present, our main program runs a large loop. Each iteration reads the current system time, keeps the time of the last tick, and compares the two; if the interval exceeds the configured tick period (for example, 200 ms), the tick function is invoked. This tick function dispatches to the periodic-processing interfaces registered by all of the system's business modules, so every tick call in the large loop runs all of the system's scheduled work. And that is where the trouble starts...
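The large-loop structure described above can be sketched as follows. This is a minimal illustration, not our actual code; `tick_all_modules` and `run_loop` are hypothetical names standing in for the registered module interfaces and the server's main loop:

```cpp
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;
using Ms = std::chrono::milliseconds;

// Hypothetical stand-in for "call every registered periodic interface":
// buffs, item expiry, timed rewards, monster/player movement, ...
inline void tick_all_modules() {}

// Simplified shape of the server's large loop: run for 'run_for' of wall
// time, firing a tick whenever 'interval' has elapsed since the previous
// one. Returns the number of ticks fired.
int run_loop(Ms interval, Ms run_for) {
    const auto start = Clock::now();
    auto last_tick = start;
    int ticks = 0;
    while (Clock::now() - start < run_for) {
        // ... a real server would poll and handle network messages here ...
        auto now = Clock::now();
        if (now - last_tick >= interval) {  // tick period reached?
            tick_all_modules();             // runs ALL scheduled work at once
            last_tick = now;
            ++ticks;
        }
        std::this_thread::sleep_for(Ms(1)); // avoid burning a whole core
    }
    return ticks;
}
```

Note that because `tick_all_modules` runs every module's work in one call, the longer it takes, the longer incoming messages wait in the loop.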
The obvious problem is that all the periodic work runs together in a single call, which causes an instantaneous CPU spike and reduces the server's message-processing capacity (the time spent inside a tick is, in effect, added latency for message handling). For this problem there is an equally obvious remedy: classify each business module's period. Roughly, the periods can be divided into 200 ms, 500 ms, 1 s, 3 s, 20 s, 30 s, 60 s, and other intervals suited to the various business needs, so that fewer modules run on any given tick, which smooths the CPU curve to some extent. But it still cannot prevent many (or even all) modules' periods from aligning at some point in time. So, I wonder if there are other, better solutions?
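The per-module classification can be sketched like this: each registered handler carries its own interval and next-due time, and a tick pass runs only the handlers that are actually due. `TimedHandler` and `run_due` are hypothetical names for illustration, not from our codebase:

```cpp
#include <chrono>
#include <functional>
#include <vector>

using Ms = std::chrono::milliseconds;

// Hypothetical registry entry: a handler plus its own period and deadline.
struct TimedHandler {
    Ms interval;               // this module's period (200 ms, 1 s, ...)
    Ms next_due;               // next deadline, measured from server start
    std::function<void()> fn;  // the module's periodic-processing interface
};

// Runs only the handlers whose deadline has arrived, then reschedules them.
// 'elapsed' is the time since server start. Returns how many handlers fired.
int run_due(std::vector<TimedHandler>& handlers, Ms elapsed) {
    int fired = 0;
    for (auto& h : handlers) {
        if (elapsed >= h.next_due) {
            h.fn();
            h.next_due += h.interval;  // schedule this handler's next run
            ++fired;
        }
    }
    return fired;
}
```

With intervals of 200 ms and 500 ms, a pass at 200 ms fires one handler and a pass at 500 ms fires both; the work no longer always runs in a single lump, but it still piles up whenever the periods align (here, every 1 s).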
The first thing that comes to mind is to classify the periods of the various business modules more precisely and reasonably. Ideally the intervals would be mutually exclusive, that is, no interval should evenly divide another, as with 300 ms and 500 ms. A test run with such intervals would show whether the CPU curve actually becomes smoother.
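The "no divisible relationship" condition is easy to check mechanically. The helper below is a hypothetical sketch; note that intervals satisfying it still coincide eventually (at their least common multiple), they just coincide far less often than divisible ones:

```cpp
#include <cstddef>
#include <vector>

// Returns true when no interval in the set evenly divides another, so
// ticks from different modules align as rarely as possible.
bool pairwise_non_divisible(const std::vector<int>& intervals_ms) {
    for (std::size_t i = 0; i < intervals_ms.size(); ++i)
        for (std::size_t j = 0; j < intervals_ms.size(); ++j)
            if (i != j && intervals_ms[j] % intervals_ms[i] == 0)
                return false;  // intervals_ms[i] divides intervals_ms[j]
    return true;
}
```

For example, {300 ms, 500 ms, 700 ms} passes, while the classification from the previous paragraph, {200 ms, 500 ms, 1000 ms, ...}, fails because 200 ms and 500 ms both divide 1000 ms.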
On Linux, the system timer is delivered via the SIGALRM signal, but game servers usually mask most Linux signals for stability and other reasons, so the tick mechanism is generally built on a custom timer instead. Is there a better way to implement this mechanism? That is a question worth thinking about, since tick processing accounts for a considerable share of a game server's work.
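One common shape for such a custom timer is a min-heap (priority queue) keyed on the next deadline: each pass through the large loop pops and fires only the timers that have actually expired, in O(log n) per timer, instead of scanning every module. This is a generic sketch under those assumptions, not our implementation:

```cpp
#include <chrono>
#include <functional>
#include <queue>
#include <vector>

using Ms = std::chrono::milliseconds;

// One scheduled callback: when it next fires and how often it repeats.
struct Timer {
    Ms expires;               // absolute deadline, measured from server start
    Ms period;                // re-arm interval (Ms::zero() for one-shot)
    std::function<void()> fn;
};

struct Later {
    bool operator()(const Timer& a, const Timer& b) const {
        return a.expires > b.expires;  // min-heap on the nearest deadline
    }
};

class TimerQueue {
public:
    void add(Ms delay, Ms period, std::function<void()> fn, Ms now = Ms::zero()) {
        heap_.push({now + delay, period, std::move(fn)});
    }

    // Fires every timer whose deadline has passed; repeating timers re-arm.
    // Returns how many callbacks ran on this pass.
    int run_expired(Ms now) {
        int fired = 0;
        while (!heap_.empty() && heap_.top().expires <= now) {
            Timer t = heap_.top();
            heap_.pop();
            t.fn();
            ++fired;
            if (t.period > Ms::zero()) {
                t.expires += t.period;   // re-arm the repeating timer
                heap_.push(std::move(t));
            }
        }
        return fired;
    }

private:
    std::priority_queue<Timer, std::vector<Timer>, Later> heap_;
};
```

Because the heap's top element is always the nearest deadline, the large loop can also ask it how long it may safely sleep or block on the network, rather than polling at a fixed 200 ms.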
Of course, beyond all of this, the surest way to minimize the CPU cost of tick processing is simply to minimize the processing time of each business module itself!