A Brief Introduction to Storm Cluster Fault Tolerance

1. What happens when a worker dies?
The supervisor restarts the worker. If the worker repeatedly fails to start and therefore cannot send heartbeats to Nimbus, Nimbus will reassign its tasks to another machine.

2. What happens when a node dies?
The tasks assigned to that machine will time out, and Nimbus will reassign them to other machines.

3. What happens when a Nimbus or supervisor daemon dies?
Nimbus and the supervisors are designed to be fail-fast (the process kills itself whenever anything unexpected happens) and stateless (all state lives in ZooKeeper or on disk). When a Storm cluster is deployed, the Nimbus and supervisor processes are run under a process-supervision or monitoring tool, so if they die, the supervision tool restarts them as if nothing had happened. Notably, no worker is affected by the death of Nimbus or a supervisor. This contrasts with Hadoop, where the death of the JobTracker loses all running jobs.

4. Is Nimbus a single point of failure?
If the Nimbus node is lost, the workers keep running, and the supervisors keep restarting failed workers. However, without Nimbus, workers cannot be reassigned to other machines when needed (for example, when a worker machine is lost). So Nimbus is, in a limited sense, a single point of failure. In practice this is not a big problem, since nothing catastrophic happens when the Nimbus node dies; improving Nimbus's high availability is planned for the future.

5. How does Storm guarantee data processing?
Storm provides a mechanism that guarantees messages are processed even if a node dies or a message is lost: each tuple is tracked, and if it is not fully acknowledged within a timeout, it is replayed from the spout (the data-replay mechanism).
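Item 3 assumes the Nimbus and supervisor daemons run under a process-supervision tool that restarts them when they fail fast. One common choice is supervisord; a sketch of what such a configuration might look like (the install paths are placeholders for your own deployment):

```ini
; Illustrative supervisord entries for keeping the Storm daemons
; alive. Paths are placeholders, not a prescribed layout.
[program:storm-nimbus]
command=/opt/storm/bin/storm nimbus
autostart=true
autorestart=true        ; restart the daemon whenever it fail-fasts

[program:storm-supervisor]
command=/opt/storm/bin/storm supervisor
autostart=true
autorestart=true
```

Because the daemons are stateless, a restart like this is safe: they simply re-read their state from ZooKeeper and local disk.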
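The failure handling in items 1–2 hinges on heartbeat timeouts: a worker or node that stops heartbeating past a deadline is considered dead and its tasks are reassigned. A minimal sketch of that detection logic in plain Python (illustrative only; `HeartbeatTracker` and its methods are hypothetical names, not Storm's actual implementation):

```python
import time

# Illustrative heartbeat-timeout failure detection, loosely modeled
# on how Nimbus notices dead workers. Not Storm's real API.
class HeartbeatTracker:
    def __init__(self, timeout_secs):
        self.timeout_secs = timeout_secs
        self.last_beat = {}  # worker id -> timestamp of last heartbeat

    def record_heartbeat(self, worker_id, now=None):
        self.last_beat[worker_id] = time.time() if now is None else now

    def dead_workers(self, now=None):
        """Workers whose last heartbeat is older than the timeout."""
        now = time.time() if now is None else now
        return [w for w, t in self.last_beat.items()
                if now - t > self.timeout_secs]

tracker = HeartbeatTracker(timeout_secs=30)
tracker.record_heartbeat("worker-1", now=100)
tracker.record_heartbeat("worker-2", now=125)
# At t=140, worker-1 has been silent for 40s (> 30s) and would have
# its tasks reassigned; worker-2 is still within the timeout.
print(tracker.dead_workers(now=140))  # ['worker-1']
```

The same timeout idea covers both questions: a single crashed worker misses heartbeats, and a dead node causes all of its workers to miss them at once.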
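The guarantee in item 5 rests on acking and replay: the source holds on to each message until it is acknowledged as fully processed, and re-emits it on failure or timeout. A simplified, single-process sketch of that at-least-once pattern (illustrative only; `ReplayQueue` is a hypothetical name, and Storm itself implements this with spout ack/fail callbacks and a tracked tuple tree):

```python
# Simplified at-least-once delivery: pending messages are tracked
# until acknowledged, and re-queued for replay if they fail.
class ReplayQueue:
    def __init__(self):
        self.pending = {}   # msg id -> payload awaiting an ack
        self.replays = []   # payloads queued for re-emission

    def emit(self, msg_id, payload):
        self.pending[msg_id] = payload

    def ack(self, msg_id):
        # Fully processed downstream: forget the message.
        self.pending.pop(msg_id, None)

    def fail(self, msg_id):
        # Processing failed (or timed out): schedule a replay.
        payload = self.pending.pop(msg_id, None)
        if payload is not None:
            self.replays.append(payload)

q = ReplayQueue()
q.emit(1, "event-a")
q.emit(2, "event-b")
q.ack(1)       # event-a completed
q.fail(2)      # event-b failed somewhere downstream -> replay it
print(q.replays)  # ['event-b']
```

Note that this gives at-least-once, not exactly-once, semantics: a replayed message may be processed more than once, so downstream processing should tolerate duplicates.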