RabbitMQ in our development environment kept dying. It was started by logging in over SSH and running:
rabbitmq-server &
The trailing & was supposed to run the process in the background so it would not be tied to the terminal, yet it still died, and I could not tell why. Eventually I decided to use strace to see what the process was doing when it exited.
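A minimal sketch of the approach: attach strace to a process and log the signals it receives until it dies. Here `sleep` stands in for the rabbitmq-server (Erlang VM) process so the example is self-contained; against the real server you would attach to its pid instead (e.g. found via pgrep). The log path is an arbitrary choice.

```shell
# `sleep` stands in for rabbitmq-server; against the real server,
# attach to its pid instead.
sleep 60 &
TARGET=$!
# Log only signal-related events to a file.
strace -e trace=signal -o /tmp/rmq_trace.log -p "$TARGET" 2>/dev/null &
TRACER=$!
sleep 1                           # give strace time to attach
kill -HUP "$TARGET"               # simulate the terminal hangup
wait "$TRACER" 2>/dev/null || true
tail -n 2 /tmp/rmq_trace.log      # the process's 'last words'
```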
I left strace logging, and sure enough RabbitMQ was down again the next day. The last line of the log read:
--- SIGHUP (hangup) @ 0 (0) ---
So these were the process's 'last words': it was killed by a SIGHUP signal sent by some other process. (The system's default action for SIGHUP is to terminate the receiving process, so a program that does not catch the signal exits as soon as it arrives.)
A look at how SIGHUP propagates explains where it came from:
- The kernel's terminal driver detects that the terminal (or pseudo-terminal) has been closed and sends SIGHUP to that terminal's controlling process (bash).
- On receiving SIGHUP, bash forwards SIGHUP to each of its jobs (foreground and background alike), then exits.
- Each foreground and background job exits on receiving the SIGHUP from bash (unless the program itself handles SIGHUP).
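The last point can be demonstrated with stand-in processes: one that ignores SIGHUP via a trap survives the signal, while one left with the default disposition is terminated.

```shell
# A process that traps (here: ignores) SIGHUP survives it; one with
# the default disposition is terminated. `sleep` stands in for a job.
sh -c 'trap "" HUP; sleep 30' &   # ignores SIGHUP
SURVIVOR=$!
sleep 30 &                        # default SIGHUP handling
VICTIM=$!
sleep 1
kill -HUP "$SURVIVOR" "$VICTIM"
wait "$VICTIM" 2>/dev/null || true    # reap so the check is accurate
kill -0 "$SURVIVOR" 2>/dev/null && echo "survivor: still running"
kill -0 "$VICTIM"   2>/dev/null || echo "victim: terminated by SIGHUP"
```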
So that, it seems, was our poor process's 'cause of death'; a bit of an anticlimax.
There are (at least) two ways to keep the process from being taken down like this:
- Use the nohup command:
nohup rabbitmq-server &
- Use the setsid command:
setsid rabbitmq-server &
strace confirms the difference: when started with nohup, the RabbitMQ process still receives the SIGHUP signal but is unaffected, while when started with setsid, it never receives the SIGHUP signal at all.
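That difference comes from how the two commands work: nohup starts its command with SIGHUP set to 'ignore', so the signal is delivered but has no effect, whereas setsid runs the command in a new session with no controlling terminal, so the terminal's SIGHUP never reaches it in the first place. A quick self-contained check of the nohup half, with `sleep` standing in for rabbitmq-server:

```shell
# nohup starts the command with SIGHUP ignored: the signal is
# delivered but has no effect. `sleep` stands in for rabbitmq-server.
nohup sleep 30 >/dev/null 2>&1 &
NOHUP_PID=$!
# setsid instead detaches into a new session; shown for completeness
# (its pid is not reliably $! since setsid may fork).
setsid sleep 30 >/dev/null 2>&1 &
sleep 1
kill -HUP "$NOHUP_PID"            # delivered, but ignored
sleep 1
kill -0 "$NOHUP_PID" && echo "nohup'd process survived SIGHUP"
```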
Related reading: 'Analysis of background jobs exiting after the terminal is closed'; 'Handling the RabbitMQ hang problem'.