Handling a CentOS server that was attacked and sent outbound broadcast packets

The situation: a Linux server of ours was hosted at a colocation site. Out of the blue I received a call from the machine room saying that our machine had paralyzed the entire IDC network, and that machines outside could no longer reach the IDC.
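A note on the symptom before the play-by-play: one host "paralyzing the whole IDC" like this usually means it is flooding the segment with broadcast traffic. From any other machine on the same segment that is easy to confirm; a minimal sketch, assuming tcpdump is installed and the interface is eth0:

    # Watch broadcast frames on the shared segment; a compromised host that is
    # flooding the network shows up as a non-stop stream from a single source.
    tcpdump -ni eth0 ether broadcast or ip broadcast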
After hanging up I kept thinking: the machine room has a hardware firewall in front of everything, so how could a single machine of mine cause such a large impact? I then contacted the vendor of the machine room's hardware firewall to ask what had happened. Their conclusion: the server I manage had been sending out broadcast messages, and those had filled up every session on the hardware firewall, leaving no sessions available for the other equipment.

Handling: once I understood the situation, my first reaction was to SSH in remotely and take a look. That ended in disappointment: the connection came up, but the password had been changed and I could not log in. At that moment I felt like banging my head against a wall. After a while I remembered that a spare superuser account had been reserved on the machine when it was set up, so I tried again from the external network, only to find that the machine's Internet-facing interface was no longer reachable at all. Now what? Then I remembered that the machine also had an intranet interface configured, so without further ado I SSH'd in from a machine on the intranet, and a command prompt finally appeared.

Resolution (a rough sketch of the commands follows the list):
1. First ran last to check the recently logged-in users, and found logins from quite a few IPs I did not recognize. Looking a few of them up turned up Russian as well as US addresses. At that point I was certain a foreign attacker had broken in and planted malware.
2. Ran ps to look at the running processes. Sure enough, there was a pile of wget processes pulling executables down from remote hosts. Infuriating.
3. Without further ado, killed those process IDs.
4. Checked the open ports and connections with netstat -atunlp.
5. Monitored the network traffic and found several programs with .jpg extensions sending packets outward. The root cause was found; killed those as well.
6. Removed those .jpg executables with rm.
7. Monitored again and found that access from the external network was back to normal.
8. Once access was normal, I turned to the question of how they had gotten in. Two days of analysis turned up no traces, and the logs held no record of it either.
9. In the end all I could do was tighten the firewall, blocking inbound and outbound routes and leaving open only the necessary ports: 5060 (for the telephony softswitch), 369 for SSH login, and a few others.
10. Observed the machine for a week; there were no more large-scale broadcast packets.
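The commands behind steps 1 through 6 were roughly the following. This is a reconstruction from the description above rather than the exact session: the PIDs and the file path are placeholders, and the ps/tcpdump options are my own choices, not necessarily the ones used at the time.

    # 1. Recent logins, including where each session came from
    last -a | head -n 40

    # 2. Running processes; look for the remote wget downloads
    ps aux | grep wget | grep -v grep

    # 3. Kill the malicious processes (1234 and 5678 are placeholder PIDs)
    kill -9 1234 5678

    # 4. Listening ports and active connections, with the owning PID/program
    netstat -atunlp

    # 5. Watch this host's traffic, ignoring our own SSH session
    tcpdump -ni eth0 not port 22

    # 6. Locate and remove the dropped ".jpg" executables
    #    (the path below is a placeholder; find narrows the search to
    #     executable "jpg" files modified within the last week)
    find / -name '*.jpg' -type f -perm -u+x -mtime -7 -ls
    rm -f /tmp/evil.jpg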
Looking back at this failure, a few scenarios seem possible:
1. The service on port 80 was exploited to attack the machine.
2. The password was set too simple and was brute-forced.
3. The default SSH/Telnet port was exploited.
To harden against these, SSH should not run on the default port 22, port 80 should be blocked, and passwords should be of high strength. A sketch of the resulting firewall and SSH changes follows.
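As an illustration of the lockdown in step 9 and the hardening points above, here is one way to do it with iptables and OpenSSH on CentOS. The port numbers (5060 for the softswitch, 369 as the non-default SSH port) come from the text above; the exact rules used at the time were not recorded, so treat this as a sketch rather than the original configuration.

    # Keep loopback and already-established sessions working while tightening.
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Open only what is needed: the softswitch (SIP, 5060) and SSH on 369.
    iptables -A INPUT -p udp --dport 5060 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5060 -j ACCEPT
    iptables -A INPUT -p tcp --dport 369  -j ACCEPT

    # Everything else inbound is dropped; port 80 is therefore closed as well.
    iptables -P INPUT DROP

    # A similar set of rules on the OUTPUT chain restricts outbound routes.

    # Persist the rules across reboots (CentOS).
    iptables-save > /etc/sysconfig/iptables

    # Move sshd off the default port 22, then restart it
    # (if sshd_config has no Port line at all, add one by hand).
    sed -i 's/^#\?Port .*/Port 369/' /etc/ssh/sshd_config
    service sshd restart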
A week of observation showed no recurrence, and that was the end of this fault.