Servers are the most basic elements of the network infrastructure and also the most critical part of network security. Although Unix servers are well known for their security and reliability, that does not mean they are impeccable. Moreover, the security of Unix servers affects, and to a large degree determines, the security of the underlying network.
In general, we pay close attention to the security of specific hosts but overlook the security design of the network as a whole. In fact, designing security for an entire network is a different problem from host-based security, and we do not intend to discuss that issue here.
Below we define three types of servers; depending on your actual needs, these can of course be subdivided further:
Public servers, which are accessible from the Internet.
Login servers, which allow non-superusers to log in.
Other servers, such as MySQL, internal LDAP, or NFS servers, which are reachable only from the internal network.
Once we have defined these server classes, the overall network layout and the firewall rules we want to apply follow naturally.
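As a concrete illustration, the iptables sketch below enforces this kind of per-class policy on a single host. The ports and the internal subnet (10.0.0.0/8) are placeholder assumptions of ours, not values taken from this article.

    # Default-deny inbound policy; allow loopback and replies to
    # established connections.
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # On a public server: expose only the public service (HTTP/HTTPS).
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT

    # On an internal-only server: accept MySQL solely from the internal net.
    iptables -A INPUT -p tcp -s 10.0.0.0/8 --dport 3306 -j ACCEPT

Each class of server gets only the rules appropriate to it; anything not explicitly opened stays closed.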
The real question is how we manage these servers while keeping them secure. This is one of the central challenges of our design, because a fragile design can mean disaster.
From a higher-level perspective, we know that some servers are more important than others. One or more servers must be trusted by the rest so that automated changes can be made: account creation, host integrity monitoring with tools such as Tripwire or Samhain, and even backups of configuration files must all be configured and maintained from a single server that can access the other servers as root.
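To make concrete what tools like Tripwire and Samhain automate (on top of signed databases, policies, and reporting), here is a minimal checksum-based sketch; the monitored paths and database location are illustrative assumptions of ours.

    # Build a baseline of checksums for critical files (run from the
    # trusted server; /var/lib/integrity.db is a placeholder path).
    find /etc /usr/sbin -type f -exec sha256sum {} + > /var/lib/integrity.db

    # Later, verify the baseline: any file whose hash changed is reported.
    sha256sum --quiet -c /var/lib/integrity.db || echo 'integrity check FAILED'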
Such a trusted machine can be called the master server, and login access to it should be restricted to administrative accounts. Its superuser password must differ from the passwords on every other server, and the master server should offer no services to the outside world; the compromise of a public server must not affect the security of the master server. When a seemingly unimportant machine is compromised, a trojaned login binary planted as part of a rootkit will capture user account passwords. That is why sudo is not a good idea there: it makes an ordinary user's password worth root access. A trojaned su can likewise leak the root password, which is exactly why the master server matters so much.
The master server should be able to log in to all other servers over SSH as root, but only with an SSH key; password-based root login over SSH must never be allowed. If the security of the master server is compromised, every other server falls with it. The master server should therefore be a bastion host that runs only the SSH service and connects out to the other machines. Configuration file backups and the host integrity database can be stored on the master server.
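A sketch of the relevant OpenSSH settings in /etc/ssh/sshd_config on the managed servers might look like this (on releases older than OpenSSH 7.0, "prohibit-password" is spelled "without-password"):

    # Allow root login with a key only, never with a password.
    PermitRootLogin prohibit-password
    PubkeyAuthentication yes
    PasswordAuthentication no

The master server's key pair would be generated with ssh-keygen and its public key installed in each server's /root/.ssh/authorized_keys.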
Publicly accessible servers are the most exposed because they run applications, but login servers can cause problems as well, and they are vulnerable in many other respects. Their users (developers, students, customers) rarely care much about security. They will run whatever applications they want, including SQL servers and PHP-based web applications with poor security track records, and anything else that looks useful. To keep unknown attackers from entering the system through vulnerabilities in these programs, you had better keep your operating system patched to the latest level.
Patching the operating system is not optional, and it is not a matter to be taken lightly. It must be stressed repeatedly: when a security update is available, all servers must be updated promptly. Administrators who patch their Unix servers only on weekends are taking on a serious liability, because malicious exploitation of system vulnerabilities happens extremely fast, and an attacker can gain root with ease; some of the newer exploits are frightening. SELinux, or a correctly configured Unix machine in general, can help greatly in blunting such exploits, which is why we consider the overall security architecture the most critical factor.
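On a Debian or Ubuntu system, for example, the routine might look like the sketch below; other Unix variants have their own equivalents (dnf, pkg, freebsd-update, and so on).

    # Refresh package metadata, preview pending changes, then apply them.
    apt-get update
    apt-get --just-print upgrade   # dry run: show what would be upgraded
    apt-get -y upgrade

Debian-family systems can also install the unattended-upgrades package to apply security updates automatically.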
Login servers may themselves be insecure yet still allow users to log in. If we assume that home directories must be shared, supporting the environment described here can be difficult. Exporting NFS shares to insecure clients requires careful scrutiny, especially when developers or researchers need root access on their own machines.
Since NFS provides essentially no security, granting an uncontrolled client access to an NFS share is terrifying. Essentially, you must assume that everything on the shared filesystem can be compromised, because a root user on the client can simply su to whoever happens to own a file. The old standard workaround is to move these kinds of shares onto their own partition and export only that partition to the risky clients. AFS lacks enterprise-level features (it supports neither snapshots nor compatible ACLs), so you had better not rely on it either.
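A guarded export on a Linux NFS server might look like the /etc/exports line below; the path and client subnet are placeholders of ours.

    # Export a dedicated scratch partition, read-write, to one subnet only.
    # root_squash maps the client's root to an unprivileged user; it is the
    # default, but stating it explicitly documents intent.
    /export/scratch  10.0.2.0/24(rw,sync,root_squash)

Note that root_squash blocks only direct root access: a root user on the client can still su to any local uid, which is precisely the risk described above. Hence the advice to confine such shares to their own partition.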
Many best practices, and many pitfalls, could be listed for system security. From an architectural perspective, the general principle is to minimize risk in two ways: make it difficult to penetrate the system in the first place, and limit how far an intruder can spread once inside. With proper monitoring, you should be able to detect any intrusion quickly and contain it.
Whether you have 300 or 3,000 servers, the basic principles are the same. They may seem simple, but they are the foundation for configuring any new or compromised server. Remember:
Minimize services on exposed servers;
Minimize the number of exposed servers;
Be extremely careful when granting any permissions to an NFS client.