Linux system log management

Tags: rsyslog
1. Connection time logs

Connection time logs are kept in the /var/log/wtmp and /var/run/utmp files. Neither file can be viewed directly with tail or cat; both are binary files that the system updates automatically. Linux provides commands such as w, who, finger, id, last, lastlog, and ac to read the information they contain.
 
 
 
ggd543@ubuntu:/home/test$ w      # show which users are logged on and what they are doing
 23:20:44 up 1 day, 2 users, load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE    JCPU    PCPU   WHAT
test     pts/0    192.168.195.1            2:59m   0.45s   0.45s  -bash
test     pts/1    192.168.195.1            0.00s   1.45s   0.07s  sshd: test [priv]
ggd543@ubuntu:~$ who             # show which users are logged on
ggd543   pts/0        Feb 25 23:22 (192.168.195.1)
test     pts/1        Feb 25 23:00 (192.168.195.1)
ggd543@ubuntu:~$ finger          # similar to the who command
Login     Name       Tty      Idle  Login Time     Office   Phone
ggd543    ggd543     pts/0          Feb 25 23:22  (192.168.195.1)
test                 pts/1          Feb 25 23:00  (192.168.195.1)
ggd543@ubuntu:/home/test$ ac -p   # cumulative connect time of each user (in hours)
        test          39.37
        portaluser    33.44
        ggd543        88.59

ggd543@ubuntu:/home/test$ ac -a   # total connect time of all users (in hours)
        total       161.42

ggd543@ubuntu:/home/test$ ac -d   # total connect time of all users, per day (in hours)
Feb  4  total        1.55
Feb  7  total       33.50
Feb  8  total       29.55
Feb  9  total       26.46
Feb 10  total        4.00
Feb 11  total        9.33
Feb 20  total       12.04
Feb 21  total       18.26
Feb 24  total        1.64
Today   total       25.12
 
For more details on these commands, see their man pages.
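The last and lastlog commands mentioned above are not shown in the listings; they read /var/log/wtmp and /var/log/lastlog respectively. A brief sketch of typical usage (the user name test is reused from the examples above):

last -n 5            # the five most recent login sessions, read from /var/log/wtmp
last reboot          # system boot records kept in /var/log/wtmp
lastlog              # last login time of every account, read from /var/log/lastlog
lastlog -u test      # last login of a single user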
 
2. Process accounting logs
The process accounting log is very useful for auditing the commands users run. When the server shuts down for no apparent reason or files are deleted unexpectedly, this log can help you work out what happened.
ggd543@ubuntu:/home/test$ sudo accton         # enable process accounting
Turning on process accounting, file set to the default '/var/log/account/pacct'.
ggd543@ubuntu:/home/test$ sudo accton off     # disable process accounting
Turning off process accounting.
ggd543@ubuntu:/home/test$ sudo accton on
Turning on process accounting, file set to the default '/var/log/account/pacct'.
ggd543@ubuntu:/home/test$ lastcomm            # view the process accounting log
You can also run sudo accton $log_file to write the accounting records to a file of your choice.
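Once accounting is on, lastcomm can be filtered by command or user, and sa summarizes the accounting file. A minimal sketch (the user name test is reused from above; /tmp/pacct.test is an illustrative path):

lastcomm rm                       # every recorded invocation of rm
lastcomm --user test              # all commands run by user test
sudo sa -u                        # summarize the accounting file, one line per record with its user
sudo touch /tmp/pacct.test        # accton expects the target file to exist
sudo accton /tmp/pacct.test       # send further records to this custom file
lastcomm --file /tmp/pacct.test   # read records from that custom file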
 
3. System and service logs
 
The system log service is managed by a daemon named syslog (rsyslog on newer releases). For example, the following log files are all written by the syslog service:
/var/log/lastlog: records the time and source IP address of each user's last successful login.
/var/log/messages: records general system and service messages of the Linux operating system.
/var/log/secure: the Linux security log; records changes to users and groups as well as user login authentication.
/var/log/btmp: records the users, times, and remote IP addresses of failed login attempts.
/var/log/cron: records the execution of crond scheduled tasks.
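A few of these files can be inspected directly; a quick sketch (root is needed for the security and btmp logs, and on Ubuntu the security log is /var/log/auth.log rather than /var/log/secure):

sudo tail -f /var/log/messages    # follow general system messages as they arrive
sudo lastb | head                 # failed logins recorded in /var/log/btmp
sudo tail /var/log/cron           # recent crond activity (where this file exists)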
The syslog service is configured through /etc/syslog.conf (on Ubuntu 11.10 it is /etc/rsyslog.conf, which in turn includes the /etc/rsyslog.d directory). Each rule in /etc/syslog.conf has the format:

message type.error level    action

Message type (facility): auth, authpriv, security, cron, daemon, kern, lpr, mail, news, syslog, user, uucp, local0–local7
Error level (8 levels, from low to high): debug, info, notice, warning|warn, err|error, crit, alert, emerg|panic
Action: a file, a user, the console, or @remote_ip
 
For example:
 
 
*.info;mail.none;authpriv.none;cron.none    /var/log/messages

This sends every message of level info or higher to the /var/log/messages log file, except messages from the mail, authentication, and cron facilities (none disables a facility). Another example:
 
cron.*    /var/log/cron

sends cron messages of every level to the /var/log/cron file, while

*.emerg    *

sends messages of every facility at the emerg level (system unusable) to all logged-in users.
If you need to send all info-level auth messages from an Ubuntu system to a remote log server (assume the server runs RHEL 5 and has the IP address 10.123.76.11), add a configuration line to /etc/rsyslog.d/50-default.conf on the Ubuntu machine:
 
auth.info    @10.123.76.11
On the log server, enable reception of remote messages in /etc/sysconfig/syslog:
SYSLOGD_OPTIONS="-m 0 -r"    # just add "-r" here to accept messages from remote hosts
KLOGD_OPTIONS="-x"
SYSLOG_UMASK=077
Then restart the syslog service on the log server:

service syslog restart
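To check that forwarding works end to end, the logger command can generate a test message; a minimal sketch (run logger on the Ubuntu client, then look for the line in the server's log files):

logger -p auth.info "remote logging test from ubuntu"    # emit a test message with facility auth, level info
netstat -ulnp | grep 514                                  # on the server: confirm syslogd is listening on UDP port 514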
Over time, log files grow larger and larger. Once a log file exceeds a certain size it can affect system performance, and log files usually need to be backed up as well, so logs have to be rotated. Logs can be rotated yearly, monthly, weekly, or whenever they reach a given size. On Linux this is done with the logrotate command, and a cron scheduled task makes it easy to rotate log files on a schedule. /etc/logrotate.conf holds the rotation configuration; some of its settings are described below:
 
[root@xhot ~]# cat /etc/logrotate.conf
# see "man logrotate" for details
# rotate log files weekly
weekly                        # rotate once a week
# keep 4 weeks worth of backlogs
rotate 4                      # keep 4 rotated copies; with weekly rotation, only the last 4 weeks of logs are retained
# create new (empty) log files after rotating old ones
create                        # recreate the log file after it has been rotated away
# uncomment this if you want your log files compressed
#compress                     # compress rotated logs
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d      # per-service (custom) rotation configs are kept in this directory

# no packages own wtmp -- we'll rotate them here
/var/log/wtmp {               # rotation parameters for /var/log/wtmp
    monthly                   # rotate once a month
    create 0664 root utmp     # recreate the file after rotation with owner root, group utmp, mode 0664
    rotate 1                  # keep one rotated copy
}

# system-specific logs may also be configured here.
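After editing /etc/logrotate.conf you do not have to wait for cron; logrotate can be exercised by hand, which is a handy way to validate a new policy (the cron.daily path is where most distributions install the daily job):

logrotate -d /etc/logrotate.conf    # dry run: report what would be rotated without changing anything
logrotate -f /etc/logrotate.conf    # force a rotation now, ignoring the schedule
cat /etc/cron.daily/logrotate       # the cron job that normally drives the daily rotation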
Here are two examples:
1) Rotate all files in the /var/log/news/ directory monthly, keeping two rotated copies; move the old log files to the /var/log/news/old directory; skip a log that does not exist; signal (HUP) the news server innd after rotation; and do not compress. Append the following to the end of /etc/logrotate.conf:
 
/var/log/news/* {
    monthly
    rotate 2
    olddir /var/log/news/old
    missingok
    postrotate
        kill -HUP `cat /var/run/inn.pid`
    endscript
    nocompress
}
2) Set rotation parameters for /var/log/httpd/access.log and /var/log/httpd/error.log: keep five rotated copies, rotate when a log reaches 100 KB, mail the log that is rotated out to root@localhost, and restart (HUP) httpd after rotation. Append the following to /etc/logrotate.conf:
/var/log/httpd/access.log /var/log/httpd/error.log {
    rotate 5
    mail root@localhost
    size 100k
    sharedscripts
    postrotate
        /sbin/killall -HUP httpd
    endscript
}
Remember to restart the syslog service if you changed its configuration; logrotate changes take effect the next time logrotate runs.