In shell scripts we often see: >/dev/null 2>&1
The output of a command can be redirected with the > operator.
/dev/null is the null device file: anything written to it is discarded.
> indicates where output is redirected to, for example: echo "123" > /home/123.txt
1 means stdout (standard output); it is the default, so ">/dev/null" is equivalent to "1>/dev/null".
2 means stderr (standard error).
& marks a file descriptor rather than a file name, so 2>&1 redirects stderr to wherever descriptor 1 (stdout) currently points.
So, for the statement in the title of this article:
1>/dev/null first redirects standard output to the null device file, so no normal output reaches the terminal; in plain terms, nothing is displayed.
2>&1 then points standard error at the same place standard output currently points. Since stdout was just redirected to the null device file, stderr ends up in /dev/null as well.
A. 1>/dev/null redirects the command's standard output to /dev/null; 2>/dev/null redirects the command's error output to /dev/null. 1 denotes stdout (standard output), 2 denotes stderr (standard error). /dev/null is a bit like the Recycle Bin in Windows, except that whatever goes in can never come out. >/dev/null 2>&1 therefore masks both the standard output and the standard error so that neither is displayed.
B. >/dev/null 2>&1 can also be written as 1>/dev/null 2>&1: stdout is redirected to /dev/null (no stdout), and stderr is redirected to stdout (so stderr is gone as well). The end result is that both stdout and stderr are turned off.
C. A little practice to understand the above:
# ls /usr/nothing
# ls /usr/nothing 2>/dev/null
# ls /usr/nothing >/dev/null 2>&1
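The three forms can also be compared in one small sketch, capturing what each one prints (this assumes /usr/nothing does not exist on the system, so ls fails):

```shell
#!/bin/sh
# Compare the three redirection forms against a path that does not
# exist ("/usr/nothing" is assumed to be absent).

# 1) No redirection: the error message goes to stderr.
#    (Captured here via 2>&1 only so we can display it.)
out1=$(ls /usr/nothing 2>&1)

# 2) 2>/dev/null: stderr is discarded and stdout is empty -> nothing.
out2=$(ls /usr/nothing 2>/dev/null)

# 3) >/dev/null 2>&1: stdout is discarded, then stderr follows it
#    into /dev/null -> nothing at all.
out3=$(ls /usr/nothing >/dev/null 2>&1)

echo "1) [$out1]"
echo "2) [$out2]"
echo "3) [$out3]"
```

Only the first case prints an error message; the other two print nothing.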
We often see something like "2>&1" in scripts on Unix systems, for example "/path/to/prog 2>&1 >/dev/null &". What exactly does it mean?
UNIX has several input and output streams, each identified by a number: 0 is standard input (stdin), 1 is standard output (stdout), 2 is standard error (stderr). "2>&1" means redirecting stderr to stdout, so both are shown together on the screen. If no number is given, the default redirection applies to stdout (1); for example "ls -l > result" is equivalent to "ls -l 1> result". Thinking of it this way makes the redirection process easier to understand.
The following examples illustrate:
# cat std.sh
#!/bin/sh
echo "stdout"
echo "stderr" >&2
# /bin/sh std.sh 2>&1 >/dev/null
stderr
# /bin/sh std.sh >/dev/null 2>&1
The first command prints "stderr". When 2>&1 is processed, stdout still points at the terminal, so stderr is duplicated onto the terminal; only afterwards is stdout redirected to /dev/null. The second command prints nothing: stdout is redirected to /dev/null first, and 2>&1 then sends stderr to the same place, so stderr also ends up in /dev/null.
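The order-dependence can be reproduced without a separate script file; a minimal sketch, with a shell function standing in for std.sh:

```shell
#!/bin/sh
# emit writes one line to stdout and one to stderr, like std.sh above.
emit() {
    echo "stdout"
    echo "stderr" >&2
}

# 2>&1 first: at that moment stdout still points at the caller, so
# stderr is duplicated there; only then is stdout sent to /dev/null.
a=$( { emit 2>&1 >/dev/null; } 2>&1 )

# >/dev/null first: stdout goes to /dev/null, and 2>&1 then sends
# stderr to the same place, so nothing survives.
b=$( { emit >/dev/null 2>&1; } 2>&1 )

echo "a=[$a]  b=[$b]"
```

The first form leaves "stderr" visible; the second discards everything.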
Today, during routine work, I noticed that the sendmail processes on a machine looked odd and the machine's IO seemed slow. I then found that ls nearly hung in the /var/spool/clientmqueue directory, which held at least 100,000 files.
Running ps | grep sendmail showed that these sendmail processes were all working on /var/spool/clientmqueue.
I cd'ed in and opened one of the files, and found it recorded an exception from a program run by my crontab. Apparently every time a crontab job runs, Linux tries to mail its output to the crontab user, but sendmail was not running, so everything got dumped into /var/spool/clientmqueue. Now I understand why people before me added >/dev/null 2>&1 to their crontab entries: with it, cron no longer mails out the result or exception of every run.
After the 100,000 files were deleted, everything was back to normal.
Problem phenomenon:
A large number of files exist in the /var/spool/clientmqueue/ directory on a Linux system.
Cause analysis: some users on the system have cron enabled, and the programs cron executes produce output. That output is mailed to the user, but because sendmail is not running, the undelivered messages accumulate as these files.
Workaround: 1. Append >/dev/null 2>&1 to the command in the crontab entry.
2. Knowledge points:
2>: redirects standard error.
2>&1: redirects standard error to wherever standard output is going. With the command above, the execution result is redirected to /dev/null, i.e. discarded, and any resulting error is discarded along with it.
3. The specific code:
(1) # crontab -u cvsroot -l
* * * * * /opt/bak/backup
* * * * * /opt/bak/backup2
(2) # vi /opt/bak/backup
#!/bin/sh
cd /
getfacl -R repository > /opt/bak/backup.acl
(3) # vi /opt/bak/backup2
#!/bin/sh
week=`date +%w`
tar zcvfp /opt/bak/cvs$week/cvs.tar.gz /repository >/dev/null 2>&1
4. Clear the files under the /var/spool/clientmqueue/ directory:
# cd /var/spool/clientmqueue
# rm -rf *
If there are too many files and they take up too much space, removal with the command above will be slow; in that case execute the following instead:
# cd /var/spool/clientmqueue
# ls | xargs rm -f
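The same ls-pipe-to-xargs pattern can be tried safely on a throwaway directory first; a small sketch (the directory and file names are made up for the demo):

```shell
#!/bin/sh
# Create a scratch directory with a handful of files, then delete them
# the same way: list the names and feed them to rm via xargs.
dir=$(mktemp -d)
cd "$dir"
for i in 1 2 3 4 5; do
    : > "file-$i"          # create an empty file
done

ls | xargs rm -f           # names are passed to rm in batches

left=$(ls | wc -l)         # how many files remain
echo "files left: $left"
cd / && rmdir "$dir"
```

Unlike rm *, this never builds one giant argument list, so it keeps working no matter how many files the directory holds.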
On a windy and beautiful night, I was sitting at home watching TV when the phone rang: teacher Yang had discovered a host in trouble. The server's /var/spool/mqueue was stuffed with a pile of unsent mail, and since /var/spool was not a separate partition at the time, it was also eating into the root (/) filesystem, which had only 600 MB free. At that point I could think of a few possibilities.
1. PCs at the school send mail through this server, so the queue may simply be backed-up legitimate mail.
2. A machine that sends mail through this server may be infected and pumping out mail.
Whatever of the two reasons it was, the first priority was to reclaim the swallowed space, so the plan was to delete the entire mail queue, and of course to stop the mail service first.
While cutting down those queued letters, one problem was that the directory held far too many files: ls became extremely slow and gave no response, and using mailq to inspect the queue didn't work either. So I decided to wipe the whole thing and casually ran rm -rf *. Then something very strange happened: with that many files, rm could not delete them. It was the first time I had ever heard rm complain ("heard", because teacher Yang was at the keyboard, so he was the one who saw it ^^).
The error was: -bash: /bin/rm: Argument list too long
Although rm failed, Yang did not give up. He went to the machine, started X Window, and used Nautilus (the linuxer's old favorite) to open /var/spool/mqueue. And look, the files could be deleted with X Window! Figuring that if X Window was that capable it might as well delete the rest of the queue files too, I hung up the phone and left brother Yang alone in the machine room, deleting away...
Of course I wasn't exactly idle either; the TV show had just ended, so I went back to surfing the web... Then it suddenly occurred to me: why not use find for the deletion? One of the commands I had used before to delete history files was find ./ | xargs rm -rf. Don't underestimate this little command: not long after I sent it over, brother Yang reported that everything was gone. It was about 10 pm by then, so I can recommend it: very good, everything deleted, and unexpectedly fast...
As for why Nautilus appeared to work: it loads directory entries in batches rather than reading everything at once, so it shows a few thousand, you delete them, and then... another few thousand pop up. Scary. In theory that batching is why it could keep going.
After find ./ | xargs rm -rf finished, still marveling at how fast it was, I saw it was getting late and the school gate would be closing, so I said goodbye, and poor brother Yang finally went home to rest as well.
Analysis:
rm itself is fine; the problem is that the shell expands the wildcard into one huge argument list, and the kernel limits how long that list can be (ARG_MAX). So when a directory contains too many files, rm * fails before it even starts. The point of find ./ | xargs rm -rf is that find lists the files and pipes the names to xargs, and xargs feeds them to rm in batches, each batch small enough to fit under the limit, so the files can be deleted.
The exact trigger may also depend on the rm version or the filesystem; I did not chase it further.
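The batching behaviour of xargs can be made visible directly; a sketch that forces tiny batches with -n so the splitting is easy to see (in real use xargs sizes its batches from the system limit, cf. getconf ARG_MAX):

```shell
#!/bin/sh
# Seven names in batches of three: xargs invokes echo three times,
# so the output has three lines.
out=$(printf '%s\n' a b c d e f g | xargs -n 3 echo)
echo "$out"
# a b c
# d e f
# g
```

Replace echo with rm -f and drop -n, and you have exactly the deletion pattern from the story: however long the name list is, each rm invocation stays under the argument-length limit.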
Here is a little shell script I tested with.
Download:
mk-file.sh
(This shell script will produce 20,000 files.)
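The script itself is only offered as a download above; a minimal sketch of what it might contain (the name and the 20,000 count come from the text, the exact numbering scheme is an assumption):

```shell
#!/bin/sh
# mk-file.sh -- create 20,000 small files named test-file-N.
# Numbering from 1 to 20000 is an assumption; the original script
# is not shown in the article.
i=1
while [ "$i" -le 20000 ]; do
    echo "test" > "test-file-$i"
    i=$((i + 1))
done
```

Running it in an empty directory gives plenty of files to reproduce the "Argument list too long" error below.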
And then we'll do a little test:
root # sh mk-file.sh
root #
This produces 20,000 small files, called test-file-{1~19999}.
To delete directly using rm:
root # rm -rf test-file-*
-bash: /bin/rm: Argument list too long (the shell expands the wildcard into an argument list that exceeds the limit)
Change to find to delete:
root # find . -iname 'test-file-*' | xargs rm -rf
root # ls
mk-file.sh
root #
This erases them all.
---------------------------------
#tool_action
4 * * * * /bin/sh /data/stat/crontab/exec_tool_action_analysis_db.sh >> /data/stat/logs/exec_tool_action_analysis_db.sh.log 2>&1
5 * * * * /bin/sh /data/stat/crontab/exec_tool_action_analysis_user.sh >> /data/stat/logs/exec_tool_action_analysis_user.sh.log 2>&1
Otherwise, the following files will be produced under /var/spool/clientmqueue:
-rw-rw---- 1 smmsp smmsp 975 Jan 10:50 qfq0h2o4ei031197