Reposted from http://blog.chinaunix.net/uid-25100840-id-3271224. A few days ago an interview included a rather odd question: explain the meaning of >/dev/null 2>&1. I scratched my head and could not answer it; all I wrote was something about clearing a buffer. Searching online afterwards turned up http://wangqiaowqo.iteye.com/blog/1354226 , of which at the time I only read the part below. In shell scripts you often see: >/dev/null 2>&1
The output of a command can be redirected with an expression of the form N>, where N is a file descriptor number:
/dev/null represents the null (empty) device file; anything written to it is discarded
> means redirect to, for example: echo "123" > /home/123.txt
1 means stdout, the standard output; it is the system default, so ">/dev/null" is equivalent to "1>/dev/null"
2 means stderr, the standard error
& introduces a file descriptor rather than a file name, so 2>&1 means redirect descriptor 2 (stderr) to the same place descriptor 1 (stdout) currently points
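A minimal sketch putting these pieces together (the target file /tmp/err.log and the nonexistent path /no/such/dir are examples of mine, not from the original):
# stdout to a file; > and 1> are the same thing
echo "123" > /home/123.txt
echo "123" 1> /home/123.txt
# stderr alone to a file
ls /no/such/dir 2> /tmp/err.log
# stderr to wherever stdout currently points (here: the terminal)
ls /no/such/dir 2>&1
# discard both streams
ls /no/such/dir > /dev/null 2>&1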
Now back to the expression in the title:
1>/dev/null first redirects standard output to the null device file, i.e. no information is printed to the terminal.
2>&1 then redirects standard error to the same place as standard output; since standard output has already been redirected to the null device, standard error is discarded there as well. (Reading this far was enough for me; I read the rest later.)
A. 1>/dev/null redirects the command's standard output to /dev/null; 2>/dev/null redirects the command's error output to /dev/null. 1 denotes stdout (standard output), 2 denotes stderr (standard error). /dev/null is rather like the Recycle Bin in Windows, except that whatever goes in can never come out. >/dev/null 2>&1 masks both standard output and standard error so that nothing is displayed.
B. >/dev/null 2>&1 can also be written as 1>/dev/null 2>&1: stdout is redirected to /dev/null (no stdout), and stderr is redirected to stdout (so stderr is gone as well). The end result is that both stdout and stderr are turned off.
C. A little practice to understand the above:
#ls /usr/nothing
#ls /usr/nothing 2>/dev/null
#ls /usr/nothing >/dev/null 2>&1
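Roughly what each of the three shows (the exact wording of the error message depends on the ls version, so take it as an assumption):
#ls /usr/nothing
ls: cannot access /usr/nothing: No such file or directory    <- stderr, still shown
#ls /usr/nothing 2>/dev/null
(nothing: stderr is discarded, and there was no stdout anyway)
#ls /usr/nothing >/dev/null 2>&1
(nothing: stdout is discarded, and stderr follows it into /dev/null)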
In scripts on UNIX systems we often see a usage like "2>&1", for example "/path/to/prog 2>&1 >/dev/null &". What exactly does it mean?
UNIX has several input and output streams, each identified by a number: 0 - standard input (stdin), 1 - standard output (stdout), 2 - standard error (stderr). "2>&1" means redirect stderr to stdout so that the two are shown together on the screen. If no number is given, the redirection applies to stdout (1) by default, so "ls -l > result" is equivalent to "ls -l 1> result". Keeping these numbers in mind makes the redirection process easier to understand in general.
The following examples illustrate:
#cat std.sh
#!/bin/sh
echo "stdout"
echo "stderr" >&2
#/bin/sh std.sh 2>&1 >/dev/null
stderr
#/bin/sh std.sh >/dev/null 2>&1
The first command prints "stderr". The shell processes redirections from left to right: 2>&1 first points stderr at whatever stdout points to at that moment, which is still the terminal, and only afterwards is stdout sent to /dev/null. So stdout is discarded but stderr still reaches the screen. The second command prints nothing: stdout is redirected to /dev/null first, and stderr is then redirected to the current stdout, so stderr ends up in /dev/null as well.
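A step-by-step trace of where each descriptor points, written as comments (my sketch of the mechanism described above):
# /bin/sh std.sh 2>&1 >/dev/null
#   start      : 1 -> terminal, 2 -> terminal
#   2>&1       : 2 -> terminal   (copies where 1 points right now)
#   >/dev/null : 1 -> /dev/null
#   result     : "stderr" is printed, "stdout" is thrown away

# /bin/sh std.sh >/dev/null 2>&1
#   start      : 1 -> terminal, 2 -> terminal
#   >/dev/null : 1 -> /dev/null
#   2>&1       : 2 -> /dev/null  (copies where 1 points right now)
#   result     : nothing is printed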
Today, while doing routine work, I noticed that the sendmail processes on one machine were using an incredible amount of resources and the machine's IO seemed slow. It turned out that ls would all but hang in the /var/spool/clientmqueue directory: there were at least 100,000 files in it.
ps | grep sendmail showed that these sendmail processes were all busy with /var/spool/clientmqueue.
After cd'ing in and opening a file to read, I found it was the output of a program run from my crontab that had thrown an exception. Apparently every time my crontab ran, Linux tried to mail the output to the crontab user, but sendmail was not set up properly, so everything was dumped into /var/spool/clientmqueue. Then I understood why other people append >/dev/null 2>&1 to their crontab entries: that way each crontab run does not mail out its results or exceptions.
After the 100,000 files had been deleted, everything went back to normal.
Symptom:
There are a huge number of files in the /var/spool/clientmqueue/ directory of the Linux system.
Cause analysis: a user on the system has cron jobs enabled, the programs run by cron produce output, that output is mailed to the cron user, and sendmail is not running, so these files are generated.
Solution: 1. Append >/dev/null 2>&1 to the commands in the crontab.
2. Knowledge points:
2>: redirects standard error.
2>&1: redirects standard error to wherever standard output is being sent. So in the command above everything is redirected to /dev/null, i.e. discarded, and any errors produced are discarded along with it.
3. The specific code:
(1) # crontab -u cvsroot -l
* * * /opt/bak/backup
* * * /opt/bak/backup2
(2) # vi /opt/bak/backup
#!/bin/sh
cd /
getfacl -R repository > /opt/bak/backup.acl
(3) # vi /opt/bak/backup2
#!/bin/sh
week=`date +%w`
tar zcvfp /opt/bak/cvs$week/cvs.tar.gz /repository >/dev/null 2>&1
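A quick way to check whether cron output is still piling up in the mail queue after the change (a sketch of mine; the directory is the one from this article):
# ls /var/spool/clientmqueue | wc -l      (note the count)
#   ... let the cron jobs above run a few times ...
# ls /var/spool/clientmqueue | wc -l      (the count should no longer grow)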
4. Clear the files in the /var/spool/clientmqueue/ directory:
# cd /var/spool/clientmqueue
# rm -rf *
If there are so many files that the command above is too slow, run the following instead:
# cd /var/spool/clientmqueue
# ls | xargs rm -f
On a windy and beautiful night I was sitting at home watching TV when the phone rang. Brother Yang had found that on one of the main machines the server's /var/spool/mqueue was stuffed with a pile of mail that had never been sent out, and since /var/spool was not a separate partition it was also eating into the root (/) filesystem, which had only about 600 MB left to use. There were a few possibilities:
This server relays mail for machines at the school, so the queue might be full of advertising mail.
A machine that sends mail through this server might be infected and sending mail out by itself.
Whichever of the two causes it was, the first thing to do was to get the swallowed space back, so the plan was to delete the whole mail queue, after stopping the mail service of course.
When it came to actually deleting the mail, the problem was that there were so many files in the directory that ls took forever and never came back, and mailq could not tell which messages were stuck in the queue either. After thinking it over, the only option was to wipe the lot, so down went rm -rf *. Then a very strange thing happened: there were so many files that they could not be deleted; it was the first time I had ever heard rm complain (heard, because brother Yang was the one at the machine, so he was the one who actually saw it ^^).
The error was: bash: /bin/rm: Argument list too long
Since rm could not delete them, brother Yang went to the machine, started X Window, and used the file manager Linux users know best, the snail Nautilus, to open /var/spool/mqueue. Oh, through X Window they could be deleted! The idea then was that if X Window was that capable, it could be used to delete the rest of the queue files as well, so the phone was hung up and brother Yang was left diligently deleting away in the machine room...
Of course I was not idle either. The TV series had just finished, so I opened my laptop and went back to surfing the net... and then it suddenly occurred to me: why not try find? To delete old history files one uses a command like find . | xargs rm -rf. Don't underestimate this little command: shortly after I passed it on, brother Yang reported that everything had been deleted, and it was only 10 o'clock at night. So I recommend this command: it gets everything, and it is fast...
Oh, I have not yet said why the graphical approach was abandoned: Nautilus loads the directory listing in batches instead of reading everything at once, so it showed roughly a few thousand entries at a time, and after deleting those another few thousand appeared... quite scary, presumably because of that batching.
After the find . | xargs rm -rf, the rest was cleared surprisingly quickly. There was not much time left and the school was about to close its doors, so we said bye bye, and the hard-working brother Yang on site also went home to rest.
Analysis:
There is a maximum length for the argument list a command such as rm can be given, so when the shell expands * into too many file names the error above appears. The point of find . | xargs rm -rf is to have find list the files and pipe them to xargs, and xargs then feeds them to rm in batches that stay within that limit, so the files can be deleted after all.
As for the deeper reason, maybe it is the rm version or a file system issue; I am not going to chase it further.
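A small illustration of the batching xargs does (the -n 3 here is artificial, only to make the batches visible; in real use xargs picks a batch size that stays below the system's argument-list limit):
# printf 'a b c d e f g h\n' | xargs -n 3 echo
a b c
d e f
g h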
Here is a little shell script I used to test this:
Download:
mk-file.sh
(This shell script simply creates 20,000 small files in one go.)
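The original download is not reproduced here, but a minimal sketch of what such a script might look like is below (the test-file-N naming is taken from the test output further down; the loop itself is my assumption):
#!/bin/sh
# create 20,000 small empty files: test-file-0 ... test-file-19999
i=0
while [ $i -lt 20000 ]; do
    touch test-file-$i
    i=`expr $i + 1`
done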
Now a small test:
root # ./mk-file.sh
root #
This produces 20,000 small files named test-file-{1~19999}.
Deleting directly with rm:
root # rm -rf test-file-*
-bash: /bin/rm: Argument list too long (the error quoted above appears)
Deleting with find instead:
root # find . -iname 'test-file-*' | xargs rm -rf
root # ls
mk-file.sh
root #
And this way everything is erased.
---------------------------------
#tool_action
4 * * * /bin/sh /data/stat/crontab/exec_tool_action_analysis_db.sh >> /data/stat/logs/exec_tool_action_analysis_db.sh.log >/dev/null 2>&1
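Note that in the crontab line above stdout is redirected twice: it is first appended to the log file and then re-redirected to /dev/null, and the later redirection wins, so the log stays empty while both streams are discarded. If the intent is to keep the log and only suppress the cron mail, the usual form would be the one below (my assumption about the intent, not part of the original):
#tool_action
4 * * * /bin/sh /data/stat/crontab/exec_tool_action_analysis_db.sh >> /data/stat/logs/exec_tool_action_analysis_db.sh.log 2>&1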