Written test summary: Linux (continuously updated)

Source: Internet
Author: User
Tags: ack, auth, ip, number, mutex, posix, stdin, time, interval, dmesg

Because my accuracy on written-test practice has been miserable, I am starting this "written test summary" series, which I expect to keep updating through the autumn recruiting season.
Inter-process communication methods in Unix

(1) Pipe: pipes can be used for communication between related processes, i.e. one process can talk to another with which it shares a common ancestor.
(2) Named pipe (FIFO): the named pipe overcomes the restriction that a pipe has no name, so besides everything a pipe can do it also allows communication between unrelated processes. Named pipes have corresponding file names in the file system and are created with the mkfifo command or the mkfifo() system call.
(3) Signal: a more complex means of communication, used to notify the receiving process that some event has occurred; besides communication between processes, a process can also send signals to itself. In addition to the early UNIX signal() semantics, Linux supports the sigaction() function, whose semantics conform to the POSIX.1 standard (sigaction is in fact BSD-derived: BSD used it to implement a reliable signal mechanism behind a unified external interface).
(4) Message queue: a message queue is a linked list of messages; there are POSIX message queues and System V message queues. A process with sufficient permission can add messages to a queue, and processes granted read permission can read messages from it. Message queues overcome the limitations that signals carry little information and that pipes carry only unformatted byte streams with a restricted buffer size.
(5) Shared memory: allows multiple processes to access the same memory region; it is the fastest available form of IPC, designed to address the low efficiency of other communication mechanisms. It is usually combined with another mechanism, such as semaphores, to achieve synchronization and mutual exclusion between processes.
(6) Memory mapping (mapped memory): allows communication between any number of processes; each participating process maps a shared file into its own address space.
(7) Semaphore: used primarily for synchronization between processes and between different threads of the same process.
(8) Socket: a more general inter-process communication mechanism that also works between processes on different machines. Originally developed in the BSD branch of UNIX, it is now widely portable: Linux and the System V variants all support sockets.

For comparison

Windows provides, between processes: pipes, signals, message queues, shared memory, semaphores, and sockets; between threads: critical sections, mutexes, semaphores, and sockets.

Contents of each log file in Linux

/var/log/messages - overall system information, including logs from system startup; mail, cron, daemon, kern, and auth messages are also recorded here.
/var/log/dmesg - the kernel ring buffer. At boot, much hardware-related information scrolls by on screen; view it with the dmesg command.
/var/log/auth.log - system authorization information, including user logins and the privilege mechanisms used.
/var/log/boot.log - logs from system startup.
/var/log/daemon.log - log information from the various system daemons.
/var/log/dpkg.log - logs of package installs and removals done with dpkg.
/var/log/kern.log - logs generated by the kernel; helpful when troubleshooting a custom kernel.
/var/log/lastlog - the most recent login of every user. This is not an ASCII file, so view it with the lastlog command.
/var/log/maillog or /var/log/mail.log - logs from the system's e-mail server; for example, sendmail sends its log information here.
/var/log/user.log - user-level messages of all severities.
/var/log/Xorg.x.log - log information from the X server.
/var/log/alternatives.log - update-alternatives activity is recorded in this file.
/var/log/btmp - all failed login attempts. Use the last command to view it, e.g. "last -f /var/log/btmp | more".
/var/log/cups - logs of all printing activity.
/var/log/anaconda.log - all installation information is stored here when Linux is installed.
/var/log/yum.log - packages installed with yum.
/var/log/cron - when the cron daemon starts a job, the related information is recorded here.
/var/log/secure - authentication and authorization information; for example, sshd records everything here, including failed logins.
/var/log/wtmp and /var/log/utmp - login records; commands such as who display this information.
/var/log/faillog - user login failures, including erroneous login commands.

Besides the log files above, /var/log also contains the following subdirectories, depending on what the system runs:
/var/log/httpd/ or /var/log/apache2 - the web server's access_log and error_log.
/var/log/lighttpd/ - access_log and error_log of lighttpd.
/var/log/mail/ - additional logs for the mail server.
/var/log/prelink/ - information about .so files modified by prelink.
/var/log/audit/ - information stored by the Linux audit daemon.
/var/log/samba/ - information stored by Samba.
/var/log/sa/ - the sar files collected daily by the sysstat package.
/var/log/sssd/ - used by the security services daemon.

Network-related commands

2. nestat should be netstat: shows network connections, routing tables, and interface statistics.
3. route: shows or manipulates the kernel routing table.
4. tracert is the Windows command and traceroute is the Linux one.
The ifconfig command is used to view and configure the local network interfaces.
Android DVM processes, Linux processes, and application processes

Each DVM instance is a process in Linux.
First the concept: how is each Android process created? Here is a good way to understand how the zygote process incubates new processes: zygote calls the fork() function to create a child process zygote', and the child shares the parent's code area and linking information. In the illustration, to the left of the fork() arrow is the zygote process, to the right is the newly created zygote' child; the child then hands execution over to application A and the Android program starts running. The newly created application A reuses the library linkage and resources that the zygote parent already holds, so it starts very quickly.

In addition, regarding the illustration above: zygote starts up, runs the DVM, and loads the required classes and resources into memory. It then calls fork() to create the zygote' child, and the child dynamically loads and runs application A. The running application reuses the DVM that zygote has already initialized and started, and speeds up further by using the classes and resources already loaded into memory.
Let's look at the Android process model.

Linux boots by calling the start_kernel function; when the kernel boot phase completes, the first user-space process, init, is launched. The figure below is the process model of the Android system:

As the diagram shows, the Linux kernel creates a kernel process called kthreadd (PID 2) to create the other kernel-space processes, and creates the first user-space process, init (PID 1), to start native processes such as zygote. Zygote is itself a native process dedicated to incubating Java processes. This clearly describes the process model of the entire Android system.

Rules of the Nagle algorithm:
(1) If the segment length reaches MSS, it may be sent.
(2) If it contains FIN, it may be sent.
(3) If the TCP_NODELAY option is set, it may be sent.
(4) If the TCP_CORK option is not set and all previously sent small segments (shorter than MSS) have been acknowledged, it may be sent.
(5) If none of the above holds but a timeout (generally 200 ms) occurs, send immediately.
The Nagle algorithm allows at most one unacknowledged small packet to exist on the network, regardless of packet size, so it is in fact an extended stop-and-wait protocol, except that it stops and waits per packet rather than per byte. Nagle is driven entirely by TCP's ACK mechanism, which brings some problems: if the peer ACKs quickly, Nagle in fact does not coalesce many packets, and although network congestion is avoided, overall network utilization stays low. The Nagle algorithm is one half of the Silly Window Syndrome (SWS) avoidance scheme. SWS avoidance prevents the sending of small amounts of data: Nagle is its implementation on the sender side, while the receiver's job is not to advertise small increases in buffer space, i.e. not to advertise a small window unless the buffer space has grown significantly. "Significant growth" here is defined as a full-sized segment (MSS) or half the maximum window, whichever is smaller.
Note: the BSD implementation allows the small tail segment of a large write to be sent on an idle connection: when more than one MSS of data is sent, the kernel first sends the N full-MSS segments in order and then sends the small tail segment without further waiting (assuming the network is not congested and the receive window is large enough). For example, in the experiment from an earlier post, the client first writes an int-sized piece of data (block A) to the socket. Because the connection is idle at that point (there is no unacknowledged small segment), the int is sent to the server immediately. The client then writes "\r\n" (block B). At this point the ACK for block A has not yet returned, so there is an unacknowledged small segment and block B is not sent immediately; only when block A's ACK is received (about 40 ms later) is block B sent. Hidden here is another question: why does the ACK for block A arrive only after 40 ms? Because TCP/IP has not only the Nagle algorithm but also a delayed-acknowledgment mechanism. When the server receives data it does not send an ACK at once; instead it delays for a period T, expecting to send its reply data within T so the ACK can piggyback on the reply, as if the ACK hitched a ride with the answer. In my earlier tests T was about 40 ms, which explains why "\r\n" (block B) is always sent 40 ms after block A.
Of course, the 40 ms delayed-ACK value is not fixed: the delayed-acknowledgment time of a TCP connection is generally initialized to the 40 ms minimum and then continually adjusted according to the connection's retransmission timeout (RTO) and the interval between the last received packet and the current one. Delayed acknowledgment can also be cancelled by setting the TCP_QUICKACK option.
Shutdown and reboot

In Linux, the reboot command restarts the machine, while shutdown -r now stops it and then restarts it. People say they are the same, but in fact there are differences.

The shutdown command can safely shut down or restart a Linux system, sending a warning message to all logged-in users before the system goes down. The command also lets the user specify a time parameter, either an exact time or a period measured from now.

The exact time format is hh:mm (hours and minutes); a period is written as + followed by a number of minutes. The system synchronizes data to disk automatically when executing this command.

General format for this command: shutdown [options] [TIME] [warning message]

The meanings of the options in the command are:

-k do not really shut down; only send a warning message to all users

-r reboot immediately after shutdown

-h halt after shutdown; do not reboot

-f skip fsck on the next reboot (fast reboot)

-n shut down quickly without going through the init program

-c cancel a shutdown that is already running

Note that this command can be used only by the superuser.

Example 1, shut down the system in 10 minutes and then reboot: # shutdown -r +10

Example 2, shut down the system immediately without rebooting: # shutdown -h now

halt is the simplest shutdown command; it effectively invokes shutdown -h. When halt executes, application processes are killed, and the kernel stops once outstanding file-system writes complete.

Some of the parameters of the halt command are as follows:

[-f] force a shutdown or reboot without calling shutdown

[-i] shut down all network interfaces before shutting down or restarting

[-p] call poweroff when shutting down; this is the default option


reboot works much like halt, except that it restarts the system while halt shuts it down; its parameters are similar to halt's. The reboot command restarts by deleting all processes rather than terminating them gracefully, so it shuts the system down quickly, but it can cause data loss if other users are working on the system. For that reason reboot is mainly used in single-user mode.

init is the ancestor of all processes, and its process number is always 1. init is used to switch the system's runlevel, and the switch happens immediately: the init 0 command switches the runlevel to 0 (shutdown), and init 6 switches it to 6 (reboot).

Which functions trap into the kernel:

fopen and exit. fopen opens a file, and a file can also be viewed as a device; opening a device causes an I/O request (an IRP, in driver terms) to be sent to the driver the device belongs to, and drivers tied to real hardware run in the kernel. The exit function ends the process, and ending a process requires access to the PCB (process control block), TCB (thread control block), and so on, which exist in the kernel. For the others the reason is simple: memcpy and strlen can be written directly without calling anything, so such functions are certainly not implemented in the kernel.
Limits on the number of concurrent connections to a Linux server

First, the number of IP addresses: the more IP addresses the system has, the more connections can be established.

Second, memory. From the configuration side, open the configuration file with vim: # vim /etc/sysctl.conf
net.ipv4.tcp_mem[0]: below this value, TCP is under no memory pressure.
net.ipv4.tcp_mem[1]: above this value, TCP enters the memory-pressure phase.
net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate sockets.
The units above are pages, not bytes. Memory size therefore affects networking capacity.

Third, others: the maximum number of file handles limits the upper bound on connections, since sockets are operated through the VFS.
sed command: sed is used to add/delete lines, replace/display lines, search and replace, and modify files in place.
-n: quiet (silent) mode. In normal sed usage, all data from stdin is listed on the screen; with the -n parameter, only lines specially processed by sed are listed.
p: print the selected data; normally p is used together with sed -n.
d: delete.
a: append (add a new line).

fp = fopen("file", "w"); opens the file write-only: it can be written but not read.
Common functions provided by the pthread library for multithreaded programming under Linux

A. pthread_create creates a thread. B. pthread_join waits for a thread to end. C. pthread_mutex_init initializes a mutex. D. pthread_exit ends a thread. The pthread_join() function waits for the thread specified by thread to terminate; if that thread has already terminated, pthread_join() returns immediately. The specified thread must be joinable.

If retval is not NULL, pthread_join() copies the exit status of the target thread (that is, the value the target thread supplied to pthread_exit(3)) into the location pointed to by *retval. If the target thread was canceled, PTHREAD_CANCELED is placed in *retval.

If multiple threads simultaneously try to join the same thread, the result is undefined. If the thread calling pthread_join() is canceled, the target thread remains joinable (that is, it will not be detached).


This question examines input/output redirection under Linux.

In Linux, every open file is given a file descriptor, including standard input (stdin), standard output (stdout), and standard error (stderr), described by 0, 1, and 2 respectively.

Option A: command &> file redirects both standard output (stdout) and standard error (stderr) to the specified file.

Option B: syntax error. The correct syntax is m >& n, where m and n are file descriptors; if m is not specified, it defaults to file descriptor 1.

Option C: command > file 2>&1 consists of two parts. First, command > file redirects standard output (stdout) to the file. Then 2>&1 sends standard error (stderr) to wherever file descriptor 1 points, which is the location of standard output; since stdout has already been directed to the file, stderr is redirected to the file as well.

Option D: command 2> file 1> file can also be viewed as two parts. First, command 2> file redirects standard error (stderr) to the file; then 1> file redirects standard output (stdout) to the file. The final file will not contain the stderr output, because it is overwritten by the later stdout output.

Bulk-delete the files with suffix .c in the current directory

find . -maxdepth 1 -name "*.c" | xargs rm

rm *.c

rm *.c

About Linux system load

Load on a Linux system is a measure of the current CPU work demand; simply put, it is the length of the run queue.

Load average is the average load over a period of time (1 minute, 5 minutes, 15 minutes).

View the current load average with the system command w.

E.g. load: 2.5, 1.3, 1.1 indicates that the system's load pressure is gradually decreasing (wrong). Reason: the explanations I found online were unclear, so here is a plainer one (corrections welcome): define a "load amount" = load average x time. The 5-minute load amount is 1.3 x 5 = 6.5, and the 15-minute load amount is 1.1 x 15 = 16.5. Then 16.5 - 6.5 = 10 is the load amount for minutes 5-15, and 6.5 - 2.5 = 4 is the load amount for minutes 1-5. Per minute, both of those intervals average about 1.0, while the last minute alone carried 2.5; averaged out, the load is certainly not decreasing.

About the ext2/ext3/ext4 file systems

ext2 and Ext3

The Linux ext2/ext3 file systems use index nodes (inodes) to record file information, acting somewhat like the Windows file allocation table. An inode is a structure that contains the file's length, creation and modification times, permissions, ownership, and location on disk. A file system maintains an array of inodes, and each file corresponds to exactly one element in that array. The system assigns each inode a number, namely its index in the array, called the inode number. The Linux file system stores the inode number together with the file name in the directory, so a directory simply pairs file names with inode numbers; each pairing of a name and an inode in a directory is called a link. A given file has exactly one inode number, but one inode number may have several file names corresponding to it; therefore the same file on disk can be accessed through different paths.
The file system Linux used by default was ext2, and ext2 is indeed efficient and stable. However, as Linux moved into critical business applications, a weakness of the Linux file system gradually showed: the default ext2 file system is not a journaling file system, a fatal weakness for critical applications. The following introduces the ext3 journaling file system.
The ext3 file system was developed directly from ext2, and by now ext3 is very stable and reliable. It is fully compatible with ext2, so users can transition smoothly to a journaling file system. That was in fact the original design goal of ext3.

However:

Inodes split into the inode in memory and the inode in the file system. To avoid confusion we call the former the VFS inode, and, using ext2 as the representative, call the latter the ext2 inode. The following is a brief description of the VFS inode and the ext2 inode:
1. The VFS inode includes the file's access permissions, owner, group, size, creation time, access time, and last modification time. It is the most basic unit with which Linux manages the file system, and the bridge connecting any subdirectory or file in the file system. The information in the inode structure is taken from the file system on the physical device and is filled in by functions supplied by that file system; it exists only in memory, through which the inode can be accessed. Although every file has an associated inode, the system creates the corresponding in-memory inode structure only when it is needed. Inode structures are linked into chains, through which we can reach the files we need; the VFS also uses a cache of allocated inode structures and a hash table to improve system performance. The struct inode_operations *i_op field in the inode structure provides a table of inode operations; through the functions in this table we can perform all kinds of operations on the VFS inode. Each inode structure has an inode number i_ino, which is unique within the same file system.

2. The ext2 inode defines the structure of the file system and the management information describing each file in the system. Each file has exactly one ext2 inode; even if the file contains no data, its index entry still exists. Each file is described by a single ext2 inode, and each inode has a unique identifier. The ext2 inode supplies the basic file information to the in-memory inode structure, and the system writes changes in the in-memory inode back to the ext2 inode. The ext2 inode corresponds to the ext2_inode structure.
The ext2 inode does not record the file's creation time. (The quiz answer states that the ext3 inode does; strictly speaking, a dedicated creation timestamp, crtime, only appears with ext4's enlarged inode.)
A record with no access rights. Access permissions are based on the permissions of the user you belong to.

Characteristics of the ext3 journaling file system
1, high availability
After switching to the ext3 file system, the system does not need to check the file system even after an abnormal shutdown; recovering an ext3 file system after an outage takes only about 10 seconds.
2, data integrity:
The ext3 file system greatly improves file-system integrity and avoids damage from unexpected downtime. ext3 offers several modes to choose from to guarantee data integrity; one of them is the "keep file system and data consistent" mode, in which you will never see junk files left on disk by an abnormal shutdown.
3, File system speed:
Although ext3 sometimes writes data more than once, overall ext3 often performs better than ext2, because ext3's journaling optimizes the movement of the disk's read-write heads. So file-system performance is not reduced compared to ext2.
4, Data conversion
Converting from ext2 to ext3 is very easy: just type a couple of commands and the conversion completes; users need not spend time backing up, restoring, or reformatting partitions. With tune2fs, a small tool shipped with ext3 (typically tune2fs -j on the device), an ext2 file system converts easily into an ext3 journaling file system. In addition, an ext3 file system can be mounted directly as ext2 without any changes.
5, multiple log modes
ext3 has several journaling modes. One mode of operation logs both file data and metadata (the data that describes the file system's data, i.e. data about the data): data=journal. The other modes log only the metadata and not the file data: the so-called data=ordered and data=writeback modes. System administrators can trade the system's working speed against file-data consistency according to the actual workload.

Advantages of ext3
Why migrate from ext2 to ext3? Four main reasons: availability, data integrity, speed, and ease of migration.

The latest EXT4
The Linux kernel has officially supported the new ext4 file system since 2.6.28. ext4 is an improved version of ext3 that modifies some of ext3's important data structures, rather than, as ext3 did with ext2, merely adding a journal. ext4 provides better performance and reliability as well as richer features:

1. Compatible with ext3. By running a few commands you can migrate online from ext3 to ext4 without reformatting the disk or reinstalling the system. The original ext3 data structures are retained; ext4 operates on new data, and the whole file system thereby gains ext4's larger-capacity support.
2. Larger file systems and larger files. ext4 supports file systems of up to 1 EB (1,048,576 TB; 1 EB = 1024 PB, 1 PB = 1024 TB) and files of up to 16 TB, compared with ext3's current maximums of a 16 TB file system and 2 TB files.
3. An unlimited number of subdirectories. EXT3 currently supports only 32,000 subdirectories, while EXT4 supports an unlimited number of subdirectories.
4. Extents. ext3 uses indirect block mapping, which is extremely inefficient when manipulating large files: a 100 MB file requires a mapping table of 25,600 blocks (4 KB each) in ext3. ext4 introduces the extents concept popular in modern file systems: each extent is a set of contiguous data blocks, so the file above is expressed as "the data is stored in the next 25,600 blocks", which is far more efficient.
5. Multiblock allocation. When data is written to an ext3 file system, ext3's block allocator allocates only one 4 KB block at a time, so a 100 MB file calls the allocator 25,600 times; ext4's multiblock allocator (mballoc) supports allocating many data blocks in a single call.
6. Delayed allocation. ext3's policy is to allocate data blocks as early as possible; ext4, like other modern file systems, delays allocation as long as possible, until the cached file data is about to be written to disk. This optimizes block allocation for the whole file and, combined with the previous two features, can significantly improve performance.
7. Fast fsck. The first step of fsck used to be very slow because it checked all inodes; ext4 now adds to each group's inode table a list of unused inodes, so fsck on an ext4 file system can skip them and check only the inodes in use.
8. Journal checksumming. The journal is the most heavily used area of the disk and its blocks are prone to hardware failure, and replaying a corrupted journal can cause further damage. ext4's journal checksumming makes it easy to determine whether journal data is corrupt, and it merges ext3's two-phase commit into a single phase, improving performance while increasing safety.
9. "No journaling" (no journaling) mode. Journaling has some overhead, and ext4 allows it to be turned off so that users with special needs can improve performance.
10. Online defragmentation. Although deferred allocations, multiple-block allocations, and extents can effectively reduce file system fragmentation, fragmentation is inevitable. EXT4 supports online defragmentation and will provide e4defrag tools for defragmenting individual files or entire file systems.
11. Inode-related features. ext4 supports a larger inode: ext3's default inode is 128 bytes, while ext4 defaults to 256 bytes in order to accommodate more extended attributes in the inode (such as nanosecond timestamps or inode versions). ext4 also supports fast extended attributes and inode reservation (inodes reservation).
12. Persistent preallocation (persistent preallocation). To ensure enough space for a download, P2P software often creates an empty file as large as the file being downloaded in advance, so that a disk shortage hours or days later does not break the download. ext4 implements persistent preallocation at the file-system level and provides an API for it (posix_fallocate() in libc), which is more efficient than the application doing it itself.
13. Barriers enabled by default. Disks have internal caches that reorder bulk writes to optimize performance, so the file system must write its commit record only after the journal data has reached the disk; if the commit record were written first and the journal later corrupted, data integrity would suffer. ext4 enables barriers by default: data after a barrier is written only once all data before the barrier is on disk. (The behavior can be disabled with the "mount -o barrier=0" command.) Source: http://blog.csdn.net/macrossdzh/article/details/5973639
Which commands display the current directory

pwd, or echo $(pwd), works.

$pwd by itself does not work: $ references the result of pwd, but nothing then processes that result. Either echo $(pwd) or echo $PWD works, meaning "take the output of the pwd command (or the value of the PWD environment variable) and print it to the screen"; PWD here is an environment variable.
Where the shell's default environment variables are set.

~/.profile

/etc/profile: this file sets the system's environment information for every user, and it is executed when a user first logs in. It also collects shell settings from the configuration files in the /etc/profile.d directory.
/etc/bashrc: this file is executed for every user running the bash shell; it is read each time a bash shell is opened.
~/.bash_profile: each user can use this file for shell information specific to their own use; when the user logs in, the file is executed only once. By default it sets some environment variables and then executes the user's .bashrc file.
~/.bashrc: this file contains bash information specific to your bash shells; it is read when you log in and each time you open a new shell.
~/.bash_logout: Executes the file every time you exit the system (the Bash shell is exited).
.bash_profile, .bashrc, and .bash_logout
The three files above are the bash shell's per-user environment configuration files, located in the user's home directory. .bash_profile is the most important: it is read every time the user logs in to the system, and all commands inside it are executed by bash. .profile (used by the Bourne and Korn shells) and .login (used by the C shell) are synonyms for .bash_profile, designed for compatibility with other shells. Debian uses the .profile file instead of .bash_profile. .bashrc is read when bash spawns another bash shell, i.e. when you type the bash command in a shell to start a new shell; this effectively separates the environments needed by login shells and subshells. In general, though, .bashrc is invoked from .bash_profile so that the user environment is configured uniformly.
.bash_logout is read when exiting the shell, so cleanup commands can be placed in this file.
When you log in to Linux, /etc/profile runs first, and then one of ~/.bash_profile, ~/.bash_login, or ~/.profile in the user's home directory (the name differs across Linux systems); the lookup order is ~/.bash_profile, ~/.bash_login, ~/.profile.
If ~/.bash_profile exists, it usually executes the ~/.bashrc file.

