To most people, software means an application that accomplishes certain tasks. In daily life and work, anyone who uses computer equipment inevitably uses a variety of software. The widespread adoption of handheld smart devices has made the concept of the application (app) even more familiar; even people who have never touched programming use the term fluently.
In fact, software is written by programmers in a programming language. But once a program is written, how does it actually get used? Not everyone knows, and fewer still can explain it clearly. You may be wondering: isn't the headline "OS Basics"? Why spend so long talking about software? Be patient. Next we will walk through how a computer application actually runs, which leads naturally to the concept of an operating system and its functions.
First of all, it is worth noting that a program consists of instructions and data; from another point of view, we can also say that a program consists of algorithms and data structures. Whatever programming language a programmer writes in, the source code is stored as a plain text file, the simplest and most primitive form, in the external storage of a computer or similar device, and the instructions and data live together in those files. This source code is written in a language close to human natural language. Software cannot run apart from hardware: a program must run on a hardware system of a particular architecture. What we call "running software" generally means loading the executable file from disk or other external storage into memory, placing its contents into instruction (code) and data segments, and then having the CPU fetch instructions one by one. Through the instruction pointer register, the CPU locates data in memory, performs operations on it, and finally outputs the results. The whole time, the software runs between memory and the CPU. In other words, for a program to run, it must be loadable into memory where the CPU can process it.
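To make the idea of instruction and data segments concrete, here is a minimal C sketch (our own illustration; the exact addresses and segment layout vary by platform and compiler) that prints where a few typical objects of a process live:

```c
#include <stdio.h>

int initialized = 7;   /* an initialized global: stored in the data segment */
int uninitialized;     /* an uninitialized global: stored in the bss segment */

int main(void) {
    int on_stack = 3;  /* a local variable: lives on the stack */
    /* the machine code of main() itself lives in the text (code) segment */
    printf("code (text): %p\n", (void *)main);
    printf("data:        %p\n", (void *)&initialized);
    printf("bss:         %p\n", (void *)&uninitialized);
    printf("stack:       %p\n", (void *)&on_stack);
    return 0;
}
```

Running it on a typical Linux system shows the code and data addresses clustered low and the stack address much higher, one snapshot of the in-memory layout described above.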
A computer is a digital device (simply put, a device that operates on and processes digitized information; a digital computer can process only binary data), so it cannot understand the natural-language-like source code of these high-level programs. A program must therefore be processed and encoded into a binary stream before the underlying hardware can execute it. But hardware is very low-level: it provides only primitive functionality, and the interfaces it exposes are crude and awkward. To make it easier for programmers to develop programs on top of the hardware, hardware manufacturers wrap these crude interfaces into more concrete, more usable assembly interfaces. The assembly interface is still very low-level, though, so every programmer who wanted to run high-level-language programs on the hardware would have had to write a driver program to bridge that gap, which would greatly increase every programmer's development burden. Moreover, on the same hardware platform the underlying support should be identical for different applications, so the drivers would all be the same; having each program develop its own driver is redundant and unnecessary. So people developed drivers for the underlying hardware once and shared them, so that programmers no longer had to write that code themselves but could simply call the drivers when needed. This greatly simplified development, shortened development cycles, and also helped programs run stably, since not every programmer writes drivers well. At this point, a program a programmer develops can access the hardware by invoking drivers, and once compiled into a binary stream it can be processed directly by the CPU. Such a running program is the basic unit of work on a computer, and we no longer call it a program but a process.
But the story does not end there. On early mainframes, it was common for users to bring their own monitor and keyboard, connect to a dedicated terminal interface, and use the host's computing resources. The host had only one contiguous memory space and usually only one CPU. So if multiple users used the host at the same time and each started a process, how should the computer cope? The key question is: how do multiple processes occupy and use memory and CPU resources? In fact, early computers did not support multitasking. If a program was started from a terminal while the computer was busy with other processes, the newly scheduled process could only wait; only after the earlier processes holding the system's resources finished executing could it be loaded into memory and executed by the CPU. We call this way of handling processes single-tasking.
This way of working, however, made computers far too inefficient. If one terminal launched a large process, it might occupy the CPU for a very long time, during which other programs could only wait; they could not even begin execution or be loaded into memory. Hence the idea of "multitasking": letting a computer use its limited system resources to process multiple tasks at the same time. This requires dividing the limited memory and CPU computing resources among a number of processes, and to guarantee that processes do not trample on, compete with, or interfere with one another, there must be a complete supervisory program responsible for resource allocation and monitoring. This supervisor must be general-purpose and thoroughly trustworthy, and it is what we all call the operating system. Strictly speaking, though, this is not yet the operating system as we usually picture it; it is the operating system in the narrow sense, which we usually call the kernel.
The kernel, as this supervisory program, is responsible only for driving the underlying hardware and virtualizing the various resources the hardware provides. For example, memory space is cut into N pieces and divided among a number of programs, so that memory is reused; we call this space multiplexing. Likewise, the CPU's working time is cut into slices for multiple programs to use in turn; since the multiplexing is completed along the time axis, we call it time multiplexing. On this basis the kernel can allocate resources to multiple processes and monitor how those processes use them. Starting a program, terminating a program, and so on are also handled by this supervisor. It takes full control of the hardware and presents the hardware's raw interfaces to software in virtualized form.
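To make time multiplexing concrete, here is a deliberately toy sketch in C (our own illustration, not kernel code: a real scheduler is preemptive, driven by timer interrupts, and vastly more sophisticated). A loop hands each unfinished task exactly one "slice" per round:

```c
#include <stdio.h>

/* a task here is just a name plus how many work units it still needs */
typedef struct { const char *name; int remaining; } task_t;

int main(void) {
    task_t tasks[] = { {"task A", 3}, {"task B", 5}, {"task C", 2} };
    int n = sizeof tasks / sizeof tasks[0];
    int alive = n;

    /* the "scheduler": each pass of the outer loop is one round, and every
       still-unfinished task gets exactly one time slice per round */
    while (alive > 0) {
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining == 0)
                continue;
            tasks[i].remaining--;     /* the task runs briefly, then yields */
            printf("%s ran one slice (%d to go)\n",
                   tasks[i].name, tasks[i].remaining);
            if (tasks[i].remaining == 0)
                alive--;
        }
    }
    return 0;
}
```

Running it shows A, B, and C making progress in alternation, which is exactly the illusion of simultaneity that time multiplexing creates.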
The result of this resource multiplexing is that one independent, complete set of hardware resources can be cut into N shares. From this point of view, each program can assume it has its own computer resources (CPU computing resources and memory storage resources), even though each program actually holds only a fraction of the complete resources of the hardware system.
From a process's own perspective, all resource use goes through that supervisory program, so the process considers itself the only process running on the hardware system, possessing all of the hardware resources except the CPU time and memory the supervisor itself consumes. It has no way of knowing that other processes exist, and every program runs under the same illusion, because the supervisor gives every process a virtual machine of its own that is absolutely private.
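One quick way to see this private virtual view on a POSIX system is to create two processes with fork() and observe that they can hold different values at the very same virtual address (a minimal sketch; the address printed will vary from run to run):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 42;                  /* in this process's (virtual) data segment */

int main(void) {
    pid_t pid = fork();          /* clone the current process */
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {
        value = 99;              /* the child changes only its own copy */
        printf("child:  &value = %p, value = %d\n", (void *)&value, value);
    } else {
        wait(NULL);              /* let the child print first */
        printf("parent: &value = %p, value = %d\n", (void *)&value, value);
    }
    return 0;
}
```

Both lines typically print the same virtual address but different values: each process has its own private address space, mapped onto physical memory by the kernel.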
What does the kernel do, then? We can now summarize:
First, it drives the underlying hardware so that the hardware can be accessed conveniently;
Second, it abstracts the underlying hardware resources into simple, easy-to-use resources for programs to call;
Third, it manages the running of every process, allocating the limited resources to processes fairly and monitoring them, so that they all coexist peacefully.
What we usually call a complete operating system consists of the kernel and various applications. The kernel runs directly on top of the hardware, shields the underlying hardware's complex logic and crude interfaces, and virtualizes them into simpler, easier-to-use interfaces for programs to call; these abstracted interfaces are called system calls. When developing an application, therefore, a programmer must know which call interfaces the operating system kernel provides, and when the code needs a given system call, invoke it directly in the program. However, a kernel has hundreds of system calls, and it is extremely hard for a programmer to fully understand what every interface does and how it is invoked. As a result, people have encapsulated the frequently used interfaces once more; we call these encapsulated interfaces libraries, and a library is an interface closer to the final form of a program. When developing a program, a programmer can thus choose to write it against system calls directly, or against library calls, which makes the development environment freer and more convenient. In the Linux development world today, libraries have become the standard interface, because library-based programs are faster and more efficient to develop. These library interfaces are also known as APIs (Application Programming Interfaces).
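As a small illustration of the difference (a sketch using the POSIX write() call and the libc printf() routine), both lines below end up as write system calls, but the second goes through a library layer first:

```c
#include <stdio.h>    /* printf(): a C library (libc) routine */
#include <unistd.h>   /* write(): a thin wrapper over the write system call */

int main(void) {
    /* (almost) direct system call: file descriptor 1 is standard output */
    write(1, "hello via system call\n", 22);

    /* library call: printf() formats and buffers the text, and eventually
       invokes the same write system call on our behalf */
    printf("hello via library call\n");
    return 0;
}
```

The library version is easier to use (formatting, buffering, portability), which is precisely why library calls became the everyday interface for programmers.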
Looking at it from a programmer's point of view: if a programmer develops a program against the Linux API, will that program run on Windows? To run a Linux-developed program on Windows, or a Windows-developed program on Linux, the APIs of these different operating systems must be unified, that is, the operating systems must provide mutually compatible library calls. A specification, defined by the IEEE, addresses exactly this: POSIX (Portable Operating System Interface). Program code written against a POSIX-compliant API can be used on different POSIX-compliant operating systems.
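For instance, a program that restricts itself to POSIX interfaces, like this small sketch using sysconf(), should compile and run unchanged on Linux, macOS, the BSDs, and other POSIX-compliant systems:

```c
#include <stdio.h>
#include <unistd.h>   /* POSIX header: present on any POSIX-compliant system */

int main(void) {
    /* sysconf() and _SC_OPEN_MAX are specified by POSIX, so this same
       source file is portable across POSIX systems at the source level */
    long max_open = sysconf(_SC_OPEN_MAX);
    printf("this process may have up to %ld open files\n", max_open);
    return 0;
}
```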
Now stand in the perspective of a running program, that is, of an application: a library is a binary file, the kernel is a binary file, and the program itself is a binary file. All of them can run, but a library is generally not run directly; instead it is invoked from within some other running binary. This means that whether a program can run depends on whether the binary libraries it calls are compatible. So there is another interface, the ABI (Application Binary Interface), which is the interface an application faces at run time.
Compatibility at the programming interface does not imply compatibility at the binary interface, and an application can only run after being translated into binary format, which requires compilation by a compiler. If we compile a program on Windows, the compiler produces a binary in the run format of the ABI that Windows supports, so it will not work on Linux; likewise, a program compiled into binary on Linux is compatible only with the Linux ABI and cannot be taken to Windows and run. Therefore, even when the programming interface is compatible, the run-time binary interface may not be, and in general it is not.
This, then, is how an operating system is put together.
Next, let us talk about the functionality of an operating system, which typically includes the following basic features:
1. Drivers
2. Process management
3. Security management
4. Network functions
5. Memory management
6. File system
None of these features by themselves accomplish a specific task for the user. So for users to actually get their work done with the operating system, the kernel alone is not enough: specific applications must sit on top of the operating system, one for each kind of functionality needed. Among the many applications there is one special kind, the access interface program. Access interface programs can be broadly divided into two categories:
GUI: Graphical User Interface
CLI: Command Line Interface
Take the Windows systems we used to install: the desktop is the GUI that Windows provides to the user, and on that desktop we can operate with mouse clicks. Likewise on a phone, the touch-screen interface is also a GUI, one we operate by tapping, pressing, and swiping with a finger. Before touch screens appeared, a computer usually came with a keyboard and a mouse; without a GUI, the mouse is useless, and a keyboard alone is enough.

So which is more efficient to operate, the GUI or the CLI? Anyone who has seriously used a CLI knows the answer: the CLI is much more efficient. Even Microsoft has moved in this direction: starting with Windows Server 2008, Windows can be installed without the GUI, and it ships a CLI of its own, PowerShell. Why, then, do some people think the CLI is slow? Mostly because people who have never used a CLI fear the character interface: CLI operation looks too hard, with all those commands and parameters, whereas under a GUI most operations need only mouse clicks. The fact is, an operation that takes a single command in the CLI may take several clicks plus assorted directory switching in the GUI. The GUI is relatively easy to pick up, but actually operating it is quite inefficient.

Learning to use a GUI takes very little time: three to five minutes to get going, three to five days to use it in depth. The CLI's learning curve is steep; just learning to use it properly may well take thirty to fifty days. But once you are over the hump, you will find CLI-based operation fairly simple, much simpler than a Windows system that hides its complex logic behind a GUI. Its workings are almost transparent: whenever any part goes wrong, you can locate the problem and correct it promptly. From this point of view, Linux is a simpler and easier-to-use operating system than Windows; if anything about it is difficult, it is only that Linux is a bit harder to get started with.
Whichever kind of access interface it is, it is an application, and that includes the GUI. For Linux, the desktop is just an app. Without it Linux still runs very well; with it, Linux may actually run worse. A GUI saddles Linux with a heavy yoke: system resources are ruthlessly consumed by it, and because GUI programs can be unstable, the GUI may crash at any time.
Since the access interface is just an application with nothing special about it, Linux has many options to choose from:
GUI:
GNOME: developed in C; its development toolkit is GTK; part of the GNU project
KDE: developed in C++; its development toolkit is Qt
XFCE: a lightweight graphical interface, common in embedded environments
CLI: bash, zsh, csh, sh, tcsh, ksh
In addition, access interfaces include Telnet, SSH, shared desktops, and various other kinds of remote connection interfaces.
So are these access interfaces necessary for an operating system? For Windows they are; for Linux, not necessarily. The main purpose of installing such an interface program on top of the operating system kernel is to make it convenient for users to use the various other applications stored on disk: games, media players, and so on. Which raises the question: must other applications be started this way? Obviously not. On any operating system, there are generally two ways to start an application: manual startup and automatic startup.
Manual startup means that after the operating system has fully started and the user has logged in, the user launches an application by mouse click or by executing a command. Processes started this way generally provide a user access interface; user login and authentication are the necessary steps for opening that interface, and only once it is open can manually started programs actually be run. Every operation a user performs on a computer is, in the end, carried out by a process acting on the user's behalf. In other words, the executing body of each operation is not the user but a running process. The user decides to run a program and starts it as a process, but once the process has started, what it does, how it does it, what results it produces, and in what form it outputs them are determined by the process itself; no further user involvement is required. Of course, a user who wants to forcibly abort a process can still do so, but that is about the limit of it.
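Manual startup from a CLI is a good example of the process acting as the user's agent. What a shell does when you type a command can be sketched in a few lines of C (a simplified illustration; "ls -l" here is just an arbitrary example command):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* step 1: clone the shell process */
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {
        /* step 2 (child): replace this copy with the requested program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec failed */
        exit(1);
    }
    int status;
    waitpid(pid, &status, 0);         /* step 3: wait for it, as a shell does */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```

After step 2, the user's involvement is over: the new process runs on its own until it exits, exactly as described above.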
Automatic startup means letting the operating system, as part of its own startup, launch the programs that should start on their own. If all of the critical applications on a server are started this way, there is no need to provide a user access interface at all. This is why, in many machine rooms, you see rows of cabinets full of server hosts but no monitors, keyboards, or other such I/O devices. The reason is simple: all of the critical services run automatically as soon as the machines are powered on, and, just as important, they all support remote connections. By configuring programs to run automatically as processes after the operating system starts, we remove the need for human intervention, and the server runs by itself. So an access interface is necessary for interactive usage scenarios, but for a server it is not necessarily needed. Beyond the initial installation and configuration, once everything has been deployed, it is perfectly normal for a Linux server to run for years as long as the machine room keeps its power on. We can even go further and have the entire installation process and all configuration files handled by our operations tooling from the very beginning: the server gets its operating system installed and its IP address assigned, and every piece of configuration is automatically pushed out by the tooling, or pulled down by the server itself, with no human participation from start to finish. This is the highest level of the operations profession: automated operations.