Operating System Learning Summary

Source: Internet
Author: User
Tags: posix

The operating system is the foundation for learning computer technology in depth, so in order to learn computer technology better, here is a brief study of operating systems, with notes as follows.


1. The operating system (Operating System, or OS) is a computer program that manages and controls a computer's hardware and software resources. It is the most basic system software, running directly on the "bare metal"; any other software can only run with the support of the operating system.


2. Operating system theory researchers sometimes divide the operating system into four parts:

  • Drivers: The bottom-most layer, directly controlling and monitoring each class of hardware. Their responsibility is to hide the details of the hardware and provide an abstract, common interface to the other parts.
  • Kernel: The core of the operating system, typically running at the highest privilege level, responsible for providing the basic, structural functionality.
  • Interface libraries: A special class of libraries responsible for wrapping the basic services provided by the system into the programming interfaces (APIs) that applications can use; this is the part closest to the application. For example, the GNU C runtime library belongs to this class: it wraps the internal programming interfaces of various operating systems into the form of the ANSI C and POSIX programming interfaces.
  • Peripheral components: All the other parts of the operating system besides the above three, often used to provide specific higher-level services. For example, in a microkernel architecture, most system services, as well as the various daemons in Unix/Linux, typically fall into this category.


3. The OS of a standard PC should provide the following functions: process management, memory management, a file system, network communication, security mechanisms, a user interface, and device drivers.


    4. Typical Systems

4.1 UNIX is a powerful multi-user, multitasking operating system that supports a variety of processor architectures; by the usual classification of operating systems, it is a time-sharing system.

4.2 Linux is a multi-user, multitasking Unix-like operating system first released in 1991. It is largely compatible with UNIX.

4.3 Mac OS is the family of operating systems that runs on Apple's Macintosh series of computers.

4.4 Windows is a successful operating system developed by Microsoft. Windows is a multitasking operating system with a graphical window interface; users can perform the computer's various complex operations simply by clicking the mouse.

4.5 iOS is a handheld-device operating system developed by Apple Inc. Like Apple's Mac OS X, iOS is based on Darwin, and it therefore also counts as a Unix-like commercial operating system.

4.6 Android is a Linux-based open-source operating system used primarily on portable devices.

4.7 Windows Phone (WP) is a mobile phone operating system released by Microsoft; it integrates Microsoft's Xbox Live games and Xbox Music, along with a distinctive video experience, into the phone.

4.8 Chrome OS is a Linux-based operating system developed by Google. It is a cloud operating system tightly integrated with the Internet, running Web applications as you work.



5. A virtual machine is a complete computer system, simulated by software with full hardware functionality, that runs in a fully isolated environment.

A virtual system works by creating a new virtual image of the existing operating system. It has exactly the same functionality as the real Windows system; once inside the virtual system, all operations take place within this new, independent environment. You can install and run software, save data, and have your own independent desktop without affecting the real system in any way, and you can switch flexibly between the existing system and the virtual image. The difference between a virtual system and a traditional virtual machine (Parallels Desktop, VMware, VirtualBox, Virtual PC) is that a virtual system does not degrade the computer's performance, and starting one is not as time-consuming as booting Windows, so programs run more quickly and easily. However, a virtual system can only reproduce an environment identical to the existing operating system, whereas a virtual machine can simulate other kinds of operating systems; moreover, a virtual machine must emulate the underlying hardware instructions, so applications run much more slowly in it than in a virtual system.


6. NFS, the Network File System, was developed by Sun Microsystems and is one of the file systems supported by FreeBSD; it allows computers on a network to share resources across TCP/IP networks. In an NFS deployment, a local NFS client application can transparently read and write files located on a remote NFS server, just as it would access local files.

  • The purpose of the VFS (virtual file system) is to allow standard UNIX system calls to read and write different file systems on different physical media; it provides a unified interface and application programming interface for the various file systems. VFS is a glue layer that lets system calls such as open(), read(), and write() work without caring about the underlying storage media or file system type.

  • FAT is the File Allocation Table (English: File Allocation Table, acronym: FAT), a file system invented and partially patented by Microsoft for MS-DOS, and the file system used by all non-NT-kernel versions of Microsoft Windows. Because computer performance was limited at the time, the FAT file system is not complicated, so almost every PC operating system supports it. This makes it well suited to floppy disks and memory cards, and to data exchange between different operating systems. Nowadays "FAT" generally refers to FAT32. However, FAT has a serious drawback: when files are deleted and new data is written, FAT does not arrange the new file into contiguous free space before writing it, so with long-term use the file data becomes progressively scattered, slowing reads and writes. Defragmentation is a remedy, but it must be repeated regularly to maintain the FAT file system's efficiency.

  • 7. IEEE 802.3 describes the implementation of the physical layer and the MAC sublayer of the data link layer. It uses the CSMA/CD access method at various speeds over a variety of physical media, and the standard's specification has been extended to cover Fast Ethernet.

    

8. Topology is the study of those properties of geometric figures or spaces that remain unchanged under continuous deformation. It considers only the positional relationships between objects, not their shapes or sizes. The English word "topology" literally translates as the study of place or terrain, and originally referred to disciplines related to the study of topography and landforms. Geometric topology is a branch of mathematics that took shape in the 19th century and belongs to the field of geometry.

9. DNS (Domain Name System) is a distributed database on the Internet that maps between domain names and IP addresses. It makes it easier for users to access the Internet without having to remember the IP number strings that machines read directly. The process of obtaining the IP address that corresponds to a hostname is called domain name resolution (or hostname resolution). The DNS protocol runs on top of the UDP protocol, using port 53.
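A hedged sketch of hostname resolution from C, using the standard POSIX getaddrinfo() call (which performs a DNS lookup for real hostnames; "localhost" is used here so the example runs without network access, typically via /etc/hosts):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void) {
    /* Resolve a hostname to an IPv4 address: the job DNS does
     * network-wide. AF_INET restricts the answer to IPv4. */
    struct addrinfo hints = {0}, *res = NULL;
    hints.ai_family = AF_INET;

    assert(getaddrinfo("localhost", NULL, &hints, &res) == 0);

    char ip[INET_ADDRSTRLEN];
    struct sockaddr_in *addr = (struct sockaddr_in *)res->ai_addr;
    inet_ntop(AF_INET, &addr->sin_addr, ip, sizeof ip);
    printf("localhost -> %s\n", ip);
    assert(strcmp(ip, "127.0.0.1") == 0);
    freeaddrinfo(res);
    return 0;
}
```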



10. CSMA/CD (Carrier Sense Multiple Access with Collision Detection) is a multiple-access technique with carrier sensing and collision detection. In traditional shared Ethernet, all nodes share the transmission medium; ensuring that the medium provides transport services to many nodes in an orderly and efficient way is the problem the Ethernet media access control protocol must solve.


11. Token passing is the access mode used by Token Ring and by IEEE 802.4-based LANs.

It is one of the access control methods for LANs: the right to transmit is carried in a piece of data called a "token". When a terminal device obtains this token, it acquires the right to send and may transmit data. In a local area network where multiple end devices share a single signal line, if several devices transmit at the same time, the resulting "collision" is unavoidable. Media access control (MAC) mechanisms exist to manage such collisions, and token passing is one of them. In token passing, a token constantly circulates around the local area network. An end device that wants to transmit captures the token and replaces the token data with the data it wants to send. After the transmission completes, it releases the token again. In this way, multiple endpoints can share a single signal line for data transmission.



12. TCP/IP is short for Transmission Control Protocol/Internet Protocol. It is a family of network communication protocols that regulates all communication devices on the network, in particular the format and delivery of data between one host and another. TCP/IP is the basic protocol suite of the Internet and the standard method for packaging and addressing computer data.

1) The link layer, sometimes called the data link layer or network interface layer, usually includes the device drivers in the operating system and the corresponding network interface card in the computer. Together they handle the physical interface details of the cable (or whatever other transmission medium is used).
2) The network layer, sometimes called the internet layer, handles the movement of packets around the network, such as packet routing. In the TCP/IP protocol family, the network layer protocols include IP (Internet Protocol), ICMP (Internet Control Message Protocol), and IGMP (Internet Group Management Protocol).
3) The transport layer provides end-to-end communication primarily for applications on two hosts. In the TCP/IP protocol family there are two different transport protocols: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
4) The application layer handles the details of particular applications, with protocols such as HTTP, FTP, SMTP, and DNS.



13. A MAC (Medium/Media Access Control) address, also called a physical address, identifies each station on the network. It is written in hexadecimal and is six bytes (48 bits) long. The first three bytes (high 24 bits) are a code assigned by the IEEE's Registration Authority to each manufacturer, known as the Organizationally Unique Identifier (OUI); the last three bytes (low 24 bits) are assigned by each manufacturer to the adapter interfaces it produces and are called the extension identifier. One address block can thus generate 2^24 different addresses. The MAC address is in effect the adapter's address or identifier, EUI-48.

In a stable network, IP addresses and MAC addresses are paired. For one computer to communicate with another computer on the network, the IP addresses of the two computers must be configured; the MAC address is set in the network card at the factory, so each configured IP address forms a correspondence with a MAC address. During data communication, the IP address serves as the computer's network-layer address, and network-layer devices (such as routers) operate on IP addresses; the MAC address serves as the computer's data-link-layer address, and data-link-layer devices (such as switches) operate on MAC addresses. The mapping between IP and MAC addresses is maintained by ARP (Address Resolution Protocol).


14. RPC (Remote Procedure Call protocol) is a protocol for requesting services from a program on a remote computer over a network, without needing to know the underlying network technology. The RPC protocol assumes the existence of some transport protocol, such as TCP or UDP, to carry the message data between the communicating programs. In the OSI network communication model, RPC spans the transport and application layers. RPC makes it easier to develop applications, including distributed, multi-program network applications.

RPC uses the client/server model. The requester is the client, and the service provider is the server. First, the client's calling process sends a call message carrying the procedure's parameters to the service process and then waits for the reply message. On the server side, the process sleeps until the call message arrives. When a call arrives, the server obtains the procedure parameters, computes the result, sends the reply message, and then waits for the next call message. Finally, the client's calling process receives the reply message, obtains the procedure's result, and resumes execution.


15. PCB (Process Control Block). To describe and control the running of a process, the system defines a data structure holding the process's management and control information, called the Process Control Block. It is part of the process entity and is one of the most important recorded data structures in the operating system. It is the central data structure of process management and control: each process has a PCB, which is created when the process is created, accompanies the process throughout its run, and is destroyed when the process is destroyed.



16. POSIX stands for Portable Operating System Interface. The POSIX standards define the interface standards that an operating system should provide to applications. They are the collective name for the family of API standards that the IEEE defined so that software can run on a variety of UNIX operating systems; formally they are IEEE 1003, and the international standard name is ISO/IEC 9945.

The POSIX standards aim at software portability at the source-code level. In other words, a program written for one POSIX-compliant operating system should compile and run on any other POSIX operating system, even one from another vendor.


17. Linux is a Unix-like operating system that is free to use and freely distributed: a POSIX- and Unix-style multi-user, multitasking, multithreaded, multi-CPU operating system. It can run the major UNIX tools, applications, and network protocols, and it supports 32-bit and 64-bit hardware. Linux inherits Unix's network-centric design philosophy and is a stable multi-user network operating system.

Strictly speaking, the word Linux itself refers only to the Linux kernel, but in practice people have become accustomed to using "Linux" to describe an entire operating system built from the Linux kernel together with the various tools and libraries of the GNU project.

Linux systems generally have four main parts: the kernel, the shell, the file system, and the applications. The kernel, shell, and file system together form the basic operating system structure, allowing users to run programs, manage files, and use the system. This layered structure is shown in Figure 1-1.

The kernel is the core of the operating system. It implements many basic functions, such as memory management, device drivers, the file system, and the network stack, and it determines the system's performance and stability.

The shell is the system's user interface, providing an interface through which users interact with the kernel. It receives the commands the user enters and sends them to the kernel to execute; it is a command interpreter. In addition, the shell programming language has many of the features of ordinary programming languages, and shell programs written in it can achieve the same effects as other applications.

The file system is the method by which files are organized on storage devices such as disks. Linux supports many currently popular file systems, such as ext2, ext3, FAT, FAT32, VFAT, and ISO 9660.

Standard Linux systems generally come with a set of programs called applications, including text editors, programming languages, the X Window System, office suites, Internet tools, and databases.


18. The fork() function creates, via a system call, a process that is almost identical to the original: the two processes can do exactly the same thing, though they can also do different things if their initial parameters or the variables passed in differ. After a process calls fork(), the system first allocates resources to the new process, such as space for data and code. All the values of the original process are then copied into the new process, with only a few values differing from the original. It is the equivalent of the process cloning itself.

A remarkable thing about a fork call is that it is called once but, on success, returns twice (once in each process), and it has three possible kinds of return value:
1) In the parent process, fork returns the process ID of the newly created child process;
2) in the child process, fork returns 0;
3) if an error occurs, fork returns a negative value.



19. The exec family of functions finds an executable file by the specified file name and uses it to replace the contents of the calling process; in other words, it executes an executable file inside the calling process. The executable here can be either a binary or any script that can be executed under Linux.

Unlike in the usual case, the exec functions do not return after successful execution, because the body of the calling process, including its code segment, data segment, and stack, has been replaced by new content; only some surface information, such as the process ID, remains unchanged. It is rather like the stratagem "the golden cicada sheds its shell" from the Thirty-Six Stratagems.


20. NTFS (New Technology File System) is the file system of the Windows NT environment. It is the dedicated file system of the Windows NT family (for example Windows 2000, Windows XP, Windows Vista, Windows 7, and Windows 8.1); the volume the operating system resides on must be formatted as NTFS, typically with a 4096-byte cluster size. NTFS replaced the older FAT file system.

NTFS made several improvements over FAT and HPFS, such as supporting metadata and using advanced data structures to improve performance, reliability, and disk space utilization, and it provides several additional extensions. The detailed definition of the file system is a trade secret, which Microsoft has registered as intellectual property.


    

21. The Hungarian-American mathematician John von Neumann introduced the stored-program principle in 1946: treat the program itself as data, and store it in the same way as the data the program processes. The key points of the von Neumann architecture are that a digital computer system uses binary, and that the computer executes the program in sequence, in the order it is stored. This theory of von Neumann's is known as the von Neumann architecture.

(1) The stored-program principle: instructions and data are stored mixed together in the same memory. (Data and program are indistinguishable in memory; both are just stored values. When the instruction pointer, e.g. EIP, directs the CPU to load some memory contents, the CPU raises an error interrupt if they are not in a valid instruction format. In modern CPU protected mode, each memory segment has a descriptor recording its access rights: readable, writable, executable. This is, in effect, a way of specifying which memory holds instructions and which holds data.) Because instructions are stored like data, a program composed of instructions can itself be modified.
(2) Memory is a one-dimensional structure of linear addresses, accessed by address, with a fixed number of bits per unit.
(3) An instruction consists of an operation code and an address code. The opcode indicates the type of operation; the address code indicates the operands and their addresses. An operand itself carries no data-type flag; its type is determined by the opcode.
(4) The computer's operation is controlled by control signals generated directly by instruction execution. Instructions are stored in memory in their execution order, and the instruction counter indicates the address of the cell holding the next instruction to execute. There is only one instruction counter; it generally increments sequentially, but the execution order can be changed according to a computation's result or external conditions at the time.
(5) The machine is centered on the arithmetic unit: data transfers between I/O devices and memory must pass through the arithmetic unit.
(6) Data is represented in binary.


22. The core of the von Neumann principle is "stored-program control".
    (1) Use binary form to represent data and instructions.

(2) The program (the data and the instruction sequence) is stored in main memory in advance, so that while working the computer can automatically and rapidly fetch instructions from memory and execute them.

(3) The computer system consists of five basic components: the arithmetic unit, memory, the controller, input devices, and output devices, with the basic functions of each of the five parts specified. The von Neumann idea is in fact the fundamental idea of computer design, and it laid down the basic structure of the modern electronic computer.
