1.1 Information is bits + context
The life cycle of a program begins with a source program: a text file whose extension indicates the language (e.g. .c for a C source program, .java for Java, .cpp for C++). These text files consist of text characters, each of which is encoded as 8 bits (one byte).
All data in a system is made up of different data objects, "entities" composed of bits; the same bits can represent different things depending on the context in which they are viewed.
1.2 Programs are translated into different forms by other programs
To run a program on the (operating) system, the statements of the source program must be converted into machine-language instructions by other programs. These instructions are then packaged in the format of an executable object program and stored as a binary disk file.
This translation work is completed by the compiler driver. For a C source program, the process has four stages (note: unlike C/C++, the Java compiler translates a Java source program into bytecode, a class file; a JVM is started first, and the class file is then loaded into the JVM to run. When a C/C++ program is compiled and linked, the resulting executable binary can run only on the specified hardware platform and operating system, making it a "native" executable):
i. The .c source program begins with preprocessing directives, which tell the preprocessor (cpp) to read the header files referenced by the source program and insert their contents directly into the program text, forming a new text file with the .i extension.
ii. The compiler (cc1) translates the text file from the previous step into a new text file with the .s extension, which contains an assembly-language program (different CPUs have different corresponding assembly languages).
iii. The assembler (as) translates the file from the previous step into machine-language instructions and packages them into a relocatable object program with the .o extension. This file is binary, its bytes encoding machine instructions, so it cannot be viewed meaningfully in a text editor.
iv. If the source program calls functions from the standard library, the linker (ld) merges the precompiled .o object files containing those functions into the file from the previous step, finally producing an executable object file. This file can be loaded into memory and executed by the system.
1.3 Understanding how the compilation system works is beneficial
Different source-level statements can produce the same result yet yield very different performance. This depends on the machine code that is generated, i.e. on how the compiler translates the various source statements into machine code.
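As an illustrative sketch (the function names are hypothetical), here are two loops that produce identical results but differ sharply in cost: calling strlen in the loop condition re-scans the string on every iteration, turning a linear-time loop into a quadratic one.

```c
#include <ctype.h>
#include <string.h>

/* Both functions lowercase a string in place and produce the same result. */

void slow_lower(char *s)
{
    for (size_t i = 0; i < strlen(s); i++)   /* strlen may run n times */
        s[i] = (char)tolower((unsigned char)s[i]);
}

void fast_lower(char *s)
{
    size_t n = strlen(s);                    /* strlen runs exactly once */
    for (size_t i = 0; i < n; i++)
        s[i] = (char)tolower((unsigned char)s[i]);
}
```

(A sufficiently clever optimizing compiler may hoist the strlen call itself, which is exactly the kind of translation decision this section is about.)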
1.4 Processors read and interpret instructions stored in memory
The 32-bit or 64-bit "word size" we usually speak of is in fact the size of the fixed-length blocks of bytes (words) that the bus transfers.
The compiled executable is stored on disk before it is executed.
RAM is the major component of main memory. RAM comes in two varieties, SRAM and DRAM. SRAM holds data in a traditional flip-flop circuit with two stable states (0 and 1); because the state persists as long as power is supplied, it does not need to be refreshed the way a capacitor does, so it is fast but expensive. DRAM stores each bit in a capacitor: charged represents 1, discharged represents 0. Because a capacitor leaks charge over time, it must be refreshed periodically to retain its data, so it is slightly slower but cheaper. (The RAM in main memory that the book refers to is DRAM.) -- Source: Introduction to Computer Science
The instruction set architecture describes the effect of each machine-code instruction, while the microarchitecture describes how the processor is actually implemented.
1.5 Caches are critical
Because of how they are built, larger storage devices are slower than smaller ones, while the cost relationship is the opposite: faster storage costs more per byte.
In running speed: register > main memory > disk drive.
Caches are typically embedded in the CPU; by holding data the CPU is likely to need soon, they bridge the gap in speed between main memory and the CPU. An L1 cache can run at nearly the speed of a register, while the L2 cache is connected to the processor via a special bus; its capacity is much larger than the L1 cache's, but it is somewhat slower. Caches are implemented with SRAM hardware technology.
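One way to see why caches matter is through access patterns. This sketch (the function names and the size N are illustrative) shows two traversals of the same matrix that return the same sum but interact very differently with the cache:

```c
#define N 512

/* Summing row by row touches memory sequentially, so most reads hit the
 * cache; summing column by column jumps N*sizeof(int) bytes per access
 * and misses far more often. Both functions return the same sum. */

long sum_rowwise(int a[N][N])
{
    long sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];          /* stride-1 access: cache friendly */
    return sum;
}

long sum_colwise(int a[N][N])
{
    long sum = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];          /* stride-N access: cache hostile */
    return sum;
}
```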
1.7 The operating system manages the hardware
All attempts by an application to manipulate the hardware must go through the operating system. The OS has two main functions: ① to protect the hardware from being abused by runaway applications; ② to provide applications with a simple and consistent mechanism for controlling complex low-level hardware devices.
A process is the OS's abstraction for a running program. The OS can execute multiple processes concurrently, meaning the instructions of one process are interleaved with the instructions of another. The mechanism that implements this interleaved execution is called context switching. What is the context? It is all the state information the OS keeps track of that the process needs in order to run.
A single-processor system can execute the code of only one process at a time. When a context switch occurs, the OS saves the context of the current process, restores the context of the new process, and the new process takes control, resuming where it last stopped.
A process can consist of multiple threads. Each thread runs in the context of the process, sharing the same code and global data; sharing data among multiple threads is easier than sharing among multiple processes.
I was unable to fully understand the virtual-memory material in this section; presumably it is explored in later chapters.
Files can represent all of a computer system's I/O devices.
1.9 Summary
Concurrency: a system having multiple activities (such as processes) in progress at the same time.
Parallelism: using concurrency to make a system run faster.
Multi-core processors integrate multiple CPUs ("cores") onto a single integrated-circuit chip; each core has its own L1 and L2 caches, but the cores share higher-level caches and the interface to main memory.
Hyper-threading (also called simultaneous multithreading) is a technique that allows a single CPU to execute multiple control flows (i.e. threads).
Instruction-level parallelism: the processor can execute multiple instructions at the same time. This relies on pipelining, which divides the work required to execute an instruction into discrete steps and organizes the processor hardware into a series of stages, each performing one step. The stages operate in parallel, working on different parts of different instructions.
Superscalar processors: processors that can sustain execution rates faster than one instruction per clock cycle.
Single-instruction, multiple-data (SIMD) parallelism: a single instruction produces multiple operations that are performed in parallel, improving the execution speed of applications that process image, sound, and video data.
Thread-level concurrency, instruction-level parallelism, and SIMD parallelism are forms of concurrency and parallelism at different levels of abstraction in the system.
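As a sketch of SIMD-friendly code (the function name is illustrative): with optimization enabled, compilers commonly turn a loop like this into SIMD instructions (e.g. SSE/AVX on x86), so that one instruction adds several floats at once.

```c
/* Each iteration is independent of the others (restrict promises the
 * arrays do not overlap), which is what lets the compiler vectorize:
 * one SIMD instruction can then process several elements per step. */
void vec_add(float *restrict c, const float *restrict a,
             const float *restrict b, int n)
{
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
```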
The concept of application programming interfaces (APIs) appears not only in object-oriented languages (such as class declarations in Java) but also in procedural languages such as C (function prototypes).
The instruction set architecture is an abstraction of the actual processor hardware.
Abstraction keeps the execution model seen by the source program simple and consistent while hiding the complexity of the underlying hardware.
This is the beginning of my journey of studying computer systems in depth (with C). Chapter 1: A Tour of Computer Systems.