In-depth Understanding of Computer Systems: Learning Notes (1)


The Process of Compiling a Program

To illustrate how a program is compiled, we use the classic Hello World program as an example:

#include <stdio.h>

int main(int argc, char const *argv[])
{
    printf("hello world!!!\n");
    return 0;
}

On a Linux system, we use the GCC compiler to compile the source file helloworld.c into the executable object file hello.

zengwh@zengwh:~/test_code$ gcc helloworld.c -o hello
zengwh@zengwh:~/test_code$ ./hello
hello world!!!

This process goes through four stages: the preprocessing phase, the compilation phase, the assembly phase, and the linking phase. The programs that execute these four phases are the preprocessor, the compiler, the assembler, and the linker, which together make up the compilation system.

    • Preprocessing phase: the preprocessor (cpp) modifies the source program according to directives that begin with '#', inserting the contents of each included header file and producing a new text file with the ".i" extension. For example, the HelloWorld program includes the stdio.h header file, so the preprocessor inserts its contents directly into the source program.
    • Compilation phase: the compiler (cc1) translates the text file hello.i into the assembly program hello.s. That is, it translates the high-level language into assembly code, which is close to the low-level machine-language instructions.
    • Assembly phase: the assembler (as) translates hello.s into machine instructions, packages these instructions into a format known as a relocatable object program, and saves the result in the binary file hello.o.
    • Linking phase: the linker (ld) combines all the .o files in a project into an executable object file that can be loaded into memory and run by the system.
System Hardware Composition

Cache (Caches)

The cache is a small, fast memory that stores information the processor is likely to need in the near future, speeding up the flow of data between main memory and the CPU; in a typical system it acts as a staging area between the two.

The L1 cache on the processor chip can be accessed nearly as fast as the registers. The L2 cache is connected to the CPU by a special bus; it is about 5 times slower to access than L1, but still 5-10 times faster than accessing main memory. Newer systems also have an L3 cache. These caches are implemented with a hardware technology called SRAM. In this way, the system obtains a large amount of memory that can still be accessed very quickly.
Programs tend to access data and code in localized regions. By keeping data that is likely to be accessed frequently in the cache, most memory operations are served from the fast cache, and program performance improves greatly.

The Operating System Manages the Hardware

A computer system is built from layers of abstraction. In the processor, the instruction set architecture is an abstraction of the actual processor hardware. In the operating system, files are an abstraction of I/O devices, virtual memory is an abstraction of program memory, and a process is an abstraction of a running program. A virtual machine is an abstraction of the entire computer, including the operating system, the processor, and the programs.

Process

A process is the operating system's abstraction of a running program. A system can run multiple processes at the same time, and each process appears to have exclusive use of the hardware. Running concurrently means that the instructions of one process are interleaved with the instructions of another.

Threads

A process can consist of multiple threads, each running in the context of the process and sharing the same code and global data.

Virtual Memory

Virtual memory is an abstraction that provides each process with the illusion that it has exclusive use of main memory. Each process sees a consistent, uniform image of memory, known as the virtual address space. Addresses in the address space increase from bottom to top.

    • Program code and data: for all processes, code begins at the same fixed address, followed by data locations corresponding to global variables. The code and data areas are fixed in size once the program starts running.
    • Heap: the heap expands and contracts dynamically at run time, for example when the malloc or free functions are called.
    • Shared libraries: near the middle of the address space is a region that holds the code and data of shared libraries such as the C standard library and the math library.
    • Stack: the user stack is located at the top of the user's virtual address space; the compiler uses it to implement function calls. Like the heap, it expands and contracts dynamically while the program runs: each time a function is called, the stack grows; when the function returns, it shrinks.
