14th Week Study Notes

Source: Internet
Author: User

Virtual memory
    • Virtual memory is an elegant interaction of hardware exceptions, hardware address translation, main memory, disk files, and kernel software.
    • Characteristics of virtual memory:
      • Central
      • Powerful
      • Dangerous
Physical and virtual addressing
    • The main memory of a computer system is organized as an array of M contiguous byte-size cells.
      • Each byte has a unique physical address.
    • The address of the first byte is 0, the next byte has address 1, and so on.
    • This approach is called physical addressing
    • With virtual addressing, the CPU accesses main memory by generating a virtual address, which is converted to the appropriate physical address before being sent to memory. This task is called address translation.
    • Address translation requires close cooperation between the CPU hardware and the operating system.
    • The memory management unit (MMU, on the CPU chip) translates virtual addresses on the fly using a lookup table stored in main memory, whose contents are managed by the operating system.
Address space
    • An address space is an ordered set of non-negative integer addresses: {0, 1, 2, ...}
    • If the integers in the address space are consecutive, it is called a linear address space.
    • The CPU generates virtual addresses from an address space of N = 2^n addresses, called the virtual address space.
    • A virtual address space with N = 2^n addresses is called an n-bit address space.
    • In addition to the virtual address space, there is a physical address space that corresponds to the M bytes of physical memory in the system: {0, 1, 2, ..., M-1}
      • M is not necessarily a power of 2.
Virtual memory as a tool for caching
    • Conceptually, virtual memory is an array of N contiguous bytes stored on disk, and each virtual address serves as an index into this array.
    • The contents of the array on disk are cached in main memory.
    • The VM system partitions the virtual memory into fixed-size blocks called virtual pages (VPs).
      • The size of each virtual page is P = 2^p bytes.
    • Similarly, physical memory is partitioned into physical pages (PPs, also called page frames), also P bytes in size.
    • At any moment, the set of virtual pages is partitioned into three disjoint subsets:
      • Unallocated: pages that have not yet been allocated (created) by the VM system.
      • Cached: allocated pages that are currently cached in physical memory.
      • Uncached: allocated pages that are not cached in physical memory.
The organizational structure of the DRAM cache
    • The term SRAM cache denotes the L1, L2, and L3 caches between the CPU and main memory.
    • The term DRAM cache denotes the virtual memory system's cache, which caches virtual pages in main memory.
    • The organization of the DRAM cache is driven entirely by the enormous miss penalty.
Page table
    • A page table maps a virtual page to a physical page.
    • Each time the address translation hardware translates a virtual address into a physical address, the page table is read.
    • The operating system is responsible for maintaining the contents of the page table and for transmitting pages back and forth between disk and DRAM.
    • A page table is an array of page table entries.
    • Each page in the virtual address space has a PTE at a fixed offset in the page table.
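The page-table-as-array idea can be sketched in a few lines of Python. This is a toy model with made-up entries; real PTEs are packed bit fields that also carry permission bits, and the table lives in main memory:

```python
# Toy model of a page table: an array of page table entries (PTEs).
# Each PTE here is just a valid bit plus a physical page number (PPN).
page_table = [
    {"valid": True,  "ppn": 2},     # VP 0 is cached in PP 2
    {"valid": False, "ppn": None},  # VP 1 is on disk or unallocated
    {"valid": True,  "ppn": 0},     # VP 2 is cached in PP 0
]

def lookup(vpn):
    """Return the PPN for a virtual page, or None to signal a page fault."""
    pte = page_table[vpn]  # the VPN is a fixed offset into the array
    return pte["ppn"] if pte["valid"] else None
```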
Page fault
    • DRAM cache misses are known as page faults.
    • In virtual memory parlance, blocks are known as pages.
    • The activity of transferring a page between disk and memory is known as swapping or paging.
    • Pages are swapped in (paged in) from disk to DRAM and swapped out (paged out) from DRAM to disk. The strategy of waiting until the last moment, swapping in a page only when a miss occurs, is known as demand paging.
    • Virtual memory works quite well thanks to locality.
    • The principle of locality ensures that at any point in time, the program tends to work on a smaller set of active pages known as the working set or resident set.
    • Not all programs exhibit good temporal locality. If the working set size exceeds the size of physical memory, the program enters an unfortunate state known as thrashing, in which pages are swapped in and out continuously.
    • You can use the Unix getrusage function to monitor the number of page faults.
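On Unix systems, Python's standard resource module exposes the same getrusage counters, so a quick check of a process's page fault counts looks like this (in C you would call getrusage(2) with the same RUSAGE_SELF argument):

```python
import resource

# ru_minflt counts minor page faults (serviced without disk I/O);
# ru_majflt counts major page faults that required reading from disk.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults:", usage.ru_minflt, "major faults:", usage.ru_majflt)
```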
Virtual memory as a tool for memory management
    • VM simplifies linking and loading, code and data sharing, and memory allocation for applications.
      • Simplified linking: Separate address spaces allow each process's memory image to use the same basic format, regardless of where the code and data actually reside in the physical memory.
      • Simplified loading: Virtual memory also makes it easy to load executable and shared object files into memory.
      • Simplified sharing: Separate address spaces provide a consistent mechanism for the operating system to manage sharing between user processes and the operating system itself.
      • Simplifies memory allocation: Virtual memory provides a simple mechanism to allocate additional memory to user processes.
    • For example, the loader allocates a contiguous chunk of virtual pages for the code and data, starting at address 0x08048000 (for 32-bit address spaces) or at 0x400000 (for 64-bit address spaces).
Virtual memory as a tool for memory protection
    • If an instruction violates these permission conditions, the CPU triggers a general protection fault and transfers control to an exception handler in the kernel. The Unix shell typically reports this exception as a segmentation fault.
Address Translation
    • Address translation symbol summary (p. 543):
    • Formally, address translation is a mapping between the elements of an N-element virtual address space (VAS) and the elements of an M-element physical address space (PAS): MAP: VAS → PAS ∪ {∅}
    • A control register in the CPU, the page table base register (PTBR), points to the current page table.
    • An n-bit virtual address has two components:
      • A p-bit virtual page offset (VPO)
      • An (n - p)-bit virtual page number (VPN)
      • The MMU uses the VPN to select the appropriate PTE.
    • When there is a page hit, the CPU hardware performs these steps:
      • Step 1: The processor generates a virtual address and sends it to the MMU.
      • Step 2: The MMU generates the PTE address and requests it from the cache/main memory.
      • Step 3: The cache/main memory returns the PTE to the MMU.
      • Step 4: The MMU constructs the physical address and sends it to the cache/main memory.
      • Step 5: The cache/main memory returns the requested data word to the processor.
    • Handling a page fault requires cooperation between the hardware and the operating system kernel:
      • Steps 1 through 3 are the same as above.
      • Step 4: The valid bit in the PTE is zero, so the MMU triggers an exception, transferring control in the CPU to a page fault handler in the operating system kernel.
      • Step 5: The page fault handler identifies a victim page in physical memory, and if it has been modified, swaps it out to disk.
      • Step 6: The handler pages in the new page and updates the PTE in memory.
      • Step 7: The handler returns to the original process, causing the faulting instruction to be re-executed. The CPU resends the offending virtual address to the MMU.
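The hit path amounts to splitting the virtual address into VPN and VPO and rebuilding the physical address. A minimal Python sketch, where a dict stands in for the page table and missing entries play the role of invalid PTEs (the 4 KB page size and the sample mappings are illustrative):

```python
def translate(va, page_table, p=12):
    """Split a virtual address into VPN and VPO, look up the PPN, and
    rebuild the physical address.  Returns None on a page fault."""
    vpn = va >> p                  # high (n - p) bits: virtual page number
    vpo = va & ((1 << p) - 1)      # low p bits: virtual page offset
    ppn = page_table.get(vpn)      # steps 2-3: fetch the PTE
    if ppn is None:
        return None                # invalid PTE: the OS must page it in
    return (ppn << p) | vpo        # VPO equals PPO, since pages are P bytes

# Toy page table mapping VPN -> PPN, with 4 KB pages (p = 12).
pt = {0x3: 0x7, 0x4: 0x1}
```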
Using the TLB to speed up address translation
    • Translation lookaside buffer (TLB)
    • The TLB is a small, virtually addressed cache in which each line holds a block consisting of a single PTE.
    • The TLB usually has a high degree of associativity.
    • If the TLB has T = 2^t sets, then the TLB index (TLBI) consists of the t lowest-order bits of the VPN, and the TLB tag (TLBT) consists of the remaining bits of the VPN.
    • When the TLB hits, all of the address translation steps are performed inside the on-chip MMU, so it is very fast:
    • Step 1: The CPU generates a virtual address.
    • Steps 2 and 3: The MMU fetches the corresponding PTE from the TLB.
    • Step 4: The MMU translates the virtual address into a physical address and sends it to the cache/main memory.
    • Step 5: The cache/main memory returns the requested data word to the CPU.
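The TLBI/TLBT split is pure bit manipulation on the VPN; an illustrative sketch:

```python
def tlb_split(vpn, t):
    """With T = 2**t sets, the TLB index (TLBI) is the t low-order bits
    of the VPN and the TLB tag (TLBT) is the remaining high-order bits."""
    tlbi = vpn & ((1 << t) - 1)
    tlbt = vpn >> t
    return tlbi, tlbt
```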
Linux virtual memory areas
    • Linux organizes virtual memory as a collection of areas (also called segments).
    • An area is a contiguous chunk of existing (allocated) virtual memory.
    • The notion of an area allows the virtual address space to have gaps.
    • The kernel maintains a separate task structure for each process in the system.
    • The elements in the task structure include or point to all the information that the kernel needs to run the process.
    • mm_struct describes the current state of the virtual memory.
    • pgd points to the base address of the level-1 page table.
    • mmap points to a list of vm_area_structs (area structs), each of which characterizes an area of the current virtual address space:
      • vm_start: points to the beginning of the area
      • vm_end: points to the end of the area
      • vm_prot: describes the read/write permissions for all pages contained in the area
      • vm_flags: describes (among other things) whether the pages in the area are shared with other processes or private to this process
      • vm_next: points to the next area struct in the list
Linux page fault handling
    • Is virtual address A legal? (That is, does A lie within an area defined by some area struct?)
      • The page fault handler searches the list of area structs, comparing A with the vm_start and vm_end of each. If the address is not legal, the handler triggers a segmentation fault and terminates the process.
    • Is the attempted memory access legal? (That is, does the process have permission to read, write, or execute the pages in this area?)
      • If the attempted access is not legal, the fault handler triggers a protection exception and terminates the process.
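The first legality check can be mimicked with a hypothetical list mirroring the vm_area_struct fields; the addresses and permissions below are made up for illustration:

```python
# Hypothetical mirror of vm_area_struct entries: an address A is legal
# iff some area satisfies vm_start <= A < vm_end.
areas = [
    {"vm_start": 0x08048000, "vm_end": 0x08049000, "vm_prot": "r-x"},  # code
    {"vm_start": 0x0804A000, "vm_end": 0x0804B000, "vm_prot": "rw-"},  # data
]

def find_area(addr):
    """Return the enclosing area, or None (i.e., a segmentation fault)."""
    for area in areas:
        if area["vm_start"] <= addr < area["vm_end"]:
            return area
    return None
```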
Memory mapping
    • Linux initializes the contents of a virtual memory area by associating it with an object on disk, a process known as memory mapping.
    • A virtual memory area can be mapped to one of two types of objects:
      • Regular files in the Unix file system
      • Anonymous files
    • Once a virtual page is initialized, it is swapped back and forth between a dedicated swap file maintained by the kernel.
      • Swap files are also known as swap spaces or swap areas.
    • An object can be mapped into an area of virtual memory either as a shared object or as a private object.
    • A virtual memory area that a shared object is mapped into is called a shared area; similarly, there are private areas.
    • Private objects are mapped into virtual memory using a technique called copy-on-write.
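Python's standard mmap module wraps the same mechanism, and its ACCESS_COPY mode gives the private, copy-on-write behavior described above (in C this corresponds to mmap with MAP_PRIVATE):

```python
import mmap, os, tempfile

# Create a small regular file, then map it privately: with ACCESS_COPY
# (copy-on-write), stores go to a private copy and never reach the file.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, vm!")
with mmap.mmap(fd, 10, access=mmap.ACCESS_COPY) as m:
    m[0:5] = b"HELLO"      # modifies only our private copy of the page
    snapshot = m[:]        # what this process now sees
os.close(fd)
with open(path, "rb") as f:
    on_disk = f.read()     # the file itself is unchanged
os.remove(path)
```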
Dynamic memory allocation
    • The dynamic memory allocator maintains a virtual memory area of a process, called a heap.
    • For each process, the kernel maintains a variable brk, which points to the top of the heap.
    • The allocator maintains the heap as a collection of blocks of different sizes.
    • Each block is a contiguous chunk of virtual memory that is either allocated or free.
    • There are two basic styles of allocators:
      • Explicit allocators: require the application to explicitly free any allocated blocks.
      • Implicit allocators: require the allocator to detect when an allocated block is no longer being used by the program and then free the block.
        • Implicit allocators are also known as garbage collectors, and the process of automatically freeing unused allocated blocks is known as garbage collection.
The malloc and free functions
    • The program allocates blocks from the heap by calling the malloc function.
    • The malloc function returns a pointer to a block of memory of at least size bytes that is suitably aligned for any kind of data object that might be contained in the block.
    • Dynamic memory allocators can explicitly allocate and free heap memory by using the mmap and munmap functions, or by using the sbrk function.
      • The sbrk function grows or shrinks the heap by adding incr to the kernel's brk pointer.
      • If successful, it returns the old value of brk; otherwise it returns -1 and sets errno to ENOMEM.
      • If incr is zero, sbrk returns the current value of brk.
      • Calling sbrk with a negative incr is legal: the return value (the old value of brk) points to abs(incr) bytes above the new top of the heap.
    • The free function frees allocated heap blocks.
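The sbrk contract can be modeled in a few lines of Python. This is a toy model: the starting brk value is made up, and the ENOMEM failure path is omitted:

```python
# Toy model of the kernel's brk pointer and the sbrk contract:
# sbrk(incr) adds incr to brk and returns the OLD value of brk.
_brk = 0x08048000  # assumed initial heap top, for illustration only

def sbrk(incr):
    global _brk
    old = _brk
    _brk += incr   # incr may be negative, shrinking the heap
    return old     # real sbrk returns (void *) -1 / sets ENOMEM on failure
```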
Requirements and goals of the allocator
    • An explicit allocator works under fairly stringent constraints:
      • Handling arbitrary request sequences
      • Making immediate responses to requests
      • Using only the heap
      • Aligning blocks
      • Not modifying allocated blocks
    • Goals: maximizing throughput and maximizing memory utilization.
    • The most useful criterion is peak utilization.
Fragmentation
    • The main cause of low heap utilization is fragmentation.
    • There are two forms of fragmentation:
      • Internal fragmentation
      • External fragmentation
    • Internal fragmentation occurs when an allocated block is larger than the payload.
    • External fragmentation occurs when there is enough aggregate free memory to satisfy an allocation request, but no single free block is large enough to handle the request.
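Internal fragmentation is simple arithmetic: block size minus payload. A sketch assuming an 8-byte header and 16-byte alignment (illustrative numbers, not any particular allocator's):

```python
def internal_fragmentation(payload, header=8, align=16):
    """Internal fragmentation = block size - payload, where the block must
    hold the payload plus a header, rounded up to the alignment."""
    block = ((payload + header + align - 1) // align) * align
    return block - payload
```

For example, a 13-byte request occupies a 32-byte block (13 + 8 = 21, rounded up to 32), wasting 19 bytes inside the block.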
Implementation issues
    • Free block organization: how do we keep track of free blocks?
    • Placement: how do we choose an appropriate free block in which to place a newly allocated block?
    • Coalescing: what do we do with a block that has just been freed?
Placing an allocated block
    • The way the allocator performs this search is determined by the placement policy.
    • Common placement strategies:
      • First fit
      • Next fit
      • Best fit
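The difference between first fit and best fit shows up even on a toy free list (sizes only; a real allocator walks actual blocks in address order):

```python
free_list = [8, 32, 16, 64]  # free-block sizes, in address order

def first_fit(req):
    """Return the index of the first free block that fits, else None."""
    for i, size in enumerate(free_list):
        if size >= req:
            return i
    return None

def best_fit(req):
    """Return the index of the smallest free block that fits, else None."""
    fits = [(size, i) for i, size in enumerate(free_list) if size >= req]
    return min(fits)[1] if fits else None
```

For a 16-byte request, first fit stops at the 32-byte block while best fit keeps searching and finds the exact 16-byte block.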
Coalescing free blocks
    • False fragmentation: free memory is chopped up into many small, unusable free blocks.
    • To combat false fragmentation, any practical allocator must merge adjacent free blocks, a process known as coalescing.
    • The allocator can choose either immediate coalescing or deferred coalescing.
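Immediate coalescing can be sketched as a single pass that merges runs of adjacent free blocks. Blocks are modeled as (start, size, free) tuples; a real allocator would use boundary tags rather than a list scan:

```python
def coalesce(blocks):
    """blocks: list of (start, size, free) in address order.  Merge each
    run of adjacent free blocks into one larger free block."""
    out = []
    for start, size, free in blocks:
        prev_is_adjacent_free = (
            free and out and out[-1][2] and out[-1][0] + out[-1][1] == start
        )
        if prev_is_adjacent_free:
            p_start, p_size, _ = out.pop()
            out.append((p_start, p_size + size, True))  # merged free block
        else:
            out.append((start, size, free))
    return out
```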
Garbage collection
    • Garbage collector: a dynamic memory allocator that automatically frees allocated blocks the program no longer needs
    • Garbage: Allocated blocks that the program no longer needs
    • Garbage collection: The process of automatically reclaiming heap storage
Memory-related errors common in C programs
    • Dereferencing bad pointers
    • Reading uninitialized memory
    • Allowing stack buffer overflows
    • Assuming that pointers and the objects they point to are the same size
    • Making off-by-one errors
    • Referencing a pointer instead of the object it points to
    • Misunderstanding pointer arithmetic
    • Referencing nonexistent variables
    • Referencing data in free heap blocks
    • Introducing memory leaks
Learning experience
I finally understand the region that malloc works in. Also, the common C errors at the end include mistakes I often make myself; with this summary in mind, I will pay more attention to those error-prone spots when coding in the future.
