20145207 "Information Security system Design Fundamentals" 14th Week Study Summary


Chapter 9: Virtual Memory
I. Overview
    1. Three important capabilities of virtual memory:

      - It treats main memory as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and main memory as needed; in this way it uses main memory efficiently.
      - It provides each process with a uniform address space, which simplifies memory management.
      - It protects each process's address space from corruption by other processes.
    2. Virtual memory is central, powerful, and dangerous.

II. Addressing
1. Physical and virtual addressing
(1) Physical addressing
    • Main memory is organized as an array of M contiguous byte-size cells, each with a unique physical address; addressing memory this way is called physical addressing.

(2) Virtual addressing
    • The CPU generates a virtual address (VA) to access main memory; the VA is converted to the appropriate physical address before being sent to memory. Address translation is done by the memory management unit (MMU) on the CPU chip.

2. Address Space
An address space is an ordered set of nonnegative integer addresses: {0, 1, 2, …}
(1) Linear address space
- The integers in the address space are consecutive.
(2) Virtual address space
    • The CPU generates virtual addresses from an address space of N = 2^n addresses, called the virtual address space.
(3) Physical address space
    • Corresponds to the M bytes of physical memory in the system.
(4) The size of the address space
- Described by the number of bits needed to represent the largest address.
- N = 2^n: an n-bit address space.
 
Every byte in main memory has a virtual address chosen from the virtual address space and a physical address chosen from the physical address space.
III. Virtual Memory
1. VM as a tool for caching
Virtual memory — virtual pages (VP), each of size P = 2^p bytes. Physical memory — physical pages (PP), also called page frames, also of size P bytes.
    • At any one time, the collection of virtual pages is divided into three disjoint subsets:

      • Unallocated: pages the VM system has not yet allocated/created; they occupy no disk space.
      • Cached: allocated pages currently cached in physical memory.
      • Uncached: allocated pages not currently cached in physical memory.
(1) Organization of the DRAM cache
    • The miss penalty is very large.
    • Fully associative — any virtual page can be placed in any physical page.
    • Replacement algorithms are sophisticated, because mispredictions are so costly.
    • Always uses write-back instead of write-through.
(2) Page table
A page table is a data structure, stored in physical memory, that maps virtual pages to physical pages.
    • A page table is an array of page table entries (PTEs).

      PTE: consists of a valid bit and an n-bit address field; the valid bit indicates whether the virtual page is cached in DRAM.

      • If the valid bit is set: the address field gives the start of the corresponding physical page in DRAM where the virtual page is cached.
      • If the valid bit is not set:
        • Null address: the virtual page has not yet been allocated.
        • Non-null address: the address points to the start of the virtual page on disk.
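These PTE states can be sketched as a small C model (illustrative field layout, not a real hardware PTE format; `vp_state` is a hypothetical helper):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified page table entry: a valid bit plus an address field.
 * Real PTE formats also pack permission bits (SUP/READ/WRITE). */
typedef struct {
    bool     valid; /* set: the virtual page is cached in DRAM          */
    uint64_t addr;  /* valid: start of the physical page in DRAM;
                       invalid + nonzero: page's location on disk;
                       invalid + zero: page not yet allocated           */
} pte_t;

/* Classify a virtual page into one of the three disjoint subsets. */
const char *vp_state(pte_t pte)
{
    if (pte.valid)
        return "cached";
    return pte.addr != 0 ? "uncached" : "unallocated";
}
```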
(4) Page faults
    • Page fault: a miss in the DRAM cache.

    • Page fault exception: invokes the kernel's page fault handler, which selects a victim page.

    • Page: virtual memory's customary term for a block.

    • Swapping (paging): the activity of transferring pages between disk and memory.

    • Demand paging: the strategy of waiting until a miss occurs to swap a page in; it is used by all modern systems.

(4) Local nature in virtual memory
The principle of locality ensures that at any point in time a program tends to work on a smaller set of active pages, called the working set (or resident set).
    • So as long as a program has good temporal locality, the virtual memory system can work quite well.

    • Thrashing: occurs when the size of the working set exceeds the size of physical memory.

2. As a tool for memory management
The operating system provides each process with an independent page table, and therefore an independent virtual address space.
    • Multiple virtual pages can be mapped to the same shared physical page.
    • Memory mapping: the notion of mapping a contiguous set of virtual pages to an arbitrary location in an arbitrary file.

    • The combination of demand paging and separate virtual address spaces simplifies linking and loading, sharing of code and data, and memory allocation for applications.

      - Simplifies linking: independent address spaces allow each process's memory image to use the same basic format, regardless of where the code and data actually reside in physical memory.
      - Simplifies loading: virtual memory makes it easy to load executable files and shared object files into memory.
      - Simplifies sharing: independent address spaces give the operating system a consistent mechanism for managing sharing between user processes and the operating system itself.
      - Simplifies memory allocation: virtual memory gives user processes a simple mechanism for allocating additional memory.
3. As a memory protection tool
Access to the contents of a virtual page is controlled by adding some extra permission bits to the PTE.
    • Three permission bits in a PTE:

      SUP: indicates whether the process must be running in kernel mode to access the page. READ: read permission. WRITE: write permission.

4. Address translation
(1) Address translation

    • Address translation is a mapping between the elements of an N-element virtual address space (VAS) and the elements of an M-element physical address space (PAS):

      MAP: VAS → PAS ∪ ∅
    • where

      MAP(A) = A′, if the data at virtual address A is at physical address A′ in the PAS
      MAP(A) = ∅, if the data at virtual address A is not in physical memory
    • Page tables implement this mapping.

When a page hits, the CPU hardware performs these steps:
    • The processor generates a virtual address and passes it to the MMU
    • The MMU generates the PTE address and requests it from the cache/main memory
    • The cache/main memory returns the PTE to the MMU
    • The MMU constructs the physical address and passes it to the cache/main memory
    • The cache/main memory returns the requested data to the processor
Steps when handling a page fault:
    • The processor generates a virtual address and passes it to the MMU
    • The MMU generates the PTE address and requests it from the cache/main memory
    • The cache/main memory returns the PTE to the MMU
    • The valid bit in the PTE is 0, so a page fault exception is triggered
    • The fault handler determines a victim page (paging it out to disk if it has been modified)
    • The handler pages in the new page and updates the PTE
    • The handler returns to the original process; re-executing the faulting instruction now hits
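Both flows can be modeled in miniature. The sketch below uses illustrative constants (4 KB pages, an 8-entry single-level page table): it splits the virtual address into VPN and VPO, fetches the PTE, and either forms the physical address or reports that a page fault would be raised:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12                  /* 4 KB pages: P = 2^12 (assumed) */
#define NPAGES     8                   /* toy page table size (assumed)  */

typedef struct { bool valid; uint64_t ppn; } pte_t;

static pte_t page_table[NPAGES];       /* the MMU reads this from memory */

/* Translate va; returns true on a hit and stores the physical address,
 * false when the PTE's valid bit is 0 (a page fault would be raised). */
bool translate(uint64_t va, uint64_t *pa)
{
    uint64_t vpn = va >> PAGE_SHIFT;            /* virtual page number */
    uint64_t vpo = va & ((1u << PAGE_SHIFT) - 1); /* virtual page offset */

    pte_t pte = page_table[vpn];                /* MMU fetches the PTE */
    if (!pte.valid)
        return false;                           /* page fault          */

    *pa = (pte.ppn << PAGE_SHIFT) | vpo;        /* PPO equals VPO      */
    return true;
}
```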

(2) Combining caches and virtual memory
    • In systems that use both SRAM caches and virtual memory, most systems choose physical addressing.
    • The main idea of combining the two is that address translation occurs before the cache lookup.
    • Page table entries can be cached, just like any other data words.
(3) using TLB to accelerate address translation
TLB (translation lookaside buffer): a small, virtually addressed cache in which each line holds a block consisting of a single PTE.
    • Steps
      • CPU generates a virtual address
      • The MMU fetches the corresponding PTE from the TLB
      • The MMU translates the virtual address into a physical address and sends it to the cache/main memory
      • The cache/main memory returns the requested data word to the CPU
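Because the TLB is a cache of PTEs, the MMU carves the VPN itself into a set index (TLBI) and a tag (TLBT). A sketch with assumed parameters (4 KB pages, a 16-set TLB; the helper names are illustrative):

```c
#include <stdint.h>

#define PAGE_SHIFT   12   /* 4 KB pages (assumed)  */
#define TLB_IDX_BITS 4    /* 16-set TLB (assumed)  */

/* Split a virtual address into the fields the MMU uses for TLB lookup. */
uint64_t vpn(uint64_t va)  { return va >> PAGE_SHIFT; }
uint64_t tlbi(uint64_t va) { return vpn(va) & ((1u << TLB_IDX_BITS) - 1); }
uint64_t tlbt(uint64_t va) { return vpn(va) >> TLB_IDX_BITS; }
```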
(4) Multi-level page table
Multi-level page tables use a hierarchy to compress the page table.
    • For a two-level page table hierarchy, the benefits are:

      • If a PTE in the level-1 page table is null, then the corresponding level-2 page table need not exist at all.
      • Only the level-1 page table needs to reside in main memory at all times; the virtual memory system can create, page in, or page out level-2 page tables as they are needed, keeping only the most heavily used level-2 page tables in main memory.
    • Address translation of multi-level page table:
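For a concrete feel, the sketch below splits a 32-bit virtual address for a hypothetical two-level scheme (assumed: 4 KB pages, 10-bit VPN1 and VPN2 fields): VPN1 indexes the level-1 table to locate a level-2 table, and VPN2 indexes that table to find the PPN.

```c
#include <stdint.h>

#define PAGE_SHIFT 12   /* 4 KB pages (assumed)                      */
#define VPN2_BITS  10   /* bits indexing the level-2 table (assumed) */

/* Field extraction for two-level translation of a 32-bit address. */
uint32_t vpo (uint32_t va) { return va & ((1u << PAGE_SHIFT) - 1); }
uint32_t vpn2(uint32_t va) { return (va >> PAGE_SHIFT) & ((1u << VPN2_BITS) - 1); }
uint32_t vpn1(uint32_t va) { return va >> (PAGE_SHIFT + VPN2_BITS); }
```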

IV. Memory
1. Memory mapping
Memory mapping is the process by which Linux initializes the contents of a virtual memory area by associating it with an object on disk.
    • Mapping objects:
      • Regular files in the Unix file system
      • Anonymous files (all binary zeros)
(1) Shared objects and private objects
    • Shared objects

      - A shared object is visible to every process that maps it into its virtual memory.
      - Even if it is mapped into multiple shared areas, physical memory needs to hold only one copy of the shared object.
    • Private objects

      - Private objects use the copy-on-write technique.
      - Physical memory holds only one copy of a private object.
(2) The fork function is an application of the copy-on-write technique; the execve function loads and runs programs using memory mapping. User-level memory mapping uses the mmap function:
  • Creating a new virtual memory area

     #include <unistd.h>
     #include <sys/mman.h>
     void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
     // Returns: a pointer to the mapped area on success, MAP_FAILED (-1) on error
      • Parameters:

          start: the mapped area starts at address start (usually NULL, letting the kernel choose)
          fd: file descriptor of the object to map
          length: size of the contiguous object chunk in bytes
          offset: offset from the beginning of the file
          prot: access permission bits of the mapped area:
            PROT_EXEC: consists of instructions that may be executed by the CPU
            PROT_READ: readable
            PROT_WRITE: writable
            PROT_NONE: cannot be accessed
          flags: bits describing the type of the mapped object:
            MAP_ANON: anonymous object; the virtual pages are binary zeros
            MAP_PRIVATE: private, copy-on-write object
            MAP_SHARED: shared object
  • Deleting virtual memory areas

    #include <sys/mman.h>
    int munmap(void *start, size_t length);
    // Returns: 0 on success, -1 on error
    • Deletes the area consisting of the next length bytes, starting at start.
2. Dynamic memory allocation
Heap: a demand-zero area that begins immediately after the uninitialized bss area and grows upward (toward higher addresses). A variable brk points to the top of the heap.
    • Two basic styles of dispensers:

      • Explicit allocator — malloc and free
      • Implicit allocator — garbage collector
(1) malloc and free functions
  • The program calls the malloc function to allocate blocks from the heap:

      #include <stdlib.h>
      void *malloc(size_t size);
      // Returns: a pointer to the allocated block on success, NULL on error
  • The program calls the free function to release allocated heap blocks:

    #include <stdlib.h>
    void free(void *ptr);
    // No return value
    // ptr must point to the start of an allocated block obtained from malloc, calloc, or realloc.
  • Reason to use dynamic memory allocation: the sizes of certain data structures are often unknown until the program actually runs.
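Typical malloc/free usage can be sketched as follows (squares is a hypothetical helper): always check for NULL, and free exactly the block that was allocated.

```c
#include <stdlib.h>

/* Allocate and fill an array of the first n squares on the heap.
 * Caller must free() the returned block; returns NULL on failure. */
int *squares(int n)
{
    int *a = malloc(n * sizeof(int));   /* block contents are undefined */
    if (a == NULL)
        return NULL;                    /* malloc can fail: check it    */
    for (int i = 0; i < n; i++)
        a[i] = i * i;
    return a;
}
```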

(2) distributor requirements and objectives:
    • Requirements

      • Handle arbitrary request sequences
      • Respond to requests immediately
      • Use only the heap
      • Align blocks
      • Do not modify allocated blocks
    • Goal

      • Maximize throughput
      • Maximize memory utilization — maximize peak utilization
 
Throughput: the number of requests completed per unit time.
(3) Fragmentation
Memory is unused, yet cannot be used to satisfy allocation requests.
    • Internal fragmentation: occurs when the allocated block is larger than the payload. Easy to quantify.

    • External fragmentation: occurs when free memory is sufficient in aggregate to satisfy an allocation request, but no single free block is large enough to handle the request. Difficult to quantify and unpredictable.

(4) Implicit free lists
    • Heap block format: consists of a one-word header, the payload, and possibly some extra padding.
    • The heap is organized as a sequence of contiguous allocated and free blocks:
      • The free blocks are linked implicitly by the size fields in the headers; the allocator can traverse the entire set of free blocks indirectly by traversing all of the blocks in the heap.
    • Required: a specially marked end block (terminating header).

The system's alignment requirement and the allocator's choice of block format impose an enforced minimum block size on the allocator.
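Because block sizes are multiples of the alignment (8 bytes here), the low bits of the size are always zero, so the header can pack the size and an allocated bit into a single word. A sketch in the style of the CSAPP header macros:

```c
#include <stdint.h>

/* Pack a block size (a multiple of 8) and an allocated bit into a header. */
#define PACK(size, alloc)  ((size) | (alloc))

/* Read the size and allocated fields back out of a header word. */
#define GET_SIZE(hdr)   ((hdr) & ~0x7u)
#define GET_ALLOC(hdr)  ((hdr) & 0x1u)

typedef uint32_t header_t;   /* one 4-byte word */
```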
(5) Placing allocated blocks — placement policies
    • First fit: search the free list from the beginning and choose the first free block that fits
    • Next fit: start the search where the previous search left off
    • Best fit: examine every free block and choose the smallest free block that fits the requested size
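A minimal sketch of first fit over an implicit list (a toy model, not a full allocator: each block begins with a one-word header holding size|alloc, and a header of 0 marks the end block):

```c
#include <stddef.h>
#include <stdint.h>

typedef uint32_t word_t;

/* Scan the implicit list from the start of the heap and return the
 * header of the first free block whose size is at least asize bytes,
 * or NULL if no block fits. A header of 0 terminates the heap. */
word_t *first_fit(word_t *heap, size_t asize)
{
    for (word_t *bp = heap; *bp != 0;
         bp = (word_t *)((char *)bp + (*bp & ~0x7u))) {
        size_t size  = *bp & ~0x7u;   /* block size field   */
        int    alloc = *bp & 0x1;     /* allocated bit      */
        if (!alloc && size >= asize)
            return bp;                /* first block that fits */
    }
    return NULL;
}
```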
(6) Requesting additional heap memory
    • The sbrk function

      #include <unistd.h>
      void *sbrk(intptr_t incr);
      // Returns: the old brk pointer on success, (void *)-1 on error
      • Expands or shrinks the heap by adding incr to the kernel's brk pointer.
(7) Merging free blocks
    • Coalescing addresses the false fragmentation problem; any practical allocator must merge adjacent free blocks.

    • Two strategies:

      Immediate coalescing
      Deferred coalescing
(8) Coalescing with boundary tags
3. Garbage collection
A garbage collector is a dynamic memory allocator that automatically frees allocated blocks (garbage) that the program no longer needs.
(1) Basic knowledge
    • The garbage collector views memory as a directed reachability graph whose nodes are partitioned into a set of root nodes and a set of heap nodes. Node p is reachable when there is a directed path from some root node to p.
(2) mark&sweep garbage collector
    • The mark&sweep garbage collector consists of a mark phase and a sweep phase: the mark phase marks all reachable and allocated successors of the root nodes, and the sweep phase frees every unmarked allocated block.

    • The description of mark&sweep uses the following functions:

      - ptr isPtr(ptr p): if p points to some word in an allocated block, returns a pointer b to the start of that block; otherwise returns NULL.
      - int blockMarked(ptr b): returns true if block b is already marked.
      - int blockAllocated(ptr b): returns true if block b is allocated.
      - void markBlock(ptr b): marks block b.
      - int length(ptr b): returns the length of block b in words (excluding the header).
      - void unmarkBlock(ptr b): changes the state of block b from marked to unmarked.
      - ptr nextBlock(ptr b): returns the successor of block b in the heap.
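In terms of these helpers, the two phases can be sketched as follows (this mirrors the CSAPP pseudocode; the helpers themselves are left undefined, so this is an outline rather than runnable code):

```c
/* Mark phase: starting from a root word p, mark every reachable block. */
void mark(ptr p)
{
    ptr b = isPtr(p);
    if (b == NULL || blockMarked(b))
        return;                        /* not a block pointer, or already done */
    markBlock(b);
    for (int i = 0; i < length(b); i++)
        mark(((ptr *)b)[i]);           /* conservatively treat each word as a pointer */
}

/* Sweep phase: free every allocated block that was never marked. */
void sweep(ptr b, ptr end)
{
    while (b < end) {
        if (blockMarked(b))
            unmarkBlock(b);            /* reset for the next collection */
        else if (blockAllocated(b))
            free(b);                   /* unreachable garbage */
        b = nextBlock(b);
    }
}
```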
4. Common memory-related bugs in C programs
(1) Dereferencing bad pointers
    • The virtual address space of a process contains large holes that are not mapped to any meaningful data; if you attempt to dereference a pointer into one of these holes, the operating system terminates the program with a segmentation fault.

    • A typical error:

      scanf("%d", val);   /* bug: should be scanf("%d", &val); */
(2) Read uninitialized memory
    • Although .bss memory locations (such as uninitialized global C variables) are always initialized to zero by the loader, this is not the case for heap memory.
    • A common mistake is to assume that heap memory is initialized to zero.
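When zero-filled heap memory is needed, use calloc (or an explicit memset) rather than assuming malloc zeroes the block; zeroed_ints below is a hypothetical helper:

```c
#include <stdlib.h>

/* Allocate n ints initialized to zero. Plain malloc leaves the block's
 * contents undefined; calloc guarantees zero fill. Returns NULL on failure. */
int *zeroed_ints(size_t n)
{
    return calloc(n, sizeof(int));
}
```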
(3) Allowing stack buffer overflows
    • If a program writes to a target buffer on the stack without checking the size of the input string, it will have a buffer overflow bug.
(4) Assuming that pointers and the objects they point to are the same size.
(5) Making off-by-one errors
    • A very common source of overwrite errors.
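A classic off-by-one (hypothetical init functions over a 16-element array): the `<=` bound writes one element past the end, overwriting whatever follows the array in memory.

```c
#define N 16

/* BUG: i <= N writes N+1 elements, one past the end of a[N]. */
void init_buggy(int *a)
{
    for (int i = 0; i <= N; i++)
        a[i] = 0;
}

/* Correct bound: writes exactly N elements. */
void init_fixed(int *a)
{
    for (int i = 0; i < N; i++)
        a[i] = 0;
}
```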
(6) Referencing a pointer instead of the object it points to
    • Pay attention to the precedence and associativity of C operators.
(7) Misunderstanding pointer arithmetic
    • Forgetting that arithmetic operations on pointers are performed in units of the size of the objects they point to, which is not necessarily bytes.
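The scaling rule can be checked directly (int_step_bytes is a hypothetical illustration): for an `int *p`, `p + 1` advances by `sizeof(int)` bytes, not by 1 byte.

```c
#include <stddef.h>

/* Measure, in bytes, how far p + 1 is from p for an int pointer. */
size_t int_step_bytes(void)
{
    int a[2];
    return (size_t)((char *)&a[1] - (char *)&a[0]);  /* sizeof(int) */
}
```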
(8) Referencing nonexistent variables
(9) Referencing data in free heap blocks
(10) Introducing memory leaks
    • Memory leaks occur when you inadvertently forget to free allocated blocks, creating garbage in the heap.
