20145205 "Information Security system Design Fundamentals" 14th Week Study Summary


Summary of textbook content: virtual memory
    • Virtual memory is one of the most important concepts in computer systems; it is an abstraction of main memory.
    • Virtual memory provides three important capabilities:

      1. It treats main memory as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and main memory as needed; in this way it uses main memory efficiently.
      2. It provides each process with a uniform address space, which simplifies memory management.
      3. It protects each process's address space from corruption by other processes.
Physical and virtual addressing
    • The main memory of a computer system is organized as an array of M contiguous byte-sized cells, each with a unique physical address (PA).
    • With physical addressing, the CPU accesses memory using physical addresses directly.
    • Virtual memory is organized as an array of N contiguous byte-sized cells stored on disk.
    • With virtual addressing, the CPU accesses main memory by generating a virtual address (VA), which is converted to the appropriate physical address before being sent to memory. This process is called address translation, and the associated hardware is the memory management unit (MMU).
Address space
    • An address space is an ordered collection of non-negative integer addresses: {0,1,2,......}
    • Linear address space: integers in the address space are contiguous.
    • Virtual address space: the CPU generates virtual addresses from an address space of N = 2^n addresses, called the virtual address space.
    • The size of an address space is described by the number of bits needed to represent its largest address: N = 2^n is an n-bit address space.
    • Each byte in main memory has a virtual address selected from the virtual address space and a physical address selected from the physical address space.
Virtual memory as a tool for caching
    • Virtual memory is partitioned into virtual pages (VP), each P = 2^p bytes in size.
    • Physical memory is partitioned into physical pages (PP), also called page frames, also P bytes in size.
    • At any point in time, the set of virtual pages is partitioned into three disjoint subsets:

        1. Unallocated: pages not yet created by the VM system
        2. Cached: allocated pages currently cached in physical memory
        3. Uncached: allocated pages not cached in physical memory
      The organization of the DRAM cache
    • Because the miss penalty is very large, this cache structure is:

        1. Fully associative: any virtual page can be placed in any physical page
        2. Managed with sophisticated, exact replacement algorithms
        3. Always write-back rather than write-through
      Page table
    • A page table is a data structure, stored in physical memory, that maps virtual pages to physical pages.
    • A page table is an array of page table entries (PTEs), each consisting of:

        a valid bit and an n-bit address field
    • If the valid bit is set: the address field gives the starting position of the corresponding physical page in DRAM, where the virtual page is cached.
    • If the valid bit is not set:

        (1) Null address: the virtual page has not yet been allocated. (2) Non-null address: the address points to the start of the virtual page on disk.
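The valid-bit cases above can be sketched in C. This is only a teaching model: the field widths and the struct layout are invented for the example and do not match any real hardware PTE format.

```c
#include <stdint.h>

/* Illustrative sketch of a page table entry (PTE): a valid bit plus an
   n-bit address field. Field widths here are arbitrary choices for the
   example, not a real hardware layout. */
typedef struct {
    uint64_t valid : 1;   /* 1: page is cached in DRAM; 0: not cached      */
    uint64_t addr  : 40;  /* if valid: start of the physical page in DRAM;
                             if not valid and nonzero: location on disk;
                             if not valid and zero: page is unallocated    */
} pte_t;

/* Classify a PTE according to the three cases described above. */
const char *pte_state(pte_t pte) {
    if (pte.valid)     return "cached";
    else if (pte.addr) return "on disk";
    else               return "unallocated";
}
```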
      Page faults
    • Page fault: a miss in the DRAM cache.
    • Page fault exception: invokes the page fault handler in the kernel, which selects a victim page.
    • Pages are the virtual memory counterpart of cache blocks.
    • Swapping (paging): the activity of transferring pages between disk and memory.
    • Demand paging: the strategy of waiting until a miss occurs before swapping a page in; all modern systems use it.

      Locality in virtual memory
    • The principle of locality guarantees that at any point in time a program tends to work on a smaller set of active pages, called the working set (or resident set).
    • As long as a program has good temporal locality, the virtual memory system works quite well.
    • Thrashing: the working set size exceeds the size of physical memory.

Virtual memory as a tool for memory management
    • The operating system provides each process with its own page table, and thus its own private virtual address space.
    • Multiple virtual pages can be mapped to the same shared physical page.
    • Memory mapping: the notion of mapping a contiguous set of virtual pages to an arbitrary location in an arbitrary file.
    • VM simplifies linking and loading, code and data sharing, and memory allocation for applications.
Virtual memory as a tool for memory protection
    • Three permission bits in a PTE:

        1. SUP: whether the process must be running in kernel (supervisor) mode to access the page
        2. READ: read permission
        3. WRITE: write permission
Address Translation
  • Address translation is a mapping between the elements of an N-element virtual address space (VAS) and the elements of an M-element physical address space (PAS).
  • The page table base register (PTBR) points to the current page table.
  • The MMU uses the VPN to select the appropriate PTE.
  • PPO = VPO: the physical page offset equals the virtual page offset.
  • On a page hit, the hardware performs the following steps:

    1. The processor generates a virtual address and sends it to the MMU.
    2. The MMU generates the PTE address and requests it from the cache/main memory.
    3. The cache/main memory returns the PTE to the MMU.
    4. The MMU constructs the physical address and sends it to the cache/main memory.
    5. The cache/main memory returns the requested data to the processor.
  • On a page fault, the steps are:

    1. The processor generates a virtual address and sends it to the MMU.
    2. The MMU generates the PTE address and requests it from the cache/main memory.
    3. The cache/main memory returns the PTE to the MMU.
    4. The valid bit in the PTE is 0, so a page fault exception is triggered.
    5. The fault handler determines a victim page.
    6. The handler pages in the new page and updates the PTE.
    7. Control returns to the original process, which re-executes the faulting instruction; this time it hits.
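The address arithmetic behind these steps (splitting a VA into VPN and VPO, then joining the PPN from the PTE with the unchanged offset) can be sketched in C. The 4 KB page size is an assumption made for the example.

```c
#include <stdint.h>

/* Minimal sketch of the MMU's address arithmetic, assuming a 4 KB page
   size (p = 12). Parameters are illustrative, not from any real CPU. */
enum { PAGE_SHIFT = 12, PAGE_SIZE = 1u << PAGE_SHIFT };

uint64_t vpn(uint64_t va) { return va >> PAGE_SHIFT; }      /* virtual page number */
uint64_t vpo(uint64_t va) { return va & (PAGE_SIZE - 1); }  /* virtual page offset */

/* On a hit, the MMU concatenates the PPN from the PTE with the offset;
   since PPO = VPO, the low bits pass through unchanged. */
uint64_t make_pa(uint64_t ppn, uint64_t va) {
    return (ppn << PAGE_SHIFT) | vpo(va);
}
```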
    Combining caches and virtual memory
  • In systems that use both SRAM caches and virtual memory, most systems choose physical addressing for the cache.
  • The main idea is that address translation occurs before the cache lookup.
  • Page table entries can be cached, just like any other data words.

    Using a TLB to speed up address translation
  • TLB: translation lookaside buffer, a small, virtually addressed cache in which each line holds a block consisting of a single PTE.

  • Steps:

    1. The CPU generates a virtual address.
    2. The MMU fetches the corresponding PTE from the TLB.
    3. The MMU translates the virtual address into a physical address and sends it to the cache/main memory.
    4. The cache/main memory returns the requested data word to the CPU.
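For step 2, the MMU picks a TLB set using bits of the VPN. A sketch of that split, assuming an illustrative TLB with 16 sets (the sizes are assumptions for the example): the low t bits of the VPN form the set index (TLBI) and the remaining bits form the tag (TLBT).

```c
#include <stdint.h>

/* Sketch of TLB indexing: 4 KB pages (p = 12) and a 16-set TLB (t = 4)
   are illustrative choices, not a real design. */
enum { PAGE_SHIFT = 12, TLB_SET_BITS = 4 };

uint64_t tlb_index(uint64_t va) {            /* TLBI: low t bits of VPN */
    uint64_t vpn = va >> PAGE_SHIFT;
    return vpn & ((1u << TLB_SET_BITS) - 1);
}

uint64_t tlb_tag(uint64_t va) {              /* TLBT: remaining VPN bits */
    uint64_t vpn = va >> PAGE_SHIFT;
    return vpn >> TLB_SET_BITS;
}
```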
    Multi-level page tables
  • Multi-level page tables use a hierarchy to compress the page table.
  • For the example of a two-level page table hierarchy, the benefits are:

    1. If a PTE in the level-1 page table is null, the corresponding level-2 page table need not exist at all.
    2. Only the level-1 page table must always be in main memory; the virtual memory system can create, page in, or page out level-2 page tables as needed, keeping only the most heavily used ones cached in main memory.
  • Address translation with a multi-level page table: the VPN is split into one index field per level, each used to index the page table at that level.

Case study
Core i7 address translation
    • PTEs have three privilege bits:

        1. R/W bit: whether the page's contents are read/write or read-only
        2. U/S bit: whether the page can be accessed in user mode
        3. XD bit: execute-disable, introduced in 64-bit systems; can be used to disable instruction fetches from certain memory pages
    • Bits involved in the page fault handler:

        1. A bit: reference bit, used to implement the page replacement algorithm
        2. D bit: dirty bit, tells whether a victim page must be written back
      Linux virtual memory system
    • Linux maintains a separate virtual address space for each process.

    • Kernel virtual memory includes the code and data structures in the kernel.
    • A subset of the regions is mapped to physical pages shared by all processes; the rest contains data that differs for each process.
    • Area (region): a contiguous chunk of allocated virtual memory.
    • Examples of Regions:

        1. Code segment
        2. Data segment
        3. Heap
        4. Shared library segment
        5. User stack
        6. ...
    • Every existing virtual page belongs to some region. The kernel maintains a distinct task structure (task_struct) for each process in the system.
    • The region structure for a particular region contains the following fields:

        1. vm_start: points to the start of the region
        2. vm_end: points to the end of the region
        3. vm_prot: describes the read/write permissions of all the pages in the region
        4. vm_flags: whether the pages in the region are shared or private
        5. vm_next: points to the next region
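The fields above can be mirrored in a simplified C model. This is only a teaching sketch: the real kernel structure (struct vm_area_struct) has many more fields, and the membership-check function below is an invention for illustration, roughly what the fault handler does when it checks whether a faulting address falls in some region.

```c
#include <stddef.h>

/* Simplified model of the Linux region (area) structure described above. */
struct vm_area {
    unsigned long vm_start;  /* start of the region                     */
    unsigned long vm_end;    /* end of the region (one past the last)   */
    unsigned long vm_prot;   /* read/write permissions for its pages    */
    unsigned long vm_flags;  /* shared or private, etc.                 */
    struct vm_area *vm_next; /* next region in the list                 */
};

/* Walk the region list to decide whether an address is legal,
   checking vm_start <= addr < vm_end for each region. */
int addr_in_some_region(const struct vm_area *head, unsigned long addr) {
    for (const struct vm_area *a = head; a != NULL; a = a->vm_next)
        if (a->vm_start <= addr && addr < a->vm_end)
            return 1;
    return 0;
}
```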
Memory mapping
  • Memory mapping is the process by which Linux initializes the contents of a virtual memory area by associating it with an object on disk.
  • An area can be mapped to one of two kinds of objects:

    1. A regular file in the Unix file system
    2. An anonymous file (all binary zeros)
    Shared objects and private objects
  • A shared object is visible to all processes that map it into their virtual memory. Even if it is mapped into multiple shared areas, only one copy of the shared object needs to be stored in physical memory.
  • Private objects use a technique called copy-on-write: only one copy of the private object is saved in physical memory.
  • The fork function is an application of the copy-on-write technique, as is the execve function.

    User-level memory mapping with the mmap function
  • Creating a new virtual memory area:

    void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
    Returns a pointer to the mapped area on success, or MAP_FAILED (-1) on error.
  • Deleting a virtual memory area:

    int munmap(void *start, size_t length);
    Returns 0 on success, -1 on failure.
  • Deletes the area starting at start and consisting of the next length bytes.
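A minimal usage sketch of mmap/munmap. It creates an anonymous (demand-zero) mapping rather than a file mapping, and verifies that the pages arrive as all binary zeros, as the "anonymous file" case above promises; the function name is made up for the example.

```c
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <stddef.h>

/* Sketch: create a new demand-zero virtual memory area with mmap and
   release it with munmap. MAP_ANONYMOUS maps an "anonymous file", so
   the pages arrive as all binary zeros. Returns 1 if the area was all
   zeros, 0 if not, -1 on mapping failure. */
int anon_area_is_zeroed(size_t length) {
    unsigned char *p = mmap(NULL, length, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return -1;

    int all_zero = 1;
    for (size_t i = 0; i < length; i++)
        if (p[i] != 0) { all_zero = 0; break; }

    munmap(p, length);  /* delete the area starting at p, length bytes */
    return all_zero;
}
```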

Dynamic memory allocation
  • The heap is a demand-zero region that begins immediately after the uninitialized bss region and grows upward (toward higher addresses). The kernel maintains a variable brk that points to the top of the heap.
  • Two basic styles of allocators:

    1. Explicit allocators, e.g. malloc and free
    2. Implicit allocators (garbage collectors)
    The malloc and free functions
  • A program calls the malloc function to allocate a block from the heap:

    void *malloc(size_t size);
    On success returns a pointer to a block of at least size bytes; on failure returns NULL.
  • A program calls the free function to release an allocated heap block:

    void free(void *ptr);
    No return value.
  • The ptr argument must point to the start of an allocated block obtained from malloc, calloc, or realloc.
  • Dynamic memory allocation is used because the sizes of certain data structures are not known until the program actually runs.
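A small usage sketch of the run-time-size point above: because n is known only at run time, the buffer must come from the heap via malloc. The function name is an invention for the example.

```c
#include <stdlib.h>

/* Sketch: allocate an n-element array from the heap, fill it with
   1..n, sum it, and free the block. Returns -1 if malloc fails. */
long sum_first_n(int n) {
    long *a = malloc(n * sizeof(long)); /* at least n*sizeof(long) bytes */
    if (a == NULL)                      /* malloc reports failure as NULL */
        return -1;

    long sum = 0;
    for (int i = 0; i < n; i++) {
        a[i] = i + 1;
        sum += a[i];
    }

    free(a); /* ptr must be the start of a block from malloc/calloc/realloc */
    return sum;
}
```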

    Requirements and goals of allocators
  • Requirements:

    1. Handle arbitrary request sequences
    2. Respond to requests immediately
    3. Use only the heap
    4. Align blocks
    5. Do not modify allocated blocks
  • Goals:

    1. Maximize throughput (throughput: the number of requests completed per unit of time)
    2. Maximize memory utilization, i.e. maximize peak utilization
    Fragmentation
  • Fragmentation occurs when otherwise unused memory cannot be used to satisfy allocation requests.
  • Internal fragmentation occurs when an allocated block is larger than its payload. It is easy to quantify.
  • External fragmentation occurs when there is enough aggregate free memory to satisfy an allocation request, but no single free block is large enough to handle it. It is difficult to quantify and unpredictable.

    Implicit free lists
  • Heap block format: a one-word header, the payload, and possibly some extra padding.

  • The heap is organized as a sequence of contiguous allocated and free blocks.

  • Free blocks are linked implicitly by the size fields in their headers; the allocator can traverse the entire set of free blocks indirectly by traversing all of the blocks in the heap.
  • Required: a specially marked terminating (end) block.
  • The system's alignment requirements and the allocator's choice of block format impose a minimum block size on the allocator.

    Placing an allocated block: placement policies
  • First fit: search the free list from the beginning and choose the first free block that fits.
  • Next fit: start the search where the previous search left off.
  • Best fit: examine every free block and choose the smallest free block that fits the requested size.
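First fit can be sketched against a toy implicit free list. The header encoding here (block size in words shifted left one bit, low bit = allocated) and the demo heap are invented for the example; they are not the textbook's exact format.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy implicit free list: the heap is an array of words, where the
   first word of each block is a header = (size in words << 1) | alloc
   bit, and a size-0 header terminates the heap. */
#define ALLOC(h) ((h) & 1)
#define SIZE(h)  ((h) >> 1)

/* Example heap: [allocated, 4 words][free, 2 words][free, 6 words][end].
   Headers sit at indices 0, 4, 6; index 12 is the terminator. */
static const uint32_t demo_heap[] = {9, 0, 0, 0, 4, 0, 12, 0, 0, 0, 0, 0, 0};

/* First fit: scan from the start of the heap, jumping block to block by
   the size field, and return the index of the first free block of at
   least `need` words, or -1 if none fits. */
int first_fit(const uint32_t *heap, uint32_t need) {
    for (size_t i = 0; SIZE(heap[i]) > 0; i += SIZE(heap[i]))
        if (!ALLOC(heap[i]) && SIZE(heap[i]) >= need)
            return (int)i;
    return -1;
}
```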

    Requesting additional heap memory
  • The sbrk function:

    void *sbrk(intptr_t incr);
    On success returns the old brk pointer; on error returns -1.
  • Grows or shrinks the heap by adding incr to the kernel's brk pointer.

    Coalescing free blocks
  • Coalescing addresses the false fragmentation problem; any practical allocator must coalesce adjacent free blocks.
  • There are two strategies:

    1. Immediate coalescing
    2. Deferred coalescing
    Coalescing with boundary tags
  • Because each block has a header, coalescing with the following block is simple, but coalescing with the previous block is inconvenient. The fix is to add a footer at the end of each block as a replica of the header (a boundary tag), which makes coalescing in both directions easy. There are four specific cases, depending on whether the previous and next blocks are allocated or free.

  • A free block always needs its footer.

    Implementing a simple allocator
  • Prologue block and epilogue block: the prologue block is created at initialization and never freed; the epilogue block is a special block that always terminates the heap.
  • A useful technique: operations that are complex and repetitive can be defined as macros, which are easy to use and easy to modify.
  • Be careful with forced type conversions (casts), especially with pointers; they are very error-prone.
  • Because alignment is specified as double words, block sizes are rounded up to the nearest integer multiple of a double word.
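A sketch of such macros, in the style of the textbook's example allocator. The WSIZE/DSIZE values and the helper function are assumptions made for the example; they hide the pointer casts and implement the double-word rounding just described.

```c
#include <stdint.h>

/* Header/footer manipulation macros in the style of the textbook's
   example allocator; WSIZE/DSIZE are illustrative choices. */
#define WSIZE 4  /* word size; also header/footer size in bytes */
#define DSIZE 8  /* double word, the alignment unit             */

#define PACK(size, alloc) ((size) | (alloc))       /* build a header     */
#define GET(p)      (*(uint32_t *)(p))             /* read a word at p   */
#define PUT(p, val) (*(uint32_t *)(p) = (val))     /* write a word at p  */
#define GET_SIZE(p)  (GET(p) & ~0x7)               /* size field         */
#define GET_ALLOC(p) (GET(p) & 0x1)                /* allocated bit      */

/* Round a request up to the actual block size: payload plus
   header/footer overhead, rounded to a multiple of DSIZE. */
static inline uint32_t adjust_size(uint32_t size) {
    return (uint32_t)(DSIZE * ((size + DSIZE + (DSIZE - 1)) / DSIZE));
}
```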

    Explicit free lists
  • Differences from implicit free lists:

    1. Allocation time: with an implicit list, allocation time is linear in the total number of blocks; with an explicit list, it is linear in the number of free blocks.
    2. List form: an implicit list is just the implicit free list; an explicit list is a doubly linked list with predecessor and successor pointers, which works better than traversing by header and footer.
  • Ordering policies:

    1. Last-in first-out (LIFO)
    2. Maintained in address order
    Segregated free lists
  • Segregated storage is a popular way to reduce allocation time. The general idea is to partition the set of all possible block sizes into equivalence classes called size classes.
  • The allocator maintains an array of free lists, one free list per size class, in ascending order of size.
  • Two basic approaches: simple segregated storage and segregated fits.
  • Simple segregated storage: the free list for each size class contains blocks of equal size, each the size of the largest element of the size class.

    1. Operation: if the list is nonempty, allocate the first block in its entirety. If the list is empty, the allocator requests a fixed-size chunk of additional memory from the operating system, divides the chunk into equal-size blocks, and links them together into the new free list.
    2. Pros and cons: fast with low overhead, but prone to internal and external fragmentation.
  • Segregated fits: each free list is associated with a size class and organized as some kind of explicit or implicit list; each list contains potentially different-size blocks whose sizes are members of the size class. This approach is fast and memory-efficient.
  • The buddy system is a special case of segregated fits.
  • Each size class is a power of 2. Thus, given the address and size of a block, it is easy to compute the address of its buddy; that is, the address of a block and the address of its buddy differ in exactly one bit. Benefits: fast searching and fast coalescing.
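The buddy-address computation is a one-line sketch: for a block of size 2^k at address a (aligned to 2^k), flipping bit k of the address yields the buddy, which is what makes finding and merging buddies fast.

```c
#include <stdint.h>

/* Buddy-system address arithmetic: a block of size 2^k at address addr
   (assumed aligned to 2^k) has its buddy at addr with bit k flipped,
   so the two addresses differ in exactly one bit. */
uintptr_t buddy_of(uintptr_t addr, unsigned k) {
    return addr ^ ((uintptr_t)1 << k);
}
```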

