20145306 "Information Security system Design Fundamentals" 14th Week Study Summary


20145306 "Fundamentals of Information Security system Design" 14th Week study summary textbook Learning content Summary physical and virtual addressing

Physical addressing: The main memory of a computer system is organized as an array of M contiguous byte-size cells. Each byte has a unique physical address (PA). The first byte has address 0, the next byte has address 1, the next address 2, and so on. Given this simple structure, the most natural way for a CPU to access memory is to use physical addresses; this approach is called physical addressing.

Virtual addressing: When using virtual addressing, the CPU accesses main memory by generating a virtual address, which is converted to the appropriate physical address before being sent to the memory. The task of translating a virtual address into a physical address is called address translation.

Dedicated hardware on the CPU chip called the memory management unit (MMU) translates virtual addresses on the fly, using a lookup table stored in main memory whose contents are managed by the operating system.

Address space

1. Linear address space: an address space in which the integers are consecutive, i.e., an ordered set of non-negative integer addresses: {0, 1, 2, ...}.

2. Virtual address space: In a system with virtual memory, the CPU generates virtual addresses from an address space of N = 2^n addresses, called the virtual address space: {0, 1, 2, ..., N-1}.

3. The size of an address space is described by the number of bits needed to represent the largest address.

4. The basic idea of virtual memory: allow each data object to have multiple independent addresses, each chosen from a different address space.

Virtual memory as a tool for caching

Each byte has a unique virtual address
Each virtual page is P = 2^p bytes in size
At any point in time, the set of virtual pages is partitioned into three disjoint subsets:
Unallocated: pages that have not yet been allocated (or created) by the VM system; they occupy no space on disk
Cached: allocated pages that are currently cached in physical memory
Uncached: allocated pages that are not cached in physical memory
Physical memory is partitioned into physical pages (PPs), also called page frames, each P bytes in size (a numeric sketch of these sizes follows)
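
To make these quantities concrete, here is a minimal numeric sketch (the parameter values n = 32, m = 30, and p = 12 are assumptions for illustration, not values given in the textbook):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Assumed example parameters: 32-bit virtual addresses,
     * 30-bit physical addresses, 4 KB (2^12-byte) pages. */
    const unsigned n = 32;   /* bits in a virtual address  */
    const unsigned m = 30;   /* bits in a physical address */
    const unsigned p = 12;   /* log2 of the page size      */

    uint64_t N = 1ULL << n;  /* size of virtual address space  */
    uint64_t M = 1ULL << m;  /* size of physical address space */
    uint64_t P = 1ULL << p;  /* page size in bytes             */

    printf("virtual address space : %llu bytes\n", (unsigned long long)N);
    printf("physical address space: %llu bytes\n", (unsigned long long)M);
    printf("page size             : %llu bytes\n", (unsigned long long)P);
    printf("virtual pages  (N/P)  : %llu\n", (unsigned long long)(N / P));
    printf("physical pages (M/P)  : %llu\n", (unsigned long long)(M / P));
    return 0;
}
```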

The organizational structure of the DRAM cache

The term DRAM cache denotes the virtual memory system's cache, which caches virtual pages in main memory
The organization of the DRAM cache is driven entirely by the enormous cost of misses
The DRAM cache always uses write-back instead of write-through
Page table: a data structure stored in physical memory that maps virtual pages to physical pages; it is an array of page table entries (PTEs)
PTE: consists of a valid bit and an n-bit address field (modeled in the sketch after this list)
A DRAM cache miss is called a page fault
The activity of transferring pages between disk and memory is called swapping or paging
Thrashing: occurs when the working-set size exceeds the size of physical memory, so pages are swapped in and out continuously
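
As a rough software model of these definitions (the struct layout, field names, and sizes below are assumptions; real PTEs are hardware-defined bit fields), a page table can be treated as an array of PTEs in which a cleared valid bit on an access corresponds to a page fault:

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_VPAGES 8          /* assumed: a tiny 8-entry page table */

/* Simplified PTE: a valid bit plus a physical page number field. */
typedef struct {
    bool valid;               /* 1 if the page is cached in DRAM */
    int  ppn;                 /* physical page number (if valid) */
} pte_t;

static pte_t page_table[NUM_VPAGES];

/* Look up a virtual page number; a miss models a page fault. */
static void access_vpage(int vpn) {
    pte_t *pte = &page_table[vpn];
    if (pte->valid)
        printf("VPN %d: hit, cached in PPN %d\n", vpn, pte->ppn);
    else
        printf("VPN %d: page fault (not cached in DRAM)\n", vpn);
}

int main(void) {
    page_table[3] = (pte_t){ .valid = true, .ppn = 1 };  /* cached page */
    access_vpage(3);          /* hit        */
    access_vpage(5);          /* page fault */
    return 0;
}
```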

Virtual memory as a memory management tool

Multiple virtual pages can be mapped to the same shared physical page
Simplified linking: separate address spaces allow each process's memory image to use the same basic format, regardless of where the code and data actually reside in physical memory
Simplified loading: virtual memory makes it easy to load executable and shared object files into memory
Simplified sharing: separate address spaces provide a consistent mechanism for the operating system to manage sharing between user processes and between user processes and the operating system itself
Simplified memory allocation: virtual memory provides a simple mechanism for allocating additional memory to user processes

Virtual memory as a tool for memory protection

Three permission bits in each PTE (their use is sketched after this list):
SUP: indicates whether the process must be running in kernel mode to access the page
READ: read permission
WRITE: write permission
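
A minimal sketch of how such a check might work (the bit names, encoding, and helper function are assumptions for illustration, not the actual PTE format):

```c
#include <stdio.h>
#include <stdbool.h>

/* Assumed permission-bit encoding for the sketch. */
#define PTE_SUP   0x4   /* page requires kernel (supervisor) mode */
#define PTE_READ  0x2   /* read permitted  */
#define PTE_WRITE 0x1   /* write permitted */

/* Returns true if an access with the given intent is allowed. */
static bool access_ok(int pte_perm, bool kernel_mode, bool is_write) {
    if ((pte_perm & PTE_SUP) && !kernel_mode)
        return false;                       /* SUP violation          */
    if (is_write)
        return (pte_perm & PTE_WRITE) != 0; /* needs write permission */
    return (pte_perm & PTE_READ) != 0;      /* needs read permission  */
}

int main(void) {
    int user_ro_page = PTE_READ;            /* user-readable, read-only */
    printf("user read : %s\n", access_ok(user_ro_page, false, false) ? "ok" : "fault");
    printf("user write: %s\n", access_ok(user_ro_page, false, true)  ? "ok" : "fault");
    return 0;
}
```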

Address Translation

When a page hit occurs, the CPU hardware performs the following steps:

Step 1: The processor generates a virtual address and sends it to the MMU.
Step 2: The MMU generates the PTE address and requests it from the cache/main memory.
Step 3: The cache/main memory returns the PTE to the MMU.
Step 4: The MMU constructs the physical address and sends it to the cache/main memory.
Step 5: The cache/main memory returns the requested data word to the processor.

Handling a page fault requires the hardware and the operating system to cooperate; both the hit and fault paths are modeled in the sketch after these steps:

Steps 1 through 3: same as above.
Step 4: The valid bit in the PTE is 0, so the MMU triggers an exception, transferring control in the CPU to the page fault exception handler in the operating system kernel.
Step 5: The handler identifies a victim page in physical memory; if that page has been modified, it is swapped out to disk.
Step 6: The handler pages in the new page and updates the PTE in memory.
Step 7: The handler returns to the original process, which re-executes the instruction that caused the page fault.
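
The following C sketch models both paths in software (the constants, the placeholder placement policy, and the function names are assumptions; the real work is done by the MMU hardware and the kernel's page fault handler):

```c
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_BITS   12                 /* assumed 4 KB pages         */
#define NUM_VPAGES  16                 /* assumed tiny address space */

typedef struct { bool valid; uint32_t ppn; } pte_t;
static pte_t page_table[NUM_VPAGES];

/* Model of the page-fault handler: choose a victim, page in, update PTE. */
static void handle_page_fault(uint32_t vpn) {
    printf("  page fault on VPN %u: paging in, updating PTE\n", vpn);
    page_table[vpn].valid = true;
    page_table[vpn].ppn   = vpn % 4;   /* placeholder placement policy */
}

/* Model of the MMU: translate a virtual address to a physical one. */
static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_BITS;              /* virtual page number */
    uint32_t vpo = vaddr & ((1u << PAGE_BITS) - 1); /* page offset         */
    if (!page_table[vpn].valid)                     /* fault case          */
        handle_page_fault(vpn);                     /* fault steps 5-7     */
    return (page_table[vpn].ppn << PAGE_BITS) | vpo; /* physical address   */
}

int main(void) {
    uint32_t va = 0x3ABC;                      /* VPN 3, offset 0xABC */
    printf("PA = 0x%x\n", translate(va));      /* first access faults */
    printf("PA = 0x%x\n", translate(va));      /* second access hits  */
    return 0;
}
```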

Combining cache and Virtual memory

The main idea: address translation takes place before the cache lookup, so the SRAM cache is accessed with physical addresses.

Using the TLB to accelerate address translation

Translation lookaside buffer (TLB): a small, virtually addressed cache in which each line holds a block consisting of a single PTE; it usually has a high degree of associativity.
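
As a small illustration (the page size and number of TLB sets are assumed values), the MMU forms the TLB set index from the low-order bits of the VPN and uses the remaining high-order bits as the tag:

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12        /* assumed 4 KB pages           */
#define TLB_SETS  16        /* assumed 16-set TLB, so t = 4 */

int main(void) {
    uint32_t vaddr = 0xDEADB000;
    uint32_t vpn   = vaddr >> PAGE_BITS;       /* virtual page number */
    uint32_t tlbi  = vpn % TLB_SETS;           /* TLB set index       */
    uint32_t tlbt  = vpn / TLB_SETS;           /* TLB tag             */
    printf("VPN = 0x%x, TLB index = %u, TLB tag = 0x%x\n", vpn, tlbi, tlbt);
    return 0;
}
```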

A common technique for compacting the page table is to use a hierarchy of page tables, which reduces memory requirements in two ways (the resulting address split is sketched after these two points):

First, it saves space: if a PTE in the level-1 page table is null, the corresponding level-2 page table does not exist at all.
Second, it reduces pressure on main memory: only the level-1 page table needs to reside in main memory at all times; only the frequently used level-2 page tables need to be cached in main memory.
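
The sketch below shows how a virtual address would be split for such a two-level walk (a 32-bit address, 4 KB pages, and two 10-bit page-table indices are assumptions for the example):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Assumed two-level layout: 10-bit VPN1, 10-bit VPN2, 12-bit offset. */
    uint32_t vaddr = 0x00C03ABC;
    uint32_t vpn1  = (vaddr >> 22) & 0x3FF;  /* index into level-1 table */
    uint32_t vpn2  = (vaddr >> 12) & 0x3FF;  /* index into level-2 table */
    uint32_t vpo   =  vaddr        & 0xFFF;  /* offset within the page   */
    printf("VPN1 = %u, VPN2 = %u, VPO = 0x%x\n", vpn1, vpn2, vpo);
    return 0;
}
```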

Linux Virtual memory system

1. Linux Virtual Memory Areas

Each existing virtual page is contained in some area; any virtual page that is not part of an area does not exist and cannot be referenced by the process.

2. The area structure for a specific area contains the following fields (a simplified C rendering follows the list):

(1) vm_start: points to the beginning of the area.
(2) vm_end: points to the end of the area.
(3) vm_prot: describes the read/write permissions for all of the pages contained in the area.
(4) vm_flags: describes whether the pages in the area are shared with other processes or private to this process (among other things).
(5) vm_next: points to the next area structure in the linked list.
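
A simplified C rendering of such an area structure, plus the kind of list walk used to decide whether an address is legal (the field types and the helper function are assumptions for illustration; the real kernel definition differs in detail):

```c
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of a Linux area structure (field types are assumed). */
struct vm_area {
    unsigned long   vm_start;  /* start of the area                       */
    unsigned long   vm_end;    /* end of the area                         */
    unsigned long   vm_prot;   /* read/write permissions for its pages    */
    unsigned long   vm_flags;  /* shared with other processes or private? */
    struct vm_area *vm_next;   /* next area structure in the list         */
};

/* Walk the area list to check whether an address falls in some area. */
static bool address_in_some_area(struct vm_area *head, unsigned long addr) {
    for (struct vm_area *a = head; a != NULL; a = a->vm_next)
        if (addr >= a->vm_start && addr < a->vm_end)
            return true;
    return false;
}

int main(void) {
    struct vm_area data = { 0x601000, 0x602000, 0, 0, NULL  };
    struct vm_area text = { 0x400000, 0x401000, 0, 0, &data };
    printf("0x400500 legal? %d\n", address_in_some_area(&text, 0x400500));
    printf("0x500000 legal? %d\n", address_in_some_area(&text, 0x500000));
    return 0;
}
```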

Memory mapping

1. Shared objects revisited

(1) Shared area:

A virtual memory area that a shared object is mapped into is called a shared area.

The key point about a shared object is that even if it is mapped into multiple shared areas, physical memory only needs to hold one copy of the shared object. The physical pages of a shared object are not necessarily contiguous.

(2) Private objects are mapped into virtual memory using a clever technique known as copy-on-write.

2. The fork function revisited

When the current process calls the fork function, the kernel creates the various data structures for the new process and assigns it a unique PID.
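
A small example of standard fork usage (not code from the textbook) shows the resulting private address spaces: the child's write is isolated from the parent, with copy-on-write deferring the actual copying until the write occurs:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 1;                       /* exists in both address spaces after fork */
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                  /* child: the write triggers copy-on-write */
        x = 100;
        printf("child : x = %d\n", x);
        exit(0);
    }
    waitpid(pid, NULL, 0);           /* parent waits for the child       */
    printf("parent: x = %d\n", x);   /* still 1: the spaces are private  */
    return 0;
}
```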

3. The execve function revisited

The execve function loads and runs the program contained in the executable object file a.out within the current process, effectively replacing the current program with the a.out program.
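
For example, a minimal call (assuming /bin/ls exists on the system) replaces the current program image with ls:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Assumed target program: /bin/ls must exist on this system. */
    char *argv[] = { "ls", "-l", NULL };
    char *envp[] = { NULL };

    execve("/bin/ls", argv, envp);   /* replaces this program on success */
    perror("execve");                /* only reached if execve fails     */
    return 1;
}
```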

4. User-level memory mapping with the mmap function

The mmap function asks the kernel to create a new virtual memory area, preferably one starting at address start, and to map a contiguous chunk of the object specified by file descriptor fd into that new area.
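
A minimal user-level example of standard POSIX mmap usage (not code from the textbook) maps a file read-only into a new area and writes its contents to standard output:

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        exit(1);
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); exit(1); }

    /* Map the whole file read-only into a new private area. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(1); }

    write(STDOUT_FILENO, p, st.st_size);   /* print the mapped bytes */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```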

20145306 "Information Security system Design Fundamentals" 14th Week Study Summary
