Information Security System Design Basics: 13th Week Study Summary - Lu Songhon


Chapter 9: Virtual Memory

Virtual memory is one of the most important concepts in computer systems; it is an abstraction of main memory.

Three key capabilities:

    • It treats main memory as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and main memory as needed; in this way it uses main memory efficiently
    • It provides each process with a consistent address space, simplifying memory management
    • It protects the address space of each process from being corrupted by other processes
9.1 Physical and Virtual Addressing

1. Physical address

The main memory of a computer system is organized as an array of M contiguous byte-sized cells, each with a unique physical address (PA).

Physical addressing means the CPU accesses memory directly by physical address.

2. Virtual Address

Virtual memory is organized as an array of N contiguous byte-sized cells stored on disk.

With virtual addressing, the CPU accesses main memory by generating a virtual address (VA), which is converted to the appropriate physical address before being sent to memory. This process is called address translation, and the associated hardware is the Memory Management Unit (MMU).

9.2 Address Space

1. Address space

An address space is an ordered collection of non-negative integer addresses:

{0,1,2,......}

2. Linear address space

Integers in the address space are contiguous.

3. Virtual address space

The CPU generates virtual addresses from an address space of N = 2^n addresses, which is called the virtual address space.

4. Size of address space

Described by the number of bits required to represent the maximum address.

N = 2^n: an n-bit address space

Each byte in main memory has a virtual address selected from the virtual address space and a physical address selected from the physical address space.

9.3 Virtual memory as a caching tool

Virtual memory is partitioned into virtual pages (VPs); each virtual page is P = 2^p bytes in size.

Physical memory is partitioned into physical pages (PPs), also called page frames, which are also P bytes in size.

At any one time, the collection of virtual pages is divided into three disjoint subsets:

    • Unallocated: the VM system has not yet allocated (created) the page; it occupies no disk space.
    • Cached: allocated pages currently cached in physical memory.
    • Uncached: allocated pages not cached in physical memory.

9.3.1 DRAM Cache Organization

Key points about this cache structure:

    • The miss penalty is very large
    • It is fully associative: any virtual page can be placed in any physical page
    • Replacement algorithms matter, since a poor choice of victim is expensive
    • It always uses write-back instead of write-through.
9.3.2 Page Table

A page table is a data structure that is stored in physical memory and maps a virtual page to a physical page.

A page table is an array of page table entries (PTEs), each consisting of:

a valid bit + an n-bit address field

1. If the valid bit is set:

The address field gives the starting position of the corresponding physical page in DRAM, in which the virtual page is cached.

2. If the valid bit is not set:

(1) Null address:

Indicates that the virtual page has not been allocated.

(2) Non-null address:

The address points to the starting position of the virtual page on disk.

9.3.3 Page Hit

9.3.4 Page Faults

Several definitions:

    • Page fault: a miss in the DRAM cache.
    • Page fault exception: invokes the kernel's page fault handler, which selects a victim page.
    • Page: virtual memory's customary term for a block.
    • Swapping (paging): the activity of transferring pages between disk and memory.
    • Demand paging: the policy of waiting until a miss occurs to swap in a page; used by all modern systems.
4. Locality in virtual memory

The principle of locality ensures that at any point in time the program tends to work on a smaller set of active pages called the working set (or resident set).

So as long as the program has good temporal locality, the virtual memory system works quite well.

What if locality is poor?

Thrashing: the size of the working set exceeds the size of physical memory, so pages are swapped in and out continuously.

9.4 Virtual memory as a tool for memory management
    • The operating system provides a separate page table for each process, and hence a separate virtual address space.
    • Multiple virtual pages can be mapped to the same shared physical page.
    • Memory mapping: the ability to map a contiguous set of virtual pages to any location in any file.

VM simplifies linking and loading, code and data sharing, and memory allocation for applications.

9.5 Virtual memory as a tool for memory protection

Here you need to know the three permission bits in a PTE:

    • SUP: indicates whether the process must be running in kernel mode to access the page
    • READ: read permission
    • WRITE: write permission
9.6 Address Translation

See the book for the specific notation.

Address translation is a mapping between the elements of an N-element virtual address space (VAS) and the elements of an M-element physical address space (PAS).

The page table base register (PTBR) points to the current page table.

The MMU uses the virtual page number (VPN) to select the appropriate PTE.

PPO = VPO: the physical page offset equals the virtual page offset.

1. On a page hit, the steps are:
    • The processor generates a virtual address and passes it to the MMU
    • The MMU generates a PTE address and requests it from the cache/main memory
    • The cache/main memory returns the PTE to the MMU
    • The MMU constructs the physical address and passes it to the cache/main memory
    • The cache/main memory returns the requested data to the processor.

2. On a page fault, the steps are:
    • The processor generates a virtual address and passes it to the MMU
    • The MMU generates a PTE address and requests it from the cache/main memory
    • The cache/main memory returns the PTE to the MMU
    • The valid bit in the PTE is 0, triggering a page fault exception
    • The handler determines the victim page
    • The handler pages in the new page and updates the PTE
    • Control returns to the original process, which re-executes the faulting instruction; this time it hits.

9.6.1 Combining Caches and Virtual Memory
    • In systems that use both SRAM caches and virtual memory, most systems choose physical addressing
    • The main idea is that address translation occurs before the cache lookup
    • Page table entries can be cached, just like any other data words
9.6.2 Using a TLB to Accelerate Address Translation

TLB: translation lookaside buffer, a small, virtually addressed cache in the MMU in which each line holds a block consisting of a single PTE.

Steps:

    • The CPU generates a virtual address
    • The MMU fetches the corresponding PTE from the TLB
    • The MMU translates the virtual address into a physical address and sends it to the cache/main memory
    • The cache/main memory returns the requested data word to the CPU
9.6.3 Multi-level page table

A multi-level page table is a hierarchical structure used to compress the page table.

1. For a two-level page table hierarchy, the benefits are:
    • If a PTE in the level-1 table is null, the corresponding level-2 page table does not exist at all
    • Only the level-1 page table needs to reside in main memory at all times; the virtual memory system can create, page in, or page out level-2 page tables as needed, keeping only the most frequently used ones in main memory
9.6.4 end-to-end address translation

For this section, work through the examples in the book.

9.7 Case Study

9.7.1 Core i7 Address Translation

In this case, the PTE has three privilege bits:

    • R/W bit: determines whether the page is read-only or read-write
    • U/S bit: determines whether the page can be accessed in user mode
    • XD bit: execute-disable bit, introduced with 64-bit systems; it can be used to disable instruction fetches from certain memory pages

There are also bits involved in page fault handling:

    • A bit: the reference bit, used to implement page replacement algorithms
    • D bit: the dirty bit, which tells whether a victim page must be written back
9.7.2 Linux Virtual Memory System

Linux maintains a separate virtual address space for each process.

Kernel virtual memory contains the code and data structures in the kernel.

A subset of its regions is mapped to physical pages shared by all processes; the other part contains data that differs for each process.

1. Linux virtual memory areas

Area: a contiguous chunk of allocated virtual memory.

Examples of Regions:

    • Code segment
    • Data segment
    • Heap
    • Shared library Segments
    • User stack
    • ......

Each virtual page that exists is contained in some area. The kernel maintains a separate task structure (task_struct) for each process in the system:

The area struct for a particular area includes:

    • vm_start: points to the beginning of the area
    • vm_end: points to the end of the area
    • vm_prot: describes the read/write permissions for all pages contained in the area
    • vm_flags: whether the area is shared or private
    • vm_next: points to the next area struct
2. Linux page fault exception handling

(1) Is the virtual address legal?

Illegal: trigger a segmentation fault and terminate the process.

Legal: go on to the next check.

(2) Is the memory access legal, i.e. does the process have permission?

Illegal: trigger a protection exception and terminate the process.

Legal: go on to the next step.

(3) At this point we have a legal operation on a legal virtual address, so: select a victim page, swap in the new page, and update the page table (writing the victim back if it has been modified).

9.8 Memory Mapping

Memory mapping is the process by which Linux initializes the contents of a virtual memory area by associating it with an object on disk.

Mapping objects:

1. Regular files in the Unix file system

2. Anonymous files (all binary zeros)

9.8.1 Shared Objects and Private Objects

1. Shared objects
    • A shared object is visible to all processes that map it into their virtual memory

    • Even if it is mapped into multiple shared areas, only one copy of the shared object needs to be stored in physical memory.

2. Private objects
    • Private objects use a technique called copy-on-write
    • Only one copy of the private object is saved in physical memory

The fork function is an application of the copy-on-write technique; the execve function likewise relies on memory mapping when loading a new program.

9.8.2 User-Level Memory Mapping with the mmap Function

1. Creating a new virtual memory area

#include <unistd.h>
#include <sys/mman.h>

void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
Returns a pointer to the mapped area on success, MAP_FAILED (-1) on error.

Parameter meaning:

    • start: the area starts at address start
    • fd: file descriptor
    • length: size of the contiguous object slice
    • offset: offset from the beginning of the file
    • prot: access permission bits, as follows:

      PROT_EXEC: pages consist of instructions the CPU may execute
      PROT_READ: pages may be read
      PROT_WRITE: pages may be written
      PROT_NONE: pages cannot be accessed
    • flags: bits describing the type of object being mapped, as follows:

      MAP_ANON: anonymous object; the virtual pages are binary zeros
      MAP_PRIVATE: private, copy-on-write object
      MAP_SHARED: shared object
2. Deleting a virtual memory area:

#include <sys/mman.h>

int munmap(void *start, size_t length);
Returns 0 on success, -1 on failure.

Deletes the area starting at start and consisting of the next length bytes.

9.9 Dynamic Memory Allocation

1. Heap:

The heap is a demand-zero region that begins immediately after the uninitialized data (bss) region and grows upward (toward higher addresses). A variable brk points to the top of the heap.

2. Two basic styles of allocator:

A. Explicit allocator: malloc and free

B. Implicit allocator (garbage collector)

9.9.1 The malloc and free Functions:

A program calls the malloc function to allocate blocks from the heap:

#include <stdlib.h>

void *malloc(size_t size);
Returns a pointer to a block of at least size bytes on success, NULL on failure.

A program calls the free function to release allocated heap blocks:

#include <stdlib.h>

void free(void *ptr);
No return value.

The ptr argument must point to the starting position of an allocated block obtained from malloc, calloc, or realloc.

Why use dynamic memory allocation?

Because the sizes of some data structures are often not known until the program actually runs.

9.9.2 Allocator Requirements and Goals:

1. Requirements
    • Handle arbitrary request sequences
    • Respond to requests immediately
    • Use only the heap
    • Align blocks
    • Do not modify allocated blocks
2. Objectives:
    • Maximize throughput (throughput: number of requests completed per unit time)
    • Maximize memory utilization, i.e. peak utilization
9.9.3 Fragmentation

Fragmentation occurs when there is unused memory that cannot be used to satisfy allocation requests.

1. Internal fragmentation

Occurs when an allocated block is larger than its payload.

Easy to quantify.

2. External fragmentation

Occurs when there is enough aggregate free memory to satisfy an allocation request, but no single free block is large enough to handle the request.

Difficult to quantify, unpredictable.

9.9.4 Implicit Free Lists

Format of heap blocks:

A block consists of a one-word header, the payload, and possibly some additional padding.

Free blocks are linked implicitly by the size fields in their headers; the allocator traverses the set of free blocks indirectly by traversing all of the blocks in the heap.

Required: a specially marked terminating (epilogue) block.

The system's alignment requirements and the allocator's choice of block format impose a minimum block size on the allocator.

9.9.5 Placing Allocated Blocks: Placement Strategies

1. First fit

Search the free list from the beginning and choose the first free block that fits

2. Next fit

Start the search where the previous search left off

3. Best fit

Examine every free block and choose the smallest free block that fits the requested size

9.9.6 Requesting Additional Heap Memory

Use the sbrk function:

#include <unistd.h>

void *sbrk(intptr_t incr);
Returns the old brk pointer on success, -1 on error.

It expands or shrinks the heap by adding incr to the kernel's brk pointer.

9.9.7 Coalescing Free Blocks

Coalescing addresses the problem of false fragmentation; any practical allocator must merge adjacent free blocks.

There are two kinds of strategies:

    • Immediate coalescing
    • Deferred coalescing
9.9.8 Coalescing with Boundary Tags

Because of the header, it is simple to coalesce with the following block but inconvenient to coalesce with the preceding one. Boundary tags solve this: add a footer at the end of each block as a copy of the header, which makes coalescing with the preceding block easy. There are four specific cases, depending on whether the preceding and following blocks are allocated or free.

Free blocks always need the footer.

9.9.9 Implementing a Simple Allocator

The book gives a detailed example of a simple allocator design; a few points to note:

    • Prologue block and epilogue block: the prologue block is created at initialization and never freed; the epilogue block is a special block that always marks the end of the heap.
    • Operations that are complex, repetitive, and reused can be defined as macros, which are easy to use and easy to modify.
    • Be careful with type casts, especially those involving pointers; they are subtle.
    • Because alignment is specified as double words, block sizes are rounded up to the nearest integer multiple of a double word.
9.9.10 Explicit Free Lists

1. Differences

(1) Allocation time

With implicit lists, allocation time is linear in the total number of blocks.

With explicit lists, it is linear in the number of free blocks.

(2) List structure

Implicit: the implicit free list.

Explicit: a doubly linked list with predecessor and successor pointers, which works better than headers and footers alone.

2. Ordering policies:
    • Last-in first-out (LIFO) order
    • Address order
9.9.11 Segregated Free Lists

Segregated storage is a popular way to reduce allocation time. The general idea is to partition all possible block sizes into equivalence classes called size classes.

The allocator maintains an array of free lists, one per size class, ordered by increasing size.

There are two basic ways of doing this:

1. Simple segregated storage

The free list for each size class contains blocks of equal size; each block is the size of the largest element in the size class.

(1) Operation

If the list is non-empty: allocate the whole first block.

If the list is empty: the allocator requests a fixed-size chunk of additional memory from the operating system, divides it into equal-sized blocks, and links them together into a new free list.

(2) Advantages and disadvantages

Advantages: fast, low overhead

Disadvantages: prone to internal and external fragmentation

2. Segregated fits

Each free list is associated with a size class and is organized as some kind of explicit or implicit list; each list contains potentially different-sized blocks that are members of its size class.

This method is fast and memory-efficient.

3. Buddy systems: a special case of segregated fits

Each size class is a power of 2.

Thus, given the address and size of a block, it is easy to compute the address of its buddy; that is, the address of a block and the address of its buddy differ in only one bit.

Advantages: fast searching, fast coalescing.

9.10 Garbage Collection

A garbage collector is a dynamic storage allocator that automatically frees allocated blocks the program no longer needs, called garbage; the process of automatically reclaiming heap storage is called garbage collection.

1. Basic knowledge

The garbage collector views memory as a directed reachability graph. A node p is reachable only if there is a directed path from some root node to p; unreachable nodes are garbage.

2. Mark&Sweep garbage collectors

There are two stages:

    • Mark: mark all reachable and allocated successors of the root nodes
    • Sweep: free every allocated block that is not marked.

Related functions:

ptr is defined as: typedef void *ptr
    • ptr isPtr(ptr p): if p points to some word in an allocated block, returns a pointer b to the starting position of that block; otherwise returns NULL
    • int blockMarked(ptr b): returns true if block b is already marked
    • int blockAllocated(ptr b): returns true if block b is allocated
    • void markBlock(ptr b): marks block b
    • int length(ptr b): returns the length of block b in words, excluding the header
    • void unmarkBlock(ptr b): changes the state of block b from marked to unmarked
    • ptr nextBlock(ptr b): returns the successor of block b in the heap

3. Conservative Mark&Sweep for C: balanced binary trees

The root cause is that the C language does not tag memory locations with type information.

9.11 Common Memory-Related Errors in C Programs

1. Dereferencing bad pointers

Common error:

-- passing a value to scanf where a pointer is expected (e.g. forgetting the &)

2. Reading uninitialized memory

Common errors:

-- assuming that heap memory is initialized to zero

3. Allowing stack buffer overflows

Common errors:

--Buffer Overflow error

4. Assuming that pointers and the objects they point to are the same size

This causes "action at a distance" bugs.

5. Off-by-one errors

6. Referencing a pointer instead of the object it points to

7. Misunderstanding pointer arithmetic

8. Referencing nonexistent variables

9. Referencing data in free heap blocks

10. Introducing memory leaks

Resources:

1. Computer Systems: A Programmer's Perspective, Chapter 9

2. Blog: http://www.cnblogs.com/20135202yjx/p/5040711.html

