20135234 Ma Qiyang - Information Security System Design Basics: Week 14 Study Summary


Chapter 9: Virtual Memory

Main functions:

    1. It treats main memory as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and main memory as needed;
    2. It simplifies memory management by providing each process with a uniform address space;
    3. It protects the address space of each process from corruption by other processes.
9.1 Physical and Virtual Addressing

1. Physical Address

The main memory of a computer system is organized as an array of M contiguous byte-sized cells, each with a unique physical address (PA).

Physical addressing means accessing memory using these physical addresses directly.

2. Virtual Address

Virtual memory is organized as an array of N contiguous byte-sized cells stored on disk.

With virtual addressing, the CPU accesses main memory by generating a virtual address (VA), which is converted to the appropriate physical address before being sent to the memory.

9.2 Address Space

1. Address Space

An address space is an ordered set of non-negative integer addresses.

2. Linear Address Space

Integers in the address space are contiguous.

3. Virtual address space

The CPU generates virtual addresses from an address space of N = 2^n addresses, which is known as the virtual address space.

4. Size of address space

Described by the number of bits required to represent the maximum address.

N = 2^n: an n-bit address space

Each byte in main memory has a virtual address chosen from the virtual address space and a physical address chosen from the physical address space.

9.3 Virtual memory as a caching tool

Virtual memory is partitioned into virtual pages (VPs), each of size P = 2^p bytes.

Physical memory is partitioned into physical pages (PPs), also called page frames, also P bytes in size.

At any point in time, the set of virtual pages is partitioned into three disjoint subsets:

Unallocated: pages the VM system has not yet allocated (created); they occupy no disk space.

Cached: allocated pages currently cached in physical memory.

Uncached: allocated pages not cached in physical memory.

9.3.1 DRAM Cache Organizational Structure

Key properties of this cache structure:

The miss penalty is very large.

It is fully associative: any virtual page can be placed in any physical page.

Replacement algorithms are sophisticated.

It always uses write-back instead of write-through.

9.3.2 Page Tables

A page table is a data structure that is stored in physical memory and maps a virtual page to a physical page.

A page table is an array of page table entries (PTEs).

9.3.3 Page Faults

Several definitions:

Page fault: a miss in the DRAM cache.

Page fault exception: invokes the kernel's page fault handler, which selects a victim page.

Page: virtual memory's traditional term for a block.

Swapping (paging): the activity of transferring pages between disk and memory.

Demand paging: the strategy of waiting until a miss occurs to swap a page in; used by all modern systems.

9.4 Virtual memory as a tool for memory management

The operating system provides each process with a separate page table, and thus a separate virtual address space.

Multiple virtual pages can be mapped to the same shared physical page.

Memory mapping: the notion of mapping a set of contiguous virtual pages to an arbitrary location in an arbitrary file.

9.5 Virtual memory as a tool for memory protection

Three permission bits in the PTE matter here:

SUP: indicates whether the process must be running in kernel (supervisor) mode to access the page

READ: read permission

WRITE: write permission

9.6 Address Translation

Address translation is a mapping between the elements of an N-element virtual address space (VAS) and the elements of an M-element physical address space (PAS).

The page table base register (PTBR) points to the current page table.

The MMU uses the VPN (virtual page number) to select the appropriate PTE.

PPO = VPO: the physical page offset equals the virtual page offset.
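This VPN/VPO split can be sketched in a few lines of Python. The parameters below are illustrative assumptions (4 KiB pages, a hand-built VPN-to-PPN mapping), not a real MMU:

```python
PAGE_SHIFT = 12                       # p: log2 of the page size
PAGE_SIZE = 1 << PAGE_SHIFT           # P = 2^p = 4096 bytes

page_table = {0x2: 0x7, 0x3: 0x1}     # toy VPN -> PPN mapping

def translate(va):
    """Split the VA into (VPN, VPO), look up the PPN, rebuild the PA."""
    vpn = va >> PAGE_SHIFT            # virtual page number
    vpo = va & (PAGE_SIZE - 1)        # virtual page offset
    ppn = page_table[vpn]             # the MMU reads this from the PTE
    return (ppn << PAGE_SHIFT) | vpo  # PPO = VPO, so the offset is reused

assert translate(0x2ABC) == 0x7ABC    # VPN 0x2 -> PPN 0x7, offset kept
```

Because the page size is a power of two, the split is just a shift and a mask; the offset bits never change during translation.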

1. On a page hit, the hardware performs the following steps:

The processor generates a virtual address and passes it to the MMU.

The MMU generates a PTE address and requests it from the cache/main memory.

The cache/main memory returns the PTE to the MMU.

The MMU constructs the physical address and passes it to the cache/main memory.

The cache/main memory returns the requested data to the processor.

2. On a page fault:

The processor generates a virtual address and passes it to the MMU.

The MMU generates a PTE address and requests it from the cache/main memory.

The cache/main memory returns the PTE to the MMU.

The valid bit in the PTE is 0, triggering a page fault exception.

The fault handler selects a victim page (paging it out to disk if it has been modified).

The handler pages in the new page and updates the PTE.

It then returns to the original process, which re-executes the faulting instruction; this time it hits.

9.6.1 Combining Caches and Virtual Memory

First, in systems that use both SRAM caches and virtual memory, most systems choose physical addressing.

The main idea is that address translation occurs before the cache lookup.

Page table entries can be cached, just like any other data words.

9.6.2 Using the TLB to Accelerate Address Translation

TLB: the translation lookaside buffer, a small cache in the MMU in which each line holds a block consisting of a single PTE.

Steps:

The CPU generates a virtual address.

The MMU fetches the corresponding PTE from the TLB.

The MMU translates the virtual address into a physical address and sends it to the cache/main memory.

The cache/main memory returns the requested data word to the CPU.
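The hit/miss behavior behind these steps can be modeled with a toy TLB. The capacity, FIFO eviction, and class name here are illustrative assumptions; a real TLB is a small set-associative hardware cache inside the MMU:

```python
class TinyTLB:
    """Toy TLB: caches VPN -> PPN so a hit skips the memory PTE fetch."""
    def __init__(self, page_table, capacity=4):
        self.page_table = page_table  # backing page table in "memory"
        self.capacity = capacity
        self.entries = {}             # cached VPN -> PPN translations
        self.hits = self.misses = 0

    def lookup(self, vpn):
        if vpn in self.entries:       # TLB hit: PTE served from the TLB
            self.hits += 1
            return self.entries[vpn]
        self.misses += 1              # TLB miss: fetch the PTE from memory
        ppn = self.page_table[vpn]
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # FIFO eviction
        self.entries[vpn] = ppn
        return ppn

tlb = TinyTLB({0: 5, 1: 9})
tlb.lookup(0)                         # miss: loads the PTE into the TLB
assert tlb.lookup(0) == 5             # hit: no memory access needed
assert (tlb.hits, tlb.misses) == (1, 1)
```

The point of the model is the counter: repeated accesses to the same page pay the PTE-fetch cost only once.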

9.6.3 Multi-level Page Tables

A multi-level page table is a hierarchical structure used to compress the page table.

1. Taking a two-level page table hierarchy as an example, the benefits are:

If a PTE in the level-1 page table is null, the corresponding level-2 page table does not exist at all.

Only the level-1 page table needs to be resident in main memory at all times; the virtual memory system can create, page in, and page out level-2 page tables as needed, keeping only the most heavily used level-2 tables in main memory.
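Both benefits show up in a toy two-level walk, where absent level-1 entries mean no level-2 table was ever created. The 9-bit per-level indices and the tables below are assumptions for illustration:

```python
L1_BITS = L2_BITS = 9                 # index bits per level (illustrative)
PAGE_SHIFT = 12                       # 4 KiB pages

l2_table = {0x1A: 0x99}               # the only populated level-2 table
l1_table = {0x3: l2_table}            # every other level-1 entry is absent

def walk(va):
    """Two-level page walk: L1 index, then L2 index, then page offset."""
    l1 = (va >> (PAGE_SHIFT + L2_BITS)) & ((1 << L1_BITS) - 1)
    l2 = (va >> PAGE_SHIFT) & ((1 << L2_BITS) - 1)
    vpo = va & ((1 << PAGE_SHIFT) - 1)
    l2t = l1_table.get(l1)
    if l2t is None:                   # null L1 entry: no L2 table at all
        raise KeyError("unmapped region")
    return (l2t[l2] << PAGE_SHIFT) | vpo

assert walk(0x61A005) == 0x99005      # l1=0x3, l2=0x1A, offset=0x005
```

Unmapped regions cost nothing here beyond the single absent level-1 entry, which is exactly the compression the text describes.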

9.8 Memory Mapping

9.8.1 Shared Objects and Private Objects

1. Shared Objects

A shared object is visible to every process that maps it into its virtual memory.

Even if it is mapped into multiple shared areas, only one copy of the shared object needs to be stored in physical memory.

2. Private Objects

Private objects use a technique called copy-on-write.

Only one copy of the private object is saved in the physical memory

9.9 Dynamic Memory Allocation

9.9.3 Allocator Requirements and Goals

1. Requirements

Processing arbitrary request sequences

Respond to requests immediately

Use only the heap

Align blocks

Do not modify allocated blocks

2. Objectives:

Maximize throughput (the number of requests completed per unit of time)

Maximize memory utilization, i.e., maximize peak utilization

9.9.4 Fragmentation

Fragmentation occurs when there is unused memory that cannot be used to satisfy allocation requests.

1. Internal fragmentation

Occurs when an allocated block is larger than its payload.

Easy to quantify.
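Quantifying it is just a subtraction per block, as this small sketch shows (the numbers are illustrative):

```python
def internal_frag(block_size, payload):
    """Bytes lost inside an allocated block to headers and padding."""
    return block_size - payload

# e.g. a 25-byte request served by a 32-byte block wastes 7 bytes
assert internal_frag(32, 25) == 7
```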

2. External fragmentation

Occurs when there is enough aggregate free memory to satisfy an allocation request, but no single free block is large enough to handle the request.

Difficult to quantify, unpredictable.

9.9.5 Implicit Free Lists

Format of heap blocks: a one-word header, the payload, and possibly some additional padding.
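Because block sizes are kept aligned, the low bits of the size are always zero, so the header can pack the size and an allocated flag into one word. A minimal sketch of that encoding (the 8-byte alignment is an assumption):

```python
ALIGN = 8                             # assumed double-word alignment

def pack(size, alloc):
    """Header word: aligned size in the high bits, allocated bit in bit 0."""
    assert size % ALIGN == 0          # low bits of size must be free
    return size | alloc

def get_size(header):
    return header & ~(ALIGN - 1)      # mask off the low alignment bits

def is_alloc(header):
    return header & 0x1

h = pack(24, 1)
assert (get_size(h), is_alloc(h)) == (24, 1)
```

One word thus answers both questions the allocator asks while scanning the list: how big is this block, and is it free?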

9.9.7 Placing an Allocated Block

1. First fit

Searches the free list from the beginning and chooses the first free block that fits.

2. Next fit

Starts each search where the previous search left off.

3. Best Fit

Examines every free block and chooses the smallest free block that fits the requested size.
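The policies differ only in which fitting block they pick. This sketch scans a plain list of free-block sizes rather than real heap blocks, which is an illustrative simplification:

```python
def first_fit(free_sizes, req):
    """Index of the first free block that fits, or None."""
    return next((i for i, s in enumerate(free_sizes) if s >= req), None)

def best_fit(free_sizes, req):
    """Index of the smallest free block that fits, or None."""
    fits = [(s, i) for i, s in enumerate(free_sizes) if s >= req]
    return min(fits)[1] if fits else None

free_sizes = [8, 32, 16, 64]
assert first_fit(free_sizes, 16) == 1  # first block >= 16 is the 32
assert best_fit(free_sizes, 16) == 2   # smallest block >= 16 is the 16
```

The example makes the trade-off concrete: first fit stops early (fast), while best fit scans everything to leave the least internal waste.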

9.9.10 Coalescing Free Blocks

Coalescing combats false fragmentation; any practical allocator must coalesce adjacent free blocks.

There are two kinds of strategies:

Immediate coalescing

Deferred coalescing

9.9.13 Explicit Free Lists

1. Differences

(1) Allocation time

With implicit lists, allocation time is linear in the total number of blocks.

With explicit lists, it is linear in the number of free blocks.

(2) List form

Implicit: implicit free list.

Explicit: doubly linked list, with predecessor and successor pointers in each free block.

2. Ordering strategies:

Last-in, first-out (LIFO) order

Address order

9.9.14 Segregated Free Lists

Segregated storage is a popular way to reduce allocation time. The general idea is to partition the set of all possible block sizes into equivalence classes called size classes.

The allocator maintains an array of free lists, one per size class, ordered by increasing size.

There are two basic ways of doing this:

1. Simple segregated storage

The free list for each size class contains same-size blocks, each the size of the largest element in that size class.

2. Segregated fits

Each free list is associated with a size class and is organized as some kind of explicit or implicit list; each list contains potentially different-sized blocks whose sizes are members of that size class.

This method is fast and efficient for memory use.
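Part of the speed comes from finding the right free list with a quick size-class computation. Power-of-two class boundaries are assumed here for illustration; real allocators choose their classes differently:

```python
def size_class(req):
    """Index k of the smallest power-of-two class with 2^k >= req."""
    k = 0
    while (1 << k) < req:
        k += 1
    return k

assert size_class(24) == 5            # 24 bytes falls in the 32-byte class
assert size_class(32) == 5            # exact powers stay in their own class
```

The allocator then searches only the list at index k (and larger classes if that one is empty), instead of every free block in the heap.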

3. Buddy systems

A buddy system is a form of segregated fits in which each size class is a power of 2.

Thus, given the address and size of a block, it is easy to compute the address of its buddy: the addresses of a block and its buddy differ in exactly one bit.

Advantages: fast searching and fast coalescing.
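"Differ in exactly one bit" means the buddy's address is one XOR away, which is what makes the searching and coalescing fast. A sketch, assuming block addresses are aligned to the block size:

```python
def buddy_addr(addr, size):
    """Address of the buddy of a size-byte block (size a power of two)."""
    assert size & (size - 1) == 0 and addr % size == 0
    return addr ^ size                # flips bit log2(size) of the address

assert buddy_addr(0x1000, 0x400) == 0x1400
assert buddy_addr(0x1400, 0x400) == 0x1000  # buddies pair up symmetrically
```

When a block is freed, the allocator computes this address and, if the buddy is also free, merges the pair into a block of twice the size, repeating upward.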

9.10 Garbage Collection

A garbage collector is a dynamic storage allocator that automatically frees allocated blocks that the program no longer needs (called garbage); the process of automatically reclaiming heap storage is called garbage collection.

 
