20135205 Information Security System Design Foundation 14th Week study Summary


Chapter 9: Virtual Memory

Three important capabilities of virtual memory:

It treats main memory as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and main memory as needed. In this way, it uses main memory efficiently.

It provides each process with a consistent address space, which simplifies memory management.

It protects each process's address space from corruption by other processes.

1. Physical and virtual addressing

The main memory of a computer system is organized as an array of M contiguous byte-sized cells.

Each byte has a unique physical address.

The address of the first byte is 0, the next byte address is 1, the next is 2, and so on.

Given this simple structure, the most natural way for the CPU to access the memory is to use the physical address. We refer to this approach as physical addressing.

Modern processors use a form of addressing called virtual addressing. With virtual addressing, the CPU accesses main memory by generating a virtual address, which is converted to the appropriate physical address before being sent to memory.

2. Address Space

An address space is an ordered set of non-negative integer addresses: {0, 1, 2, ...}

Linear address space: the integers in the address space are consecutive.

Virtual address space: in a system with n-bit addresses, the CPU generates virtual addresses from an address space of N = 2^n addresses, which is called the virtual address space.

Physical address space: corresponds to the M bytes of physical memory in the system: {0, 1, 2, ..., M-1}

3. Virtual memory as a tool for caching

1) Unallocated: the VM system has not yet allocated (created) the page; it occupies no disk space.

2) Cached: allocated pages that are currently cached in physical memory.

3) Not cached: allocated pages that are not cached in physical memory.

The organizational structure of the DRAM cache:

The term DRAM cache is used to denote the cache of the virtual memory system, which caches virtual pages in main memory.

Because the miss penalty is large and the overhead of accessing the first byte is high, virtual pages tend to be large.

The DRAM cache is fully associative: any virtual page can be placed in any physical page.

The replacement policy on a miss also matters greatly, because the penalty for evicting the wrong virtual page is very high (it always costs a long disk access). Operating systems therefore use sophisticated replacement algorithms, and DRAM caches always use write-back rather than write-through.

Page table: a data structure stored in physical memory that maps virtual pages to physical pages; it is an array of page table entries (PTEs).

The page table consists of PTEs, each containing a valid bit and an n-bit address field.

Page fault: a DRAM cache miss.

Page fault exception: invokes the kernel's page-fault handler, which selects a victim page.

Swapping (paging): the activity of transferring pages between disk and memory.

Locality: the principle of locality guarantees that at any point in time, a program tends to work on a small set of active pages, known as the working set or resident set.

If the size of the working set exceeds the size of physical memory, the program produces an unfortunate state known as thrashing.

4. Virtual memory as a tool for memory management

1) Simplified linking: a separate address space allows each process's memory image to use the same basic format, regardless of where the code and data actually reside in physical memory.
2) Simplified loading: virtual memory makes it easy to load executable and shared object files into memory.
3) Simplified sharing: separate address spaces provide a consistent mechanism for the operating system to manage sharing between user processes and the operating system itself.
4) Simplified memory allocation: virtual memory provides a simple mechanism for allocating additional memory to user processes.

5. As a memory protection tool

Any modern computer system must provide the operating system with the means to control access to the memory system.

The address translation mechanism extends in a natural way to provide better access control. Each time the CPU generates an address, the address translation hardware reads a PTE, so it is easy to control access to the contents of a virtual page by adding some additional permission bits to the PTE.

Each PTE has three additional permission bits:

SUP: Indicates whether the process must be running in kernel mode to access the page.

READ and WRITE: control read and write access to the page, respectively.

6. Address Translation

Formally, address translation is a mapping between the elements of an N-element virtual address space (VAS) and the elements of an M-element physical address space (PAS): MAP: VAS → PAS ∪ {∅}

MAP(A) = A', if the data at virtual address A is at physical address A' in PAS.
MAP(A) = ∅, if the data at virtual address A is not in physical memory.

When a page hits, the CPU hardware performs these steps:

1) The processor generates a virtual address and passes it to the MMU.

2) The MMU generates the PTE address and requests it from the cache/main memory.

3) The cache/main memory returns the PTE to the MMU.

4) The MMU constructs the physical address and passes it to the cache/main memory.

5) The cache/main memory returns the requested data to the processor.

When handling a page fault, the hardware and the kernel perform these steps:

1) The processor generates a virtual address and passes it to the MMU.

2) The MMU generates the PTE address and requests it from the cache/main memory.

3) The cache/main memory returns the PTE to the MMU.

4) The valid bit in the PTE is 0, so the MMU triggers a page-fault exception.

5) The fault handler determines the victim page.

6) The handler pages in the new page and updates the PTE.

7) The handler returns to the original process, which re-executes the faulting instruction; this time it hits.

End-to-end address translation (an example with these parameters):

Memory is byte-addressable.

Memory accesses are to 1-byte words (not 4-byte words).

Virtual addresses are 14 bits long (n = 14).

Physical addresses are 12 bits long (m = 12).

The page size is 64 bytes (P = 64).

The TLB is four-way set associative, with 16 entries in total.

The L1 d-cache is physically addressed and direct-mapped, with 4-byte lines and 16 sets.

7. Intel Core i7/Linux Memory System

An Intel Core i7 running Linux is based on the Nehalem microarchitecture. While the Nehalem design allows for full 64-bit virtual and physical address spaces, current and foreseeable Core i7 implementations support a 48-bit (256 TB) virtual address space and a 52-bit (4 PB) physical address space, as well as a compatibility mode that supports 32-bit (4 GB) virtual and physical address spaces.

Processor package: includes four cores, a large L3 cache shared by all cores, and a DDR3 memory controller.

Linux virtual memory area

Linux organizes virtual memory as a collection of areas (also called segments). An area is a contiguous chunk of already-existing (allocated) virtual memory whose pages are related in some way. Code segments, data segments, the heap, shared library segments, and the user stack are all distinct areas.

8. Memory Mapping

Memory mapping: the process by which Linux initializes the contents of a virtual memory area by associating it with an object on disk.

Shared objects: a shared object is visible to every process that maps it into its virtual memory. Even if it is mapped into multiple shared areas, physical memory needs to hold only one copy of the shared object.

Private objects: private objects use a technique called copy-on-write; only one copy of a private object is kept in physical memory.

The mmap function asks the kernel to create a new virtual memory area.

The prot argument contains the access-permission bits that describe the new mapped virtual memory area:

PROT_EXEC: pages in this area consist of instructions that the CPU may execute.
PROT_READ: pages in this area are readable.
PROT_WRITE: pages in this area are writable.
PROT_NONE: pages in this area cannot be accessed.

9. Dynamic Memory allocation

Heap: a demand-zero area that begins immediately after the uninitialized bss area and grows upward (toward higher addresses). A variable brk points to the top of the heap.

Explicit allocator: requires the application to explicitly free any allocated blocks.

Implicit allocator: requires the allocator to detect when an allocated block is no longer being used by the program, and then free the block. An implicit allocator is also called a garbage collector, and the process of automatically freeing unused allocated blocks is called garbage collection.

The malloc and free functions:

The program allocates blocks from the heap by calling the malloc function.

#include <stdlib.h>

void *malloc(size_t size);

Returns a pointer to a memory block of at least size bytes on success, or NULL on failure.

This block will be aligned for any type of data object that might be contained within the block.

You can also use the sbrk function:

#include <unistd.h>

void *sbrk (intptr_t incr);

Returns: the old brk pointer on success, or (void *) -1 on error.

The program frees the allocated heap blocks by calling the free function:

#include <stdlib.h>

void free (void *ptr);

return: None

The ptr argument must point to the beginning of an allocated block obtained from malloc, calloc, or realloc.

If not, then the behavior of free is undefined.

Requirements and goals of the allocator

Requirements:

A. Handling arbitrary request sequences

B. Responding to requests immediately

C. Using only the heap

D. Aligning blocks (alignment requirement)

E. Not modifying allocated blocks

Goal:

A. Maximizing throughput

B. Maximizing memory utilization (maximizing peak utilization)

10. Garbage collection

void garbage()
{
    int *p = (int *)malloc(15213);
    return;   /* p is lost on return; the block can never be freed */
}

garbage collector:

is a dynamic storage allocator that automatically frees allocated blocks that are no longer needed by the program. These blocks are called garbage. The process of automatically reclaiming heap storage is called garbage collection.

The following functions are used in the description of Mark&Sweep, where ptr is defined as typedef void *ptr:

ptr isPtr(ptr p): if p points to some word in an allocated block, returns a pointer b to the beginning of that block; returns NULL otherwise.
int blockMarked(ptr b): returns true if block b is already marked.
int blockAllocated(ptr b): returns true if block b is allocated.
void markBlock(ptr b): marks block b.
int length(ptr b): returns the length of block b in words, excluding the header.
void unmarkBlock(ptr b): changes the status of block b from marked to unmarked.
ptr nextBlock(ptr b): returns the successor of block b in the heap.

reference:

"Computer Systems: A Programmer's Perspective" (深入理解计算机系统)

