Week 14 Study Summary: Foundations of Information Security System Design


20145336 Zhang Ziyang, "Foundations of Information Security System Design", Week 14 Study Summary

Learning goals
    • Understanding the concept and role of virtual memory
    • Understanding the concept of address translation
    • Understanding Memory Mappings
    • Mastering the method of dynamic memory allocation
    • Understand the concept of garbage collection
    • Understanding memory-related errors in the C language
Summary of Textbook Content: Chapter 9, Virtual Memory

Three important capabilities of virtual memory

    1. It treats main memory as a cache for an address space stored on disk, keeping only the active areas in main memory and transferring data back and forth between disk and main memory as needed.
    2. It simplifies memory management by providing each process with a consistent address space.
    3. It protects the address space of each process from corruption by other processes.

Two kinds of addresses:

    • Physical address: the main memory of a computer system is organized as an array of M contiguous byte-sized cells, each with a unique physical address (PA).

    • Virtual address: virtual memory is organized as an array of N contiguous byte-sized cells stored on disk. With virtual addressing, the CPU accesses main memory by generating a virtual address (VA), which is translated to the appropriate physical address before being sent to memory.

Address space

    • Linear address space: the integers in the address space are consecutive.
    • Virtual address space: the CPU generates virtual addresses from an address space of N = 2^n addresses, called the virtual address space.
    • Size of an address space: described by the number of bits needed to represent the largest address.
    • Physical address space: corresponds to the M bytes of physical memory in the system.
    • The basic idea of virtual memory: each byte of main memory has a virtual address chosen from the virtual address space and a physical address chosen from the physical address space.

Virtual memory as a tool for caching

At any one time, the collection of virtual pages is divided into three disjoint subsets:

Unallocated: pages the VM system has not yet allocated (or created); they occupy no disk space.

Cached: allocated pages that are currently cached in physical memory.

Uncached: allocated pages that are not cached in physical memory.

    • Virtual memory is partitioned into virtual pages (VPs), each P = 2^p bytes in size.
    • Physical memory is partitioned into physical pages (PPs), also called page frames, also P bytes in size.

Characteristics of the organizational structure of DRAM caches

    1. The miss penalty is very large.
    2. Fully associative: any virtual page can be placed in any physical page.
    3. Sophisticated replacement algorithms are used.
    4. Always write-back rather than write-through.

Page table

    • A page table is a data structure that is stored in physical memory and maps a virtual page to a physical page.

A page table is an array of page table entries (PTEs), each consisting of a valid bit plus an n-bit address field.

If the valid bit is set: the address field gives the start of the corresponding physical page in DRAM where the virtual page is cached.

If the valid bit is not set: (1) a null address means the virtual page has not yet been allocated; (2) a non-null address points to the start of the virtual page on disk.

Virtual memory as a tool for memory management

    • The operating system provides each process with a separate page table, and thus a separate virtual address space.
    • Multiple virtual pages can be mapped to the same shared physical page.
    • Memory mapping: the notion of mapping a set of contiguous virtual pages to an arbitrary location in an arbitrary file.

Virtual memory as a tool for memory protection

Three permission bits in a PTE:

    • SUP: indicates whether the process must be running in kernel (supervisor) mode to access the page
    • READ: read permission
    • WRITE: write permission

Address Translation

    • Address translation is a mapping between the elements of an N-element virtual address space (VAS) and an M-element physical address space (PAS).
    • Page table base register (PTBR): a control register in the CPU that points to the current page table. An n-bit virtual address has two parts: a p-bit virtual page offset (VPO) and an (n-p)-bit virtual page number (VPN). The MMU uses the VPN to select the appropriate PTE. Concatenating the physical page number (PPN) from the page table entry with the VPO from the virtual address gives the corresponding physical address.

    • Page hits are handled entirely in hardware; page faults require the hardware and the OS kernel to cooperate.

Combining cache and Virtual memory

    • Address translation occurs before the cache lookup, and page table entries can be cached, just like any other data words.

Use TLB to accelerate address translation

    • TLB: the translation lookaside buffer is a small, virtually addressed cache in which each line holds a block consisting of a single PTE.

Multi-level page table

    • Multi-level page tables: a hierarchical structure used to compress the page table. If a PTE in the level-1 page table is null, the corresponding level-2 page table need not exist at all. Only the level-1 page table must always reside in main memory; the virtual memory system can create, page in, or page out level-2 page tables as needed, keeping only the most heavily used ones in main memory.

Storage mapping

Memory mapping: the process by which Linux initializes the contents of a virtual memory area by associating it with an object on disk.

    • Shared objects

A shared object is visible to all processes that map it into their virtual memory.

Even if it is mapped into multiple shared areas, only one copy of a shared object needs to be stored in physical memory.

    • Private objects

Technique used for private objects: copy-on-write.

Only one copy of a private object is kept in physical memory.

Fork function

    1. When the current process calls the fork function, the kernel creates various data structures for the new process and assigns it a unique PID. To create virtual memory for the new process, it makes exact copies of the current process's mm_struct, area structures, and page tables. It marks every page in both processes as read-only and marks every area in both processes as private copy-on-write.

    2. When fork returns in the new process, the new process has an exact copy of the virtual memory that existed when fork was called. When either process later performs a write, the copy-on-write mechanism creates new pages, preserving the abstraction of a private address space for each process.

Execve function

The process of loading an a.out program into memory using the execve function:

execve("a.out", NULL, NULL);

Loading and running: (1) delete the existing user areas (2) map the private areas (3) map the shared areas (4) set the program counter (PC).

User-level memory mapping with the mmap function

Create a new virtual storage area

    #include <unistd.h>
    #include <sys/mman.h>

    void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);

Parameter meaning:

    • start: the area starts at address start
    • fd: file descriptor
    • length: size of the contiguous object chunk
    • offset: offset from the beginning of the file
    • prot: access permission bits

PROT_EXEC: pages consist of instructions that the CPU may execute

PROT_READ: readable

PROT_WRITE: writable

PROT_NONE: cannot be accessed

    • flags: consists of bits describing the type of object being mapped, as follows:

MAP_ANON: anonymous object; the virtual pages are binary zeros

MAP_PRIVATE: private, copy-on-write object

MAP_SHARED: shared object

To delete a mapped virtual memory area:

    #include <unistd.h>
    #include <sys/mman.h>

    int munmap(void *start, size_t length);

Returns 0 on success, -1 on failure.

Deletes the area starting at address start and consisting of the next length bytes.

Dynamic memory allocation

When additional virtual memory is needed at run time, a dynamic memory allocator maintains an area of the process's virtual memory known as the heap.

There are two styles of allocators:

    • Explicit allocator: requires the application to explicitly free any allocated blocks.
    • Implicit allocator: requires the allocator to detect when an allocated block is no longer being used by the program and to free it. Also called a garbage collector.

    • malloc function

Programs call the malloc function to allocate blocks from the heap:

    #include <stdlib.h>

    void *malloc(size_t size);

On success, returns a pointer to a memory block of at least size bytes; on failure, returns NULL.

    • The free function

Programs call the free function to release allocated heap blocks:

    #include <stdlib.h>

    void free(void *ptr);

No return value; the ptr argument must point to the start of an allocated block obtained from malloc, calloc, or realloc.

Allocator requirements and goals

Explicit allocator requirements: (1) handle arbitrary request sequences (2) respond to requests immediately (3) use only the heap (4) align blocks (5) do not modify allocated blocks.

Explicit allocator goals: (1) maximize throughput: the number of requests completed per unit of time (2) maximize memory utilization, i.e., peak utilization.

Fragmentation

Fragmentation comes in two forms: internal and external.

Internal fragmentation: occurs when an allocated block is larger than its payload; easy to quantify.

External fragmentation: occurs when free memory is, in aggregate, enough to satisfy an allocation request, but no single free block is large enough to handle the request. Difficult to quantify and unpredictable.

Heap block format: a one-word header, the payload, and possibly some extra padding.

Placement policies for allocated blocks: (1) first fit: search the free list from the beginning and choose the first free block that fits (2) next fit: start the search where the previous search left off (3) best fit: examine every free block and choose the smallest free block that fits the request.

Request additional heap Storage

The sbrk function:

    #include <unistd.h>

    void *sbrk(intptr_t incr);

On success, returns the old brk pointer; on error, returns -1.

Expands or shrinks the heap by adding incr to the kernel's brk pointer.

Two strategies for merging free blocks: (1) Merge now (2) Postpone merge

Segregated free lists

Simple segregated storage: the free list for each size class contains blocks of equal size, each the size of the largest element of the size class.

Segregated fits: To allocate a block, determine the size class of the request and do a first-fit search of the appropriate free list. If a fitting block is found, optionally split it and insert the remainder into the appropriate free list. If no block fits, search the free list for the next larger size class, repeating until a fitting block is found. If no free list yields a fitting block, request additional heap memory from the operating system, allocate the block from this new heap memory, and place the remainder in the appropriate size class. To free a block, perform coalescing and place the result on the appropriate free list.

Buddy system: a buddy system is a special case of segregated fits in which each size class is a power of 2.

Garbage collection

Garbage collector: a dynamic memory allocator that automatically frees allocated blocks that are no longer needed by the program. Such blocks are called garbage, and the process of automatically reclaiming heap storage is called garbage collection.

Two phases: mark and sweep.

Helper functions used (ptr is defined as typedef void *ptr):

    ptr isPtr(ptr p): if p points to some word in an allocated block, returns a pointer b to the start of that block; otherwise returns NULL
    int blockMarked(ptr b): returns true if block b is already marked
    int blockAllocated(ptr b): returns true if block b is allocated
    void markBlock(ptr b): marks block b
    int length(ptr b): returns the length of block b in words, excluding the header
    void unmarkBlock(ptr b): changes the status of block b from marked to unmarked
    ptr nextBlock(ptr b): returns the successor of block b in the heap

Memory-related errors common in C programs

    • Indirectly referencing bad pointers

A process's virtual address space contains large holes that are not mapped to any meaningful data. If a program attempts to dereference a pointer into one of these holes, the operating system terminates it with a segmentation fault. A typical error is scanf("%d", val); — passing val instead of &val.

    • Reading uninitialized memory

Although .bss memory locations (such as uninitialized global C variables) are always initialized to zeros by the loader, this is not true for heap memory. A common error is to assume that heap memory is initialized to zero.

    • Allow stack buffer overflow

If a program writes to the target buffer in the stack without checking the size of the input string, the program will have a buffer overflow error.

    • Assuming that pointers and the objects they point to are the same size

A common mistake is to assume that a pointer to an object is the same size as the object it points to.

    • Making off-by-one errors
    • Referencing a pointer instead of the object it points to
    • Misunderstanding pointer arithmetic
    • Referencing nonexistent variables
    • Introducing memory leaks
Learning progress

Week      Lines of code (new/cumulative)   Blog posts (new/cumulative)   Hours (new/cumulative)   Important growth
Goal      5000 lines                       30 articles                   400 hours
Week 2    0/0                              1/2                           19/20
Week 3    80/80                            1/3                           25/44
Week 4    110/190                          1/4                           23/67
Week 5    60/250                           2/6                           26/93
Week 6    80/330                           2/8                           25/118
Week 7    60/390                           1/9                           25/133
Week 8    0/390                            2/11                          22/155
Week 9    70/460                           2/13                          23/178
Week 10   375/835                          2/15                          22/200
Week 11   880/1715                         2/17                          26/226
Week 12   0/1715                           3/20                          25/251
Week 13   600/2315                         1/21                          22/273
Week 14   0/2315                           1/22                          23/276
