The demand (request) segmentation system builds on the basic segmentation system: by adding a demand function and a segment replacement function, it forms a segmented virtual storage system. The allocation algorithms used in segmented storage management are similar to those of variable partition allocation; the best-fit, worst-fit, and first-fit methods can all be used. Clearly, the problem of external fragmentation still has to be solved.
- First-fit allocation algorithm: starting from the first entry of the free partition table (ordered by partition number), the algorithm scans the table and assigns the first free partition whose size is greater than or equal to the job size to the requesting job. A chunk of memory equal to the job size is then carved out of that partition, and the remaining free space stays in the free partition table. If the end of the table is reached without finding a free area at least as large as the job, the allocation fails. Advantage: allocations tend to use the free partitions in the low-address part of memory, so the free partitions in the high-address part are rarely touched; this preserves the large free areas at high addresses and creates conditions for allocating large memory to big jobs that arrive later. Disadvantage: the low-address part is repeatedly split, leaving many small free partitions that are hard to use.
- Next-fit (circular first-fit) allocation algorithm: this algorithm is an evolution of first fit. When allocating memory for a job, the search does not start from the first entry of the free partition table every time; instead it starts from the free area following the one found last time, until the first free area that satisfies the request is found, from which a chunk equal to the request size is allocated to the job. To implement this, a start pointer must be maintained to indicate the free partition where the next search begins, and the search is circular: if the last free partition in the table is still too small, the search wraps around to the first free partition. Advantage: the free areas in memory are distributed more evenly, and the overhead of finding a free partition is reduced. Disadvantage: the system tends to run out of large free partitions, which is bad for large jobs.
- Best-fit allocation algorithm: from all unassigned partitions, the algorithm picks the free partition that is greater than or equal to the job size and closest to it, so that the fragment left after each allocation is as small as possible. Finding the most suitable free partition would require scanning the whole free partition table, which increases the search time; therefore, to speed up the search, all free partitions are kept ordered by increasing size. The first partition found that satisfies the request is then necessarily the best fit. Disadvantage: each allocation leaves behind a small fragment that other jobs cannot use, so the memory utilization of this algorithm is not high.
- Worst-fit allocation algorithm: from all unassigned partitions, the algorithm picks the largest free partition to assign to the job, so that the free partition remaining after the allocation is still large enough to be used by other jobs. Finding the largest free partition would require scanning the whole free partition table, which increases the search time; therefore, to speed up the search, all free partitions are kept ordered by decreasing size, and the first partition found is necessarily the largest. Advantage: the space left over after an allocation may still be large enough to satisfy ordinary job requests later, which minimizes the amount of unusable fragmentation in the system. Disadvantage: the algorithm shrinks every free partition in the system evenly, so after running for a while no large free partition remains to satisfy a big allocation. A small sketch of the three placement strategies follows.
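As a rough sketch of these three placement strategies (not from the notes; the free-partition table is simplified to a Python list of (start, size) pairs, and all names are illustrative):

```python
def allocate(free, request, strategy="first"):
    """Pick a free partition of at least `request` units with the chosen strategy.
    Returns (start_address, updated_free_table), or (None, free) if allocation fails."""
    fits = [p for p in free if p[1] >= request]
    if not fits:
        return None, free
    if strategy == "first":
        start, size = fits[0]                        # first fit: table order
    elif strategy == "best":
        start, size = min(fits, key=lambda p: p[1])  # smallest partition that fits
    else:                                            # "worst"
        start, size = max(fits, key=lambda p: p[1])  # largest partition
    updated = []
    for p in free:
        if p == (start, size):
            if size > request:                       # keep the remainder in place
                updated.append((start + request, size - request))
        else:
            updated.append(p)
    return start, updated

free_table = [(0, 100), (200, 500), (800, 300)]
print(allocate(free_table, 250, "best"))   # (800, [(0, 100), (200, 500), (1050, 50)])
```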
In a virtual storage system, suppose a process is allocated three page frames in memory (all empty at the start) and the FIFO page replacement algorithm is used. When the page reference sequence 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, 6 is executed, how many page faults ( ) will occur?
FIFO page replacement algorithm (first-in, first-out). As the name implies, the idea is to organize all pages in memory into a queue (or linked list): each time a page is brought into memory it is appended to the tail of the queue, and when a page must be swapped out, the page at the head of the queue, i.e. the page that has been in memory the longest, is removed. Tracing the reference string with three frames, the resident pages after each access are: 1 | 1 2 | 1 2 3 | 2 3 4 | 3 4 1 | 4 1 2 | 1 2 5 | 1 2 5 (hit) | 1 2 5 (hit) | 2 5 3 | 5 3 4 | 5 3 4 (hit) | 3 4 6, giving 10 page faults.
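A minimal sketch that replays the reference string with a FIFO queue and confirms the count (the helper name and structure are illustrative, not from the notes):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` initially empty frames."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                       # hit: nothing changes under FIFO
        faults += 1
        if len(resident) == frames:        # frames full: evict the oldest page
            resident.discard(queue.popleft())
        queue.append(page)
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, 6]
print(fifo_faults(refs, 3))   # 10
```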
The operating system uses buffering technology to improve resource utilization by reducing the number of CPU ( ).
The main reasons for introducing buffering are: easing the speed mismatch between the CPU and I/O devices, reducing the frequency of interrupts delivered to the CPU, relaxing the constraints on interrupt response time, and increasing the parallelism between the CPU and I/O devices. Therefore, buffering technology reduces the number of interrupts to the CPU, thereby improving system efficiency.
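As a loose illustration of the same idea, here is a small Python sketch in which each low-level read stands in for a device interrupt; the CountingRaw class and the numbers are purely illustrative and not an operating-system API:

```python
import io

class CountingRaw(io.RawIOBase):
    """A fake raw device that counts how many low-level reads it receives."""
    def __init__(self, data):
        self.data, self.pos, self.reads = data, 0, 0
    def readable(self):
        return True
    def readinto(self, b):
        self.reads += 1                          # each call models one device operation
        chunk = self.data[self.pos:self.pos + len(b)]
        b[:len(chunk)] = chunk
        self.pos += len(chunk)
        return len(chunk)

data = bytes(4096)

raw = CountingRaw(data)
while raw.read(1):                               # unbuffered: one device op per byte
    pass
print(raw.reads)                                 # 4097 (including the final empty read)

raw = CountingRaw(data)
buf = io.BufferedReader(raw, buffer_size=512)
while buf.read(1):                               # buffered: device ops only per refill
    pass
print(raw.reads)                                 # about 9 (one per 512-byte refill)
```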
The multilevel feedback queue scheduling algorithm is a CPU scheduling algorithm adopted by the UNIX operating system.

**A multilevel (say n-level) feedback queue scheduler can be described as follows:**

1. There are n queues (Q1, Q2, ..., Qn), each with a different priority for the processor, which means the jobs (processes) in different queues have different priorities. In general, priority(Q1) > priority(Q2) > ... > priority(Qn): any job in Q1 has a higher claim on the CPU than any job in Q2 (a job in Q1 is always dispatched before a job in Q2), and so on.
2. Within a particular queue, jobs are scheduled by time-slice round robin: if there are several jobs in Q2, each runs for the time slice configured for Q2 (for ease of understanding, jobs within one queue can also be thought of as being picked in FCFS order).
3. Are the time slices of the queues the same? No, and this is the subtlety of the design: the higher the priority of a queue, the shorter its time slice. To let very long jobs eventually finish, the time slice of the last queue Qn (the lowest-priority queue) is usually very large.

**How the multilevel feedback queue scheduler works:**

1. A process entering the system first waits in the highest-priority queue Q1.
2. The scheduler always serves the highest-priority queue that has waiting processes; only when it is empty does it move to the next one. For example, with three queues Q1, Q2, Q3, processes in Q2 are dispatched only when Q1 has no waiting processes, and Q3 is dispatched only when both Q1 and Q2 are empty.
3. Processes within the same queue are scheduled by time-slice round robin. If the time slice of Q1 is n units and a job has not finished after n units, it is demoted to Q2; if it still cannot finish within Q2's time slice, it moves down to the next queue, and so on until it completes.
4. When a process in a low-priority queue is running and a new job arrives, the CPU is handed to the newly arrived (higher-priority) job only after the current time slice has been used up (preemption at slice boundaries).

**A worked example.** Assume the system has three feedback queues Q1, Q2, Q3 with time slices 2, 4 and 8 respectively. Three jobs J1, J2, J3 arrive at times 0, 1 and 3, and need 3, 2 and 1 units of CPU time respectively.

1. Time 0: J1 arrives, enters Q1 and starts running. After one time unit its slice is not yet used up; meanwhile J2 arrives.
2. Time 1: J2 arrives, but J1 still holds the time slice, so J2 waits. After J1 runs one more unit it has used up the 2-unit slice of Q1 without finishing, so J1 is moved to Q2 and the processor is given to J2.
3. Time 2: J1 waits in Q2 for scheduling; J2 gets the CPU and starts running.
4. Time 3: J3 arrives. J2's time slice is not yet exhausted, so J3 waits in Q1 and J1 keeps waiting in Q2.
5. Time 4: J2 completes. Both J3 and J1 are waiting, but J3's queue (Q1) has a higher priority than J1's queue (Q2), so J3 is scheduled and J1 continues to wait in Q2.
6. Time 5: J3 finishes after one time unit.
7. Q1 is now empty, so the jobs in Q2 are dispatched and J1 gets the processor; after one more time unit (at time 6) J1 finishes, and the whole schedule is complete.

As this example shows, in a multilevel feedback queue a job that arrives later (or is demoted) does not necessarily finish late. A small simulation of this example appears below.
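A minimal Python simulation of this example, under the same simplifying assumptions as the walkthrough (unit time steps, demotion only when a slice is used up, a newcomer gets the CPU only at slice boundaries); names and structure are illustrative:

```python
from collections import deque

def mlfq(jobs, slices):
    """jobs: list of (name, arrival_time, cpu_units); slices: time slice per queue level."""
    queues = [deque() for _ in slices]              # queues[0] has the highest priority
    remaining = {name: need for name, _, need in jobs}
    arrivals = sorted(jobs, key=lambda j: j[1])
    finish, t, running = {}, 0, None                # running = [name, level, used_in_slice]
    while len(finish) < len(jobs):
        while arrivals and arrivals[0][1] <= t:     # newcomers enter the top queue
            queues[0].append(arrivals.pop(0)[0])
        if running is None:                         # pick from the highest non-empty queue
            for lvl, q in enumerate(queues):
                if q:
                    running = [q.popleft(), lvl, 0]
                    break
        if running is None:                         # nothing to run yet
            t += 1
            continue
        name, lvl, used = running
        remaining[name] -= 1                        # run for one time unit
        t, used = t + 1, used + 1
        if remaining[name] == 0:
            finish[name], running = t, None         # job done
        elif used == slices[lvl]:                   # slice exhausted: demote one level
            queues[min(lvl + 1, len(queues) - 1)].append(name)
            running = None
        else:
            running = [name, lvl, used]
    return finish

# The example above: slices 2, 4, 8; J1, J2, J3 arrive at 0, 1, 3 and need 3, 2, 1 units.
print(mlfq([("J1", 0, 3), ("J2", 1, 2), ("J3", 3, 1)], [2, 4, 8]))
# {'J2': 4, 'J3': 5, 'J1': 6}
```

The printed completion times match the walkthrough: J2 at time 4, J3 at time 5, J1 at time 6.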
A router receives a packet with destination address 202.65.17.4. Which network segment (subnet) does the packet belong to?

Being able to log on to other people's computers and log in to QQ does not require DNS domain-name resolution, which proves that the network itself is fine; but when a URL is typed in, which does require the DNS domain-name resolution service, the web site cannot be reached, which proves that the DNS service is the problem.

A class C network has only the last 8 bits available for the subnet number and host number. If each subnet must accommodate at least 55 hosts, 6 bits are needed for the host number, leaving only 2 bits for the subnet number, so the subnet mask is 255.255.255.11000000 in binary, i.e. 255.255.255.192.

In the 7-layer network model, if you want to use the UDP protocol to achieve the effect of the TCP protocol, at which layer can this be arranged? To make UDP behave like TCP you must implement functions such as congestion control, which operates between routers; this clearly cannot be done at the lowest layers. It cannot be done at the application layer either, because the application layer is tied to a particular application and can generally only control the host program. The presentation layer handles all issues related to data representation and transport, including conversion, encryption and compression. It is not possible at the transport layer, because UDP is already the protocol in use there and cannot be converted at that level. That leaves only the session layer. The session layer allows a session relationship to be established between users on different machines. It carries out ordinary data transfer similar to the transport layer, and in some cases provides useful enhanced services, for example allowing a user to log on to a remote time-sharing system (CTSS) in a single session, or to transfer files between two machines. One of the services provided by the session layer is dialog management and control: the session layer allows information to be transmitted in both directions at the same time, or in only one direction at any one time; if it is the latter, similar to half-duplex mode on a physical channel, the session layer records whose turn it is at any moment.

When a Web server is accessed, in what order are the different protocols involved? 1. When the Web server's network cable is connected, it automatically sends an ARP message so that the access gateway can find it; the gateway then forms a MAC-address-to-IP-address mapping record such as 2c 1e 3c 3e 9b - 192.168.1.123. 2. If the user entered a domain name in the browser and it is not in the local DNS cache, a DNS query must be made to determine the IP address of that domain name. 3. HTTP: after obtaining the IP address from DNS, the HTTP protocol is used to access the Web server (leaving aside the TCP three-way handshake that establishes the connection).
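A small check of the class C subnetting arithmetic above; the subnet mask for the 202.65.17.4 question is not given in these notes, so the last line only shows, as an assumption, where that address would fall under the same /26 mask:

```python
import ipaddress

# How many host bits are needed for at least 55 hosts per subnet?
hosts, host_bits = 55, 1
while (2 ** host_bits) - 2 < hosts:    # subtract the network and broadcast addresses
    host_bits += 1
print(host_bits)                       # 6 -> only 2 bits remain for the subnet number

mask = ipaddress.IPv4Network(f"0.0.0.0/{32 - host_bits}").netmask
print(mask)                            # 255.255.255.192

# Assumption: if the same /26 mask applied, 202.65.17.4 would fall in this subnet.
print(ipaddress.ip_network("202.65.17.4/26", strict=False).network_address)
# 202.65.17.0
```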
Linux users fall into three categories: owner, group, and other.

File attributes in Linux are divided into four sections, e.g. -rwxrwx---

The first section, -, is the file type; here it indicates an ordinary file.

File type field:
- : an ordinary file
d : a directory
l : a link file, which can be thought of as the equivalent of a shortcut in Windows
b : a block device file (a device used for storage peripherals)
c : a character device file (a device read one character at a time)

The second section, rwx, means the owner has read, write and execute permission on the file, similar to owner permissions in Windows, e.g. administrator having modify, read and execute permission on a file.

The third section, rwx, means members of the file's group have read, write and execute permission on it, similar to group permissions in Windows, e.g. every member of the administrators group having read, write and execute permission on the file.

The fourth section, ---, means other users have no permission at all on this file, similar to Everyone in Windows, i.e. the permission that all other users get on the file.
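A small sketch that decodes the owner/group/other sections from a file's mode bits in Python; the path example.txt is a placeholder:

```python
import os
import stat

# Decode a file's mode bits into the familiar "-rwxrwx---" style string,
# mirroring the owner / group / other sections described above.
mode = os.stat("example.txt").st_mode
print(stat.filemode(mode))        # e.g. "-rw-r--r--"

# The three permission triplets can also be read out bit by bit:
for who, shift in (("owner", 6), ("group", 3), ("other", 0)):
    bits = (mode >> shift) & 0b111
    print(who,
          "r" if bits & 4 else "-",
          "w" if bits & 2 else "-",
          "x" if bits & 1 else "-")
```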
When a host in a local area network uses the ping command to test its network connection, it finds that hosts inside the LAN can reach each other but the public network cannot be reached. The problem may be ( ).

A. Incorrect host IP settings
B. No gateway connected to the LAN
C. Incorrect gateway settings for the LAN or the host
D. Incorrect LAN DNS server settings

Answer: C, incorrect gateway settings for the LAN or the host.
- A. If the host's IP settings were wrong, the intranet would not be reachable either.
- B. This option plays a word game: for communication within the LAN there is no gateway involved at all. Gateways are about connecting two networks; within the same network the gateway plays no role.
- C. Incorrect gateway settings do not affect pinging within the LAN: as long as the intranet hosts have IP addresses in the same network segment, they can ping each other, so the intranet is still reachable. But the gateway is the door between two networks: if you want to ping an outside network, you must open that door, i.e. the gateway must be configured correctly.
- D. The DNS configuration is only for domain-name resolution; it has nothing to do with whether ping succeeds.
In a Linux system, /etc/skel stores the default files that are copied into a newly created user's home directory.
The Linux /etc/skel directory often goes unnoticed, but it is quite useful when creating new users; using it well can save a certain amount of configuration time.
skel is short for skeleton, i.e. a framework. The purpose of this directory is to initialize a user's home directory when a new user is created: all files and directories under it are copied into the new user's home directory, and their owner and group are adjusted to match that home directory. So user configuration files such as .bashrc, .profile and .vimrc can be placed in the /etc/skel directory in advance.
Note:
1. If the user's home directory is not created automatically when a new user is created, this skeleton directory is not used.
2. If you do not want to use the default /etc/skel as the skeleton directory, you can specify a different skeleton directory when running the useradd command. For example:
sudo useradd -d /home/chen -m -k /etc/my_skel chen
The command above creates a new user chen, sets the user's home directory to /home/chen (created automatically because of -m), and specifies /etc/my_skel as the skeleton directory.
3. If you do not want to specify the skeleton directory every time a new user is created, you can change the default skeleton directory by modifying the /etc/default/useradd configuration file as follows:
Look for the definition of the SKEL variable; if the definition has been commented out, uncomment it and modify its value:
SKEL=/etc/my_skel
The Nagle algorithm is mainly used to avoid flooding the network with large numbers of small packets, which wastes network capacity. For example, a TCP segment consisting of a 20-byte TCP header plus a 20-byte IP header carrying only 1 byte of data has an effective channel utilization of only about 1/40. If the network is flooded with such small packets, the utilization of network resources is quite low. However, for programs that genuinely need small packets, such as interactive applications like telnet or SSH, the algorithm should be turned off. This can be done by setting the TCP_NODELAY option on the socket.
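A minimal sketch of turning the algorithm off from a Python socket; the host and port are placeholders:

```python
import socket

# Disable Nagle's algorithm for interactive, small-packet traffic
# (e.g. a telnet/SSH-like client).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
s.connect(("example.com", 22))   # placeholder host and port
s.sendall(b"x")                  # sent immediately instead of being coalesced
s.close()
```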
Format: sync
Forces the file buffer contents in memory to be written to disk.
For a binary search of an ordered array of 20 elements, where array subscripts start at 1, the subscripts of the elements compared while searching for a[2] are ( ).
With mid = (high − low)/2 + low (integer division) and subscripts starting at 1: since a[2] is near the front, low stays at 1, and the upper bound is replaced by the previous midpoint after each comparison, so the probed subscripts are (20 − 1)/2 + 1 = 10, (10 − 1)/2 + 1 = 5, (5 − 1)/2 + 1 = 3, (3 − 1)/2 + 1 = 2, i.e. 10, 5, 3, 2.

A Huffman tree has m leaf nodes, and its nodes are stored with struct Node { Node *l, *r; int val; }. How many null pointers are there altogether? A Huffman tree has no nodes of degree 1, so the total number of nodes is n = n0 + n2 = n0 + (n0 − 1) = 2n0 − 1 = 2m − 1. A tree with 2m − 1 nodes has 2m − 2 edges, and each edge occupies one pointer field, so 2m − 2 pointer fields are in use. The 2m − 1 nodes provide 2(2m − 1) = 4m − 2 pointer fields in total, so the number of null pointers is 4m − 2 − (2m − 2) = 2m. The answer is 2m: a Huffman tree with m leaves and no degree-1 nodes has 2m null pointers.
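A short check of the probe sequence, implementing the convention these notes use (the upper bound becomes the previous midpoint rather than midpoint − 1); with the more common mid − 1 convention the compared subscripts would be 10, 5, 2:

```python
def probe_sequence(a, target):
    """Binary search variant used in these notes: when a[mid] > target, the upper
    bound is set to mid (not mid - 1). Assumes the target is present in the array.
    The array is treated as 1-indexed; index 0 is unused padding."""
    low, high = 1, len(a) - 1
    probes = []
    while low <= high:
        mid = low + (high - low) // 2
        probes.append(mid)
        if a[mid] == target:
            return probes
        elif a[mid] > target:
            high = mid          # these notes' convention; the classic variant uses mid - 1
        else:
            low = mid + 1
    return probes

a = [None] + list(range(1, 21))   # a[1..20], sorted
print(probe_sequence(a, a[2]))    # [10, 5, 3, 2]
```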
Death Throes-2