2.3 Disk Drive Performance
==================================
The disk drive is an electromechanical device that governs the overall performance of the storage system environment. This section discusses the various factors that affect disk drive performance.
2.3.1 Disk Service Time
Disk service time is the time taken by a disk to complete an I/O request. It consists of seek time, rotational latency, and data transfer rate.
Seek time
Seek time (also called access time) describes the time taken to position the R/W heads across the platter, moving radially (along the radius); in other words, it is the time needed to reposition the actuator arm and R/W heads over the correct track. The lower the seek time, the faster the I/O operation. Disk vendors publish the following seek time specifications:
■ Full stroke: the time taken by the R/W heads to move across the entire width of the disk, from the innermost track to the outermost track.
■ Average: the average time taken by the R/W heads to move from one random track to another, normally listed as one-third of the full stroke.
■ Track-to-track: the time taken by the R/W heads to move between two adjacent tracks.
Each of these specifications is measured in milliseconds (thousandths of a second). Seek time matters more for reads of random tracks than for reads of adjacent tracks. To minimize seek time, data can be written to only a subset of the available cylinders; for example, if a disk drive is set up to use only the first 40 percent of its cylinders, it is effectively treated as a lower-capacity drive. This practice is known as short-stroking the drive.
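As a rough illustration of these specifications and of short-stroking, here is a minimal Python sketch; every numeric value in it is an assumption chosen for illustration, not a figure for any particular drive.

    # All figures below are hypothetical, for illustration only.
    full_stroke_ms = 18.0                  # head sweep from innermost to outermost track
    average_seek_ms = full_stroke_ms / 3   # rule of thumb: about 1/3 of the full stroke
    track_to_track_ms = 0.8                # move between two adjacent tracks

    # Short-stroking: restricting data to the first fraction of the cylinders
    # shortens seek distances at the cost of usable capacity.
    nominal_capacity_gb = 500              # assumed drive capacity
    cylinder_fraction_used = 0.40          # fraction of cylinders actually used
    effective_capacity_gb = nominal_capacity_gb * cylinder_fraction_used

    print(f"average seek ~{average_seek_ms:.0f} ms, track-to-track {track_to_track_ms} ms")
    print(f"short-stroked drive: {effective_capacity_gb:.0f} GB usable of {nominal_capacity_gb} GB")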
Rotational latency
To access data, the actuator arm moves the R/W heads over the platter to a particular track while the platter rotates to bring the required sector under the R/W heads. The time taken by the platter to rotate and position the data under the R/W heads is called rotational latency. This latency depends on the rotation speed of the spindle and is measured in milliseconds (thousandths of a second). The average rotational latency is one-half of the time taken for a full rotation. As with seek time, rotational latency matters more for reads and writes of random sectors than of adjacent sectors. The average rotational latency is about 5.5 ms for a 5,400 rpm drive and about 2 ms for a 15,000 rpm drive.
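Since the average rotational latency is half of one full rotation, it can be derived directly from the spindle speed. A minimal Python sketch:

    def avg_rotational_latency_ms(rpm: float) -> float:
        """Average rotational latency: half of one full rotation, in milliseconds."""
        full_rotation_ms = 60_000 / rpm   # 60,000 ms per minute divided by rotations per minute
        return full_rotation_ms / 2

    for rpm in (5_400, 7_200, 15_000):
        print(f"{rpm:>6} rpm -> {avg_rotational_latency_ms(rpm):.1f} ms")
    # 5,400 rpm gives about 5.6 ms and 15,000 rpm gives 2.0 ms, in line with the figures above.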
Data Transfer Rate
The data transfer rate (also called transfer rate) refers to the average amount of data per unit time that the drive can deliver to the HBA. To calculate the transfer rate, it helps to first understand the read/write process. In a read operation, data first moves from the disk platters to the R/W heads, then to the drive's internal cache, and finally from the cache through the interface to the host's HBA. In a write operation, the HBA writes data through the disk interface into the drive's internal cache; the data then moves to the R/W heads and is finally written to the platters.
The data transfer rates during read/write operations are measured in terms of internal and external transfer rates, as shown in Figure 2-8.
The internal transfer rate is the speed at which data moves from a single track on the platter surface to the drive's internal cache; it takes factors such as seek time into account. The external transfer rate is the rate at which data can be moved through the interface to the HBA; it is generally the advertised speed of the interface, for example 133 MB/s for an ATA interface. The sustained external transfer rate is lower than the advertised interface speed.
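Putting the three components of Section 2.3.1 together, the service time of a single I/O can be estimated as seek time plus average rotational latency plus the time to transfer the requested block. The Python sketch below uses assumed drive parameters and an assumed 4 KB request size purely for illustration.

    def disk_service_time_ms(seek_ms: float, rpm: float,
                             transfer_rate_mb_s: float, block_kb: float) -> float:
        """Estimate the service time of one I/O: seek + average rotational latency + transfer."""
        rotational_latency_ms = (60_000 / rpm) / 2
        transfer_ms = (block_kb / 1024) / transfer_rate_mb_s * 1000
        return seek_ms + rotational_latency_ms + transfer_ms

    # Assumed drive: 5 ms average seek, 15,000 rpm, 40 MB/s internal transfer rate, 4 KB I/O.
    print(f"{disk_service_time_ms(5.0, 15_000, 40.0, 4.0):.2f} ms")  # about 7.1 ms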
2.4 Basic Principles for Determining Disk Performance
To understand the laws governing disk performance, a disk can be viewed as a black box consisting of the following two elements:
■ Queue: the location where I/O requests wait before being processed by the I/O controller.
■ Disk I/O controller: the unit that processes the I/O requests in the queue one at a time.
The rate at which I/O requests arrive at the controller is determined by the application; this rate is called the arrival rate. The requests are held in the I/O queue, and the I/O controller processes them one at a time, as shown in Figure 2-9. The I/O arrival rate, the queue length, and the time taken by the I/O controller to process each request determine the performance of the disk system, which is measured in terms of response time.
Little's Law is a fundamental law describing the relationship between the number of requests in a queueing system and the response time. It states the following relationship (the numbers in parentheses are equation numbers used in the derivations that follow):
N = a × R    (1)
where
"N" is the total number of requests in the queueing system, that is, the number of requests in the queue plus the number of requests being processed by the I/O controller.
"a" is the arrival rate, or the number of I/O requests arriving in the system per unit time.
"R" is the average response time, or the turnaround time of an I/O request, that is, the total time from its arrival in the system to its exit.
The Utilization Law is another important law; it defines the utilization of the I/O controller. The formula is as follows:
U = a × Rs    (2)
where
"U" is the I/O controller utilization
"Rs" is the service time, that is, the average time taken by the controller to serve a request; 1/Rs is the service rate.
From the arrival rate "a", the average inter-arrival time "Ra" can be calculated as follows:
Ra = 1/a    (3)
Consequently, utilization can also be defined as the ratio of the service time to the average inter-arrival time:
U = Rs/Ra    (4)
This ratio lies between 0 and 1.
It is important to understand that in a single-controller system the arrival rate must be less than the service rate; in other words, the service time must be less than the average inter-arrival time. Otherwise, I/O requests would arrive faster than the I/O controller could process them.
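The utilization law and this stability condition take only a few lines of Python to express; the figures below are assumptions for illustration.

    service_time_s = 0.005      # Rs: 5 ms per request (assumed)
    arrival_rate = 150          # a: requests per second (assumed)

    inter_arrival_s = 1 / arrival_rate               # Ra = 1/a, Eq. (3)
    utilization = service_time_s / inter_arrival_s   # U = Rs/Ra, Eq. (4), equal to a × Rs, Eq. (2)
    print(f"U = {utilization:.0%}")                  # U = 75%

    # The system is stable only while Rs < Ra, that is, U < 1.
    assert utilization < 1, "requests arrive faster than the controller can serve them"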
With the help of these two laws, important measures of disk performance, such as the average response time, the average queue length, and the time a request spends in the queue, can be derived.
In the derivation below, the average response rate (S) is defined as the reciprocal of the average response time and is given by

S = service rate - arrival rate

Therefore,

R = 1/(service rate - arrival rate)
  = 1/(1/Rs - 1/Ra)
  = 1/(1/Rs - a)          (from Eq. 3)
  = Rs/(1 - a × Rs)
  = Rs/(1 - U)    (5)     (from Eq. 2)

As a result,

average response time (R) = service time/(1 - utilization)
As utilization approaches 1, the I/O controller approaches saturation and the response time tends toward infinity. In essence, the saturated component, the bottleneck, forces the serialization of I/O requests: each I/O request must wait for the preceding requests to complete before it can be processed.
Utilization (U) can also be used to determine the average number of I/O requests waiting in the queue, as shown below. The number of requests in the queue (NQ) equals the number of requests in the system (N) minus the number of requests being served by the controller, which is the utilization (U). The number of requests in the queue is also known as the average queue size.

NQ = N - U
   = a × R - U                 (from Eq. 1)
   = a × (Rs/(1 - U)) - U      (from Eq. 5)
   = (Rs/Ra)/(1 - U) - U       (from Eq. 3)
   = U/(1 - U) - U             (from Eq. 4)
   = U × (1/(1 - U) - 1)
   = U²/(1 - U)    (6)
The time a request spends in the queue equals the time it spends in the system (the average response time) minus the time the controller takes to serve the request:

Time spent in the queue = R - Rs
   = Rs/(1 - U) - Rs           (from Eq. 5)
   = U × Rs/(1 - U)
   = U × average response time
   = U × R    (7)
Consider an I/O system in which 100 I/O requests arrive every second and the service time is 8 ms. Using the relationships above, the disk performance can be calculated in terms of utilization (U), average response time (R), average queue size (U²/(1 - U)), and the time a request spends in the queue:

Arrival rate (a) = 100 I/O/s; therefore, the average inter-arrival time
Ra = 1/a = 10 ms
Rs = 8 ms (given)
1. Utilization (U) = Rs/Ra = 8/10 = 0.8, or 80%
2. Response time (R) = Rs/(1 - U) = 8/(1 - 0.8) = 40 ms
3. Average queue size = U²/(1 - U) = (0.8)²/(1 - 0.8) = 3.2
4. Time spent by a request in the queue = U × R, that is, the total response time minus the service time = 32 ms
If the capability of the controller is doubled, the service time is halved; in that case Rs = 4 ms.
1. Utilization (U) = 4/10 = 0.4, or 40%
2. Response time (R) = 4/(1 - 0.4) = 6.67 ms
3. Average queue size = (0.4)²/(1 - 0.4) = 0.27
4. Time spent by a request in the queue = 0.4 × 6.67 = 2.67 ms
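The arithmetic of both scenarios can be checked with a short Python script that applies Equations 2 and 5 through 7 directly; its output reproduces the numbers above.

    def disk_metrics(arrival_rate: float, service_time_s: float):
        """Return utilization, response time, average queue size, and time spent in the queue."""
        u = arrival_rate * service_time_s      # Eq. (2)
        r = service_time_s / (1 - u)           # Eq. (5)
        nq = u * u / (1 - u)                   # Eq. (6)
        wait = u * r                           # Eq. (7)
        return u, r, nq, wait

    for rs_ms in (8, 4):                       # scenario 1: Rs = 8 ms; scenario 2: Rs = 4 ms
        u, r, nq, wait = disk_metrics(100, rs_ms / 1000)
        print(f"Rs = {rs_ms} ms: U = {u:.0%}, R = {r * 1000:.2f} ms, "
              f"queue size = {nq:.2f}, wait = {wait * 1000:.2f} ms")
    # Rs = 8 ms: U = 80%, R = 40.00 ms, queue size = 3.20, wait = 32.00 ms
    # Rs = 4 ms: U = 40%, R = 6.67 ms, queue size = 0.27, wait = 2.67 ms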
We can conclude that halving the service time (the sum of seek time, rotational latency, and internal transfer time) or the utilization reduces the response time dramatically (by almost a factor of six in the preceding example). The relationship between utilization and response time is shown in Figure 2-10.
The response time changes nonlinearly as utilization increases. When the average queue size is low, the response time remains low; it rises slowly as the queue grows and then increases exponentially once utilization exceeds 70 percent.
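The shape of that curve follows directly from Equation 5. Normalizing the response time by the service time (R/Rs = 1/(1 - U)) makes the trend independent of any particular drive, as the short Python sketch below shows:

    # Response time relative to service time: R/Rs = 1/(1 - U), from Eq. (5).
    for u in (0.1, 0.3, 0.5, 0.7, 0.8, 0.9, 0.95):
        print(f"U = {u:.0%}: R = {1 / (1 - u):.1f} x Rs")
    # Up to roughly 70% utilization the response time stays within a few multiples of
    # the service time; beyond that it climbs steeply toward saturation.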