operating systems: Disk Management

Goals of I/O Software
• Provide a common, abstract view of all devices to the application programmer (open, read, write).
• Provide as much overlap as possible between the operation of I/O devices and the CPU.
I/O – Processor Overlap
Application programmers expect serial execution semantics:
    read(device, "%d", x);
    y = f(x);
We expect the read statement to complete before the assignment is executed. To accomplish this, the OS blocks the process until the I/O operation completes.
Without Blocking!
    read(device, "%d", x);
    y = f(x);
The read is issued, but the read has not completed when the process continues to execute, so the assignment uses a stale value of x. Only later does the read complete and the value of x get updated.
[Diagram: the request passes from the user process through the device-independent layer, the device-dependent layer, and the interrupt handler to the device controller, which exposes data, status, and command registers.]
In a multiprogramming environment, another application can use the CPU while the first application waits for its I/O to complete.
[Diagram: app1 requests an I/O operation and blocks; app2 runs on the CPU while the I/O controller works; when the I/O completes, app1 resumes and finishes, followed by app2.]
Performance
Thread execution time can be broken into:
• Time_compute – the time the thread spends doing computations
• Time_device – the time spent on I/O operations
• Time_overhead – the time spent determining whether I/O is complete
So, Time_total = Time_compute + Time_device + Time_overhead
When the device driver polls:
Time_total = Time_compute + Time_device + Time_overhead
Time_overhead is the period between the point where the device completes the operation and the point where the polling loop discovers that it is complete. This is generally just a few instruction times. Note that while the device driver polls, no other process can use the CPU: polling consumes the CPU.
When the device driver uses interrupts:
Time_total = Time_compute + Time_device + Time_overhead
Time_overhead = Time_handler + Time_ready
• Time_handler is the time spent in the interrupt handler.
• Time_ready is the time the process waits for the CPU after its I/O has completed, while another process uses the CPU.
For simplicity's sake, assume processes of the following form: each process computes for a long while and then writes its results to a file. We will ignore the time taken to do a context switch.
[Diagram: the process runs for Time_compute, hands a request to the I/O controller for Time_device, then computes again.]
Polling Case
In the polling case, the process starts the I/O operation and then continually loops, asking the device whether it is done.
[Timeline: Proc 1 computes, then polls through the whole of its Time_device plus Time_overhead; only then can Proc 2 compute and poll through its own Time_device.]
Interrupt Case
In the interrupt case, the process starts the I/O operation and then blocks; when the I/O is done, the OS gets an interrupt.
[Timeline: Proc 1 computes and starts its I/O; Proc 2 computes on the CPU while Proc 1's device works; the interrupt handler runs (Time_overhead) when each transfer completes, so the devices and the CPU overlap.]
Which gives better system throughput: polling or interrupts?
Which gives better application performance: polling or interrupts?
If you were developing an operating system, would you choose interrupts or polling?
Buffering Issues
Read from the disk directly into user memory. Assume that you are using interrupts. What problems exist in this situation?
The process cannot be completely swapped out of memory: at least the page containing the addresses into which the data is being written must remain in real memory.
Buffering Issues
Read from the disk into a kernel buffer; when the buffer is full, transfer the data to memory in user space.
We can now swap the user process out while the I/O completes. What problems exist in this situation?
1. The OS has to carefully keep track of the assignment of system buffers to user processes.
2. There is a performance cost when the OS is ready to transfer data to a user process that is not in memory; the device must also wait while data is being transferred.
3. The swapping logic is complicated when the data is being read from the same disk drive that is used for paging.
Buffering Issues
Some of the performance issues can be addressed by double buffering: while one buffer is being transferred to the user process, the device is reading data into a second buffer.
Networking may involve many copies.
Disk Scheduling
Because disk I/O is so important, it is worth our time to investigate some of the issues involved, and one of the biggest is disk performance.
Seek time is the time required for the read head to move to the track containing the data to be read.
Rotational delay, or latency, is the time required for the sector to move under the read head.
Performance Parameters
A request passes through several phases: wait for the device, wait for the channel, seek, rotational delay (latency), and data transfer.
Seek time is the time required to move the disk arm to the specified track:
    Ts = (number of tracks crossed × disk constant) + startup time
Rotational delay is the time required for the data on that track to come underneath the read heads. For a hard drive rotating at 3600 rpm, the average rotational delay is 8.3 ms (half of a 16.7 ms revolution).
Transfer time:
    Tt = bytes / (rotation_speed × bytes_on_track)
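These formulas can be checked numerically. The sketch below assumes a drive geometry chosen to match the slides' examples (3600 rpm, 32 sectors of 512 bytes per track); the constants are illustrative, not from a real datasheet.

```python
# Illustrative disk-parameter calculation for an assumed drive:
# 3600 rpm, 32 sectors of 512 bytes per track.
RPM = 3600
SECTORS_PER_TRACK = 32
BYTES_PER_SECTOR = 512

rotation_time_ms = 60_000 / RPM                   # one full revolution: ~16.7 ms
avg_rotational_delay_ms = rotation_time_ms / 2    # on average, half a turn: ~8.3 ms

def transfer_time_ms(nbytes):
    """Tt = bytes / (rotation_speed * bytes_on_track), expressed in ms."""
    bytes_per_track = SECTORS_PER_TRACK * BYTES_PER_SECTOR
    rotations = nbytes / bytes_per_track
    return rotations * rotation_time_ms

print(round(avg_rotational_delay_ms, 1))             # 8.3
print(round(transfer_time_ms(BYTES_PER_SECTOR), 2))  # time to read one sector
```

Reading a whole track (`transfer_time_ms(32 * 512)`) takes one full revolution, which is where the 16.7 ms figure in the next example comes from.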
Data Organization vs. Performance
Consider a file where the data is stored as compactly as possible; in this case the file occupies all of the sectors on 8 adjacent tracks (32 sectors × 8 tracks = 256 sectors total).
The time to read the first track:
    average seek time    20 ms
    rotational delay      8.3 ms
    read 32 sectors      16.7 ms
                         45 ms
Assuming that there is essentially no seek time on the remaining tracks, each successive track can be read in 8.3 ms + 16.7 ms = 25 ms.
Total read time = 45 ms + 7 × 25 ms = 220 ms = 0.22 seconds
If the data is randomly distributed across the disk, then for each sector we have:
    average seek time    20 ms
    rotational delay      8.3 ms
    read 1 sector         0.5 ms
                         28.8 ms
Total time = 256 sectors × 28.8 ms/sector ≈ 7.37 seconds
Random placement of data is a particular problem when multiple processes are accessing the same disk.
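The two layouts can be compared directly; this reproduces the arithmetic of both worked examples above.

```python
# Sequential vs. random layout for a 256-sector file, using the
# timing figures from the slides.
AVG_SEEK_MS = 20.0
ROT_DELAY_MS = 8.3
TRACK_READ_MS = 16.7      # 32 sectors = one full revolution
SECTOR_READ_MS = 0.5

# Sequential: seek once, then one rotational delay + full-track read per track.
first_track = AVG_SEEK_MS + ROT_DELAY_MS + TRACK_READ_MS          # 45 ms
sequential_ms = first_track + 7 * (ROT_DELAY_MS + TRACK_READ_MS)  # 220 ms

# Random: a full average seek and rotational delay for every single sector.
per_sector_ms = AVG_SEEK_MS + ROT_DELAY_MS + SECTOR_READ_MS       # 28.8 ms
random_ms = 256 * per_sector_ms                                   # ~7373 ms

print(round(sequential_ms), round(random_ms / 1000, 2))
```

The compact layout is more than 30 times faster, which motivates the focus on seek time in what follows.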
In the previous example, the biggest factor in performance is seek time. To improve performance, we need to reduce the average seek time.
If requests are scheduled in random order, then we would expect the disk tracks to be visited in a random order.
First-Come, First-Served Scheduling
• If there are few processes competing for the drive, we can hope for good performance.
• If there are a large number of processes competing for the drive, then performance approaches the random-scheduling case.
While at track 15, assume a random set of read requests: tracks 4, 40, 11, 35, 7, and 14.
    Head Path    Tracks Traveled
    15 to 4       11 steps
    4 to 40       36 steps
    40 to 11      29 steps
    11 to 35      24 steps
    35 to 7       28 steps
    7 to 14        7 steps
    Total        135 steps
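FCFS head movement is just the sum of the distances between consecutive requests; this reproduces the 135-step total above.

```python
# Total head movement under FCFS: visit the requests in arrival order.
def fcfs_steps(start, requests):
    steps, pos = 0, start
    for track in requests:
        steps += abs(track - pos)   # distance to the next requested track
        pos = track
    return steps

print(fcfs_steps(15, [4, 40, 11, 35, 7, 14]))  # 135
```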
Shortest Seek Time First
Always select the request that requires the shortest seek time from the current position.
Shortest Seek Time First
While at track 15, with the same read requests (tracks 4, 40, 11, 35, 7, and 14):
    Head Path    Tracks Traveled
    15 to 14       1 step
    14 to 11       3 steps
    11 to 7        4 steps
    7 to 4         3 steps
    4 to 35       31 steps
    35 to 40       5 steps
    Total         47 steps
Problem? In a heavily loaded system, incoming requests with a shorter seek time will constantly push requests with long seek times to the end of the queue. This results in what is called starvation.
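SSTF can be sketched as a greedy loop: repeatedly pick the pending request nearest the current head position. For this request set it yields 47 steps, roughly a third of the FCFS total.

```python
# Shortest Seek Time First: greedily serve the nearest pending request.
def sstf_order(start, requests):
    pending = list(requests)
    pos, order, steps = start, [], 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))
        steps += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
        order.append(nearest)
    return order, steps

order, steps = sstf_order(15, [4, 40, 11, 35, 7, 14])
print(order, steps)  # [14, 11, 7, 4, 35, 40] 47
```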
The Elevator Algorithm (scan-look)
Search for the shortest seek time from the current position in one direction only. Continue in this direction until all requests in it have been satisfied, then go the opposite direction. In the scan algorithm, the head moves all the way to the first (or last) track with a request before it changes direction.
Scan-Look
While at track 15 with read requests for tracks 4, 40, 11, 35, 7, and 14, and the head moving toward higher-numbered tracks:
    Head Path    Tracks Traveled
    15 to 35      20 steps
    35 to 40       5 steps
    40 to 14      26 steps
    14 to 11       3 steps
    11 to 7        4 steps
    7 to 4         3 steps
    Total         61 steps
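The elevator pass can be sketched by sorting the requests into an upward run and a downward run; this is the look variant, which reverses at the last pending request rather than at the edge of the disk.

```python
# Elevator (scan-look): serve everything in the current direction, then reverse.
def look_path(start, requests, upward=True):
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    path = up + down if upward else down + up
    steps, pos = 0, start
    for track in path:
        steps += abs(track - pos)
        pos = track
    return path, steps

path, steps = look_path(15, [4, 40, 11, 35, 7, 14])
print(path, steps)  # [35, 40, 14, 11, 7, 4] 61
```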
Which algorithm would you choose if you were implementing an operating system? Issues to consider when selecting a disk scheduling algorithm:
• Performance depends on the number and types of requests.
• What scheme is used to allocate unused disk blocks?
• How and where are directories and i-nodes stored?
• How does paging impact disk performance?
• How does disk caching impact performance?
Disk Cache
The disk cache holds a number of disk sectors in memory. When an I/O request is made for a particular sector, the disk cache is checked first. If the sector is in the cache, it is read from there; otherwise, the sector is read from the disk into the cache.
Replacement Strategies
• Least Recently Used: replace the sector that has been in the cache the longest without being referenced.
• Least Frequently Used: replace the sector that has been referenced the least often.
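A minimal LRU disk-cache sketch can be built on the standard library's OrderedDict. The `DiskCache` class, its capacity, and the fake sector-reading function are all illustrative assumptions, not part of any real kernel interface.

```python
# LRU disk-cache sketch: most recently used sectors live at the end of
# the OrderedDict; eviction pops from the front.
from collections import OrderedDict

class DiskCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.sectors = OrderedDict()            # sector number -> data

    def read(self, sector, read_from_disk):
        if sector in self.sectors:
            self.sectors.move_to_end(sector)    # mark most recently used
            return self.sectors[sector]
        data = read_from_disk(sector)           # cache miss: go to the disk
        self.sectors[sector] = data
        if len(self.sectors) > self.capacity:
            self.sectors.popitem(last=False)    # evict least recently used
        return data

cache = DiskCache(capacity=2)
fake_disk = lambda s: f"data-{s}"
cache.read(1, fake_disk); cache.read(2, fake_disk)
cache.read(1, fake_disk)       # touch sector 1
cache.read(3, fake_disk)       # evicts sector 2, the LRU entry
print(list(cache.sectors))     # [1, 3]
```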
RAID: Redundant Array of Independent Disks
• Improve performance
• Add reliability
RAID Level 0: Striping
The logical disk is divided into strips (strip 0, strip 1, strip 2, …). The disk management software distributes consecutive strips round-robin across the physical drives, so one stripe consists of one strip on each drive (e.g., with two drives, strips 0 and 1 form a stripe, strips 2 and 3 the next). Consecutive strips can then be read or written in parallel.
RAID Level 1: Mirroring (High Reliability)
Strips are distributed across the primary drives as in RAID 0, and every strip is also duplicated on a mirror drive (e.g., drives 1 and 2 hold the striped data while drives 3 and 4 hold identical copies). A single drive failure loses no data, at the cost of doubling the number of drives.
RAID Level 3: Parity (High Throughput)
Data strips are striped across the data drives, and a dedicated drive holds a parity strip for each stripe (par-a for strips 0–2, par-b for strips 3–5, and so on). If any single drive fails, its contents can be reconstructed from the remaining strips and the parity.
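The parity reconstruction works because XOR is its own inverse: the parity strip is the XOR of the data strips, so XORing the surviving strips with the parity regenerates the lost one. The three-data-drive layout and byte values below are illustrative.

```python
# RAID parity in miniature: parity = XOR of the data strips, and any
# single lost strip can be rebuilt from the survivors.
strip0 = bytes([0x12, 0x34])
strip1 = bytes([0xAB, 0xCD])
strip2 = bytes([0x0F, 0xF0])

def xor_strips(a, b, c):
    # byte-wise XOR of three equal-length strips
    return bytes(x ^ y ^ z for x, y, z in zip(a, b, c))

parity = xor_strips(strip0, strip1, strip2)

# Suppose the drive holding strip1 fails: rebuild it from the rest + parity.
rebuilt = xor_strips(strip0, strip2, parity)
print(rebuilt == strip1)  # True
```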
Suppose that 3 processes, p1, p2, and p3, are attempting to concurrently use a machine with interrupt-driven I/O. Assuming that no two processes can be using the CPU or the physical device at the same time, what is the minimum amount of time required to execute the three processes, given the following (ignore context switches)?
    Process   Time_compute   Time_device
    1         10             50
    2         30             10
    3         15             35
    Process   Time_compute   Time_device
    1         10             50
    2         30             10
    3         15             35
One schedule that achieves the minimum runs the compute phases in the order p1, p3, p2:
    p1 computes 0–10, uses the device 10–60
    p3 computes 10–25, uses the device 60–95
    p2 computes 25–55, uses the device 95–105
The device is kept busy from t = 10 to t = 105, so the minimum total time is 105.
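This schedule can be checked mechanically. The sketch below models each job as a compute phase on the CPU followed by an I/O phase on the device, with neither resource shared; the `makespan` helper is my own naming, not from the slides.

```python
# Compute the finish time of a given job order on one CPU + one device,
# where each job computes first and then does I/O.
def makespan(order, times):
    cpu_free = dev_free = 0
    for job in order:
        compute, device = times[job]
        cpu_free += compute                  # CPU runs compute phases back to back
        start_io = max(cpu_free, dev_free)   # the device may still be busy
        dev_free = start_io + device
    return max(cpu_free, dev_free)

times = {"p1": (10, 50), "p2": (30, 10), "p3": (15, 35)}
print(makespan(["p1", "p3", "p2"], times))  # 105
```

Starting with the shortest compute phase (p1) gets the device working as early as possible, which is why this ordering wins.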
Consider the case where the device controller is double buffering I/O: while the process is reading a character from one buffer, the device is writing to the second.
What is the effect on the running time of the process if the process is I/O bound and requests characters faster than the device can provide them?
The process reads from buffer A. It tries to read from buffer B, but the device is still filling it, so the process blocks until the data has been stored in buffer B. The process wakes up and reads the data, then tries to read buffer A again. Double buffering has not helped performance.
Now suppose the process is compute bound and requests characters much more slowly than the device can provide them.
The process reads from buffer A and then computes for a long time. Meanwhile, buffer B is filled, so when the process asks for the data it is already there. The process does not have to wait, and performance improves.
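Both cases fall out of a small timing model of double buffering. This is my own sketch, not from the slides: the device needs `fill` time units per buffer, the process needs `consume` units, and with two buffers the device can fill one while the process drains the other.

```python
# Toy double-buffering model: returns the time at which the process has
# consumed all n buffers of data.
def double_buffered_time(n, fill, consume):
    dev_done = fill      # the first buffer is ready at t = fill
    proc_done = 0
    for _ in range(n):
        start = max(proc_done, dev_done)     # block if data is not ready yet
        proc_done = start + consume          # process drains one buffer
        dev_done = max(dev_done, start) + fill   # device fills the freed buffer
    return proc_done

# I/O-bound process (consume < fill): still limited by the device.
print(double_buffered_time(10, fill=5, consume=2))  # 52
# Compute-bound process (consume > fill): never waits after the first fill.
print(double_buffered_time(10, fill=2, consume=5))  # 52
```

In the first case the total is dominated by n × fill (the device rate); in the second by n × consume (the compute rate), matching the two conclusions above.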
Suppose that the read/write head is at track 97, moving toward the highest-numbered track on the disk, track 199. The disk request queue contains read/write requests for blocks on tracks 84, 155, 103, 96, and 197, respectively. How many tracks must the head step across using a FCFS strategy?
FCFS:
    Head Path     Tracks Traveled
    97 to 84       13 steps
    84 to 155      71 steps
    155 to 103     52 steps
    103 to 96       7 steps
    96 to 197     101 steps
    Total         244 steps
How many tracks must the head step across using an elevator strategy?
Elevator:
    Head Path     Tracks Traveled
    97 to 103       6 steps
    103 to 155     52 steps
    155 to 197     42 steps
    197 to 199      2 steps
    199 to 96     103 steps
    96 to 84       12 steps
    Total         217 steps
In our class discussion on directories, it was suggested that directory entries are stored as a linear list. What is the big disadvantage of storing directory entries this way, and how could you address this problem?
Consider what happens when you look up a file: the directory must be searched linearly, so lookup time grows with the number of entries. Storing the entries in a hash table (or a sorted structure such as a B-tree) makes lookups much faster.
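The contrast can be sketched in a few lines. The entry format (name mapped to an i-node number) and the sample names are illustrative assumptions.

```python
# Linear-list directory vs. hash-table directory (name -> i-node number).
linear_directory = [("readme.txt", 12), ("notes.md", 47), ("a.out", 93)]

def linear_lookup(name):
    for entry_name, inode in linear_directory:   # O(n): scan every entry
        if entry_name == name:
            return inode
    return None

# Same entries in a hash table: O(1) lookup on average.
hashed_directory = dict(linear_directory)

print(linear_lookup("a.out"), hashed_directory["a.out"])  # 93 93
```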