Chapter Seven : Device Management
Outline: System Devices • Sequential Access Storage Media • Direct Access Storage Devices • Components of the I/O Subsystem • Communication Among Devices • Management of I/O Requests • Paper Storage Media • Magnetic Tape Storage • Magnetic Disk Storage • Optical Disc Storage
Device Management Functions • Track status of each device (such as tape drives, disk drives, printers, plotters, and terminals). • Use preset policies to determine which process will get a device and for how long. • Allocate the devices. • Deallocate the devices at 2 levels: • At process level when I/O command has been executed & device is temporarily released • At job level when job is finished & device is permanently released.
System Devices • Differences among system’s peripheral devices are a function of characteristics of devices, and how well they’re managed by the Device Manager. • Most important differences among devices • Speeds • Degree of sharability. • By minimizing variances among devices, a system’s overall efficiency can be dramatically improved.
Dedicated Devices • Assigned to only one job at a time and serve that job for the entire time it's active. • E.g., tape drives, printers, and plotters demand this kind of allocation scheme because it would be awkward to share them. • Disadvantage -- must be allocated to a single user for the duration of a job's execution. • Can be quite inefficient, especially when the device isn't used 100% of the time.
Shared Devices • Assigned to several processes. • E.g., disk pack (or other direct access storage device) can be shared by several processes at same time by interleaving their requests. • Interleaving must be carefully controlled by Device Manager. • All conflicts must be resolved based on predetermined policies to decide which request will be handled first.
Virtual Devices • Combination of dedicated devices that have been transformed into shared devices. • E.g., printers are converted into sharable devices through a spooling program that reroutes all print requests to a disk. • Output is sent to the printer for printing only when all of a job's output is complete and the printer is ready to print out the entire document. • Because disks are sharable devices, this technique can convert one printer into several “virtual” printers, thus improving both its performance and use.
Sequential Access Storage Media • Magnetic tape was used for secondary storage on early computer systems; it is now used for routine archiving & storing back-up data. • Records on magnetic tapes are stored serially, one after the other. • Each record can be of any length; the length is usually determined by the application program. • Each record can be identified by its position on the tape. • To access a single record, the tape is mounted & “fast-forwarded” from its beginning until the desired position is located.
Magnetic Tapes • Data is recorded on 8 parallel tracks that run the length of the tape. • A ninth track holds a parity bit used for routine error checking. • The number of characters that can be recorded per inch is determined by the density of the tape (e.g., 1600 or 6250 bpi).
Storing Records on Magnetic Tapes • Records can be stored individually or grouped into blocks. • If stored individually, each record is separated by a space to indicate its starting and ending places. • If blocked, the entire block is preceded by a space and followed by a space, but the individual records are stored sequentially within the block. • Interrecord gap (IRG) -- the gap between records; about 1/2 inch long regardless of the sizes of the records it separates. • Interblock gap (IBG) -- the gap between blocks of records; also 1/2 inch long.
Pros & Cons of Blocking • Fewer I/O operations are needed because a single READ command can move an entire block (physical record that includes several logical records) into main memory. • Less tape is wasted because size of physical record exceeds size of gap. • Overhead and software routines are needed for blocking, deblocking, and record keeping. • Buffer space may be wasted if you need only one logical record but must read an entire block to get it.
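To make the space savings concrete, here is a minimal Python sketch of the tape-length arithmetic; the record count, record size, and density are assumed values for illustration, and only the 1/2-inch gap comes from the slides.

```python
# Hedged sketch: tape length used with and without blocking.
# Record count, record size, and density are illustrative assumptions;
# the slides only fix the gap length at 1/2 inch.

RECORDS = 10            # number of logical records (assumed)
RECORD_BYTES = 100      # size of each logical record (assumed)
DENSITY_BPI = 1600      # bytes recorded per inch of tape
GAP_INCHES = 0.5        # IRG/IBG length from the slides

record_inches = RECORD_BYTES / DENSITY_BPI

# Unblocked: every record is followed by its own interrecord gap (IRG).
unblocked = RECORDS * (record_inches + GAP_INCHES)

# Blocked: all records form one block followed by a single interblock gap (IBG).
blocked = RECORDS * record_inches + GAP_INCHES

print(f"unblocked: {unblocked:.3f} inches of tape")
print(f"blocked:   {blocked:.3f} inches of tape")
```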
Transfer Rates & Speeds • Block size is set to take advantage of the transfer rate. • Transfer rate -- the density of the tape multiplied by the tape transport speed (the speed at which the tape moves): transfer rate = density * transport speed. • If the transport speed is 200 inches per second at 1600 bpi, a total of 320,000 bytes can be transferred in one second. • Theoretically, the optimal size of a block is 320,000 bytes. • The buffer must be of equivalent size.
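A minimal sketch of the transfer-rate formula using the slide's own figures (1600 bpi and 200 inches per second):

```python
# Hedged sketch of the formula on this slide:
#   transfer rate = density * transport speed

DENSITY_BPI = 1600          # bytes per inch (slide example)
TRANSPORT_SPEED_IPS = 200   # inches per second (slide example)

transfer_rate = DENSITY_BPI * TRANSPORT_SPEED_IPS   # bytes per second
print(transfer_rate)        # 320000 -> the theoretically optimal block size
```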
Magnetic Tape Access Times Vary Widely • Maximum access: 2.5 minutes • Average access: 1.25 minutes • Sequential access: 3 milliseconds • This variability makes magnetic tape a poor medium for routine secondary storage except for files with very high sequential activity.
Direct Access Storage Devices (Random Access Storage Devices) • Direct access storage devices (DASDs)-- any devices that can directly read or write to a specific place on a disk. • Two major categories: • DASD with fixed read/write heads • DASD with movable read/write heads. • Although variance in DASD access times isn’t as wide as with magnetic tape, location of specific record still has a direct effect on amount of time required to access it.
Fixed-Head Drums • Magnetically recordable drums. • Resembles a giant coffee can covered with magnetic film and formatted so the tracks run around it. • Data is recorded serially on each track by the read/write head positioned over it. • Fixed-head drums were very fast but also very expensive, and they did not hold as much data as other DASDs.
Fixed Head Disks • Fixed-head disks -- each disk looks like a phonograph album. • Covered with magnetic film that has been formatted, usually on both sides, into concentric circles. • Each circle is a track. Data is recorded serially on each track by the fixed read/write head positioned over it. • One head for each track.
Pros & Cons of Fixed Head Disks • Very fast -- faster than movable-head disks. • High cost. • Reduced storage space compared to a movable-head disk, because tracks must be positioned farther apart to accommodate the width of the read/write heads.
Movable-Head Drums and Disks • Movable-head drums have only a few read/write heads that move from track to track to cover the entire surface of the drum. • The least expensive device has only 1 read/write head for the entire drum. • A more conventional design has several read/write heads that move together. • Movable-head disks have one read/write head that floats over each surface of the disk. • Disks can be individual units (used with many PCs) or part of a disk pack (a stack of disks).
Cylinders • It’s slower to fill a disk pack surface-by-surface than to fill it up track-by-track. • If you fill Track 0 of all surfaces, you’ve got a virtual cylinder of data. • There are as many cylinders as there are tracks. • Cylinders are as tall as the disk pack. • To access any given record, the system needs: • Cylinder number, so the arm can move the read/write heads to it. • Surface number, so the proper read/write head is activated. • Record number, so the read/write head knows when to begin reading or writing.
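As a rough illustration of cylinder/surface/record addressing, here is a hedged sketch that maps a sequential record number onto that three-part address; the disk-pack geometry and the address function are assumptions for illustration only.

```python
# Hedged sketch: mapping a sequential record number to the
# (cylinder, surface, record) address a disk pack needs.
# The geometry below is an illustrative assumption.

CYLINDERS = 200        # tracks per surface = cylinders in the pack
SURFACES = 10          # recordable surfaces in the pack
RECORDS_PER_TRACK = 8  # records on each track

def address(record_number: int) -> tuple[int, int, int]:
    """Fill the pack cylinder by cylinder, one track per surface at a time."""
    records_per_cylinder = SURFACES * RECORDS_PER_TRACK
    cylinder, rest = divmod(record_number, records_per_cylinder)
    surface, record = divmod(rest, RECORDS_PER_TRACK)
    return cylinder, surface, record

print(address(0))    # (0, 0, 0)
print(address(85))   # (1, 0, 5) -> arm has moved to the second cylinder
```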
Optical Disc Storage (CD-ROM) • Optical disc drives use a laser beam to read and write to multi-layered discs. • Optical disc drives work in a manner similar to a magnetic disk drive: a head on an arm moves forward and backward across the disc. • A high-intensity laser beam burns pits (indentations) and lands (flat areas) into the disc to represent ones and zeros, respectively.
Concentric Tracks vs. Spiraling Tracks • Magnetic disk consists of concentric tracks of sectors and it spins at a constant speed (constant angular velocity). • Because sectors at outside of disk spin faster past read/write head than inner sectors, outside sectors are much larger than sectors located near center of disk. • An optical disc consists of a single spiraling track of same-sized sectors running from center to rim of disc. • Allows many more sectors & much more data to fit on optical disc compared to magnetic disk of same size.
Measures of Performance for Optical Disc Drives • Sustained data-transfer rate -- the speed at which massive amounts of data can be read from the disc. • Measured in megabytes per second (MBps). • Crucial for applications requiring sequential access. • Average access time -- the average time required to move the head to a specific place on the disc. • Expressed in milliseconds (ms). • Cache size -- a hardware cache acts as a buffer by transferring blocks of data from the disc. • Anticipates that the user may want to reread some recently retrieved info. • Acts as a read-ahead buffer, looking for the next block of info on the disc.
CD-ROM Technology • CD-ROM -- the first commonly used optical storage DASD. • Stores very large databases, reference works, complex games, large software packages, system documentation, and user training material. • CD-ROM jukeboxes (autochangers or libraries) are capable of handling multiple discs and are networked to distribute multimedia and reference works to distant users.
CD-Recordable Technology (CD-R) • CD-R drives record data on optical discs using a write-once technique. • WORM (write once, read many). • Only a finite amount of data can be recorded on each disc and, once data is written, it can’t be erased or modified. • It has an extremely long shelf life.
CD-Rewritable Technology (CD-RW) • CD-RW drives can read standard CD-ROM, CD-R, and CD-RW discs. • CD-RW discs can be written and rewritten many times by focusing a low-energy laser beam on the surface, heating the media just enough to erase the pits that store data and restoring the recordable media to its original state. • Useful for storing large quantities of data and for sound, graphics, and multimedia applications.
Digital Video Disc (DVD) Technology • DVD uses a red laser to read the disc (holds the equivalent of 13 CD-ROM discs). • By using compression technologies, it has more than enough space to hold a 2-hour movie with enhanced audio. • Single-layered DVDs can hold 4.7 GB. • Double-layered discs can hold 8.5 GB on each side of the disc. • DVDs are used to store music, movies, and multimedia applications. • DVD-RAM is a writable technology that uses a red laser to read, modify, and write data to DVD discs.
Three Factors Contribute To Time Required To Access a File • Seek time -- time required to position the read/write head on the proper track. (Doesn’t apply to devices with fixed read/write heads.) • Slowest of the three factors • Search time (rotational delay) -- time it takes to rotate DASD until requested record is under read/write head. • Transfer time -- when data is actually transferred from secondary storage to main memory. • Fastest.
Access Time For Fixed-Head Devices • Fixed-head devices can access a record by knowing its track number and record number. • The total amount of time required to access data depends on: • Rotational speed, which is constant within each device (although it varies from device to device). • Position of the record relative to the position of the read/write head. • access time = search time (rotational delay) + transfer time (data transfer)
Example of Access Time For Fixed-Head Devices • How long will it take to access a record? • Typically, one complete revolution takes 16.8 ms, so average rotational delay is 8.4 ms. • Data transfer time varies from device to device, but a typical value is 0.00094 ms per byte • size of record dictates this value. • For example, it takes 0.094 ms (almost 0.1 ms) to transfer a record with 100 bytes.
Access Time For Movable-Head Devices • Movable-head DASDs add the time required to move the arm into position over the proper track (seek time). • access time = seek time (arm movement) + search time (rotational delay) + transfer time (data transfer) • Seek time is the longest of the three, and several strategies have been developed to minimize it.
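A minimal sketch that combines both formulas with the example figures from the previous slide (16.8 ms per revolution, 0.00094 ms per byte); the 5 ms seek time in the second call is an assumed value, not one given in the slides.

```python
# Hedged sketch using the example figures from these slides:
# one revolution = 16.8 ms, transfer = 0.00094 ms per byte.

ROTATIONAL_DELAY_MS = 16.8 / 2     # average search time = half a revolution
TRANSFER_MS_PER_BYTE = 0.00094

def access_time(record_bytes: int, seek_ms: float = 0.0) -> float:
    """access time = seek time + search time + transfer time.
    seek_ms is 0 for fixed-head devices; movable-head devices add arm movement."""
    return seek_ms + ROTATIONAL_DELAY_MS + record_bytes * TRANSFER_MS_PER_BYTE

print(access_time(100))               # fixed head, 100-byte record: ~8.494 ms
print(access_time(100, seek_ms=5.0))  # movable head with an assumed 5 ms seek
```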
Components of the I/O Subsystem (figure): the CPU connects to Channel 1 and Channel 2; each channel connects to control units (Control Unit 1 through 4); and each control unit connects to devices (Disk 1 through 5, Tape 1 through 4).
I/O Subsystem : I/O Channel • I/O channels keep up with I/O requests from the CPU and pass them down the line to the appropriate control unit. • Programmable units placed between the CPU and the control units. • Synchronize the fast speed of the CPU with the slow speed of the I/O device. • Make it possible to overlap I/O operations with processor operations so the CPU and I/O can process concurrently. • Use channel programs that specify the actions to be performed by the devices & control the transmission of data between main memory & the control units. • The entire path must be available when an I/O command is initiated.
I/O Subsystem : I/O Control Unit • I/O control unit interprets signal sent by channel. • One signal for each function. • At start of I/O command, info passed from CPU to channel: • I/O command (READ, WRITE, REWIND, etc.) • Channel number • Address of physical record to be transferred (from or to secondary storage) • Starting address of a memory buffer from which or into which record is to be transferred
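A hedged sketch of the information bundle passed from the CPU to the channel at the start of an I/O command; the IORequest class and its field names are illustrative assumptions, not a real channel interface.

```python
# Hedged sketch of the information the CPU passes to the channel when an
# I/O command starts. The class and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class IORequest:
    command: str          # e.g., "READ", "WRITE", "REWIND"
    channel_number: int   # which channel should carry the request
    record_address: int   # address of the physical record on secondary storage
    buffer_address: int   # starting address of the memory buffer to fill/drain

request = IORequest(command="READ", channel_number=1,
                    record_address=0x4F2, buffer_address=0x1000)
print(request)
```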
Device Manager Must • Know which components are busy and which are free. • Be able to accommodate requests that come in during heavy I/O traffic. • Accommodate the disparity of speeds between the CPU and I/O devices. • These problems are addressed by structuring the interaction between units and by “buffering” records & queueing requests.
Communication Among Devices • Each unit in I/O subsystem can finish its operation independently from others. • CPU is free to process data while I/O is being performed, which allows for concurrent processing and I/O. • Success of operation depends on system’s ability to know when device has completed operation. • Uses a hardware flag that must be tested by CPU.
Hardware Flag Used To Communicate When A Device Has Completed An Operation • Made up of three bits. • Each bit represents a component of the I/O subsystem: one each for the channel, control unit, and device. • Resides in the Channel Status Word (CSW), a predefined location in main memory that contains info indicating the status of the channel. • Each bit is changed from zero to one to indicate that the unit has changed from free to busy.
Testing the Flag : Polling or Interrupts • Polling uses a special machine instruction to test flag. • CPU periodically tests the channel status bit (in CSW). • Major disadvantage with this scheme is determining how often the flag should be polled. • If polling is done too frequently, CPU wastes time testing flag just to find out that channel is still busy. • If polling is done too seldom, channel could sit idle for long periods of time.
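A minimal sketch of polling, assuming a three-bit busy flag like the one described above; the bit layout and the simulated hardware are illustrative assumptions, not a real CSW interface.

```python
# Hedged sketch of polling a three-bit busy flag like the one in the CSW.
# The bit layout and the simulated hardware below are illustrative
# assumptions, not a real channel interface.

CHANNEL_BUSY, CONTROL_UNIT_BUSY, DEVICE_BUSY = 0b100, 0b010, 0b001

class FakeHardware:
    """Pretend I/O path that becomes free after a few status reads."""
    def __init__(self) -> None:
        self.reads_until_free = 3

    def status(self) -> int:
        if self.reads_until_free > 0:
            self.reads_until_free -= 1
            return CHANNEL_BUSY | DEVICE_BUSY   # path still busy
        return 0                                # path free

def poll(hw: FakeHardware) -> int:
    """CPU repeatedly tests the flag; each test that finds it busy is wasted work."""
    wasted_tests = 0
    while hw.status() != 0:
        wasted_tests += 1
    return wasted_tests

print(poll(FakeHardware()))   # number of wasted tests before the path freed up
```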
Interrupts • Use of interrupts is a more efficient way to test the flag. • A hardware mechanism does the test as part of every machine instruction executed by the CPU. • If the channel is busy, the flag is set so that execution of the current sequence of instructions is automatically interrupted. • Control is transferred to the interrupt handler, which resides in a predefined location in memory. • Some sophisticated systems are equipped with hardware that can distinguish between several types of interrupts.
Direct Memory Access (DMA) • I/O technique that allows a control unit to access main memory directly. • Once reading or writing begins, remainder of data can be transferred to and from memory without CPU intervention. • To activate this process CPU sends enough info to control unit to initiate transfer of data • Then CPU goes to another task while control unit completes transfer independently. • This mode of data transfer is used for high-speed devices such as disks.
Buffers • Buffers are temporary storage areas residing in convenient locations throughout system: main memory, channels, and control units. • Used extensively to better synchronize movement of data between relatively slow I/O devices & very fast CPU. • Double buffering --2 buffers are present in main memory, channels, and control units. • While one record is being processed by CPU another can be read or written by channel
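A minimal sketch of the double-buffering pattern; the record stream and the channel_read stand-in are assumptions for illustration, and a real channel would fill the idle buffer concurrently rather than in the same loop.

```python
# Hedged sketch of double buffering: while the CPU processes one buffer,
# the channel fills the other. The record source and sizes are assumptions.

records = [f"record-{i}" for i in range(6)]   # pretend input stream

def channel_read(i: int) -> str | None:
    """Stand-in for the channel reading the next record into a buffer."""
    return records[i] if i < len(records) else None

buffers: list[str | None] = [None, None]      # the two main-memory buffers
buffers[0] = channel_read(0)                  # prime the first buffer

i, active = 0, 0
while buffers[active] is not None:
    # The channel reads the *next* record into the other buffer
    # while the CPU "processes" the current one.
    buffers[1 - active] = channel_read(i + 1)
    print("processing", buffers[active])      # CPU work on the active buffer
    i, active = i + 1, 1 - active
```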
Management of I/O Requests • Device Manager divides task into 3 parts, with each handled by specific software component of I/O subsystem. • I/O traffic controller watches status of all devices, control units, and channels. • I/O scheduler implements policies that determine allocation of, and access to, devices, control units, and channels. • I/O device handler performs actual transfer of data and processes the device interrupts.
I/O Traffic Controller • Monitors status of every device, control unit, and channel. • Becomes more complex as number of units in I/O subsystem increases and as number of paths between these units increases. • Three main tasks: (1) it must determine if there’s at least 1 path available; (2) if there’s more than 1 path available, it must determine which to select; and (3) if paths are all busy, it must determine when one will become available. • Maintains a database containing status and connections for each unit in I/O subsystem, grouped into Channel Control Blocks, Control Unit Control Blocks, and Device Control Blocks.
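A hedged sketch of that database as three kinds of control blocks; the field names and the path_available check are illustrative assumptions, since the slides don't spell out the blocks' contents.

```python
# Hedged sketch of the traffic controller's database: one control block per
# channel, control unit, and device. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DeviceCB:
    identification: str
    status: str = "free"                                     # "free" or "busy"
    control_units: list[str] = field(default_factory=list)   # connected CUs

@dataclass
class ControlUnitCB:
    identification: str
    status: str = "free"
    channels: list[str] = field(default_factory=list)        # connected channels
    devices: list[str] = field(default_factory=list)         # connected devices

@dataclass
class ChannelCB:
    identification: str
    status: str = "free"
    control_units: list[str] = field(default_factory=list)

# One possible path: Channel 1 -> Control Unit 1 -> Disk 1
disk1 = DeviceCB("Disk 1", control_units=["Control Unit 1"])
cu1 = ControlUnitCB("Control Unit 1", channels=["Channel 1"], devices=["Disk 1"])
ch1 = ChannelCB("Channel 1", control_units=["Control Unit 1"])

def path_available(ch: ChannelCB, cu: ControlUnitCB, dev: DeviceCB) -> bool:
    """The traffic controller's first task: is at least one full path free?"""
    return ch.status == cu.status == dev.status == "free"

print(path_available(ch1, cu1, disk1))   # True
```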
Traffic Controller Maintains Database For Each Unit In I/O Subsystem
I/O Scheduler • The I/O scheduler performs the same job as the Process Scheduler -- it allocates the devices, control units, and channels. • Under heavy loads, when # requests > # available paths, the I/O scheduler must decide which request is satisfied first. • I/O requests are not preempted: once a channel program has started, it’s allowed to continue to completion even though I/O requests with higher priorities may have entered the queue. • Feasible because channel programs are relatively short (50 to 100 ms).
I/O Scheduler - 2 • Some systems allow the I/O scheduler to give preferential treatment to I/O requests from “high-priority” programs. • If a process has high priority, then its I/O requests also have high priority and are satisfied before other I/O requests with lower priorities. • The I/O scheduler must synchronize its work with the traffic controller to make sure that a path is available to satisfy the selected I/O requests.
I/O Device Handler • I/O device handler processes the I/O interrupts, handles error conditions, and provides detailed scheduling algorithms, which are extremely device dependent. • Each type of I/O device has own device handler algorithm. • first come first served (FCFS) • shortest seek time first (SSTF) • SCAN (including LOOK, N-Step SCAN, C-SCAN, and C-LOOK) • Every scheduling algorithm should : • Minimize arm movement • Minimize mean response time • Minimize variance in response time
First Come First Served (FCFS) Device Scheduling Algorithm • Simplest device-scheduling algorithm: • Easy to program and essentially fair to users. • On average, it doesn’t meet any of the three goals of a seek strategy. • Remember, seek time is most time-consuming of functions performed here, so any algorithm that can minimize it is preferable to FCFS.
Shortest Seek Time First (SSTF) Device Scheduling Algorithm • Uses same underlying philosophy as shortest job next where shortest jobs are processed first & longer jobs wait. • Request with track closest to one being served (that is, one with shortest distance to travel) is next to be satisfied. • Minimizes overall seek time. • Favors easy-to-reach requests and postpones traveling to those that are out of way.
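A minimal sketch comparing total arm movement under FCFS and SSTF; the request queue and starting track are assumed values for illustration.

```python
# Hedged sketch comparing total arm movement under FCFS and SSTF.
# The request queue and starting track are illustrative assumptions.

def fcfs(start: int, requests: list[int]) -> int:
    """Service requests in arrival order; return total tracks traveled."""
    moved, pos = 0, start
    for track in requests:
        moved += abs(track - pos)
        pos = track
    return moved

def sstf(start: int, requests: list[int]) -> int:
    """Always service the pending request closest to the current track."""
    moved, pos, pending = 0, start, list(requests)
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))
        pending.remove(track)
        moved += abs(track - pos)
        pos = track
    return moved

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # assumed request queue
print(fcfs(53, queue))   # 640 tracks of arm movement
print(sstf(53, queue))   # 236 tracks of arm movement
```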
SCAN Device Scheduling Algorithm • SCAN uses a directional bit to indicate whether the arm is moving toward the center of the disk or away from it. • Algorithm moves arm methodically from outer to inner track servicing every request in its path. • When it reaches innermost track it reverses direction and moves toward outer tracks, again servicing every request in its path.
LOOK (Elevator Algorithm) : A Variation of SCAN • Arm doesn’t necessarily go all the way to either edge unless there are requests there. • “Looks” ahead for a request before going to service it. • Eliminates possibility of indefinite postponement of requests in out-of-the-way places—at either edge of disk. • As requests arrive each is incorporated in its proper place in queue and serviced when the arm reaches that track.
Other Variations of SCAN • N-Step SCAN -- holds all requests until the arm starts on its way back. New requests are grouped together for the next sweep. • C-SCAN (Circular SCAN) -- the arm picks up requests on its path during the inward sweep. • When the innermost track has been reached, it returns to the outermost track and starts servicing requests that arrived during the last inward sweep. • Provides a more uniform wait time. • C-LOOK (an optimization of C-SCAN) -- the sweep inward stops at the last high-numbered track request, so the arm doesn't move all the way to the last track unless it's required to do so. • The arm doesn't necessarily return to the lowest-numbered track; it returns only to the lowest-numbered track that's requested.
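A minimal sketch of the service orderings produced by LOOK and C-LOOK for an assumed request queue and starting track; the initial direction (toward higher-numbered tracks) is also an assumption.

```python
# Hedged sketch of LOOK and C-LOOK service orderings for an assumed queue.
# Starting track and initial direction are illustrative assumptions.

def look(start: int, requests: list[int]) -> list[int]:
    """Sweep toward higher-numbered tracks, then reverse; the arm stops
    at the last requested track in each direction."""
    higher = sorted(t for t in requests if t >= start)
    lower = sorted((t for t in requests if t < start), reverse=True)
    return higher + lower

def c_look(start: int, requests: list[int]) -> list[int]:
    """Sweep toward higher-numbered tracks only; then jump back to the
    lowest requested track and sweep upward again."""
    higher = sorted(t for t in requests if t >= start)
    lower = sorted(t for t in requests if t < start)
    return higher + lower

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # assumed request queue
print(look(53, queue))    # [65, 67, 98, 122, 124, 183, 37, 14]
print(c_look(53, queue))  # [65, 67, 98, 122, 124, 183, 14, 37]
```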