
TK 6123 COMPUTER ORGANISATION & ARCHITECTURE


Presentation Transcript


  1. TK 6123 COMPUTER ORGANISATION & ARCHITECTURE Lecture 12: The Internal Operating Systems

  2. FUNDAMENTAL OS REQUIREMENTS • The OS must divide up the space in memory, load one or more programs into that space, and then execute those programs, giving each program sufficient time to complete. • It also provides file, I/O, and other services to each of the programs. • When not providing services, it sits idle.

  3. FUNDAMENTAL OS REQUIREMENTS • The challenge is that multiple programs are each sharing resources: • memory, • I/O, and • CPU time. • Thus, the OS must provide additional support functions that allocate each program its fair share of memory, CPU time, and I/O resource time when needed. • It must also isolate and protect each program, yet allow programs to share data and communicate when required.

  4. Processes and Threads • A process is defined as a program, together with all the resources that are associated with that program as it is executed. • Jobs, tasks, and processes: • When a job is admitted to the system, a process is created for the job. • Each of the tasks within the job also represents a process that will be created as each step in the job is executed.

  5. PROCESSES AND THREADS • Processes that do not need to interact with any other processes are known as independent processes. • In modern systems, many processes will work together. • They will share information and files. • Processes that work together are known as cooperating processes. • The OS provides mechanisms for synchronizing and communicating between processes that are related in some way: • e.g., a semaphore, a nonnegative integer variable used for signalling (see the sketch below).
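To make the semaphore mechanism concrete, here is a minimal sketch using the POSIX semaphore API (sem_init, sem_wait, sem_post); the slide names no particular API, so the worker threads and the shared counter are invented for illustration.

```c
/* Minimal sketch: two workers synchronised with a POSIX semaphore
 * (a nonnegative integer, as described above). Compile with -pthread. */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;              /* binary semaphore guarding the counter */
static int shared_counter = 0;   /* illustrative shared data */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);        /* decrement; blocks while value is 0 */
        shared_counter++;        /* critical section */
        sem_post(&mutex);        /* increment; may wake a blocked waiter */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&mutex, 0, 1);      /* initial value 1: one worker at a time */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", shared_counter);   /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}
```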

  6. PROCESSES AND THREADS • To keep track of each of the different processes that are executing concurrently in memory, the OS creates and maintains a block of data for each process in the system - a process control block (PCB). • The PCB contains all relevant information about the process.
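The slide does not enumerate the PCB's contents, so the C struct below is a hypothetical sketch of the kinds of fields a PCB typically records; real kernels (Linux's task_struct, for instance) hold far more state.

```c
/* Hypothetical sketch of a process control block (PCB). The fields
 * illustrate the categories of information the text describes. */
enum proc_state { READY, RUNNING, BLOCKED };

struct pcb {
    int             pid;            /* unique process ID */
    enum proc_state state;          /* ready, running, or blocked */
    unsigned long   pc;             /* saved program counter */
    unsigned long   registers[16];  /* saved general-purpose registers */
    unsigned long   page_table;     /* base address of the page table */
    int             priority;       /* scheduling priority */
    int             open_files[16]; /* handles to open files */
    struct pcb     *parent;         /* spawning (parent) process */
};
```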

  7. Process Creation • Since any executing program is a process, almost any command that you type into a multitasking interactive system normally creates a process. • Creating a new process from an older one is commonly called forking or spawning. • The spawning process is called a parent. • The spawned process is known as a child.
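A minimal sketch of spawning with the POSIX fork() call, which matches the slide's terminology directly: the parent creates a child, and each branch of the code sees a different return value. The printed messages are illustrative.

```c
/* Parent spawns a child with fork(); each process runs the same code
 * but takes a different branch. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* duplicate the calling process */
    if (pid == 0) {
        /* child: a new process with its own ID and PCB */
        printf("child: pid=%d parent=%d\n", getpid(), getppid());
    } else if (pid > 0) {
        /* parent: fork() returns the child's ID */
        waitpid(pid, NULL, 0);   /* reap the child when it exits */
        printf("parent: spawned child %d\n", pid);
    } else {
        perror("fork");          /* creation failed */
    }
    return 0;
}
```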

  8. Process Creation • Removing a parent process usually kills all the child processes associated with it. • When the process is created, the OS gives it a unique ID, creates a PCB for it, allocates the memory and other initial resources. • When the process exits, its resources are returned to the system pool, and its PCB is removed from the process table.

  9. Process States • Three primary operating states for a process: • Ready state: entered once a process has been created and admitted to the system for execution. A ready process is capable of execution if given access to the CPU. • Running state: entered when the process is given CPU time for execution. Moving from the ready state to the running state is called dispatching the process. Only one process can be in the running state at a time on a uniprocessor system.

  10. Process States • Blocked state: some OSs will suspend a program when I/O or other services are required for the continuation of program execution; • this state transition is known as blocking. • When the I/O operation is complete, the OS moves the process from the blocked state back to the ready state. • This state transition is frequently called wake-up.

  11. Five State Process Model

  12. Process States • Nonpreemptive systems will allow a running process to continue running until it is completed or blocked. • Preemptive systems will limit the time that the program remains in the running state to a fixed length of time corresponding to one or more quanta. • If the process is still in the running state when its time limit expires, the OS will return the process to the ready state to await further processing time. • The transition from the running state to the ready state is known as time-out. • When the process completes execution, control returns to the OS, and the process is destroyed (killed, or terminated).
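Pulling slides 9 through 12 together, the sketch below encodes the four named transitions (dispatching, time-out, blocking, wake-up) as a C function over a state enum; the type and function names are illustrative, not taken from the slides.

```c
/* Sketch of the legal transitions in the three-state model above. */
#include <stdio.h>

enum pstate { READY, RUNNING, BLOCKED };
enum event  { DISPATCH, TIME_OUT, BLOCK, WAKE_UP };

/* Returns the next state, or -1 if the transition is not legal. */
int next_state(enum pstate s, enum event e)
{
    switch (e) {
    case DISPATCH: return s == READY   ? RUNNING : -1; /* dispatching  */
    case TIME_OUT: return s == RUNNING ? READY   : -1; /* preemption   */
    case BLOCK:    return s == RUNNING ? BLOCKED : -1; /* awaiting I/O */
    case WAKE_UP:  return s == BLOCKED ? READY   : -1; /* I/O complete */
    }
    return -1;
}

int main(void)
{
    enum pstate s = READY;
    s = next_state(s, DISPATCH);   /* ready -> running   */
    s = next_state(s, BLOCK);      /* running -> blocked */
    s = next_state(s, WAKE_UP);    /* blocked -> ready   */
    printf("final state: %d (0 = READY)\n", s);
    return 0;
}
```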

  13. Threads • A thread represents a piece of a process that can be executed independently of other parts of the process. • Each thread has its own context, consisting of a program counter (PC) value, register set, and stack space, but shares program code, data, and other system resources, such as open files, with the other threads in the process. • Threads can operate concurrently. • Like processes, threads can be created and destroyed and can be in ready, running, and blocked states.
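A minimal sketch of two POSIX threads in one process: each run() invocation gets a private stack (and hence a private idx), while the results array in the data segment is shared, matching the sharing rules described above. Names are illustrative; compile with -pthread.

```c
#include <stdio.h>
#include <pthread.h>

static int results[2];       /* data segment: shared by all threads */

static void *run(void *arg)
{
    int idx = *(int *)arg;    /* copy lives on this thread's own stack */
    results[idx] = idx + 10;  /* both threads see the same array */
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int i0 = 0, i1 = 1;
    pthread_create(&t0, NULL, run, &i0);
    pthread_create(&t1, NULL, run, &i1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("results: %d %d\n", results[0], results[1]);  /* 10 11 */
    return 0;
}
```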

  14. CPU Scheduling • Provides mechanisms for the acceptance of processes into the system and for the actual allocation of CPU time to execute those processes. • Separated into two different phases: • The high-level, or long-term, scheduler is responsible for admitting processes to the system. • The dispatcher provides short-term scheduling, specifically, the instant-by-instant decision as to which one of the processes that are ready should be given CPU execution time.

  15. High-level Scheduler • Determines which processes are to be admitted to the system. • Once submitted, a job becomes a process for the short-term scheduler. • The criteria for admission of processes to a batch system are usually based on: • priorities, and • the balancing of resources, although some systems simply use a first-come, first-served algorithm.

  16. Short-Term Scheduler: Dispatching • Dispatcher • Makes the fine-grained decision of which job to execute next • i.e., which job actually gets to use the processor in the next time slot

  17. Dispatching • Some conditions that might cause a process to give up the CPU: • voluntary (e.g., blocking for I/O), or • involuntary (e.g., preemption at time-out). • The dispatcher aims to select the next candidate in such a way as to optimize system use. • Processes vary in their requirements: • long/short CPU execution time, • many/few resources required, • varying ratios of CPU to I/O execution time. • Different scheduling algorithms favor different types of processes or threads and meet different optimization criteria.

  18. Dispatching • The choice of scheduling algorithm depends on the optimization objective(s) and the expected mix of process types. Some of the objectives are: • Ensure fairness • Maximize throughput • Maximize CPU utilization • Maximize resource allocation • Minimize response time • Provide consistent response time • Prevent starvation • Starvation (indefinite postponement) is a situation that occurs when a process is never given the CPU time that it needs to execute. It is important that the algorithm selected not permit starvation to occur.

  19. Nonpreemptive Dispatch Algorithms • FIRST-IN, FIRST-OUT (FIFO) • Processes are executed in the order in which they arrive. • Starvation cannot occur with this method, and the method is certainly fair in a general sense; • however, it fails to meet other objectives. • SHORTEST JOB FIRST (SJF) • Maximizes throughput by selecting jobs that require only a small amount of CPU time. • Since short jobs will be pushed ahead of longer jobs, starvation is possible. • Turnaround time is particularly inconsistent. • (The two algorithms are compared in the sketch below.)
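A small sketch comparing the two algorithms: it computes average turnaround time for a batch of jobs that all arrive at time 0, first in arrival order (FIFO) and then sorted shortest-first (SJF). The burst times are invented for illustration.

```c
/* Average turnaround time under FIFO vs. SJF, all jobs arriving at t=0. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

static double avg_turnaround(const int *burst, int n)
{
    double finish = 0, total = 0;
    for (int i = 0; i < n; i++) {
        finish += burst[i];   /* job i finishes after all earlier jobs */
        total  += finish;     /* turnaround = finish time - arrival (0) */
    }
    return total / n;
}

int main(void)
{
    int jobs[] = { 24, 3, 3 };    /* CPU bursts in arbitrary time units */
    int n = 3;

    printf("FIFO: %.1f\n", avg_turnaround(jobs, n));   /* 24,27,30 -> 27.0 */

    qsort(jobs, n, sizeof jobs[0], cmp);               /* shortest first */
    printf("SJF:  %.1f\n", avg_turnaround(jobs, n));   /* 3,6,30 -> 13.0 */
    return 0;
}
```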

  20. Nonpreemptive Dispatch Algorithms • PRIORITY SCHEDULING • The dispatcher will assign the CPU to the job with the highest priority. • If there are multiple jobs with the same priority, the dispatcher will select among them on a FIFO basis.

  21. Preemptive Dispatch Algorithms • ROUND ROBIN • Gives each process a quantum of CPU time. • An uncompleted process is returned to the back of the ready queue after each quantum. • It is simple and inherently fair. • Shorter jobs get processed quickly: • reasonably good at maximizing throughput. • Does not attempt to balance the system resources: • it penalizes processes that use I/O resources, by forcing them to reenter the ready queue.
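A minimal sketch of the round-robin idea under the stated rules: each pass gives every unfinished job at most one quantum, and unfinished jobs wait for the next pass. The quantum and burst values are invented for illustration.

```c
/* Round-robin dispatch: one quantum per unfinished job per pass. */
#include <stdio.h>

int main(void)
{
    int remaining[] = { 7, 3, 5 };   /* CPU time still needed per job */
    int n = 3, quantum = 2, clock = 0, left = 3;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;       /* job already done */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;                        /* job i runs */
            remaining[i] -= slice;
            if (remaining[i] == 0) {               /* exit vs. time-out */
                printf("job %d done at t=%d\n", i, clock);
                left--;
            }
        }
    }
    return 0;   /* jobs finish at t=9, t=14, t=15 with these values */
}
```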

  22. Preemptive Dispatch Algorithms • MULTILEVEL FEEDBACK QUEUES • Attempts to combine some of the best features of several different algorithms. • This algorithm favors • short jobs and I/O-bound jobs (resulting in good resource utilization). • It provides high throughput, with reasonably consistent response time.

  23. Preemptive Dispatch Algorithms • MULTILEVEL FEEDBACK QUEUES • The dispatcher provides a number of queues. • A process initially enters the queue at the top level (top priority). • Short processes will complete at this point. • Many I/O-bound processes will be quickly initialized and sent off for I/O. • Processes that are not completed are sent to a second-level queue. • Processes in the second-level queue receive time only when the first-level queue is empty. • Although starvation is possible, it is unlikely, because new processes pass through the first queue so quickly.
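A highly simplified sketch of the multilevel feedback idea described above, assuming no new arrivals while the queues drain: a job enters the top queue, runs for that level's quantum, and is demoted one level if unfinished. Queue counts, quanta, and bursts are all invented for illustration.

```c
#include <stdio.h>

#define LEVELS  3
#define MAXJOBS 16

int main(void)
{
    int quantum[LEVELS] = { 1, 2, 4 };   /* longer slices at lower levels */
    int queue[LEVELS][MAXJOBS], count[LEVELS] = { 0 };

    /* all jobs (CPU time needed) start in the top-level queue */
    int bursts[] = { 1, 6, 3 };
    for (int i = 0; i < 3; i++) queue[0][count[0]++] = bursts[i];

    /* lower queues run only when the ones above have emptied */
    for (int lvl = 0; lvl < LEVELS; lvl++) {
        for (int i = 0; i < count[lvl]; i++) {
            int rem = queue[lvl][i] - quantum[lvl];   /* run one quantum */
            if (rem <= 0)
                printf("job finished at level %d\n", lvl);
            else if (lvl + 1 < LEVELS)
                queue[lvl + 1][count[lvl + 1]++] = rem;  /* demote */
        }
    }
    return 0;   /* the 1-unit job finishes at the top level, as the slide says */
}
```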

  24. Preemptive Dispatch Algorithms • DYNAMIC PRIORITY SCHEDULING • Windows 2000 and Linux use a dynamic priority algorithm as their primary criterion for dispatch selection. • The algorithms on both systems adjust a process's priority based on its use of resources.

  25. Memory Management • The goals are: • to make it as simple as possible for programs to find space, so that they may be loaded and executed; • to maximize the use of memory, that is, to waste as little memory as possible. • There may be more programs than can possibly fit into the given amount of physical memory space. • Even a single program may be too large to fit the amount of memory provided.

  26. Memory Management • Uni-program • Memory split into two • One for Operating System (monitor) • One for currently executing program • Multi-program • “User” part is sub-divided and shared among active processes

  27. Memory Management • Traditional memory management: • Memory partitioning • Overlays • Virtual storage: • Paging • Page replacement algorithms • Thrashing • Segmentation

  28. Memory Management : Partitioning • Splitting memory into sections to allocate to processes (including the Operating System) • Fixed-sized partitions • May not be of equal size • A process is fitted into the smallest hole that will take it (best fit; see the sketch below) • Some memory is wasted • Leads to variable-sized partitions
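A minimal sketch of the best-fit rule named in the list above: scan the fixed partitions and choose the smallest free one that still holds the process; the leftover space inside the chosen partition is the wasted memory. Partition sizes are invented for illustration.

```c
/* Best-fit placement over fixed partitions. */
#include <stdio.h>

int best_fit(const int *part, const int *used, int n, int need)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (used[i] || part[i] < need) continue;   /* occupied or too small */
        if (best == -1 || part[i] < part[best])
            best = i;                              /* smaller hole wins */
    }
    return best;                                   /* -1: no fit found */
}

int main(void)
{
    int part[] = { 100, 500, 200, 300 };   /* partition sizes in KB */
    int used[] = { 0, 0, 0, 0 };
    int i = best_fit(part, used, 4, 212);  /* process needs 212 KB */
    if (i >= 0) {
        used[i] = 1;
        printf("placed in partition %d (%d KB, %d KB wasted)\n",
               i, part[i], part[i] - 212); /* -> partition 3, 88 KB wasted */
    }
    return 0;
}
```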

  29. Fixed Partitioning

  30. Variable Sized Partitions (1) • Allocate exactly the required memory to a process • This leads to a hole at the end of memory, too small to use • Only one small hole - less waste • When all processes are blocked, swap out a process and bring in another • The new process may be smaller than the swapped-out process • Another hole results

  31. Variable Sized Partitions (2) • Eventually there are lots of holes (fragmentation) • Solutions: • Coalesce - join adjacent holes into one large hole • Compaction - from time to time, go through memory and move all holes into one free block (c.f. disk de-fragmentation)

  32. Effect of Dynamic Partitioning

  33. Internal and External Fragmentation

  34. Memory Management : Overlays • If a program does not fit into any available partition, it must be divided into small logical pieces, called overlays, for execution. • Each piece must be smaller than the allocated memory space.

  35. Memory Management : Overlays • Most systems do not allow the use of multiple partitions by a single program: • an alternative is to load individual pieces as they are actually needed for execution. • Disadvantage: • an overlaid program cannot take advantage of more memory if it is available, since the overlays are designed to fit into a specific, given amount of memory.

  36. Virtual Storage • Virtual storage can be used to store a large number of programs in a small amount of physical memory: • it makes it appear that the computer has more memory than is physically present. • So we can now run processes that are bigger than the total memory available! • The MMU (Memory Management Unit) is the device that maps virtual addresses to physical addresses.

  37. Virtual Storage • Virtual storage is an important technique for the effective use of memory in a multitasking system. • To translate a virtual/logical address to a physical address with paging: • the virtual address is separated into a page number and an offset; • a lookup in a page table translates, or maps, the virtual memory reference into a physical memory location consisting of a corresponding frame number and the same offset. • Logical address: relative to the beginning of the program. • Physical address: actual location in memory. • The conversion is automatic, using a base address.
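A minimal sketch of the translation just described, assuming 4 KB pages so that the offset is the low 12 bits; the page-table layout (a flat array of frame numbers, -1 for an absent page) is a simplification for illustration.

```c
/* Split a virtual address into page number and offset, map the page
 * through the page table, and recombine with the same offset. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12   /* assumed 4 KB pages: offset = low 12 bits */

/* page_table[page] holds a frame number, or -1 if the page is absent. */
long translate(const int *page_table, uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_BITS;              /* page number */
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1); /* same offset */
    int frame = page_table[page];
    if (frame < 0)
        return -1;                                 /* page fault trap */
    return ((long)frame << PAGE_BITS) | offset;    /* frame + offset  */
}

int main(void)
{
    int page_table[16];                   /* 16 pages, as on slide 42 */
    for (int i = 0; i < 16; i++) page_table[i] = -1;
    page_table[5] = 2;                    /* map page 5 -> frame 2 */

    printf("0x5123 -> 0x%lx\n", translate(page_table, 0x5123)); /* 0x2123 */
    printf("0x7000 -> %ld\n",  translate(page_table, 0x7000)); /* -1 */
    return 0;
}
```

With page 5 mapped to frame 2, virtual address 0x5123 keeps its offset (0x123) and comes out as physical address 0x2123; a reference to an unmapped page signals the page-fault case.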

  38. Virtual Storage • Every memory reference in a fetch- execute cycle goes through the same translation process, which is known as dynamic address translation (DAT). • The address that would normally be sent to the memory address register (MAR) is mapped through the page table and then sent to the MAR.

  39. Virtual Storage

  40. Virtual Storage • Each process in a multitasking system has its own virtual memory, and its own page table. • Physical memory is shared among the different processes. • Since all pages are the same size, any page may be placed in any frame in memory. • The frames selected do not have to be contiguous. • Therefore, virtual memory eliminates the need for: • overlay techniques • contiguous program loading (partitioning)

  41. Virtual Storage : Paging • Split memory into equal-sized, small chunks - page frames • Split programs (processes) into equal-sized small chunks - pages • Allocate the required number of page frames to a process • The Operating System maintains a list of free frames • A process does not require contiguous page frames • Use a page table to keep track

  42. Implementation of Paging (1) The first 64 KB of virtual address space is divided into 16 pages, with each page being 4 KB. The virtual address space is broken up into a number of equal-sized pages. Page sizes typically range from 512 bytes to 64 KB per page (always a power of 2).

  43. Implementation of Paging(2) A 32 KB main memory divided up into eight page frames of 4 KB each. The physical address space is broken up into pieces in a similar way (each being the same size as a page).

  44. Allocation of Free Frames

  45. Logical and Physical Addresses - Paging

  46. Virtual Storage : Paging • To execute a program instruction or access data, two requirements must be met: • The instruction or data must be in memory. • The page table for that program must contain an entry that maps the virtual address to the physical location. • If a page table entry is missing when the memory management hardware attempts to access it, • the CPU hardware causes a special type of interrupt called a page fault or a page fault trap.

  47. Virtual Storage : Paging • Page fault • Required page is not in memory • Operating System must swap in required page • May need to swap out a page to make space • Select page to throw out based on recent history

  48. Virtual Storage : Paging • Demand paging • Does not require all pages of a process to be in memory • Brings in pages as required, i.e., swaps a page in as the result of a page fault. • Most systems use demand paging.

  49. Virtual Storage : Paging • A few systems attempt to anticipate page needs before they occur, so that a page is swapped in before it is needed: • prepaging. • Such systems have not been very successful at accurately predicting the future page needs of programs.

  50. Virtual Storage : Page Replacement Algorithms • FIFO • The oldest page remaining in the page table is selected for replacement. • LEAST RECENTLY USED (LRU) • Replaces the page that has not been used for the longest time (see the sketch below). • LEAST FREQUENTLY USED (LFU) • Replaces the page that has been used least frequently. • NOT USED RECENTLY (NUR) • A simplification of the LRU algorithm. • Replaces a page that has not been used for a while.
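A minimal sketch of the LRU policy over a tiny set of frames: every reference either refreshes a resident page's timestamp or, when memory is full, evicts the page whose last use is oldest. The reference string and frame count are invented for illustration.

```c
/* Least-recently-used page replacement over NFRAMES frames. */
#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int frame[NFRAMES], last_use[NFRAMES], used = 0, faults = 0;
    int refs[] = { 1, 2, 3, 1, 4, 2 };          /* page reference string */

    for (int t = 0; t < 6; t++) {
        int page = refs[t], hit = -1;
        for (int i = 0; i < used; i++)
            if (frame[i] == page) hit = i;
        if (hit >= 0) {
            last_use[hit] = t;                  /* refresh on a hit */
            continue;
        }
        faults++;
        if (used < NFRAMES) {
            frame[used] = page;                 /* a free frame is available */
            last_use[used++] = t;
        } else {
            int victim = 0;                     /* evict the oldest last use */
            for (int i = 1; i < NFRAMES; i++)
                if (last_use[i] < last_use[victim]) victim = i;
            printf("evict page %d for page %d\n", frame[victim], page);
            frame[victim] = page;
            last_use[victim] = t;
        }
    }
    printf("page faults: %d\n", faults);        /* 5 with this string */
    return 0;
}
```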
