
Memory Technology, Virtual Memory



  1. Memory Technology, Virtual Memory Omid Fatemi Advanced Computer Architecture

  2. Quiz on Monday • Appendix C and the first 4 of the 11 advanced cache optimizations (Section 5.2)

  3. 11 Advanced Cache Optimizations • Reducing hit time: small and simple caches, way prediction, trace caches • Increasing cache bandwidth: pipelined caches, multibanked caches, nonblocking caches • Reducing miss penalty: critical word first, merging write buffers • Reducing miss rate: compiler optimizations • Reducing miss penalty or miss rate via parallelism: hardware prefetching, compiler prefetching

  4. Memory • Memory Technology and Optimizations • Virtual Memory and Virtual Machines • AMD Opteron Memory Hierarchy

  5. Core Memory • Core memory was the first large-scale reliable main memory • Invented by Forrester in the late 1940s/early 1950s at MIT for the Whirlwind project • Bits stored as magnetization polarity on small ferrite cores threaded onto a 2-dimensional grid of wires • Coincident current pulses on the X and Y wires would write a cell and also sense its original state (destructive reads) • Robust, non-volatile storage • Used in Space Shuttle computers until recently • Core access time ~1 µs (figure: DEC PDP-8/E board, 4K words x 12 bits, 1968)

  6. Types of Memory • DRAM (Dynamic Random Access Memory) • Cell design needs only 1 transistor per bit stored • Cell charges leak away and may dynamically (over time) drift from their initial levels • Requires periodic refreshing to correct the drift, e.g., every 8 ms • Time spent refreshing is kept to <5% of bandwidth (see the sketch below) • SRAM (Static Random Access Memory) • Cell voltages are statically (unchangingly) tied to power supply references: no drift, no refresh • But needs 4-6 transistors per bit • DRAM vs. SRAM: 4-8x larger capacity, 8-16x slower, 8-16x cheaper per bit
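A back-of-the-envelope check of that <5% figure, as a minimal sketch in C. Only the 8 ms refresh interval comes from the slide; the row count and row-cycle time are assumed for illustration:

```c
#include <stdio.h>

/* Estimate what fraction of a bank's time goes to refresh. */
int main(void) {
    double rows         = 4096;  /* rows per bank (assumed) */
    double row_cycle_ns = 60;    /* one refresh takes about one row cycle (assumed) */
    double retention_ms = 8;     /* refresh interval from the slide */

    double busy_ns  = rows * row_cycle_ns;  /* time spent refreshing per window */
    double total_ns = retention_ms * 1e6;   /* one retention window, in ns */
    printf("refresh overhead: %.2f%%\n", 100.0 * busy_ns / total_ns);  /* ~3.1% */
    return 0;
}
```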

  7. 1-T DRAM Cell (figure: one-transistor dynamic RAM cell; the word line gates an access transistor connecting the bit line to a storage capacitor, built as a FET gate, trench, or stack, with a TiN top electrode at VREF, Ta2O5 dielectric, and W bottom electrode)

  8. Semiconductor Memory, DRAM • Semiconductor memory began to be competitive in the early 1970s • Intel was formed to exploit the market for semiconductor memory • The first commercial DRAM was the Intel 1103: 1 Kbit of storage on a single chip, with charge on a capacitor used to hold each value • Semiconductor memory quickly replaced core in the 1970s • Today (September 2007), 1 GB of DRAM costs < $30 • Individuals can easily afford to fill a 32-bit address space with DRAM (4 GB)

  9. Typical DRAM Organization (figure: the address is multiplexed as a low 14-bit half and a high 14-bit half)
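A minimal sketch of that split, assuming a flat 28-bit cell address; the 14/14 split is from the figure, while which half serves as row versus column is an assumption here:

```c
#include <stdint.h>
#include <stdio.h>

/* Split a 28-bit DRAM cell address into the two 14-bit halves that are
 * multiplexed over the same address pins (row first, then column). */
int main(void) {
    uint32_t addr = 0x0ABCDEF;              /* arbitrary 28-bit address */
    uint32_t row  = (addr >> 14) & 0x3FFF;  /* high 14 bits, sent on RAS */
    uint32_t col  = addr & 0x3FFF;          /* low 14 bits, sent on CAS */
    printf("row %u, col %u\n", row, col);
    return 0;
}
```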

  10. DRAM Architecture (figure: a 2^N-row by 2^M-column array; N+M address bits are split between a row address decoder driving the word lines and a column decoder with sense amplifiers selecting among the bit lines; each memory cell stores one bit, and D data pins lead out) • Bits stored in 2-dimensional arrays on chip • Modern chips have around 4 logical banks on each chip • Each logical bank physically implemented as many smaller arrays

  11. RAS vs. CAS (figure: DRAM bit-cell array) • Address lines: a separate CPU-to-memory bus carries addresses • RAS (Row Access Strobe): first half of the address, sent first • CAS (Column Access Strobe): second half of the address, sent second • Access sequence: 1. RAS selects a row 2. Parallel readout of all row data 3. CAS selects a column to read 4. Selected bit written to memory bus

  12. DRAM Operation Three steps in read/write access to a given bank • Row access (RAS) • decode row address, enable addressed row (often multiple Kbits per row) • bitlines share charge with the storage cell • small change in voltage detected by sense amplifiers, which latch the whole row of bits • sense amplifiers drive bitlines full rail to recharge storage cells • Column access (CAS) • decode column address to select a small number of sense amplifier latches (4, 8, 16, or 32 bits depending on DRAM package) • on read, send latched bits out to chip pins • on write, change the sense amplifier latches, which then charge storage cells to the required value • can perform multiple column accesses on the same row without another row access (burst mode) • Precharge • charges bit lines to a known value; required before the next row access • Each step has a latency of around 10-20 ns in modern DRAMs • Various DRAM standards (DDR, RDRAM) have different ways of encoding the signals for transmission to the DRAM, but all share the same core architecture • Can overlap RAS/CAS/precharge in different banks to increase bandwidth (see the toy model below)
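A toy latency model of those three steps, as a hedged sketch in C. The 15 ns per-step figures are picked from the slide's 10-20 ns range, and the single-open-row policy is a simplification:

```c
#include <stdio.h>

#define T_RAS 15  /* row access, ns (assumed) */
#define T_CAS 15  /* column access, ns (assumed) */
#define T_PRE 15  /* precharge, ns (assumed) */

static int open_row = -1;  /* row currently latched in the sense amplifiers */

/* Latency of one access under an open-row policy: hits in the row buffer
 * pay only CAS (burst mode); misses pay precharge + RAS + CAS. */
static int access_ns(int row) {
    if (row == open_row)
        return T_CAS;
    int t = (open_row >= 0 ? T_PRE : 0) + T_RAS + T_CAS;
    open_row = row;
    return t;
}

int main(void) {
    int rows[] = {3, 3, 3, 7};  /* three accesses to row 3, then a row miss */
    for (int i = 0; i < 4; i++)
        printf("access to row %d: %d ns\n", rows[i], access_ns(rows[i]));
    return 0;
}
```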

  13. Main Memory • Bandwidth: bytes read or written per unit time • Latency, described by: • Access time: delay between initiation and completion of an access (for reads: from presenting the address until the result is ready) • Cycle time: minimum interval between separate requests to memory • Latency matters most for caches; bandwidth for I/O (ch. 6) and multiprocessors (ch. 4) • Bandwidth is easier to improve than latency, which favors larger cache blocks • In the past, the innovation was in how to organize DRAM chips: higher bandwidth via memory banks and wider buses • As capacity per memory chip increases, fewer chips are needed • Now, memory innovations are happening inside the DRAM chips (the bank arithmetic is sketched below)
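A minimal sketch of the memory-bank idea: if one bank accepts a new request only once per cycle time, B independent banks can together accept up to B requests per cycle time. The 60 ns cycle time and 64-byte block are assumed numbers, and conflicts are ignored:

```c
#include <stdio.h>

/* Peak bandwidth of interleaved banks: each bank starts one block transfer
 * per cycle_ns, so peak scales with the bank count (idealized). */
int main(void) {
    double cycle_ns    = 60;  /* per-bank cycle time (assumed) */
    double block_bytes = 64;  /* cache block size (assumed) */
    for (int banks = 1; banks <= 4; banks *= 2)
        printf("%d bank(s): %.2f GB/s peak\n",
               banks, banks * block_bytes / cycle_ns);  /* bytes/ns == GB/s */
    return 0;
}
```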

  14. Some DRAM Trend Data

  15. DRAM Variations • SDRAM (Synchronous DRAM) • DRAM internal operation synchronized by a clock signal provided on the memory bus • Double Data Rate (DDR) uses both clock edges • RDRAM (Rambus Inc. DRAM) • Proprietary DRAM interface technology: on-chip interleaving / multi-bank technology, a high-speed packet-switched (split-transaction) bus interface, byte-wide, synchronous, dual-rate • Licensed to many chip & CPU makers • Higher bandwidth, but costlier than generic SDRAM • DRDRAM ("Direct" RDRAM, 2nd ed. spec.) • Separate row and column address/command buses • Higher bandwidth (18-bit data, more banks, faster clock)

  16. Quest for DRAM Performance • Fast page mode • Add timing signals that allow repeated accesses to the row buffer without another row access time • Such a buffer comes naturally, as each array buffers 1024 to 2048 bits for each access • Synchronous DRAM (SDRAM) • Add a clock signal to the DRAM interface, so that repeated transfers do not bear the overhead of synchronizing with the DRAM controller • Double Data Rate (DDR SDRAM) • Transfer data on both the rising edge and falling edge of the DRAM clock signal, doubling the peak data rate • DDR2 lowers power by dropping the voltage from 2.5 to 1.8 volts and offers higher clock rates: up to 533 MHz • DDR3 drops to 1.5 volts with higher clock rates: up to 800 MHz • These improve bandwidth, not latency

  17. Double-Data Rate (DDR2) DRAM (figure: timing diagram with a 200 MHz clock and Row, Column, Precharge, Row' commands, yielding a 400 Mb/s data rate) [Micron, 256Mb DDR2 SDRAM datasheet]
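The arithmetic behind that figure, as a small sketch; the 200 MHz clock and dual-edge transfer come from the slides, while the 64-bit DIMM bus width is an assumption:

```c
#include <stdio.h>

/* Peak DDR transfer rate: two transfers per clock per data pin,
 * multiplied across the width of the DIMM data bus. */
int main(void) {
    double clock_mhz   = 200;            /* bus clock from the figure */
    double per_pin_mbs = clock_mhz * 2;  /* both edges: 400 Mb/s per pin */
    double bus_bits    = 64;             /* typical DIMM width (assumed) */
    printf("per pin: %.0f Mb/s, DIMM peak: %.1f GB/s\n",
           per_pin_mbs, per_pin_mbs * bus_bits / 8.0 / 1000.0);  /* 3.2 GB/s */
    return 0;
}
```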

  18. DRAM Packaging (figure: a DRAM chip with ~7 clock and control signals, ~12 multiplexed row/column address lines, and a 4-, 8-, 16-, or 32-bit data bus) • DIMM (Dual Inline Memory Module) contains multiple chips arranged in "ranks" • Each rank has clock/control/address signals connected in parallel (sometimes needing buffers to drive the signals to all chips), and its data pins work together to return a wide word • E.g., a rank could implement a 64-bit data bus using 16 x4-bit chips, or using 8 x8-bit chips • A modern DIMM usually has one or two ranks (occasionally 4 if high capacity) • A rank contains the same number of banks as each constituent chip (e.g., 4-8); the chip-count arithmetic is sketched below
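A minimal sketch of that rank composition: chips contribute their data pins side by side until the bus width is covered. The x16 case is an extra illustration beyond the slide's two examples:

```c
#include <stdio.h>

/* Number of chips needed per rank to fill a 64-bit data bus,
 * for common chip data widths. */
int main(void) {
    int bus_bits = 64;
    int widths[] = {4, 8, 16};  /* x4, x8, x16 chips */
    for (int i = 0; i < 3; i++)
        printf("x%-2d chips: %d per rank\n",
               widths[i], bus_bits / widths[i]);
    return 0;
}
```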

  19. ROM and Flash • ROM (Read-Only Memory): nonvolatile, and offers protection since it cannot be rewritten • Flash: nonvolatile RAM • NVRAMs require no power to maintain state • Reading flash approaches DRAM speeds • Writing is 10-100x slower than DRAM • Frequently used for upgradeable embedded software • Used in embedded processors

  20. Flash Memory • Flash memory is a non-volatile computer memory that can be electrically erased (like EEPROM) and reprogrammed • It does not need power to maintain the information stored in the chip • It offers fast read access times and better kinetic shock resistance than hard disks; it is enormously durable, able to withstand intense pressure, extremes of temperature, and immersion in water • It is a specific type of EEPROM that is erased and programmed in large blocks; in early flash, the entire chip had to be erased at once • Example applications include PDAs, digital audio players, digital cameras, and mobile phones • Flash memory stores information in an array of floating-gate transistors called "cells". In traditional single-level cell (SLC) devices, each cell stores only one bit of information. Some newer flash memory, known as multi-level cell (MLC) devices, can store more than one bit per cell by choosing among multiple levels of electrical charge to apply to the floating gates of its cells (see the sketch below) • Two families: NOR and NAND flash memories
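The MLC arithmetic as a tiny sketch: distinguishing N charge levels in a cell yields log2(N) bits per cell; the specific level counts looped over here are illustrative:

```c
#include <math.h>
#include <stdio.h>

/* Bits per cell as a function of distinguishable charge levels:
 * SLC has 2 levels (1 bit); MLC uses 4 or more. Compile with -lm. */
int main(void) {
    for (int levels = 2; levels <= 16; levels *= 2)
        printf("%2d levels -> %.0f bits/cell\n",
               levels, log2((double)levels));
    return 0;
}
```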

  21. Virtual Memory and Virtual Machines

  22. Protection via Virtual Memory • Goal is to make computing systems secure • A process should not be able to observe/modify the content (e.g., credit card details) of another process • A process is a running program plus any state needed to continue running it (in case of a context/process switch) • Architecture and OS must work together • Architecture must limit what a user process can access while allowing an OS process greater access • Architecture must support: • At least two modes: user and kernel/supervisor • Some state read-only to the user process (e.g., protection bits) • Transfer from user mode to supervisor mode (system call) • A way to limit memory accesses, protecting the memory state of a process without having to swap the process to disk on a context switch (see the protection-check sketch below)
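A minimal sketch in C of the per-page permission check that last requirement implies; the PTE bit names and layout are invented for illustration and do not correspond to any particular ISA:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical page-table-entry permission bits (invented layout). */
#define PTE_READ  0x1
#define PTE_WRITE 0x2
#define PTE_USER  0x4  /* page is accessible from user mode at all */

typedef enum { USER_MODE, KERNEL_MODE } cpu_mode_t;

/* Returns 1 if the access is allowed; 0 means the hardware would trap
 * to the OS with a protection fault. */
static int access_ok(uint32_t pte, cpu_mode_t mode, int is_write) {
    if (mode == USER_MODE && !(pte & PTE_USER)) return 0; /* kernel-only page */
    if (is_write && !(pte & PTE_WRITE))         return 0; /* read-only page */
    return (pte & PTE_READ) != 0;
}

int main(void) {
    uint32_t ro_user_page = PTE_READ | PTE_USER;  /* read-only, user-visible */
    printf("user read:  %s\n", access_ok(ro_user_page, USER_MODE, 0) ? "ok" : "trap");
    printf("user write: %s\n", access_ok(ro_user_page, USER_MODE, 1) ? "ok" : "trap");
    return 0;
}
```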

  23. Protection via Virtual Memory • Is virtual memory enough protection? NO • The OS is large, with too many possible bugs; flaws in OSes have led to vulnerabilities that are routinely exploited • Virtual Machines (VMs) for improving protection: a VM monitor has a much smaller code base than an OS • VMs can also be used for: • Managing software • Managing hardware

  24. Introduction to Virtual Machines • VMs were developed in the late 1960s • Remained important in mainframe computing over the years • Largely ignored in single-user computers of the 1980s and 1990s • Recently regained popularity due to: • the increasing importance of isolation and security in modern systems, • failures in the security and reliability of standard operating systems, • the sharing of a single computer among many unrelated users, • and the dramatic increases in raw speed of processors, which make the overhead of VMs more acceptable

  25. What is a Virtual Machine (VM)? • The broadest definition includes all emulation methods that provide a standard software interface, such as the Java VM • "(Operating) System Virtual Machines" provide a complete system-level environment at the binary ISA level • Here we assume the virtual ISA always matches the native hardware ISA • E.g., IBM VM/370, VMware ESX Server, and Xen • Present the illusion that VM users have an entire computer to themselves, including a copy of the OS • A single computer runs multiple VMs and can support multiple, different OSes • On a conventional platform, a single OS "owns" all HW resources • With a VM, multiple OSes all share the HW resources • The underlying HW platform is called the host, and its resources are shared among the guest VMs

  26. Virtual Machine Monitors (VMMs) • A virtual machine monitor (VMM) or hypervisor is the software that supports VMs • The VMM determines how to map virtual resources to physical resources • A physical resource may be time-shared, partitioned, or emulated in software • The VMM is much smaller than a traditional OS; the isolation portion of a VMM is only ≈10,000 lines of code

  27. VMM Overhead? • Depends on the workload • User-level processor-bound programs (e.g., SPEC) have essentially zero virtualization overhead: they run at native speeds since the OS is rarely invoked • I/O-intensive workloads are usually OS-intensive: they execute many system calls and privileged instructions, which can result in high virtualization overhead • For system VMs, the goal of the architecture and VMM is to run almost all instructions directly on the native hardware

  28. Other Uses of VMs • Focus so far was on protection; two other commercially important uses of VMs: • Managing software • VMs provide an abstraction that can run the complete SW stack, even including old OSes like DOS • Typical deployment: some VMs running legacy OSes, many running the current stable OS release, a few testing the next OS release • Managing hardware • VMs allow separate SW stacks to run independently yet share HW, thereby consolidating the number of servers • Some run each application with a compatible version of the OS on separate computers, as the separation helps dependability • A running VM can be migrated to a different computer, either to balance load or to evacuate from failing HW

  29. Requirements of a Virtual Machine Monitor • A VM monitor • Presents a SW interface to guest software, • Isolates the state of guests from each other, and • Protects itself from guest software (including guest OSes) • Guest software should behave on a VM exactly as if running on the native HW • Except for performance-related behavior or limitations of fixed resources shared by multiple VMs • Guest software should not be able to change the allocation of real system resources directly • Hence, the VMM must control everything, even though the guest VM and OS currently running are temporarily using it: access to privileged state, address translation, I/O, exceptions and interrupts, …

  30. Requirements of a Virtual Machine Monitor • The VMM must be at a higher privilege level than the guest VMs, which generally run in user mode • Execution of privileged instructions is handled by the VMM • For example, on a timer interrupt the VMM suspends the currently running guest VM, saves its state, handles the interrupt, determines which guest VM to run next, and then loads its state • Guest VMs that rely on a timer interrupt are provided with a virtual timer and an emulated timer interrupt by the VMM • Requirements of system virtual machines are the same as those for paged virtual memory: • At least 2 processor modes: system and user • A privileged subset of instructions available only in system mode, trapping if executed in user mode • All system resources controllable only via these instructions

  31. ISA Support for Virtual Machines • If VMs are planned for during the design of the ISA, it is easy to reduce both the number of instructions that must be executed by a VMM and the time to emulate them • Since VMs have been considered for desktop/PC server applications only recently, most ISAs were created without virtualization in mind, including the 80x86 and most RISC architectures • The VMM must ensure that the guest system only interacts with virtual resources • A conventional guest OS runs as a user-mode program on top of the VMM • If the guest OS attempts to access or modify information related to HW resources via a privileged instruction (e.g., reading or writing the page table pointer), it will trap to the VMM • If not, the VMM must intercept the instruction and support a virtual version of the sensitive information as the guest OS expects (examples soon; a trap-and-emulate sketch follows)
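A hedged sketch of that trap-and-emulate path in C: the guest's privileged write to the page-table pointer traps, and the VMM updates the guest's virtual copy instead of the real register. The vcpu structure and operation names are invented for illustration:

```c
#include <stdio.h>

/* Invented guest-CPU state: just a (virtual) page table pointer. */
typedef struct { unsigned long guest_pt_ptr; } vcpu_t;

/* Hypothetical privileged operations the guest might attempt. */
enum priv_op { OP_WRITE_PT_PTR, OP_READ_PT_PTR };

/* Called when hardware traps a privileged instruction executed by a
 * guest OS running in user mode: the VMM emulates it on virtual state. */
static void vmm_emulate(vcpu_t *v, enum priv_op op, unsigned long arg) {
    switch (op) {
    case OP_WRITE_PT_PTR:
        v->guest_pt_ptr = arg;  /* guest believes it loaded the real PT base */
        /* a real VMM would also rebuild its shadow page table here */
        break;
    case OP_READ_PT_PTR:
        printf("guest reads PT pointer: %#lx\n", v->guest_pt_ptr);
        break;
    }
}

int main(void) {
    vcpu_t v = {0};
    vmm_emulate(&v, OP_WRITE_PT_PTR, 0x1000);  /* privileged write -> trap -> emulate */
    vmm_emulate(&v, OP_READ_PT_PTR, 0);
    return 0;
}
```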

  32. Impact of VMs on Virtual Memory • How is virtual memory virtualized when each guest OS in every VM manages its own set of page tables? • The VMM separates real and physical memory • Makes real memory a separate, intermediate level between virtual memory and physical memory • Some use the terms virtual memory, physical memory, and machine memory to name the 3 levels • The guest OS maps virtual memory to real memory via its page tables, and VMM page tables map real memory to physical memory • The VMM maintains a shadow page table that maps directly from the guest virtual address space to the physical address space of the HW, rather than paying an extra level of indirection on every memory access (sketched below) • The VMM must trap any attempt by the guest OS to change its page table or to access the page table pointer
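A minimal sketch of the two mappings and their composition into a shadow table, using tiny 4-entry arrays in place of real page tables; all numbers are illustrative:

```c
#include <stdio.h>

#define N 4  /* toy address spaces of 4 pages each */

/* Guest OS page table: guest virtual page -> "real" page. */
static const int guest_pt[N] = {2, 0, 3, 1};
/* VMM page table: "real" page -> physical (machine) page. */
static const int vmm_pt[N] = {1, 3, 0, 2};

int main(void) {
    int shadow[N];  /* shadow table: guest virtual -> physical, directly */
    for (int vp = 0; vp < N; vp++)
        shadow[vp] = vmm_pt[guest_pt[vp]];  /* compose the two mappings */
    for (int vp = 0; vp < N; vp++)
        printf("guest virtual %d -> physical %d\n", vp, shadow[vp]);
    return 0;
}
```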

  33. ISA Support for VMs & Virtual Memory • The IBM 370 architecture added an additional level of indirection that is managed by the VMM • The guest OS keeps its page tables as before, so shadow page tables are unnecessary • To virtualize a software-managed TLB, the VMM manages the real TLB and keeps a copy of the contents of the TLB of each guest VM • Any instruction that accesses the TLB must trap • TLBs with process ID tags can support a mix of entries from different VMs and the VMM, thereby avoiding a flush of the TLB on a VM switch (see the sketch below)
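A minimal sketch of that tagged-TLB idea with an invented entry layout; the point is that the address-space ID lets same-numbered virtual pages from different VMs coexist, so nothing is flushed on a switch:

```c
#include <stdio.h>

/* Invented TLB entry layout: the asid tag identifies the VM/process. */
typedef struct { int valid, asid, vpage, ppage; } tlb_entry_t;

static const tlb_entry_t tlb[] = {
    {1, 1, 5, 9},  /* VM 1: virtual page 5 -> physical page 9 */
    {1, 2, 5, 4},  /* VM 2: same virtual page, different mapping */
};

/* A hit requires matching the current address-space ID, so entries
 * belonging to other VMs are simply ignored rather than flushed. */
static int lookup(int asid, int vpage) {
    for (unsigned i = 0; i < sizeof tlb / sizeof tlb[0]; i++)
        if (tlb[i].valid && tlb[i].asid == asid && tlb[i].vpage == vpage)
            return tlb[i].ppage;
    return -1;  /* TLB miss */
}

int main(void) {
    printf("VM 1, vpage 5 -> %d\n", lookup(1, 5));  /* 9 */
    printf("VM 2, vpage 5 -> %d\n", lookup(2, 5));  /* 4, no flush needed */
    return 0;
}
```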

  34. Impact of I/O on Virtual Memory • I/O is the most difficult part of virtualization, due to: • the increasing number of I/O devices attached to the computer, • the increasing diversity of I/O device types, • the sharing of a real device among multiple VMs, • and the need to support the myriad of device drivers required, especially if different guest OSes are supported on the same VM system • Approach: give each VM generic versions of each type of I/O device driver, and let the VMM handle real I/O • The method for mapping a virtual to a physical I/O device depends on the type of device: • Disks are partitioned by the VMM to create virtual disks for guest VMs • Network interfaces are shared between VMs in short time slices, and the VMM tracks messages for virtual network addresses to ensure that guest VMs receive only their own messages

  35. Example: Xen VM • Xen: open-source system VMM for the 80x86 ISA • Project started at the University of Cambridge, GNU license model • The original vision of a VM is to run an unmodified OS, but that costs significant effort just to keep the guest OS happy • "Paravirtualization": small modifications to the guest OS to simplify virtualization • 3 examples of paravirtualization in Xen: • To avoid flushing the TLB when invoking the VMM, Xen maps itself into the upper 64 MB of the address space of each VM • The guest OS is allowed to allocate pages; Xen just checks that it doesn't violate protection restrictions • To protect the guest OS from user programs in the VM, Xen takes advantage of the 4 protection levels available in the 80x86 • Most OSes for the 80x86 keep everything at privilege level 0 or 3 • The Xen VMM runs at the highest privilege level (0) • The guest OS runs at the next level (1) • Applications run at the lowest privilege level (3)

  36. Xen changes for paravirtualization • The port of Linux to Xen changed ≈3000 lines, or ≈1% of the 80x86-specific code • Does not affect the application binary interface of the guest OS • OSes supported in Xen 2.0: http://wiki.xensource.com/xenwiki/OSCompatibility

  37. Xen and I/O • To simplify I/O, privileged VMs are assigned to each hardware I/O device: "driver domains" • Xen jargon: "domains" = virtual machines • Driver domains run the physical device drivers, although interrupts are still handled by the VMM before being sent to the appropriate driver domain • Regular VMs ("guest domains") run simple virtual device drivers that communicate with the physical device drivers in the driver domains over a channel to access the physical I/O hardware • Data is sent between guest and driver domains by page remapping

  38. Xen Performance (figure: performance relative to native Linux for 6 benchmarks, from the Xen developers)

  39. Xen Performance • A subsequent study noticed that the Xen experiments were based on a single Ethernet network interface card (NIC), and that the single NIC was a performance bottleneck

  40. Xen Performance • > 2X instructions for guest VM + driver VM • > 4X L2 cache misses • 12X-24X data TLB misses

  41. Xen Performance • > 2X instructions: page remapping and page transfer between driver and guest VMs, and communication between the 2 VMs over a channel • 4X L2 cache misses: Linux uses a zero-copy network interface that depends on the ability of the NIC to do DMA from different locations in memory • Since Xen does not support "gather DMA" in its virtual network interface, it can't do true zero-copy in the guest VM • 12X-24X data TLB misses: 2 Linux optimizations are missing • Superpages for part of the Linux kernel space: one 4 MB page lowers TLB misses versus using 1024 4 KB pages (see the arithmetic below); not in Xen • PTEs marked global are not flushed on a context switch, and Linux uses them for its kernel space; not in Xen • Future Xen may address the 2nd and 3rd causes, but the 1st is inherent?
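The superpage arithmetic from that bullet, as a one-line check:

```c
#include <stdio.h>

/* One 4 MB superpage covers the same span as 1024 pages of 4 KB,
 * so it needs one TLB entry instead of 1024. */
int main(void) {
    long super_kb = 4 * 1024;  /* 4 MB, in KB */
    long base_kb  = 4;         /* 4 KB base page */
    printf("4 KB pages replaced by one 4 MB superpage: %ld\n",
           super_kb / base_kb);
    return 0;
}
```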

  42. AMD Opteron Memory Hierarchy

  43. (figure: AMD Opteron memory hierarchy)

  44. (figure: AMD Opteron memory hierarchy, continued)

  45. Conclusion • The memory wall inspires optimizations, since so much performance is lost there • Reducing hit time: small and simple caches, way prediction, trace caches • Increasing cache bandwidth: pipelined caches, multibanked caches, nonblocking caches • Reducing miss penalty: critical word first, merging write buffers • Reducing miss rate: compiler optimizations • Reducing miss penalty or miss rate via parallelism: hardware prefetching, compiler prefetching • "Auto-tuner" search may replace static compilation to explore the optimization space? • DRAM: continuing bandwidth innovations: fast page mode, synchronous DRAM, double data rate

  46. Conclusion • A VM monitor presents a SW interface to guest software, isolates the state of guests, and protects itself from guest software (including the guest OS) • Virtual machine revival: • Overcomes security flaws of large OSes • Manages software, manages hardware • Processor performance no longer the highest priority • Virtualization poses challenges for the processor, virtual memory, and I/O • Paravirtualization copes with those difficulties • Xen as an example VMM using paravirtualization • 2005 performance on non-I/O-bound, I/O-intensive apps: 80% of native Linux without a driver VM, 34% with a driver VM
