Memory Architecture


Presentation Transcript


  1. Memory Architecture Jeffrey Ellak CS 147

  2. Topics • What is memory hierarchy? • What are the different types of memory? • What is in charge of accessing memory?

  3. What is memory hierarchy? • Memory hierarchy is the hierarchical arrangement of storage in current computer architectures. • It is designed to take advantage of memory locality in computer programs. • Each level of the hierarchy has higher bandwidth, smaller size, and lower latency than the levels below it. • For example, the L1 cache is closest to the CPU, so it can be accessed quickly, is more expensive per byte, and holds less data. A mass storage device like a hard drive is the slowest to access, the least expensive per byte, and holds large amounts of data.

  4. Processor Registers • CPU registers offer the fastest possible memory access (usually only one CPU cycle) and together hold only hundreds of bytes. • One property of computer programs is locality of reference: the same values are often accessed repeatedly, so keeping these frequently used values in the registers greatly improves execution performance.
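
To make the locality point concrete, here is a minimal C sketch (illustrative, not from the slides): the loop counter and the running sum are reused on every iteration, so an optimizing compiler will normally keep them in registers, while the array elements are read once each, in order.

    #include <stdio.h>

    /* Sum an array. `i` and `sum` are touched on every iteration
     * (temporal locality), so the compiler keeps them in registers;
     * the elements of `a` are read once each, in order (spatial
     * locality), which also suits the cache. */
    static long sum_array(const int *a, int n)
    {
        long sum = 0;                 /* kept in a register */
        for (int i = 0; i < n; i++)   /* kept in a register */
            sum += a[i];
        return sum;
    }

    int main(void)
    {
        int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        printf("%ld\n", sum_array(data, 8));   /* prints 36 */
        return 0;
    }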

  5. Cache • When the processor needs to read or write a location in main memory, it first checks whether that memory location is in the cache. • This is accomplished by comparing the address of the memory location to all tags in the cache that might contain that address. • If the processor finds that the memory location is in the cache, we say that a cache hit has occurred; otherwise it is a cache miss. The proportion of accesses that result in cache hits is known as the hit rate. • The LRU (least recently used) replacement policy is popular because it tends to achieve a high hit rate.
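
The tag comparison described above can be sketched in C for the simplest organization, a direct-mapped cache; the geometry (256 lines of 64-byte blocks) and the names are illustrative assumptions, and LRU replacement is not shown because a direct-mapped cache has only one possible line per address.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical direct-mapped cache: 256 lines of 64-byte blocks.
     * An address splits into block offset, line index, and tag bits. */
    #define BLOCK_BITS 6     /* 64-byte blocks */
    #define INDEX_BITS 8     /* 256 lines      */
    #define NUM_LINES  (1u << INDEX_BITS)

    struct cache_line {
        bool     valid;
        uint32_t tag;
        /* data bytes omitted in this sketch */
    };

    static struct cache_line cache[NUM_LINES];

    /* Returns true on a cache hit, false on a miss (and fills the line). */
    static bool cache_access(uint32_t addr)
    {
        uint32_t index = (addr >> BLOCK_BITS) & (NUM_LINES - 1);
        uint32_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);

        if (cache[index].valid && cache[index].tag == tag)
            return true;               /* cache hit */

        cache[index].valid = true;     /* cache miss: fetch the block */
        cache[index].tag   = tag;      /* and remember its tag        */
        return false;
    }

    int main(void)
    {
        printf("first access:  %s\n", cache_access(0x1234) ? "hit" : "miss");  /* miss */
        printf("second access: %s\n", cache_access(0x1234) ? "hit" : "miss");  /* hit  */
        return 0;
    }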

  6. Physical Memory (DRAM/SRAM) • The advantage of DRAM (dynamic RAM) over SRAM (static RAM) is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high density. • The information eventually fades unless the capacitor charge is refreshed periodically. • DRAM loses its data when the power supply is removed. • For economic reasons, the main memories found in personal computers, workstations, etc. normally consist of DRAM. Other parts of the computer, such as cache memory and data buffers in hard disks, normally use SRAM.

  7. Disk Storage • An HDD (hard disk drive) is a non-volatile (does not require a constant charge to maintain data) storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces. • A typical desktop HDD today holds hundreds of GB (gigabytes). • Current spindle speeds range from 5,400 RPM (revolutions per minute) to 10,000 RPM. • Media transfer rates are typically above 1 Gbit/s (gigabits per second). • Disk storage usually sits at the bottom of the memory hierarchy and is accessed via virtual memory.
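
Those spindle speeds translate directly into average rotational latency (the time for half a revolution). A small worked example in C, using the figures from the slide:

    #include <stdio.h>

    /* Average rotational latency = half a revolution:
     * (60 s / RPM) / 2, reported here in milliseconds. */
    static double avg_rotational_latency_ms(double rpm)
    {
        return (60.0 / rpm) / 2.0 * 1000.0;
    }

    int main(void)
    {
        printf("5,400 RPM:  %.2f ms\n", avg_rotational_latency_ms(5400.0));   /* ~5.56 ms */
        printf("10,000 RPM: %.2f ms\n", avg_rotational_latency_ms(10000.0));  /* 3.00 ms  */
        return 0;
    }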

  8. Funny picture to make sure everyone is still paying attention

  9. Virtual Memory • "Virtual memory" is not just "using disk space to extend physical memory size". • Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory, while in fact it may be physically fragmented. • Systems that use this technique make programming of large applications easier and use real physical memory more efficiently than those without virtual memory. • All modern general-purpose computer operating systems use virtual memory techniques for ordinary applications, such as word processors, spreadsheets, multimedia players, accounting software, etc.

  10. Memory Management Unit (MMU) • A memory management unit (MMU) is a computer hardware component responsible for handling accesses to memory requested by the CPU. • Its functions include translation of virtual addresses to physical addresses (virtual memory management), memory protection, cache control, and bus arbitration. • An MMU also reduces the problem of fragmented memory: after blocks of memory have been allocated and freed, the remaining free memory can become discontiguous, and the MMU uses virtual memory to present it to programs as contiguous blocks.
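
The translation step can be sketched with a toy single-level page table in C; the 4 KiB page size is a common choice, and the frame numbers are invented, chosen only to show that pages which are contiguous in the virtual address space need not be contiguous in physical memory (which also illustrates the previous slide's point).

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12                  /* 4 KiB pages */
    #define PAGE_SIZE (1u << PAGE_BITS)

    /* Toy page table: virtual page number -> physical frame number.
     * The frame numbers are arbitrary, to show that adjacent virtual
     * pages can be scattered across physical memory. */
    static const uint32_t page_table[4] = { 7, 2, 9, 4 };

    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_BITS;        /* virtual page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within page  */
        return (page_table[vpn] << PAGE_BITS) | offset;
    }

    int main(void)
    {
        /* Two adjacent virtual pages land in frames 7 and 2. */
        printf("0x%05x -> 0x%05x\n", 0x00abc, translate(0x00abc));
        printf("0x%05x -> 0x%05x\n", 0x01abc, translate(0x01abc));
        return 0;
    }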

  11. Memory management through programming • Modern programming languages mainly assume two levels of memory, main memory and disk storage, though in certain languages such as assembly and C, registers can be accessed directly. Taking optimal advantage of the memory hierarchy requires the cooperation of programmers, hardware, and compilers (as well as underlying support from the operating system). • Programmers are responsible for moving data between disk and memory through file I/O. • Hardware is responsible for moving data between memory and caches. • Optimizing compilers are responsible for generating code that, when executed, will cause the hardware to use caches and registers efficiently.
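
As a small illustration of that cooperation (the names and array size below are arbitrary), the two C functions compute the same sum, but the row-major loop matches the order in which C lays out the array in memory, so it walks through memory sequentially and uses each cache line fully, while the column-major loop strides a whole row between accesses and misses the cache far more often.

    #include <stdio.h>

    #define N 1024
    static double a[N][N];

    /* Cache-friendly: C stores each row contiguously, so the inner
     * loop reads consecutive memory locations (spatial locality). */
    static double sum_row_major(void)
    {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        return sum;
    }

    /* Cache-unfriendly: the inner loop jumps N * sizeof(double) bytes
     * between accesses, so most accesses touch a new cache line. */
    static double sum_col_major(void)
    {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        return sum;
    }

    int main(void)
    {
        /* Both return 0.0 here (the static array is zero-initialized);
         * the difference is in how they move through memory. */
        printf("%f %f\n", sum_row_major(), sum_col_major());
        return 0;
    }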

  12. That’s it!
