
Presentation Transcript


  1. Advanced Computer Architecture, UNIT 2: CACHE MEMORY, Lecture 7. By Rohit Khokher, Department of Computer Science, Sharda University, Greater Noida, India. C SINGH, JUNE 7-8, 2010. IWW 2010, ISTANBUL, TURKEY.

  2. OUTLINE OF MY TALK
  • What is Memory Organisation?
  • Primary and Secondary Memory
  • Virtual Memory

  3. Different Terminology
  • Memory Hierarchy
  • Main Memory
  • Auxiliary Memory
  • Associative Memory
  • Cache Memory
  • Virtual Memory
  • Memory Management Hardware

  4. Memory Hierarchy
  The goal of the memory hierarchy is to obtain the highest possible access speed while minimizing the total cost of the memory system.
  [Diagram: the hierarchy runs Register → Cache → Main Memory → Magnetic Disk → Magnetic Tape; auxiliary memory (magnetic disks and tapes) connects to main memory and the CPU through an I/O processor.]

  5. Memory
  Main memory consists of a number of storage locations, each of which is identified by a unique address. The ability of the CPU to identify each location is known as its addressability. Each location stores a word, i.e. the number of bits that can be processed by the CPU in a single operation. Word length is typically 16, 24, 32 or as many as 64 bits. A large word length improves system performance, though it may be less efficient on occasions when the full word length is not used.
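  As a concrete illustration of word length and addressability, here is a minimal C sketch (not from the slides; it assumes a machine where a long is one word) that prints the word length and shows that consecutive storage locations have consecutive, uniquely identifiable addresses.

  #include <stdio.h>

  /* Illustrative only: treat a long as one machine word and show how
   * each storage location (array element) has its own unique address. */
  int main(void) {
      long words[4] = {10, 20, 30, 40};

      printf("Word length on this machine: %zu bits\n", sizeof(long) * 8);

      for (int i = 0; i < 4; i++) {
          /* Addresses increase by sizeof(long) from one word to the next. */
          printf("words[%d] = %ld at address %p\n",
                 i, words[i], (void *)&words[i]);
      }
      return 0;
  }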

  6. Types of main memory
  There are two types of main memory: Random Access Memory (RAM) and Read Only Memory (ROM).
  Random Access Memory (RAM)
  • Holds its data as long as the computer is switched on
  • All data in RAM is lost when the computer is switched off
  • Described as being volatile
  • It is direct access, as it can be both written to and read from in any order
  Its purpose is to temporarily hold programs and data for processing. In modern computers it also holds the operating system.

  7. Types of RAM
  1. Dynamic Random Access Memory (DRAM)
  • Contents are constantly refreshed, about 1000 times per second
  • Access time 60–70 nanoseconds
  • Note: a nanosecond is one billionth of a second!
  2. Synchronous Dynamic Random Access Memory (SDRAM)
  • Quicker than DRAM
  • Access time less than 60 nanoseconds
  3. Direct Rambus Dynamic Random Access Memory (DRDRAM)
  • New type of RAM architecture
  • Access time up to 20 times faster than DRAM
  • More expensive

  8. Types of RAM
  4. Static Random Access Memory (SRAM)
  • Doesn't need refreshing
  • Retains its contents as long as power is applied to the chip
  • Access time around 10 nanoseconds
  • Used for cache memory
  • Also used for date and time settings, as it can be powered by a small battery
  5. Cache memory
  • Small amount of memory, typically 256 or 512 kilobytes
  • Temporary store for often-used instructions
  • Level 1 cache is built into the CPU (internal)
  • Level 2 cache may be on the chip or nearby (external)
  • Faster for the CPU to access than main memory

  9. The operation of cache memory (see the code sketch below)
  [Diagram: CPU connected to Cache Memory (SRAM) and Main Memory (DRAM) by bus connections.]
  1. Cache fetches data from addresses next to the current address in main memory
  2. CPU checks to see whether the next instruction it requires is in cache
  3. If it is, the instruction is fetched from the cache – a very fast operation
  4. If not, the CPU has to fetch the next instruction from main memory – a much slower process
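  The steps above can be made concrete with a small C sketch. It is illustrative only (the slides do not give an implementation): it models a hypothetical tiny direct-mapped cache in which a read first checks the cache line for the requested address (step 2), returns immediately on a hit (step 3), and otherwise fills the whole line from main memory (steps 1 and 4).

  #include <stdio.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define NUM_LINES  8     /* hypothetical, tiny direct-mapped cache */
  #define LINE_WORDS 4     /* words per cache line */

  typedef struct {
      bool     valid;
      uint32_t tag;
      uint32_t data[LINE_WORDS];
  } CacheLine;

  static CacheLine cache[NUM_LINES];
  static uint32_t  main_memory[1024];   /* stands in for slow DRAM */

  /* Read one word: check the cache first (fast), fall back to main
   * memory on a miss and fill the whole line (slow). */
  uint32_t read_word(uint32_t addr) {
      uint32_t block  = addr / LINE_WORDS;   /* which memory block */
      uint32_t offset = addr % LINE_WORDS;   /* word within the block */
      uint32_t index  = block % NUM_LINES;   /* which cache line */
      uint32_t tag    = block / NUM_LINES;   /* identifies the block */

      CacheLine *line = &cache[index];
      if (line->valid && line->tag == tag) {
          printf("addr %u: cache hit\n", (unsigned)addr);       /* step 3 */
      } else {
          printf("addr %u: cache miss, fetching line\n", (unsigned)addr);
          for (uint32_t i = 0; i < LINE_WORDS; i++)   /* step 1: fetch nearby words too */
              line->data[i] = main_memory[block * LINE_WORDS + i];
          line->valid = true;
          line->tag   = tag;
      }
      return line->data[offset];
  }

  int main(void) {
      for (uint32_t i = 0; i < 1024; i++) main_memory[i] = i * i;

      read_word(5);    /* miss: the line holding words 4..7 is loaded */
      read_word(6);    /* hit: same line, already in the cache */
      read_word(5 + NUM_LINES * LINE_WORDS);  /* maps to the same line, evicts it */
      return 0;
  }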

  10. Memory/storage hierarchies
  [Diagram: performance versus capacity trade-off.]
  • Balancing performance with cost
  • Small memories are fast but expensive
  • Large memories are slow but cheap
  • Exploit locality to get the best of both worlds (see the sketch below)
  • Locality = re-use/nearness of accesses
  • Allows most accesses to use the small, fast memory
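  To make locality concrete, the following C sketch (illustrative, not from the slides) sums the same matrix twice: row-major order touches consecutive addresses and so re-uses each fetched cache line (good spatial locality), while column-major order jumps across rows and typically runs noticeably slower on real hardware.

  #include <stdio.h>

  #define N 1024

  static double matrix[N][N];

  /* Row-major traversal: consecutive accesses are adjacent in memory,
   * so most of them are served by the small, fast cache. */
  double sum_row_major(void) {
      double sum = 0.0;
      for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++)
              sum += matrix[i][j];
      return sum;
  }

  /* Column-major traversal: consecutive accesses are N doubles apart,
   * so each one tends to land in a different cache line. */
  double sum_col_major(void) {
      double sum = 0.0;
      for (int j = 0; j < N; j++)
          for (int i = 0; i < N; i++)
              sum += matrix[i][j];
      return sum;
  }

  int main(void) {
      for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++)
              matrix[i][j] = 1.0;

      printf("row-major sum: %f\n", sum_row_major());
      printf("col-major sum: %f\n", sum_col_major());
      return 0;
  }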

  11. An Example Memory Hierarchy
  Smaller, faster, and costlier (per byte) storage devices sit at the top; larger, slower, and cheaper (per byte) storage devices sit at the bottom. Each level holds data retrieved from the level below it (see the sketch after this list).
  • L0: CPU registers – hold words retrieved from the L1 cache
  • L1: on-chip L1 cache (SRAM) – holds cache lines retrieved from the L2 cache
  • L2: off-chip L2 cache (SRAM) – holds cache lines retrieved from main memory
  • L3: main memory (DRAM) – holds disk blocks retrieved from local disks
  • L4: local secondary storage (local disks) – holds files retrieved from disks on remote network servers
  • L5: remote secondary storage (tapes, distributed file systems, Web servers)
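  As a schematic of how each level serves misses from the level above, the C sketch below (purely illustrative; the level names simply follow the list above, and level_holds is a stand-in, not a real lookup) walks a request down the hierarchy until some level holds the block, then copies it into every faster level on the way back up.

  #include <stdio.h>
  #include <stdbool.h>

  /* Levels follow the list above; this is a schematic, not a simulator. */
  typedef enum { REGISTERS, L1_CACHE, L2_CACHE, MAIN_MEMORY,
                 LOCAL_DISK, REMOTE_STORAGE, NUM_LEVELS } Level;

  static const char *level_name[NUM_LEVELS] = {
      "CPU registers", "on-chip L1 cache (SRAM)", "off-chip L2 cache (SRAM)",
      "main memory (DRAM)", "local disk", "remote storage"
  };

  /* Stand-in for "does this level currently hold the block?"
   * Here we simply pretend only the slowest level always has it. */
  static bool level_holds(Level level, int block) {
      (void)block;
      return level == REMOTE_STORAGE;
  }

  /* Walk down the hierarchy until a level holds the block, then
   * copy it into every faster level on the way back up. */
  void fetch_block(int block) {
      Level level = REGISTERS;
      while (!level_holds(level, block)) {
          printf("miss at %s, trying next level\n", level_name[level]);
          level++;
      }
      printf("found block %d at %s\n", block, level_name[level]);
      for (Level up = level; up > REGISTERS; up--)
          printf("  copying block %d into %s\n", block, level_name[up - 1]);
  }

  int main(void) {
      fetch_block(42);
      return 0;
  }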
