
Computer Organization and Architecture

Presentation Transcript


  1. Computer Organization and Architecture Lecture 1: Introduction

  2. Architecture & Organization (Stalling) • Architecture is those attributes visible to the programmer • Instruction set, number of bits used for data representation, I/O mechanisms, addressing techniques. • e.g. Is there a multiply instruction? • Organization is how features are implemented • Control signals, interfaces, memory technology. • e.g. Is there a hardware multiply unit or is it done by repeated addition?

  3. Computer Architecture is … the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation. Amdahl, Blaauw, and Brooks, 1964

  4. Computer Architecture (figure: computer architecture, i.e. instruction set design, organization, and hardware, shaped by applications, programming languages, operating systems, technology, parallelism, measurement & evaluation, and history; the interface design is the instruction set architecture) • Computer Architecture is the design of computers, including their instruction sets, hardware components, and system organization [Patterson]. • Thus two essential parts of computer architecture: • Instruction-set Architecture (ISA) • Hardware-system Architecture (HSA)

  5. Instruction-set Architecture (ISA) • The instruction set architecture of a computer includes anything a programmer would need to know to make the computer run correctly. This includes: • (a) The number and types of registers • (b) Instruction set (what operations can be performed?) • (c) Instruction format (how are they specified?) • (d) Addressing modes (how is data obtained? - direct vs. indirect) • (e) Exception handling (what happens when something goes wrong?) • Instruction-set architecture includes the specifications that determine how machine-language programs will interact with the computer. That is, in general, two computers with the same ISA will run the same programs. This is the notion of a computer-family architecture.
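To make points (c) and (d) concrete, here is a small sketch of a made-up 32-bit instruction format with a register operand and an immediate operand; the field widths and opcode value are hypothetical and do not correspond to any real ISA.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical instruction format:
   bits [31:26] opcode | [25:21] destination reg | [20:16] source reg | [15:0] immediate */
enum { OP_ADDI = 0x08 };   /* made-up opcode for "add immediate" */

static uint32_t encode(uint32_t op, uint32_t rd, uint32_t rs, uint32_t imm)
{
    return (op << 26) | ((rd & 0x1F) << 21) | ((rs & 0x1F) << 16) | (imm & 0xFFFF);
}

int main(void)
{
    /* "addi r3, r1, 100": destination r3, source r1, immediate 100 */
    uint32_t word = encode(OP_ADDI, 3, 1, 100);
    printf("encoded word: 0x%08X\n", word);               /* 0x20610064 */
    printf("opcode field: 0x%02X\n", (word >> 26) & 0x3F);
    return 0;
}
```

Two machines that agree on such a specification (registers, operations, formats, addressing modes, exceptions) can run the same machine-language programs, which is the family idea in the last bullet.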

  6. Hardware-system Architecture (HSA) • The hardware-system architecture deals with the computer's major hardware subsystems, including the central processing unit (CPU), its storage system, and its input-output system. • The computer hardware design determines the implementation of the various computer components. This includes: • (a) Capabilities and performance of the functional units (e.g., registers, ALUs, shifters) • (b) Methods for connecting the functional units (e.g., data bus) • (c) Control logic for the functional units • Typically, the computer hardware is designed based on the instruction set architecture.

  7. A successful ISA generally has many implementations (a computer family) that differ in their HSA. • Compatibility is the ability of different computers to run the same programs. • Upward compatibility allows high-performance members of a family to run the same programs as the low-performance members. • Downward compatibility is not always possible, since high-performance family members often have features not available on lower-performance members.

  8. Computer Family • A computer family is a set of implementations that share the same or a similar ISA (using a variety of technologies, memory sizes, and speeds). For example, the IBM System/360 (1960s), the PDP-8 family (1965), the PDP-11 family (1970), and the IBM System/370 (1970s). • All members of the Intel x86 family share the same basic architecture • The IBM System/370 family shares the same basic architecture • This gives code compatibility • At least backwards • Organization differs between different versions

  9. Computer Evolution

  10. Historical Perspective

  11. Early Computing • 1946: ENIAC, U.S. Army, 18,000 vacuum tubes • 1949: UNIVAC I, $250K, 48 systems sold • 1954: IBM 701, core memory • 1957: moving-head disk • 1958: transistor; FORTRAN, ALGOL; CDC & DEC founded • 1964: IBM 360, CDC 6600, DEC PDP-8 • 1969: UNIX • 1970: floppy disk • 1981: IBM PC; first successful portable (Osborne 1) • 1986: Connection Machine; Max Headroom debut

  12. Underlying Technologies (by year: logic, storage, programming languages, operating systems) • 1954: logic: tubes; storage: core (8 ms) • 1958: logic: transistor (10 µs); languages: FORTRAN • 1960: languages: ALGOL, COBOL; O/S: batch • 1964: logic: hybrid (1 µs); storage: thin film (200 ns); languages: Lisp, APL, Basic • 1966: logic: IC (100 ns); languages: PL/1, Simula, C • 1967: O/S: multiprogramming • 1971: logic: LSI (10 ns); storage: 1K DRAM; languages: O.O. (object-oriented); O/S: V.M. (virtual memory) • 1973: 8-bit µP • 1975: 16-bit µP; 4K DRAM • 1978: logic: VLSI (10 ns); storage: 16K DRAM; O/S: networks • 1980: 64K DRAM • 1984: 32-bit µP; 256K DRAM; Ada • 1987: ULSI; 1M DRAM • 1989: GaAs; 4M DRAM; C++ • 1992: 64-bit µP; 16M DRAM; Fortran 90 • (The slide also labels generations, evolutionary development, and parallelism along this timeline.)

  13. What has happened in the 1990s • “Network-Integrated Computing” • Wide-area AND local-area integration of cluster-based computing, and high performance networks • Scalable Technologies for Computing, Networking, and Information Systems • Systems that scale DOWN as well as UP • High performance workstations • Clusters and distributed systems • Massively parallel I/O and computer servers • National Information Infrastructure

  14. What has been predicted for the Late 1990s and Early 2000s • Technology • Very large dynamic RAM: 64 Mbits and beyond • Large fast Static RAM: 1 MB, 10ns • Complete systems on a chip • 10+ Million Transistors • Parallelism • Superscalar, Superpipeline, Vector, Multiprocessors • Processor Arrays

  15. What has been predicted for the Late 1990s and Early 2000s • Low Power • 50% of PCs portable by 1995 • Performance per watt • Parallel I/O • Many applications are I/O-limited, not computation-limited • Computation is scaling, but memory and I/O bandwidth are not keeping pace • Multimedia • New interface technologies • Video, speech, handwriting, virtual reality, …

  16. Review of Technology Trends and Cost /Performance

  17. Original Food Chain Picture (big fish eating little fish)

  18. 1988 Computer Food Chain: mainframe, workstation, PC, minicomputer, minisupercomputer, supercomputer, massively parallel processors

  19. 1998 Computer Food Chain: minisupercomputer, minicomputer, massively parallel processors, mainframe, workstation, PC, server, supercomputer. Now who is eating whom?

  20. Why Such Change in 10 Years? • Performance • Technology advances • CMOS VLSI dominates older technologies (TTL, ECL) in cost AND performance • Computer architecture advances improve the low end • RISC, superscalar, RAID, … • Price: lower costs due to … • Simpler development • CMOS VLSI: smaller systems, fewer components • Higher volumes • CMOS VLSI: the same development cost spread over 10,000,000 units instead of 10,000 • Function • Rise of networking / local interconnection technology

  21. Moore’s Law • Gordon Moore, cofounder of Intel • Increased density of components on a chip • Number of transistors on a chip will double every year • Since the 1970s development has slowed a little • Number of transistors doubles every 18 months • Cost of a chip has remained almost unchanged • Higher packing density means shorter electrical paths, giving higher performance • Smaller size gives increased flexibility • Reduced power and cooling requirements • Fewer interconnections increase reliability
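A back-of-the-envelope sketch of the "doubles every 18 months" rule quoted above (the starting count of 1 million transistors is an assumption for illustration, not a figure from the slide):

```c
#include <math.h>
#include <stdio.h>

/* Project transistor counts assuming a doubling every 18 months (1.5 years). */
int main(void)
{
    double start = 1e6;                          /* assumed starting point */
    for (int year = 0; year <= 12; year += 3)
        printf("year %2d: %6.0f million transistors\n",
               year, start * pow(2.0, year / 1.5) / 1e6);
    return 0;
}
```

With that rule, the count grows by 4x every three years: 1, 4, 16, 64, 256 million over twelve years.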

  22. Performance Mismatch • Processor speed increased • Memory capacity increased • Memory speed lags behind processor speed

  23. DRAM and Processor Characteristics

  24. Trends in DRAM use

  25. Memory Capacity (Single-Chip DRAM) • 1980: 0.0625 Mb, 250 ns cycle time • 1983: 0.25 Mb, 220 ns • 1986: 1 Mb, 190 ns • 1989: 4 Mb, 165 ns • 1992: 16 Mb, 145 ns • 1996: 64 Mb, 120 ns • 2000: 256 Mb, 100 ns
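A quick check of what this table implies (a sketch using only the table's own endpoints): 0.0625 Mb in 1980 to 256 Mb in 2000 is a factor of 4096 over 20 years, i.e. roughly 1.5x per year, close to the "4x in 3 years" DRAM capacity trend summarized a few slides later.

```c
#include <math.h>
#include <stdio.h>

/* Annualized DRAM capacity growth computed from the table's endpoints. */
int main(void)
{
    double mb_1980 = 0.0625, mb_2000 = 256.0;
    double years   = 2000 - 1980;
    double per_year = pow(mb_2000 / mb_1980, 1.0 / years);
    printf("growth per year:    %.2fx\n", per_year);            /* ~1.52x */
    printf("growth per 3 years: %.1fx\n", pow(per_year, 3.0));  /* ~3.5x */
    return 0;
}
```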

  26. Throughput: the total amount of work done in a given time. Execution (response) time: the time between the start and the completion of an event.

  27. Performance milestone

  28. Technology Trends (Summary) • Logic: capacity 2x in 3 years; speed (latency) 2x in 3 years • DRAM: capacity 4x in 3 years; speed 2x in 10 years • Disk: capacity 4x in 3 years; speed 2x in 10 years

  29. Growth in CPU Transistor Count

  30. Technology Trends: Microprocessor Capacity (“Graduation Window”) • Transistor counts: Alpha 21264: 15 million; Alpha 21164: 9.3 million; PowerPC 620: 6.9 million; Pentium Pro: 5.5 million; Sparc Ultra: 5.2 million • Moore’s Law • CMOS improvements: • Die size: 2X every 3 yrs • Line width: halves every 7 yrs

  31. Growth in Processor Performance

  32. Performance Trends (Summary) • Workstation performance (measured in SPECmarks) improves roughly 50% per year (2X every 18 months) • Improvement in cost/performance estimated at 70% per year
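As a quick sanity check on how the two figures above relate (standard compound-growth arithmetic, not data from the slide), an annual improvement rate r gives a doubling time T with

\[
(1+r)^{T} = 2 \;\Rightarrow\; T = \frac{\ln 2}{\ln(1+r)}, \qquad r = 0.5 \;\Rightarrow\; T = \frac{\ln 2}{\ln 1.5} \approx 1.7\ \text{years},
\]

so "50% per year" and "2X every 18 months" describe the same trend with different rounding.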

  33. Measurement and Evaluation • Architecture is an iterative process: • Searching the space of possible designs • At all levels of computer systems • (figure: creativity feeds design ideas; cost/performance analysis sorts them into good, mediocre, and bad ideas)

  34. Computer Engineering Methodology • Technology trends

  35. Computer Engineering Methodology • Evaluate existing systems for bottlenecks (inputs: benchmarks, technology trends)

  36. Computer Engineering Methodology • Evaluate existing systems for bottlenecks (benchmarks, technology trends) • Simulate new designs and organizations (workloads)

  37. Computer Engineering Methodology (full cycle) • Evaluate existing systems for bottlenecks (benchmarks, technology trends) • Simulate new designs and organizations (workloads) • Implement next-generation system (implementation complexity)

  38. Summary: Price vs. Cost

  39. System Performance

  40. Measuring and Reporting Performance • Designing high-performance computers is one of the major goals of any computer architect. • As a result, assessing the performance of computer hardware is at the heart of computer design and greatly affects the demand and market value of the computer. • However, measuring the performance of a computer system is not a straightforward task: • Metrics – how do we describe the performance of a computer numerically? • What tools do we use to find those metrics? • How do we summarize the performance?

  41. Measuring and Reporting Performance • What do we mean by one computer being faster than another? • The program runs in less time • Response time or execution time • The time until users see the output • Throughput • The total amount of work done in a given time

  42. Performance • “Increasing” and “decreasing” • We use the terms “improve performance” and “improve execution time” when we mean increase performance and decrease execution time. • Improve performance = increase performance • Improve execution time = decrease execution time
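The standard textbook relation behind this terminology (the usual definition, not spelled out on the slide itself):

\[
\text{Performance}_X = \frac{1}{\text{Execution time}_X}, \qquad
\frac{\text{Performance}_X}{\text{Performance}_Y} = \frac{\text{Execution time}_Y}{\text{Execution time}_X} = n,
\]

which is read as "X is n times faster than Y"; this is why improving performance and improving (reducing) execution time are the same statement.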
