Review Quiz



  1. Review Quiz csci4203/ece4363

  2. 1) Why can a multicycle implementation be better than a single-cycle implementation? • It is less expensive • It is usually faster • Its average CPI is smaller • It allows a faster clock rate • It has a simpler design Answer: 1, 2, 4

  3. 2) A pipelined implementation can • Increase throughput • Decrease the latency of each operation • Decrease cache misses • Increase TLB misses • Decrease clock cycle time Answer: 1,5

  4. 3) Which of the following techniques can reduce the penalty of data dependencies? • Data forwarding • Instruction scheduling • Out-of-order superscalar implementation • Deeper pipelined implementation Answer: 1, 2, 3

  5. 4) What are the advantages of loop unrolling? • It increases ILP for more effective scheduling • It reduces branch instructions • It decreases instruction cache misses • It reduces memory operations • It reduces compile time Answer: 1, 2

  6. 5) What are the advantages of a split cache? • It decreases the cache miss rate • It doubles cache access bandwidth • It is less expensive • It is easier to design Answer: 2

  7. 6) Which of the following micro-architectural features are likely to boost performance of the following loop? while (p != NULL) { m = p->data; p = p->next; } • Deep pipelining • Accurate branch prediction • An efficient cache hierarchy • Speculative execution support Answer: 2, 3, 4

  8. 7) Using the 2-bit branch prediction scheme, what will the branch misprediction rate be for the following loop? for (i = 1; i < 10000; i++) { for (j = 1; j < 3; j++) statements; } • About 10% • About 50% • About 66% • About 33% • About 1% Answer: 4
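The steady-state rate can be checked with a small simulation. This is a sketch, assuming the inner loop compiles to a loop-closing branch that is taken twice and falls through once per pass (a typical but assumed code shape):

```python
def simulate_2bit(outcomes):
    """Simulate a 2-bit saturating-counter branch predictor.
    Counter values 0-1 predict not-taken; 2-3 predict taken."""
    counter = 0
    mispredictions = 0
    for taken in outcomes:
        predicted_taken = counter >= 2
        if predicted_taken != taken:
            mispredictions += 1
        # Saturating update toward the actual outcome
        counter = min(3, counter + 1) if taken else max(0, counter - 1)
    return mispredictions / len(outcomes)

# Inner-loop branch: taken, taken, not-taken for each of the 10000 outer iterations
outcomes = [True, True, False] * 10000
rate = simulate_2bit(outcomes)
print(f"misprediction rate: {rate:.1%}")  # about 33%
```

In steady state the counter sits at 2 or 3, so only the final not-taken branch of each inner loop is mispredicted: one miss in three.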

  9. 8) You have decided to design a cache hierarchy with two levels of caching – L1 and L2. Which of the following configurations are likely to be used? • L1 write-through and L2 write-back • L1 is unified and L2 is split • L1 has a line size larger than L2's line size • L1 is private and L2 is shared in a CMP Answer: 1, 4

  10. 9) It takes a long time (millions of cycles) to get the first byte from a disk. What should be done to reduce this cost? • Use larger pages • Increase the size of the TLB • Use two levels of TLB • Disk caching – keep frequently used files in memory Answer: 1, 4

  11. 10) The page table is large and often space-inefficient. What techniques can be used to deal with it? • Two-level page tables • Hashed page tables • Linked lists • Sequential search tables Answer: 1, 2
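The two-level scheme can be illustrated by how it carves up a virtual address. The 10/10/12 field widths below are an assumed, classic split for a 32-bit address with 4 KB pages; real systems vary:

```python
# Illustrative 10/10/12 split of a 32-bit virtual address,
# as used by a classic two-level page table with 4 KB pages.
def split_vaddr(vaddr):
    top    = (vaddr >> 22) & 0x3FF   # index into the page directory (1024 entries)
    middle = (vaddr >> 12) & 0x3FF   # index into a second-level page table
    offset = vaddr & 0xFFF           # byte offset within the 4 KB page
    return top, middle, offset

top, mid, off = split_vaddr(0xDEADBEEF)
# The three fields reassemble to the original address:
assert (top << 22) | (mid << 12) | off == 0xDEADBEEF
```

The space saving comes from allocating second-level tables only for the regions of the address space actually in use, instead of one flat table spanning the whole space.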

  12. 11) Which of the following techniques may reduce the cache miss penalty? • Requested word first (or critical word first) • Multi-level caches • Increase the size of main memory without interleaving • Use a faster memory bus Answer: 1, 2, 4

  13. 12) Which of the following techniques can usually help reduce the total penalty of capacity misses? • Split the cache into two • Increase associativity • Increase line size • Cache prefetching Answer: 3, 4

  14. 13) Cache performance is more important under which of the following conditions? • When bus bandwidth is insufficient • When CPIperfect is low and the clock rate is high • When CPIperfect is high and the clock rate is low • When main memory is not large enough Answer: 1, 2

  15. 14) Which of the following statements are true of the TLB? • The TLB caches frequently used virtual-to-physical translations • The TLB is usually smaller than the caches • The TLB uses a write-through policy • TLB misses can be handled by either software or hardware Answer: 1, 2, 4

  16. 15) Which of the following designs will see the greatest impact when we move from the 32-bit MIPS architecture to 64-bit MIPS? • Virtual memory support • Data path design • Control path design • Floating-point functional unit • Cache design Answer: 1, 2, 5

  17. 16) Which of the following statements are true of microprogramming? • Microprogramming can be used to implement structured control design • Microprogramming simplifies control design and allows for a faster, more reliable design process • Microprogrammed control yields a faster processor • Microprogramming is used in recent Intel Pentium processors Answer: 1, 2, 4

  18. 17) In a pipelined implementation, which hazards may often occur? • Control hazards • Data hazards • Floating-point exceptions • Structural hazards Answer: 1, 2, 4

  19. 18) My program has a very high cache miss rate (> 50%). I traced it down to the following function. What type of cache miss is it? Assume a typical cache with a 32 B line size. functionA(float *a, float *b, int n) { for (i = 1; i < n; i++) *a++ = *a + *b++; } • Capacity miss • Compulsory miss • Conflict miss • Cold miss Answer: 3

  20. 19) What are the major motivations for virtual memory? • To support multiprogramming • To increase system throughput • To allow efficient and safe sharing • To remove the programming burden of a small physical memory Answer: 3, 4

  21. 20) Among page faults, TLB misses, branch mispredictions, and cache misses, which of the following statements are true? • A page fault is the most expensive • An L1 cache miss may cost less than a branch misprediction • TLB misses usually cost more than L1 misses • A TLB miss will always cause a corresponding L1 miss Answer: 1, 2, 3

  22. 21) Assume we have a four-line fully associative instruction cache. Which replacement algorithm works best for the following loop? 1) LRU 2) Random 3) MRU 4) NRU 5) FIFO for (i = 1; i < n; i++) { line1; line2; line3; line4; line5; } Answer: 3
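The intuition behind the answer can be checked with a quick simulation of the two extreme policies. A minimal sketch, looping over 5 cache lines in a 4-way fully associative cache as in the question:

```python
# Compare LRU and MRU replacement on a 4-line fully associative cache
# while cycling through 5 lines, as in the loop above.
def misses(policy, passes=1000, ways=4, nlines=5):
    cache = []                        # ordered least- to most-recently used
    miss_count = 0
    for _ in range(passes):
        for line in range(1, nlines + 1):
            if line in cache:
                cache.remove(line)    # hit: refresh recency below
            else:
                miss_count += 1
                if len(cache) == ways:
                    if policy == "LRU":
                        cache.pop(0)  # evict least recently used
                    else:             # MRU
                        cache.pop()   # evict most recently used
            cache.append(line)        # the accessed line is now most recent
    return miss_count

print(misses("LRU"), misses("MRU"))
```

Cycling through one more line than the cache holds is LRU's worst case: the line about to be reused is always the one just evicted, so every access misses. MRU sacrifices one recently used line and keeps the rest of the loop resident.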

  23. 22) Which of the following techniques can reduce control hazards in a pipelined processor? • Branch prediction • Loop unrolling • Procedure in-lining • Predicated execution Answer: 1,2,3,4

  24. 23) Which techniques can be used to reduce conflict misses? • Increase associativity • Add a victim cache • Use larger lines • Use shared caches Answer: 1, 2

  25. 24) Which of the following statements are true of RAID (Redundant Array of Inexpensive Disks)? • RAID0 has no redundancy • RAID1 is the most expensive • RAID5 has the best reliability • RAID4 is better than RAID3 because it supports efficient small reads and writes Answer: 1, 2, 4

  26. 25) Adding new features to a machine may require changes to the ISA. Which of the following features can be added without changing the ISA? • Predicated instructions • Software-controlled data speculation • Static branch prediction • Software-controlled cache prefetching Answer: 3, 4

  27. 26) A branch predictor is similar to a cache in many respects. Which of the following cache parameters can be avoided in a simple branch predictor? • Associativity • Line size • Replacement algorithms • Write policy • Tag size Answer: 1, 2, 3, 4, 5

  28. 27) Assume cache size = 32 KB and line size = 32 B. How many bits are used for the index and the tag in a 4-way set-associative cache? Assume 16 GB of physical memory. 1) Tag = 19, Index = 8 2) Tag = 20, Index = 10 3) Tag = 21, Index = 8 4) Tag = 17, Index = 10 Answer: 3
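The arithmetic behind the answer can be worked out directly from the given parameters:

```python
from math import log2

cache_size = 32 * 1024      # 32 KB
line_size  = 32             # 32 B lines
ways       = 4              # 4-way set associative
phys_mem   = 16 * 2**30     # 16 GB -> 34-bit physical address

offset_bits = int(log2(line_size))                  # 5 bits select a byte in the line
sets        = cache_size // (line_size * ways)      # 256 sets
index_bits  = int(log2(sets))                       # 8 bits select the set
addr_bits   = int(log2(phys_mem))                   # 34-bit physical address
tag_bits    = addr_bits - index_bits - offset_bits  # the rest is tag

print(f"Tag = {tag_bits}, Index = {index_bits}")  # Tag = 21, Index = 8
```

The tag shrinks or grows only with the physical address width and the index/offset split, which is why the 16 GB assumption matters: with 34 address bits, 34 − 8 − 5 = 21 tag bits, matching choice 3.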
