
Design, Implementation, and Evaluation of Differentiated Caching Services



  1. Design, Implementation, and Evaluation of Differentiated Caching Services Ying Lu, Tarek F. Abdelzaher, Avneesh Saxena IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 15, NO. 5, MAY 2004 Presented by 張肇烜

  2. Outline • Introduction • The Case for Differentiated Caching Services • A Differentiated Caching Services Architecture • Implementation of the Differentiation Heuristic in Squid • Evaluation • Conclusions

  3. Introduction • Customizable content delivery architectures with a capability for performance differentiation. • We design and implement a resource management architecture for Web proxy caches.

  4. Introduction (cont.) • We use a control-theoretical approach for resource allocation to achieve the desired performance differentiation.

  5. The Case for Differentiated Caching Services • Basic Synchronous Models: PRAM • A Parallel Random Access Machine (PRAM) is an abstract machine for designing algorithms for parallel computers. It abstracts away issues such as synchronization and communication.
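To make the PRAM abstraction concrete, here is a minimal sketch (not from the slides) of a PRAM-style tree reduction: each `while` iteration corresponds to one synchronous PRAM step in which many processors would act in parallel, and synchronization/communication costs are simply ignored.

```python
# Illustrative sketch of a PRAM-style parallel sum (tree reduction).
# In the PRAM model, all processors share one memory and run in lockstep,
# so each pass of the outer loop models one synchronous parallel step.

def pram_sum(values):
    """Sum n values in O(log n) synchronous PRAM steps (simulated serially)."""
    data = list(values)
    n = len(data)
    stride = 1
    while stride < n:
        # Conceptually, processors at i = 0, 2*stride, 4*stride, ... act at once.
        for i in range(0, n - stride, 2 * stride):
            data[i] += data[i + stride]
        stride *= 2
    return data[0]

print(pram_sum(range(8)))  # → 28
```

The inner `for` loop is what a real PRAM would perform in a single time step; the serial simulation only preserves the step structure, not the speedup.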

  6. A Differentiated Caching Services Architecture • A limitation of the models cited above is that they are accurate only when modeling tightly coupled multiprocessor systems. • One proposal that addresses this is the Asynchronous PRAM (APRAM) model, which is fully asynchronous.

  7. Heterogeneous LogGP • Reasons for selecting the LogGP model: • The architecture is very similar to a cluster. • It removes the synchronization points needed in other models. • The model allows overlapping computation and communication operations. • LogGP allows considering both short and long messages.

  8. Heterogeneous LogGP (cont.) • LogGP assumes finite network capacity. • This model encourages techniques that yield good results in practice.
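As a reference point before the heterogeneous extension, the standard (homogeneous) LogGP cost of delivering one k-byte message can be sketched as follows; this is the textbook LogGP formula, assumed rather than quoted from the slides, and the parameter values are hypothetical.

```python
# Sketch of the standard LogGP cost model (assumption: textbook formula,
# not taken verbatim from the slides). Time for one k-byte message:
#   sender overhead + per-byte gap for the remaining k-1 bytes
#   + network latency + receiver overhead.

def logp_message_time(k, L, o_s, o_r, G):
    """End-to-end time for a k-byte message under LogGP."""
    return o_s + (k - 1) * G + L + o_r

# Hypothetical parameter values, in microseconds:
t = logp_message_time(k=1024, L=5.0, o_s=1.0, o_r=1.0, G=0.01)
print(round(t, 2))  # → 17.23
```

The gap g (minimum spacing between consecutive short sends) does not appear here because it constrains message *rate* rather than the latency of a single message.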

  9. Heterogeneous LogGP (cont.) • HLogGP Definition: • Latency, L: Communication latency depends on both network technology and topology. • The latency matrix of a heterogeneous cluster of m nodes can be defined as a square matrix L = {l11, …, lmm}.

  10. Heterogeneous LogGP (cont.) • Overhead, o: the time needed by a processor to send or receive a message is referred to as overhead. • Sender overhead vector, Os = {os1, …, osm}. • Receiver overhead vector, Or = {or1, …, orm}. • Gap between messages, g: this parameter reflects each node’s proficiency at sending consecutive short messages. • A gap vector g = {g1, …, gm}.

  11. Heterogeneous LogGP (cont.) • Gap per byte, G: The Gap per byte depends on network technology. • In a heterogeneous network, a message can cross different switches with different bandwidths. • Gap matrix G = {G11, …, Gmm}.

  12. Heterogeneous LogGP (cont.) • Computational power, Pi: The number of nodes cannot be used in a heterogeneous model for measuring the system’s computational power. • A computational power vector P={P1,…,Pm}.
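The HLogGP parameters above can be collected into a small sketch. All numeric values here are hypothetical (the slides define only the parameter shapes), and the per-link send-time formula mirrors the homogeneous LogGP formula with the scalar parameters replaced by the per-node vectors and per-link matrices just defined.

```python
# Sketch: HLogGP parameters for a hypothetical 2-node heterogeneous cluster.
# Values are invented for illustration; only the shapes come from the model.

L_mat = [[0.0, 5.0],
         [5.0, 0.0]]       # latency matrix, L[i][j] (microseconds)
Os = [1.0, 2.0]            # sender overhead vector, per node
Or = [1.0, 2.5]            # receiver overhead vector, per node
G_mat = [[0.0, 0.01],
         [0.02, 0.0]]      # gap-per-byte matrix (link bandwidths differ)
P = [3.0e9, 1.5e9]         # computational power vector (e.g. ops/second)

def hloggp_send_time(i, j, k):
    """Time for node i to deliver a k-byte message to node j:
    the LogGP formula with per-node overheads and per-link latency/gap."""
    return Os[i] + (k - 1) * G_mat[i][j] + L_mat[i][j] + Or[j]

print(round(hloggp_send_time(0, 1, 1024), 2))
```

Note how the same message costs more in one direction than the other once overheads and link gaps differ per node, which is exactly what the heterogeneous matrices are meant to capture.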

  13. Comparative Study on the Algorithms (cont.) • Each status exchange interval is further divided into equal subintervals denoted as estimation intervals, Te. • The points of division are called estimation epochs.

  14. Comparative Study on the Algorithms (cont.) • Intervals of estimation and status exchange.
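The interval structure described above is simple enough to sketch directly: a status-exchange interval Ts is cut into equal estimation subintervals Te, and the cut points are the estimation epochs. The concrete values below are hypothetical.

```python
# Sketch: estimation epochs inside one status-exchange interval.
# Ts = status-exchange interval, Te = estimation subinterval (hypothetical values).

Ts, Te = 12.0, 3.0
epochs = [k * Te for k in range(int(Ts / Te) + 1)]
print(epochs)  # → [0.0, 3.0, 6.0, 9.0, 12.0]
```

At each interior epoch a node re-estimates queue lengths; only at the boundaries (multiples of Ts) does it exchange actual status with its neighbors.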

  15. Comparative Study on the Algorithms (cont.) • ELISA: • Each node computes the average load on itself and its neighboring nodes. • Nodes in the neighboring set whose estimated queue length is less than the estimated average queue length by more than a threshold θ form an active set.

  16. Comparative Study on the Algorithms (cont.) • ELISA: • The node under consideration transfers jobs to the nodes in the active set until its queue length exceeds the estimated average queue length by no more than θ.
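One ELISA step at a single node can be sketched from the two slides above. This is a simplified illustration under stated assumptions: jobs are unit-length, queue lengths are the estimated values, and the average is fixed for the epoch; it is not the authors' implementation.

```python
# Sketch of one ELISA estimation-epoch step at a single node (simplified).
# Assumptions: unit-length jobs, estimated queue lengths, fixed epoch average.

def elisa_step(my_q, neighbor_q, theta):
    """Apply ELISA's transfer rule; return (new my_q, new neighbor queues)."""
    q = dict(neighbor_q)
    avg = (my_q + sum(q.values())) / (1 + len(q))
    # Active set: neighbors whose estimated queue is below average by more than theta.
    active = [n for n, ql in q.items() if avg - ql > theta]
    # Transfer jobs (to the currently shortest active neighbor) until this
    # node's queue exceeds the estimated average by no more than theta.
    while active and my_q > avg + theta:
        target = min(active, key=lambda n: q[n])
        q[target] += 1
        my_q -= 1
    return my_q, q

new_q, neigh = elisa_step(my_q=20, neighbor_q={"A": 2, "B": 10}, theta=3)
print(new_q, neigh)  # → 13 {'A': 9, 'B': 10}
```

Here the average is (20 + 2 + 10) / 3 ≈ 10.67, so only node A (10.67 − 2 > 3) joins the active set, and seven jobs move until the local queue is within θ = 3 of the average.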

  17. Comparative Study on the Algorithms (cont.) • RLBVR:

  18. Comparative Study on the Algorithms (cont.) • QLBVR carries out coarse adjustment on job transferring and processing rates and fine adjustment on queue length. • Coarse adjustment (on transfer and processing rates). • Fine adjustment (on queue lengths).

  19. Comparative Study on the Algorithms (cont.) • QLBVR: • When the job incoming rates change slightly, coarse adjustment can work well. • When the system load is very high and job incoming rates change rapidly, fine adjustment can balance the queue lengths in a short time.

  20. Performance Evaluation and Discussions • Effect of system loading:

  21. Performance Evaluation and Discussions (cont.) • When the load of the system is light or moderate, RLBVR and QLBVR have a better performance than ELISA. • When the rate of jobs becomes high, ELISA and QLBVR have a much better performance than RLBVR.

  22. Performance Evaluation and Discussions (cont.) • Effect of Ts: System loading is light.

  23. Performance Evaluation and Discussions (cont.) • Effect of Ts: System loading is moderate.

  24. Performance Evaluation and Discussions (cont.) • Effect of Ts: System loading is moderate.

  25. Extension to Large Scale Cluster Systems • The mesh-connected cluster system.

  26. Extension to Large Scale Cluster Systems (cont.) • Mean response time of jobs for five different algorithms under different system utilization. • System utilization is light or moderate. • System utilization is high.

  27. Extension to Large Scale Cluster Systems (cont.) • System utilization is light or moderate.

  28. Extension to Large Scale Cluster Systems (cont.) • System utilization is high.

  29. Extension to Large Scale Cluster Systems (cont.) • Experiments when the arrival of loads is varying rapidly.

  30. Extension to Large Scale Cluster Systems (cont.)

  31. Conclusion • We proposed a relative differentiated caching services model that achieves differentiation of cache hit rates between different classes. • Evaluation suggests that the control-theoretical approach yields an effective controller design.
