
Presentation Transcript


  1. Scalable Resource Information Service for Computational Grids
  Nian-Feng Tzeng, Center for Advanced Computer Studies, University of Louisiana at Lafayette
  December 7, 2007

  2. Computational Grids (figure: overview of a computational grid)

  3. Grid Resource Information Service (figure: information service within a computational grid)

  4. Grid Resource Information Service (figure: core GIIS serving the core grid)

  5. Grid Resource Information Service Under the GRAM reporter of the Globus Toolkit (GT): – at least one system-wide GIIS – each resource provider (e.g., a cluster head node) runs a GRIS to provide queuing information and other data

  6. Typical MDS-4 deployment

  7. What is P2P? • A distributed system architecture: • No centralized control • Nodes are symmetric in function • Typically many nodes, but unreliable and heterogeneous (figure: client machines connected through the Internet)

  8. Example P2P problem: lookup • publish/lookup is at the heart of all P2P systems (figure: a publisher inserts Key=“title”, Value=file data…; a client issues Lookup(“title”) across nodes N1–N6 over the Internet)

  9. Another approach: distributed hash tables (DHTs) • Nodes are the hash buckets • Key identifies data uniquely • DHT balances keys and data across nodes • DHT replicates, caches, routes lookups, etc. (figure: distributed applications call Insert(key, data) and Lookup(key) on a distributed hash table spanning many nodes)
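As a rough sketch of the Insert/Lookup interface on this slide, here is a minimal single-process Python stand-in, not a real distributed implementation; the node IDs and the 128-slot ID space are made up for illustration:

```python
import hashlib

class ToyDHT:
    """Nodes act as hash buckets; each owns a slice of the key space."""

    def __init__(self, node_ids):
        self.nodes = {n: {} for n in sorted(node_ids)}   # node ID -> local store

    def _owner(self, key):
        # Hash the key into the ID space, then pick the responsible node:
        # the first node ID at or after the hashed key, wrapping around.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16) % 128
        return next((n for n in self.nodes if n >= h), next(iter(self.nodes)))

    def insert(self, key, data):
        self.nodes[self._owner(key)][key] = data

    def lookup(self, key):
        return self.nodes[self._owner(key)].get(key)

dht = ToyDHT([16, 48, 80, 112])
dht.insert("title", "file data...")
print(dht.lookup("title"))   # -> 'file data...'
```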

  10. Chord lookups • Map keys to nodes in a load-balanced way • Hash a node IP addr. into a long string of digits (node ID) • Hash a key to the same string length (key ID) • Assign each hashed key to its “closest” node (i.e., its successor) • Refer to the hashed node ID & key ID as ID & key, respectively • Forward a key lookup to a closer node • Insert: lookup + store • Join: insert node in ring (figure: circular ID space with nodes N32, N60, N90, N105 and keys K5, K20, K80)
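The ID assignment described above can be sketched concretely: hash node addresses and keys into the same ID space, then store each key at its successor. A minimal Python sketch; the IP addresses are placeholders:

```python
import hashlib

def chord_id(text, bits=160):
    # SHA-1 maps both node addresses and keys into the same 160-bit ID space.
    return int(hashlib.sha1(text.encode()).hexdigest(), 16) % (2**bits)

# Placeholder node addresses; in Chord a node's ID is the hash of its IP.
node_ids = sorted(chord_id(ip) for ip in
                  ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"])

def successor(key_id):
    # The "closest" node for a key: the first node ID at or after the key
    # ID, wrapping around the circular ID space.
    return next((n for n in node_ids if n >= key_id), node_ids[0])

# Insert = lookup + store: the key "title" lives at its successor node.
print(successor(chord_id("title")))
```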

  11. Chord’s routing table: fingers (figure: node N80’s fingers point ½, ¼, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring)

  12. Lookups take O(log(N)) hops • Lookup: route to the closest predecessor (figure: ring of nodes N5, N10, N20, N32, N60, N80, N99, N110; Lookup(K19) issued at N80 is routed to K19’s successor, N20)
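A compact sketch of the routing behind this bound, using the ring from the figure (7-bit IDs, nodes N5 through N110); this is a hedged illustration of the idea, not Chord’s full pseudocode:

```python
M = 7                                    # 7-bit ID space: IDs in [0, 128)
NODES = sorted([5, 10, 20, 32, 60, 80, 99, 110])

def successor(i):
    # First node at or after i on the ring, wrapping around.
    return next((n for n in NODES if n >= i % 2**M), NODES[0])

def fingers(n):
    # finger[k] = successor(n + 2**k): nodes roughly 1/2, 1/4, 1/8, ...
    # of the ring away, as on the previous slide.
    return [successor(n + 2**k) for k in range(M)]

def between(x, a, b):
    # True if x lies in the half-open ring interval (a, b].
    return (a < x <= b) if a < b else (x > a or x <= b)

def lookup(start, key):
    n, hops = start, 0
    while not between(key, n, successor(n + 1)):
        # Route to the finger closest to (but still preceding) the key;
        # each such hop roughly halves the remaining ring distance.
        n = max((f for f in fingers(n) if between(f, n, key)),
                key=lambda f: (f - n) % 2**M,
                default=successor(n + 1))
        hops += 1
    return successor(key % 2**M), hops

print(lookup(80, 19))   # key K19 resolves to node N20 in a few hops
```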

  13. Steps for node 3 to find the successor of key = 1: a. key = 1 belongs to 3.finger[3].interval b. node 3 checks its 3rd finger entry: succ. = 0 c. node 0 checks its 1st finger entry and gets its succ. = 1 (the key is within the smallest possible interval, so the 1st entry’s successor is the answer). Three existing keys, 1, 2, 6, are held in different nodes. (Figure: finger tables at existing nodes 0, 1, 3, specifying subsequent intervals and their corresponding successors.)
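The figure’s finger tables can be reproduced directly; a small Python script reconstructing the slide’s 3-bit example (nodes 0, 1, 3 holding keys 1, 2, 6):

```python
M = 3
NODES = [0, 1, 3]

def successor(i):
    # First node at or after i in the 2**M ID space, wrapping to node 0.
    return next((n for n in sorted(NODES) if n >= i % 2**M), min(NODES))

for n in NODES:
    entries = [(k + 1, (n + 2**k) % 2**M) for k in range(M)]
    table = ", ".join(f"finger[{k}]: start={s} succ={successor(s)}"
                      for k, s in entries)
    print(f"node {n}: {table}")

# The output shows why the slide's steps work: node 3's finger[3] has
# succ. = 0, and node 0's finger[1] has succ. = 1, which holds key 1.
# Keys map to successors: key 1 -> node 1, key 2 -> node 3, key 6 -> node 0.
```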

  14. Prefix Hash Trees (PHTs) • Easy deployment using OpenDHT • 3 APIs – put, delete, get • Application – Place Lab • Range queries • Multiple attributes – combined using linearization • Hash on prefixes • Beacon IDs hashed with SHA-1

  15. PHT search takes O(log(log(N))) hops
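The doubly-logarithmic bound comes from binary-searching the prefix length: for D-bit keys (D on the order of log N), only O(log D) = O(log(log N)) DHT gets are needed. A hedged Python sketch, where the dict `dht` stands in for a real DHT such as OpenDHT and the tiny trie is hand-built for illustration:

```python
D = 8  # key length in bits

# Each trie node lives in the DHT under its prefix: leaves carry data,
# internal nodes are markers on the path down to the leaves.
dht = {
    "":   "internal",
    "0":  "internal",
    "00": ("leaf", ["00010110"]),
    "01": ("leaf", ["01101001"]),
    "1":  ("leaf", ["10111100", "11000011"]),
}

def pht_lookup(key_bits):
    """Binary-search the prefix length to find the leaf covering key_bits."""
    lo, hi = 0, D
    while lo <= hi:
        mid = (lo + hi) // 2
        node = dht.get(key_bits[:mid])       # one DHT get per probe
        if node is None:                     # too deep: no such prefix
            hi = mid - 1
        elif node == "internal":             # too shallow: descend further
            lo = mid + 1
        else:                                # found the covering leaf
            return key_bits[:mid], node
    return None

print(pht_lookup("01101001"))   # -> ('01', ('leaf', ['01101001']))
```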

  16. Our Work by Denvil Smith: Fast and Scalable Resource Discovery (figure: query and publishing flows over well data. A user query such as well ID = 2701 is translated into attributes (well ID, name, bore, pressure, temp), hashed, and resolved through DHT (Chord) gets; the PHT is searched over the DHT until a leaf node is found, then the user is notified. Publishing a record, e.g., Well ID = 2701, Name = geiser, Bore = 89.99, Pressure = 28.8, Temp = 512, finds successors until reaching a leaf node, checks node capacity, validates the selection, creates leaf nodes as needed, and updates the PHT and DHT.)
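One plausible reading of the “multiple attributes combined using linearization” step from slide 14 is z-order (Morton) interleaving, as used with PHTs for multi-attribute range queries. The attribute ranges and bit widths below are illustrative assumptions, not taken from the actual system:

```python
def to_bits(value, lo, hi, width=8):
    # Quantize a numeric attribute into a fixed-width bit string.
    q = min(2**width - 1, max(0, round((value - lo) / (hi - lo) * (2**width - 1))))
    return format(q, f"0{width}b")

def z_order(*bitstrings):
    # Interleave attribute bits so nearby values share long prefixes,
    # which is what makes PHT range queries over the combined key work.
    return "".join("".join(bits) for bits in zip(*bitstrings))

# Record from the slide: Pressure = 28.8, Temp = 512 (ranges are assumed).
key = z_order(to_bits(28.8, 0, 100),     # pressure, assumed range 0..100
              to_bits(512, 0, 1024))     # temperature, assumed range 0..1024
print(key)   # 16-bit z-ordered key to publish under the PHT
```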

  17. Our Work by Denvil Smith

  18. Our Work by Denvil Smith

  19. Questions? Please Ask!
