
Enabling Data-Intensive Science with Tactical Storage Systems

Presentation Transcript


  1. Enabling Data-Intensive Science with Tactical Storage Systems. Douglas Thain, http://www.cse.nd.edu/~dthain

  2. Sharing is Hard!
  • Despite decades of research in distributed systems and operating systems, sharing computing resources is still technically and socially difficult!
  • Most existing systems for sharing require:
    • Kernel-level software.
    • A privileged login.
    • Centralized trust.
    • Loss of control over resources that you own.

  3. Example: Grid Computing
  Robert Gardner, et al. (102 authors), "The Grid2003 Production Grid: Principles and Practice," IEEE HPDC 2004.
  "The Grid2003 Project has deployed a multi-virtual organization, application-driven grid laboratory that has sustained for several months the production-level services required by… ATLAS, CMS, SDSS, LIGO…"

  4. Grid Computing Experience
  The good news:
  • 27 sites with 2800 CPUs.
  • 40985 CPU-days provided over 6 months.
  • 10 applications with 1300 simultaneous jobs.
  The bad news:
  • 40-70 percent utilization.
  • 30 percent of jobs would fail.
  • 90 percent of failures were site problems.
  • Most site failures were due to disk space.

  5. A Strange Problem
  • Storage is plentiful!
    • Large disks on every CPU, PDA, and iPod.
    • A typical cluster has unused disks on each node.
    • An MS filesystem study found most disks about 90% free.
    • Tools for sharing exist: AFS, NFS, FTP, SCP...
  • The problem:
    • Users are fixed to the abstractions provided by administrators, e.g. one NFS file system.
    • Result: 1000 people share one 40 GB disk.

  6. What if...
  • Users could use any storage anywhere?
  • I could borrow an unused disk for NFS?
  • An entire cluster could be used as storage?
  • Multiple clusters could be combined?
  • All this could be done without root?
  • Solution: the Tactical Storage System (TSS).

  7. Outline
  • Why is Sharing Data so Hard?
  • Tactical Storage Systems
    • File Servers, Abstractions, Adapters
  • Performance Comparison
  • Application: High-Energy Physics
  • Application: Bioinformatics Database
  • Conclusion

  8. Tactical Storage Systems (TSS)
  • A TSS allows any node to serve as a file server or as a file system client.
  • All components can be deployed without special privileges – but with security.
  • Users can build up complex structures: filesystems, databases, caches, ...
  • Two independent concepts:
    • Resources – the raw storage to be used.
    • Abstractions – the organization of storage.

  9. (Diagram: applications connect through adapters to a central filesystem, a distributed filesystem abstraction, and a distributed database abstraction, all built from user-level file servers running on ordinary UNIX machines. The cluster administrator controls policy on all storage in the cluster; workstation owners control policy on each of their own machines.)

  10. Three Components
  • User-Level File Servers
    • Secure remote file access without root.
  • Storage Abstractions
    • Combine several file servers into one.
  • Application Adapters
    • Attach existing applications without root.

  11. User-Level File Servers
  • Unix-like access to existing file systems.
  • Complete independence:
    • choose friends
    • limit bandwidth
    • evict users?
  • Trivial to deploy – three steps.
  • Flexible access control.
  (Diagram: clients speak the Chirp protocol to file servers, which sit on top of the local file system.)
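
The phrase "user-level" above is literal: a file server is just an ordinary process exporting a directory its owner chooses. The sketch below is a minimal illustration of that idea in Python, not the actual Chirp server; the one-line GET protocol, EXPORT_DIR, and the port number are all invented for the example.

    # Minimal user-level file server sketch (illustrative only, not the Chirp protocol).
    # Protocol: the client sends "GET <relative-path>\n"; the server replies with the
    # file contents, or an error line. Everything runs as an ordinary user.

    import os
    import socketserver

    EXPORT_DIR = os.path.expanduser("~/export")   # directory this user chooses to share
    PORT = 9094                                   # arbitrary unprivileged port

    class FileRequestHandler(socketserver.StreamRequestHandler):
        def handle(self):
            request = self.rfile.readline().decode().strip()
            if not request.startswith("GET "):
                self.wfile.write(b"ERROR unsupported request\n")
                return
            relpath = request[4:]
            # Refuse paths that escape the exported directory.
            full = os.path.realpath(os.path.join(EXPORT_DIR, relpath))
            if not full.startswith(os.path.realpath(EXPORT_DIR)):
                self.wfile.write(b"ERROR forbidden\n")
                return
            try:
                with open(full, "rb") as f:
                    self.wfile.write(f.read())
            except OSError as e:
                self.wfile.write(f"ERROR {e.strerror}\n".encode())

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("", PORT), FileRequestHandler) as server:
            server.serve_forever()

The real Chirp server additionally provides authentication, per-directory access control, and a full Unix-like set of operations, as the next slide describes.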

  12. Access Control in File Servers
  • Unix security is not sufficient for the job.
  • Authentication: Globus, Kerberos, Unix, Hostname, Address.
  • Authorization: each directory has an access control list, e.g.:
    globus:/O=INFN/CN=Paolo_Mazzanti  RWLA
    kerberos:dthain@nd.edu            RWL
    hostname:*.bo.infn.it             RL
    address:192.168.1.*               RWLA
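
Each ACL entry above pairs an authenticated subject of the form method:name, possibly with wildcards, with a rights string (R = read, W = write, L = list, A = administer). The sketch below shows how such a list could be evaluated under that encoding; the allowed() helper is invented for illustration, not the server's real API.

    # Sketch of per-directory ACL evaluation for the entries shown above.
    # Each entry is (subject pattern, rights string); wildcards in the subject
    # pattern are matched with fnmatch.

    from fnmatch import fnmatch

    ACL = [
        ("globus:/O=INFN/CN=Paolo_Mazzanti", "RWLA"),
        ("kerberos:dthain@nd.edu",           "RWL"),
        ("hostname:*.bo.infn.it",            "RL"),
        ("address:192.168.1.*",              "RWLA"),
    ]

    def allowed(subject, right, acl=ACL):
        """Return True if any ACL entry matching 'subject' grants 'right'."""
        return any(fnmatch(subject, pattern) and right in rights
                   for pattern, rights in acl)

    # Example: a host in the Bologna INFN domain may list but not write.
    print(allowed("hostname:grid01.bo.infn.it", "L"))   # True
    print(allowed("hostname:grid01.bo.infn.it", "W"))   # False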

  13. Widely Shared Storage Servers
  (Diagram: many users' files – test.c, test.dat, a.out, cms.exe – live on one file server whose ACL grants every Globus subject full rights: globus:/O=INFN/CN=* RWLAX.)

  14. Reservation Right (V)
  (Diagram: the server ACL globus:/O=INFN/CN=* V(RWLA) allows any Globus subject to mkdir only; each newly created directory receives an ACL granting its creator RWLA, e.g. /O=INFN/CN=Mazzanti RWLA or /O=INFN/CN=Berlusconi RWLA, so each user reserves a private space and then works inside it with files such as test.c and a.out.)
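
The reservation right can be read as "mkdir only": a subject matching an entry with V(RWLA) may not read or write at the top level, but may create a directory, and the fresh directory's ACL grants the creator the rights named in parentheses. A small sketch of that rule, with an invented mkdir_with_reservation() helper:

    # Sketch of the reservation right V: "globus:/O=INFN/CN=* V(RWLA)" lets any
    # matching user mkdir only; the fresh directory's ACL grants that user RWLA.

    from fnmatch import fnmatch

    TOP_LEVEL_ACL = [("globus:/O=INFN/CN=*", "V(RWLA)")]

    def mkdir_with_reservation(subject, acl):
        """If 'subject' holds a V right, return the ACL of the new directory."""
        for pattern, rights in acl:
            if fnmatch(subject, pattern) and rights.startswith("V("):
                granted = rights[2:-1]          # the rights inside the parentheses
                return [(subject, granted)]     # new directory owned by its creator
        raise PermissionError("no reservation right for %s" % subject)

    new_dir_acl = mkdir_with_reservation("globus:/O=INFN/CN=Mazzanti", TOP_LEVEL_ACL)
    print(new_dir_acl)   # [('globus:/O=INFN/CN=Mazzanti', 'RWLA')]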

  15. Abstractions
  • Users create higher-level structures.
    • Admins do not know/care about abstractions.
  • Current abstraction types:
    • CFS – Central File System
    • DSFS – Distributed Shared File System
    • DSDB – Distributed Shared Database
  • Abstractions under development:
    • Striped File System
    • Distributed Time-Travel Backup System

  16. CFS: Central File System
  (Diagram: several applications, each with its own adapter, all accessing files on a single file server.)

  17. DSFS: Distributed Shared File System
  (Diagram: applications with adapters consult a directory server holding small pointer files; the pointers name the actual files, which are spread across several data file servers and accessed directly.)
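
The split in the diagram is between a directory of small pointer files and the data servers that hold the actual contents: a client resolves a logical name to a host:file pointer and then talks to that data server directly. A rough sketch of the idea, in which the pointer format, the server names, and the placement policy are all invented:

    # Sketch of the DSFS idea: a directory maps logical names to "host:file" pointers,
    # and clients then contact the named data server directly. The pointer table and
    # the store/fetch helpers are stand-ins for real file servers.

    import random

    DATA_SERVERS = ["disk01.nd.edu", "disk02.nd.edu", "disk03.nd.edu"]
    directory = {}   # logical name -> "host:physical-name" pointer

    def dsfs_create(logical_name, data):
        host = random.choice(DATA_SERVERS)          # placement policy: pick any server
        physical = f"{abs(hash(logical_name)):x}"   # invented physical file name
        store_on(host, physical, data)              # would be a remote 'put' in practice
        directory[logical_name] = f"{host}:{physical}"

    def dsfs_open(logical_name):
        host, physical = directory[logical_name].split(":", 1)
        return fetch_from(host, physical)           # direct access to the data server

    # Stand-ins for remote I/O so the sketch runs by itself.
    _remote = {}
    def store_on(host, name, data): _remote[(host, name)] = data
    def fetch_from(host, name):     return _remote[(host, name)]

    dsfs_create("results/run42.dat", b"simulation output")
    print(dsfs_open("results/run42.dat"))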

  18. DSDB: Distributed Shared Database
  (Diagram: applications with adapters create/prepare, insert, and query through a database server that maintains an index, then access the matching files directly on the file servers that store them.)
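
In this abstraction the database server holds only an index of metadata records; the bulk data stays on file servers, and queries return pointers that the application dereferences directly. A toy sketch of that flow, with made-up record fields and in-memory dictionaries standing in for the servers:

    # Sketch of the DSDB flow shown above: 'insert' stores the data on a file server
    # and registers a metadata record with the database server; 'query' consults only
    # the index and returns pointers, which the application then dereferences directly.

    index = []                                # database server: list of metadata records
    file_servers = {"fs01": {}, "fs02": {}}   # host -> {file name: data}

    def insert(metadata, data, host="fs01"):
        name = f"obj{len(index)}.dat"
        file_servers[host][name] = data                          # data to a file server
        index.append({**metadata, "where": f"{host}:{name}"})    # record to the index

    def query(**predicates):
        return [r["where"] for r in index
                if all(r.get(k) == v for k, v in predicates.items())]

    def direct_access(pointer):
        host, name = pointer.split(":", 1)
        return file_servers[host][name]       # talk to the file server directly

    insert({"molecule": "CH4", "temp": 350}, b"trajectory bytes", host="fs02")
    for ptr in query(molecule="CH4"):
        print(ptr, direct_access(ptr))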

  19. DSDB Authentication
  (Diagram: the file server's ACL gives the database host a reservation right, hostname:database.infn.it V(RWLA). To insert a file for user /O=INFN/CN=Mazzanti, the database server mkdirs a fresh directory, which it then owns via hostname:database.infn.it RWLA, sets an ACL entry globus:/O=INFN/CN=Mazzanti RWL, and the application's adapter transfers file.dat directly into that directory.)

  20. Adapter: an Enhanced Operating System
  • Like an OS kernel:
    • Tracks processes, files, etc.
    • Adds new capabilities.
    • Enforces the owner's policies.
  • Delegated syscalls:
    • Trapped via the ptrace interface.
    • Action taken by Parrot.
    • Resources charged to Parrot.
  • Research platform:
    • Distributed file systems.
    • Grid application environments.
    • Debugging.
    • Easier than OS coding!
  (Diagram: unmodified programs such as tcsh, cat, and vi run on top of the Parrot adapter, which traps their system calls through the ptrace interface and maintains its own process table and file table.)
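
The ptrace-based interposition itself is low-level systems code, but the redirection idea fits in a few lines. The sketch below is purely conceptual: the /chirp/<host>/ path convention is borrowed from Parrot, while remote_read() and adapter_open() are invented stand-ins rather than Parrot's actual interface.

    # Conceptual sketch of what the adapter does: paths under a /chirp/<host>/ prefix
    # are redirected to a remote file server, while every other path falls through to
    # the local filesystem. Parrot achieves this for unmodified binaries by trapping
    # system calls with ptrace; here the redirection is just a wrapper function.

    import io

    def remote_read(host, path):
        # Stand-in for a remote call; a real adapter would speak the Chirp protocol.
        return f"(contents of {path} on {host})".encode()

    def adapter_open(path, mode="r"):
        if path.startswith("/chirp/"):
            _, _, host, rest = path.split("/", 3)
            return io.BytesIO(remote_read(host, "/" + rest))
        return open(path, mode)              # ordinary local file, untouched

    print(adapter_open("/chirp/fs01.nd.edu/data/run1.dat").read())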

  21. (Diagram: a file server layered on the local file system.)

  22. Prototype Storage in the Computer Science Dept
  - Office Workstations
  - Instructional Labs
  - Research Clusters
  - Storage Bricks
  Each owner controls local storage:
  - Access control list.
  - Evicts users if needed.
  - Collaborates offsite.

  23. Demo Time!

  24. Outline
  • Why is Sharing Data so Hard?
  • Tactical Storage Systems
    • File Servers, Abstractions, Adapters
  • Performance Comparison
  • Application: High-Energy Physics
  • Application: Bioinformatics Database
  • Conclusion

  25. Performance Considerations
  • Nothing comes for free!
    • System calls: an order of magnitude slower.
    • Memory bandwidth overhead: extra copies.
  • Compared to NFS:
    • TSS is slightly better on small operations.
    • TSS is much better in network bandwidth.
  • On real applications:
    • Measurable slowdown.
    • Benefit: far more flexible and scalable.

  26. Performance – System Calls

  27. Performance – Applications (chart: Parrot only)

  28. Performance – I/O Calls

  29. Performance – Bandwidth

  30. Performance – DSFS

  31. Performance Conclusion
  • TSS has a measurable slowdown.
  • TSS is comparable to NFS.
  • TSS can create scalable, parallel filesystems.
  • To do better, we must modify the kernel.

  32. Outline
  • Why is Sharing Data so Hard?
  • Tactical Storage Systems
    • File Servers, Abstractions, Adapters
  • Performance Comparison
  • Application: High-Energy Physics
  • Application: Bioinformatics Database
  • Conclusion

  33. Application: High-Energy Physics
  • SP5 Monte Carlo simulation:
    • A component of BaBar at SLAC.
    • Collaboration with Sander Klous at NIKHEF.
  • Difficult to deploy on a grid:
    • Complex software structure.
    • Custom shared libraries.
    • Objectivity database.
  • (Similar difficulties arise with other applications.)

  34. SP5 on a Standalone Machine
  (Diagram: a manually started sp5 application linked against libobjy performs file system operations on local scripts and data, and database lock operations against a lock server.)

  35. Ideal SP5 Deployment
  (Diagram: many sp5 instances, each linked against libobjy, run in parallel; all perform file system operations on shared scripts and data, and database lock operations against a single lock server.)

  36. SP5 with Tactical Storage
  (Diagram: each sp5 instance runs over an adapter and authenticates with GSI; the adapters direct file system operations to a tactical file server holding libobjy, the scripts, and the data, while database lock operations still go to the lock server.)

  37. Performance on EDG Testbed

  38. Thoughts on SP5 + TSS
  "With this project we have shown that computer scientists can solve the complications of grid computing and physicists can just use it."
  "The most important issue is: Who has to do the work?"

  39. Outline
  • Why is Sharing Data so Hard?
  • Tactical Storage Systems
    • File Servers, Abstractions, Adapters
  • Performance Comparison
  • Application: High-Energy Physics
  • Application: Bioinformatics Database
  • Conclusion

  40. Application: Molecular Dynamics
  • Researchers in MD are much like those in HEP:
    • Long-running simulations exploring a parameter space.
    • Collaborating/competing groups running similar simulations.
    • "What parameters have I explored?"
    • "How can I share results with friends?"
    • "Replicate these data for safety."
  • GEMS: Grid Enabled Molecular Simulations
    • A distributed database for MD simulations at Notre Dame.
    • Collaborators: Dr. Jesus Izaguirre, Dr. Aaron Striegel.

  41. GEMS Distributed Database
  (Diagram: a query such as "Temp>300K, Mol==CH4" goes to the database server, which consults catalog servers mapping each XML metadata record to its replicas, e.g. host6:fileX, host2:fileY, host5:fileZ; the application then reads those data files directly from the hosts that store them.)

  42. GEMS and Tactical Storage
  • Dynamic system configuration:
    • Servers are added and removed at run time, discovered via the catalog.
  • Policy control in file servers:
    • Groups can collaborate within constraints.
    • Security is implemented within the file servers.
  • Direct access via adapters:
    • Unmodified simulations can use the database.
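
The first bullet describes catalog-based discovery: servers register themselves periodically, silent servers expire, and clients ask the catalog which servers are currently usable. The sketch below illustrates that loop; the function names, the 60-second lifetime, and the free-space filter are invented for the example, not the actual GEMS or cctools catalog protocol.

    # Sketch of catalog-based discovery: file servers periodically register themselves,
    # the catalog drops entries it has not heard from recently, and clients ask the
    # catalog for the servers that are currently alive.

    import time

    LIFETIME = 60.0                 # seconds before a silent server is forgotten
    catalog = {}                    # host -> (last heartbeat, free space in MB)

    def register(host, free_mb):
        """Called by each file server every few seconds."""
        catalog[host] = (time.time(), free_mb)

    def discover(min_free_mb=0):
        """Called by clients: return live servers with enough free space."""
        now = time.time()
        return [host for host, (seen, free) in catalog.items()
                if now - seen < LIFETIME and free >= min_free_mb]

    register("disk01.nd.edu", 1200)
    register("disk02.nd.edu", 300)
    print(discover(min_free_mb=500))    # ['disk01.nd.edu']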

  43. Survivability

  44. Outline
  • Why is Sharing Data so Hard?
  • Tactical Storage Systems
    • File Servers, Abstractions, Adapters
  • Performance Comparison
  • Application: High-Energy Physics
  • Application: Bioinformatics Database
  • Conclusion

  45. Tactical Storage Systems
  • Separate abstractions from resources.
  • Components: file servers, abstractions, adapters.
    • Completely user level.
    • Performance acceptable for real applications.
  • Independent but cooperating components:
    • Owners of file servers set policy.
    • Users must work within those policies.
    • Large numbers of users are handled with the V right.

  46. Future Work
  • More powerful abstractions:
    • Striping, replicating, indexing, searching.
  • More fine-grained control of storage:
    • Allocation, accounting, and management of bandwidth and storage space.
  • Applications and deployment.

  47. Tactical Storage Systems put power in the hands of the users, not administrators!

  48. Collaborators
  • NIKHEF and Vrije University:
    • Sander Klous
  • University of Notre Dame:
    • Aaron Striegel, Jesus Izaguirre
  • Hard-working students:
    • Justin Wozniak, Paul Brenner
    • Paul Madrid, Chris Moretti

  49. Publications
  • "Tactical Storage Systems," UND CSE Dept Technical Report 2005-07, May 2005.
  • "Transparent Access to Grid Resources for User Software," accepted to Concurrency and Computation: Practice and Experience, 2005.
  • "Gluttony and Generosity in GEMS: Grid Enabled Molecular Storage," High Performance Distributed Computing, 2005.
  • "Parrot: Transparent User-Level Middleware for Data-Intensive Computing," Workshop on Adaptive Grid Middleware, 2003.

  50. For more information...
  • Cooperative Computing Lab: http://www.cse.nd.edu/~ccl
  • Cooperative Computing Tools: http://www.cctools.org
  • Douglas Thain: dthain@cse.nd.edu, http://www.cse.nd.edu/~dthain
