
Storage Element Status



  1. Storage Element Status GridPP7 Oxford, 30 June – 02 July 2003

  2. Contents
  • What is an SE?
  • The SE Today
  • What will it provide Really Soon Now™?
  • What is left for future developments?

  3. What is the Storage Element?

  4. Collaborations
  • DataGrid Storage Element
  • Integrates with WP2 Data Replication Services (Reptor, Optor)
  • Jobs running on worker nodes in a ComputingElement cluster may read or write files on an SE
  • SRM – Storage Resource Manager
  • A collaboration between Lawrence Berkeley, Fermilab, Jefferson Lab, CERN, and Rutherford Appleton Lab

  5. EU DataGrid Storage Element
  • Grid interface to Mass Storage Systems (MSS)
  • Flexible and extensible
  • Provides additional services for the underlying MSS, e.g. access control, file metadata, …

  6. Control Interface
  • Web service interface
  • SRM-style interface, written before the SRM v1 WSDL was available
  • E.g. cache file: stage a file in from mass storage and place it in the disk cache
  • E.g. register an existing file
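  As a sketch of what one control-interface exchange might look like, the snippet below builds a hypothetical "cache file" message; the element names and client logic are invented for illustration, since the real message schema is the SE's own:

    import xml.etree.ElementTree as ET

    # Build a hypothetical "cache file" control message (element names
    # illustrative, not the SE's actual schema).
    msg = ET.Element("request")
    ET.SubElement(msg, "function").text = "cache"
    ET.SubElement(msg, "sfn").text = "lxshare0408.cern.ch/bongo/mumble"
    print(ET.tostring(msg, encoding="unicode"))
    # A real client would send this to the SE's web-service endpoint and
    # receive an XML reply carrying a request id.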

  7. More about SRM
  • Users know Site File Names (SFN) or Physical File Names (PFN)
  • E.g. lxshare0408.cern.ch/bongo/mumble
  [Diagram: the client sends a "cache file" request for the SFN and receives a request id; the SE's disk cache sits in front of the MSS]

  8. More about SRM
  • Client queries the status of a request
  • Better that the client polls than that the server calls back
  • Server (ideally) able to give a time estimate
  [Diagram: the client asks "status?"; the SE replies "not ready, try later" while the file moves from MSS to disk cache]

  9. More about SRM
  • When the request is ready, the client gets a Transfer URL (TURL)
  • E.g. gsiftp://lxshare0408.cern.ch/flatfiles/01/data/16bd30e2a899b7321baf00146acbe953
  [Diagram: the client asks "status?"; the SE replies "ready" with the TURL; the file is now in the disk cache]

  10. More about SRM
  • Client accesses the file in the SE's disk cache using (usually) non-SE tools
  [Diagram: globus-url-copy copies the file out of the SE's disk cache]

  11. More about SRM
  • Finally, the client informs the SE that the data transfer is done
  • This is required for cache management etc.
  [Diagram: the client sends "done"; the SE replies "ok"]
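  Putting slides 7 to 11 together, the client-side flow looks roughly like the sketch below; se_request is a stand-in for one XML exchange with the SE's control interface, and every name and field in it is illustrative rather than the real API:

    import subprocess
    import time

    def se_request(function, **fields):
        """Stand-in for one XML message to the SE control interface;
        a real client would talk to the SE's web-service endpoint."""
        raise NotImplementedError("illustration only")

    sfn = "lxshare0408.cern.ch/bongo/mumble"      # Site File Name (slide 7)
    req_id = se_request("cache", sfn=sfn)         # SE stages the file into its disk cache

    while True:                                   # client polls; no server callbacks (slide 8)
        status = se_request("status", request_id=req_id)
        if status["state"] == "ready":
            turl = status["turl"]                 # e.g. gsiftp://lxshare0408.cern.ch/... (slide 9)
            break
        time.sleep(30)                            # the server may also suggest a wait time

    # Transfer with a non-SE tool (slide 10); a gsiftp TURL needs a GridFTP client.
    subprocess.call(["globus-url-copy", turl, "file:///tmp/mumble"])

    se_request("done", request_id=req_id)         # needed for cache management (slide 11)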

  12. Data transfer interface
  • GridFTP: the standard data transfer protocol in the SRM collaboration
  • Some SEs will be NFS mounted
  • Caching and pinning are still required before a file is accessed via NFS
  • Easy to add new data transfer protocols, e.g. http, ftp, https, …

  13. Information Interface
  • Can publish into MDS
  • Can publish (via GIN) into R-GMA
  • Uses the GLUE schema for StorageElement: http://hepunx.rl.ac.uk/edg/wp3/documentation/doc/schemas/Glue-SE.html
  • There is also a file metadata function as part of the control interface
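  For illustration, an entry published into MDS is LDIF shaped roughly as below; the attribute names follow the GLUE 1.x StorageElement schema linked above, and the DN and values are placeholders:

    dn: GlueSEUniqueID=se.example.org, mds-vo-name=local, o=grid
    objectClass: GlueSE
    GlueSEUniqueID: se.example.org
    GlueSEName: EXAMPLE:disk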

  14. The SE Today

  15. Implementation
  • XML messages passed between applications
  • Functionality implemented in handlers
  • A handler is a specialised program that reads a message, parses it, and returns a message
  • Each handler does one specific thing
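  A minimal sketch of one handler, assuming messages arrive on stdin and leave on stdout; the message schema and the result field are invented for illustration:

    import sys
    import xml.etree.ElementTree as ET

    def main():
        msg = ET.fromstring(sys.stdin.read())    # read and parse the incoming message
        # ... this handler's one specific job, e.g. a user lookup ...
        ET.SubElement(msg, "local-user").text = "dteam001"       # illustrative result field
        sys.stdout.write(ET.tostring(msg, encoding="unicode"))   # return the message

    if __name__ == "__main__":
        main()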

  16. Handlers processing a request
  [Diagram: the Request Manager passes the request through Handler 1 (user lookup), Handler 2 (file lookup), …, Handler n, and then returns; time runs along the chain]

  17. Architecture
  • The request contains the sequence of names of handlers
  • As each handler processes the request, it calls a library that moves its name from the sequence to an audit section (see the sketch after this list)
  • The library allows easy access to global data
  • Handlers may also keep handler-specific data in the XML
  • Storing the XML output from each handler as the request gets processed makes it easy to debug the SE
  [Diagram: a request carrying sequence, audit, and global-data sections flows through handler1 … handler5]
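  A sketch of that bookkeeping, assuming <sequence> and <audit> sections in the request document (element names illustrative):

    import xml.etree.ElementTree as ET

    def advance(request):
        """Move the next handler's name from the sequence to the audit section."""
        seq = request.find("sequence")
        current = seq[0]
        seq.remove(current)
        request.find("audit").append(current)
        return current.text

    req = ET.fromstring("<request><sequence><h>handler1</h><h>handler2</h>"
                        "</sequence><audit/><global/></request>")
    print(advance(req))                          # -> handler1
    print(ET.tostring(req, encoding="unicode"))  # handler1 now sits under <audit>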

  18. Mapping to a local user
  • The SE runs as an unprivileged user, with a highly constrained setuid executable to map to local users
  • Mapping is done using the gridmap file, with pooled accounts if available (see the sketch below)
  • Needed for access to some (many) mass storage systems
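  A sketch of the gridmap lookup, using the common grid-mapfile convention that a leading dot marks a pooled account; the pool-allocation counter is invented for illustration (real pooled mapping leases accounts per DN):

    def map_user(dn, gridmap, pool_counters):
        """Map a certificate DN to a local account, drawing from a pool if marked."""
        account = gridmap[dn]            # e.g. "/O=Grid/CN=Jane Doe" -> ".dteam"
        if account.startswith("."):      # pooled: hand out dteam001, dteam002, ...
            pool = account[1:]
            pool_counters[pool] = pool_counters.get(pool, 0) + 1
            return "%s%03d" % (pool, pool_counters[pool])
        return account                   # ordinary static mapping

    gridmap = {"/O=Grid/CN=Jane Doe": ".dteam"}
    print(map_user("/O=Grid/CN=Jane Doe", gridmap, {}))   # -> dteam001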

  19. Deployment
  • MSS working: CERN – Castor and disk; RAL – Atlas DataStore and disk
  • Testing: HPSS via RFIO (CC-IN2P3); ESA/ESRIN – disk first, then plan tape MSS (AMS); NIKHEF – disk
  • Others: SARA – building from source on SGI; INSA/WP10 – building support for DICOM servers; UAB – Castor, testing

  20. Installation
  • RPMs and source available
  • Source compiles with gcc 2.95.x and 3.2.3
  • Configures using LCFG-ng
  • Tools available to build and install SEs without LCFG:
    ./configure --mss-type=disk
    make
    make install

  21. Roadmap

  22. Priorities for September
  • WP2 TrustManager adopted for security, but LCFG configuration is still needed
  • Currently building a proper queuing system
  • SRM v1.1 interface
  • Access control using GACL (see the sketch after this list)
  • Improved disk cache management (including pinning)
  • VOMS support
  • SRM v2.1 interface (partial support)
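  For reference, a GACL file is XML along these lines; this is a sketch of the GridSite GACL format, with the DN and the permission set as placeholders:

    <gacl>
      <entry>
        <person>
          <dn>/O=Grid/CN=Jane Doe</dn>
        </person>
        <allow><read/><write/></allow>
      </entry>
    </gacl>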

  23. Future developments

  24. Future Directions
  • Guaranteed reservations
  • SRM2 recommendations: volatile, durable, and permanent files and space
  • Full SRM v2.1
  • Scalability: will be achieved by making a single SE distributed – not hard to do
