CASTOR Project Status
CERN IT-PDP/DM, February 2000
Agenda
• CASTOR objectives
• CASTOR components
• Current status
• Early tests
• Possible enhancements
• Conclusion
CASTOR
• CASTOR stands for "CERN Advanced Storage Manager"
• Evolution of SHIFT
• Short-term goal: handle NA48 data (25 MB/s) and COMPASS data (35 MB/s) in a fully distributed environment
• Long-term goal: prototype for the software to be used to handle LHC data
• Development started in January 1999
• CASTOR is being put into production at CERN
• See: http://wwwinfo.cern.ch/pdp/castor
CASTOR objectives
• CASTOR is a disk pool manager coupled with a backend store, which provides:
  • Indirect access to tapes
  • HSM functionality
• Major objectives are:
  • High performance
  • Good scalability
  • Easy to clone and deploy
  • High modularity, so that components can easily be replaced and commercial products integrated
• Focused on HEP requirements
• Available on most Unix systems and Windows/NT
CASTOR components
• Client applications use the stager and RFIO (a client-side sketch follows below)
• The backend store consists of:
  • RFIOD (Disk Mover)
  • Name server
  • Volume Manager
  • Volume and Drive Queue Manager (VDQM)
  • RTCOPY daemon + RTCPD (Tape Mover)
  • Tpdaemon (PVR)
• Main characteristics of the servers:
  • Distributed
  • Critical servers are replicated
  • Use the CASTOR Database (Cdb) or commercial databases such as Raima and Oracle
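To make the client side concrete, here is a minimal sketch of reading a remote file through RFIO. It assumes the POSIX-like rfio_open/rfio_read/rfio_close calls of the SHIFT/CASTOR RFIO client library; the header name, the diskserver:/path file, and the rfio_perror error reporting are illustrative assumptions rather than details taken from this talk.

```c
/* Minimal sketch of a CASTOR client reading a file through RFIO.
 * Assumes the POSIX-like rfio_open/rfio_read/rfio_close calls of the
 * SHIFT/CASTOR RFIO client library; the header name and the example
 * path are assumptions, not taken from the talk. */
#include <fcntl.h>
#include <stdio.h>
#include "rfio_api.h"                /* assumed RFIO client header */

int main(void)
{
    char buf[65536];
    int  n;

    /* "host:/path" selects the remote disk server; RFIOD acts as
       the disk mover on the other end of the connection. */
    int fd = rfio_open("diskserver:/shift/pool1/run1234.raw", O_RDONLY, 0);
    if (fd < 0) {
        rfio_perror("rfio_open");    /* assumed helper, analogous to perror */
        return 1;
    }
    while ((n = rfio_read(fd, buf, sizeof(buf))) > 0)
        ;                            /* process n bytes of data here */
    rfio_close(fd);
    return 0;
}
```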
CASTOR layout
• [Architecture diagram showing the stager, RTCOPY, name server, volume manager, VDQM server, TMS, RTCPD (Tape Mover), Tpdaemon (PVR), RFIOD (Disk Mover), MSGD, and the disk pool]
Basic Hierarchical Storage Manager (HSM)
• Automatic tape volume allocation
• Explicit migration/recall by user
• Automatic migration by disk pool manager
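A hedged sketch of what the explicit migration/recall flow might look like from a user program. The castor_migrate/castor_recall names are hypothetical placeholders (the actual stager interface is not given in this talk); the stub bodies simply trace which CASTOR components from the layout slide would be involved at each step.

```c
#include <stdio.h>

/* Hypothetical placeholders for explicit migrate/recall requests;
 * the real stager API names are not given in the talk.  The stubs
 * just print which CASTOR components each step would involve. */
static int castor_migrate(const char *disk, const char *hsm)
{
    printf("migrate %s -> %s\n", disk, hsm);
    printf("  name server: create HSM entry\n");
    printf("  volume manager: allocate a tape volume automatically\n");
    printf("  VDQM: queue the drive; RTCOPY/RTCPD: disk pool -> tape\n");
    return 0;
}

static int castor_recall(const char *hsm, const char *disk)
{
    printf("recall %s -> %s\n", hsm, disk);
    printf("  name server: look up tape segments\n");
    printf("  VDQM: queue the drive; RTCPD: tape -> disk pool\n");
    return 0;
}

int main(void)
{
    /* Explicit user-driven migration, then recall (illustrative paths). */
    if (castor_migrate("/shift/pool1/run1234.raw",
                       "/castor/cern.ch/na48/run1234.raw") < 0)
        return 1;
    if (castor_recall("/castor/cern.ch/na48/run1234.raw",
                      "/shift/pool1/run1234.raw") < 0)
        return 1;
    return 0;
}
```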
Current status
• Development complete
• New stager with Cdb in production for DELPHI
• Mover and HSM being extensively tested
Early tests
• RTCOPY
• Name server
• ALICE Data Challenge
Hardware configuration for RTCOPY tests (1)
• [Diagram: Linux PCs and a SUN E450 with striped-FS SCSI disks (~30 MB/s), driving STK Redwood, IBM 3590E, and STK 9840 tape drives]
RTCOPY test results (1)
• [results plot]
Hardware configuration for RTCOPY tests (2)
• [Diagram: Linux PCs with EIDE disks (~14 MB/s) on 100BaseT behind a Gigabit-attached Linux PC, driving STK Redwood drives, plus a SUN E450 with striped-FS SCSI disks (~30 MB/s) driving an STK 9840]
RTCOPY test results (2)
• A short (1/2 hour) scalability test was run in a distributed environment:
  • 5 disk servers
  • 3 tape servers
  • 9 drives
  • 120 GB transferred
  • 70 MB/s aggregate (mount time overhead included)
  • 90 MB/s aggregate (mount time overhead excluded)
• This exceeds the COMPASS requirements and is just below the ATLAS/CMS requirements
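A rough consistency check on the figures above, assuming the half hour is the mount-inclusive wall-clock time:

\[
\frac{120\ \mathrm{GB}}{1800\ \mathrm{s}} \approx 67\ \mathrm{MB/s},
\]

which is in line with the quoted 70 MB/s mount-inclusive aggregate.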
Name server test results (1)
• [results plot]
Name server test results (2)
• [results plot]
ALICE Data Challenge
• [Diagram: 10 + 7 PowerPC 604 clients (200 MHz, 32 MB) and an HP Kayak on two 3COM Fast Ethernet switches, connected via Gigabit switches and a Smart Switch Router to 12 Linux disk servers and 4 Linux tape servers with 12 Redwood drives]
Possible enhancements
• RFIO client - name server interface
• 64-bit support in RFIO (collaboration with IN2P3; a hedged sketch follows below)
• GUI and Web interface to monitor and administer CASTOR
• Enhanced HSM functionality:
  • Transparent migration
  • Intelligent disk space allocation
  • Classes of service
  • Automatic migration between media types
  • Quotas
  • Undelete and repack functions
  • Import/export
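As a hedged sketch of the 64-bit RFIO enhancement listed above: the rfio_open64/rfio_lseek64 names mirror the POSIX LFS open64/lseek64 convention and are assumptions, not an API taken from this talk; local stand-ins over the plain POSIX calls keep the sketch self-contained and runnable.

```c
/* Hedged sketch of 64-bit RFIO access for files larger than 2 GB.
 * The rfio_open64/rfio_lseek64 names are assumptions modeled on the
 * POSIX LFS convention; the talk only lists 64-bit RFIO support as a
 * planned enhancement (with IN2P3). */
#define _LARGEFILE64_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-ins for the assumed future RFIO calls, backed here by the
 * local LFS interfaces so the sketch actually compiles and runs. */
static int rfio_open64(const char *path, int flags, int mode)
{ return open64(path, flags, mode); }
static off64_t rfio_lseek64(int fd, off64_t off, int whence)
{ return lseek64(fd, off, whence); }
static int rfio_close(int fd)
{ return close(fd); }

int main(void)
{
    int fd = rfio_open64("/tmp/big.raw", O_RDONLY, 0);
    if (fd < 0)
        return 1;
    /* Seek past the 2 GB boundary that a 32-bit off_t cannot express. */
    off64_t pos = rfio_lseek64(fd, (off64_t)3 << 30, SEEK_SET);  /* 3 GB */
    printf("positioned at %lld\n", (long long)pos);
    return rfio_close(fd);
}
```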
Conclusion
• 2 man-years of design and development
• Easy deployment thanks to modularity and backward compatibility with SHIFT
• Performance limited only by the hardware configuration
• See: http://wwwinfo.cern.ch/pdp/castor