US-CMS User Facilities Vivian O’Dell US CMS Physics Meeting May 18, 2001
User Facility Hardware
• Tier 1:
  • CMSUN1 (User Federation host)
    • 8 × 400 MHz processors with ~1 TB RAID
  • Wonder (user machine)
    • 4 × 500 MHz CPU Linux machine with ¼ TB RAID
  • Production farm
    • Gallo, Velveeta: 4 × 500 MHz CPU Linux servers with ¼ TB RAID each
    • 40 dual-CPU 750 MHz Linux farm nodes
Vivian O’Dell, US CMS User Facilities Status
CMS Cluster
• Servers: GALLO, WONDER, VELVEETA, CMSUN1
• Workers: popcrn01–popcrn40
Prototype Tier 2 Status
• Caltech/UCSD
  • Hardware at each site:
    • 20 dual 800 MHz PIIIs, 0.5 GB RAM
    • Dual 1 GHz CPU data server, 2 GB RAM
    • 2 × 0.5 TB fast (Winchester) RAID (70 MB/s sequential)
  • CMS software installed; ooHit and ooDigi tested.
  • Plans to buy another 20 duals this year at each site. See http://pcbunn.cacr.caltech.edu/Tier2/Tier2_Overall_JJB.htm
• University of Florida
  • 72 computational nodes:
    • Dual 1 GHz PIII
    • 512 MB PC133 SDRAM
    • 76 GB IBM IDE disks
  • Sun dual Fibre Channel RAID array, 660 GB (raw), connected to the Sun data server
  • Not yet delivered; performance numbers to follow.
Tier 2 Hardware Status (CalTech)
UF Current (“Physics”) Tasks
• Full digitization of the JPG fall Monte Carlo sample
  • Fermilab, CalTech & UCSD are working on this
  • Fermilab is hosting the User Federation (currently 1.7 TB)
  • The full sample (pileup/no pileup) should be processed in ~1–2 weeks(?)
  • Of course, things are not optimally smooth
  • For up-to-date information see: http://computing.fnal.gov/cms/Monitor/cms_production.html
  • The full JPG sample will be hosted at Fermilab
• User Federation support
  • The contents of the federation, and how to access it, are documented at the URL above; we keep this up to date with production.
• JPG NTUPLE production at Fermilab
  • Yujun Wu and Pal Hidas are generating the JPG NTUPLE from the FNAL user federation and updating the information linked from the JPG web page.
Near Term Plans
• Continue user support
  • Hosting user federations. Currently hosting the JPG federation with a combination of disk and tape (the AMS server <-> Enstore connection is working). We would like feedback.
  • Host the MPG group user federation at FNAL?
  • Continue JPG ntuple production, hosting and archiving
    • We would welcome better technology here; Café is starting to address this problem.
  • Code distribution support
• Start spring production using more “grid aware” tools
  • More efficient use of CPU at the prototype T2s
• Continue commissioning the 2nd prototype T2 center
• Strategy for dealing with the new Fermilab computer security policy
  • This means “kerberizing” all CMS computing
  • Impact on users!
• Organize another CMS software tutorial this summer(?)
  • Coinciding with kerberizing the CMS machines
  • Need to find a good time. Latter half of August, before CHEP01?
T1 Hardware Strategy
• What we are doing
  • Digitization of the JPG fall production with the Tier 2 sites
  • New (spring) MC production with the Tier 2 sites
  • Hosting the JPG user federation at FNAL
    • For fall production this implies ~4 TB of storage (e.g. ~1 TB on disk, 3 TB on tape)
  • Hosting the MPG user federation at FNAL?
    • For fall production this implies ~4 TB of storage (~1 TB disk, 3 TB tape)
  • Also hosting the user federation from spring production, AOD or even NTUPLE for users
  • Objectivity testing / R&D in data hosting
• What we need
  • Efficient use of CPU at the Tier 2 sites, so we don’t need additional CPU for production
  • Fast, efficient, transparent storage for hosting the user federation
    • A mixture of disk and tape
    • R&D on efficient RAID/disk/Objectivity matching
    • This will also serve as input to the RC simulation
  • Build and operate R&D systems for analysis clusters
Hardware Plans FY01
• We have defined the T1 hardware strategy for FY2001
  • ~Consistent with the project plan, with concurrence from the ASCB
• Start a user analysis cluster at Tier 1; this will also be an R&D cluster for “data intensive” computing
• Upgrade networking for the CMS cluster
• Production user federation hosting for the physics groups (more disk/tape storage)
• Test and R&D systems to continue the path towards a full prototype T1 center; this year we are focusing on data server R&D systems
• We have started writing requisitions and plan to acquire most of the hardware over the next 2–3 months
FY01 Hardware Acquisition Overview
Funding Proposal for 2001
• Some costs may be overestimated, but we may also need to augment our farm CPU
Summary
• The user facility has a dual mission:
  • Supporting users
    • Mostly successful (I think)
    • Open to comments, critiques and requests!
  • Hardware/software R&D
    • We will concentrate on this more over the next year
    • This will be done in tandem with the T2 centers and international CMS
• We have developed a hardware strategy taking these two missions into account
• We now have two prototype Tier 2 centers
  • CalTech/UCSD has come online
  • The University of Florida is installing and commissioning its hardware