Southgrid Status Report Pete Gronbech: July 2005 GridPP 13 - Durham
Southgrid Member Institutions • Oxford • RAL PPD • Cambridge • Birmingham • Bristol • Warwick
Status at RAL PPD • SL304 cluster running LCG_2.4.0 • CPUs: 24 × 2.4 GHz, 30 × 2.8 GHz • 100% dedicated to LCG • 0.5 TB storage • 100% dedicated to LCG • dCache testing to commence next week; hardware problems with the server have delayed progress (a possible first test is sketched below).
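As a purely illustrative sketch of the kind of write/read check the dCache testing could start with (assuming a GridFTP door on the new server): the hostname, /pnfs path and file names below are placeholders, not the real RAL PPD endpoint, and globus-url-copy from the Globus toolkit plus a valid grid proxy are assumed to be available.

    import subprocess

    # Placeholder dCache GridFTP door and /pnfs path -- not the real RAL PPD endpoint
    SE_URL = "gsiftp://dcache.example.ac.uk/pnfs/example.ac.uk/data/dteam/pg-test-file"

    # Write a small local test file into dCache ...
    subprocess.run(["globus-url-copy", "file:///tmp/pg-test-file", SE_URL], check=True)
    # ... then copy it back to verify the round trip
    subprocess.run(["globus-url-copy", SE_URL, "file:///tmp/pg-test-file.back"], check=True)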
Status at Cambridge • Currently LCG 2.4.0 on SL3 • CPUs: 32 × 2.8 GHz, increasing to 40 soon • 100% dedicated to LCG • Cluster integrated with the Condor-based CamGrid (a local submission sketch follows below) • 3 TB storage • 100% dedicated to LCG
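To illustrate what the CamGrid integration means for local users, the snippet below writes a trivial Condor submit description and hands it to condor_submit; the job details are hypothetical and not taken from the talk.

    import subprocess
    import textwrap

    # Hypothetical Condor submit description for a trivial test job
    submit = textwrap.dedent("""\
        universe   = vanilla
        executable = /bin/hostname
        output     = camgrid_test.out
        error      = camgrid_test.err
        log        = camgrid_test.log
        queue
    """)

    with open("camgrid_test.sub", "w") as f:
        f.write(submit)

    # condor_submit is the standard Condor job submission command
    subprocess.run(["condor_submit", "camgrid_test.sub"], check=True)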
Status at Bristol • Status • LCG involvement limited ("black dot") so far, but green status imminent • Existing resources • 80-CPU BaBar farm to be moved to Birmingham • ~2 TB of storage to be made LCG-accessible • LCG head nodes installed with 2.4.0 by the SouthGrid support team • New resources • Funding now confirmed for a large University investment in hardware • Includes CPU, high-quality and scratch disk resources • Humans • New system manager post (RG) being filled; starts in August • New SouthGrid support/development post (GridPP/HP) being filled • HP keen to expand industrial collaboration • HP-Bristol • LCG_2.3.1++ installed; ia64, so waiting for all 2.4.0 features to be ported. Approx. 100 CPUs
Status at Birmingham • Currently SL3 with LCG-2_4_0 • CPUs: 22 × 2.0 GHz Xeon • 100% LCG • BaBar cluster being integrated: currently 20 × 800 MHz CPUs, set up as a subcluster. This will expand to include 80 CPUs from the Bristol BaBar farm, which will be rehoused at Birmingham later this month. QML's farm will also be moved here, providing another 80 CPUs. • 2 TB storage • 100% LCG.
Status at Oxford • Currently LCG 2.4.0 on SL304 • CPUs: 88 × 2.8 GHz • 100% LCG • All nodes currently up and running; air conditioning and power improved in the current computer room, and the second rack is already running at the new computer room location (slowly warming it up). • 1.5 TB storage; an additional 1.5 TB will be used for dCache. • 100% LCG.
Site on Level 1 for the new Oxford Computer Room • First rack is in the very crowded existing computer room (650) • SRIF funding for the new room approved, jointly with the Oxford Supercomputer Group's new facility • Very large area, including additional room beyond the one shown • Second rack currently warming the area up
Tier-2 planning for next quarter • Set up and purchase an integration test bed for SouthGrid use • Coordinate use of this cluster within the UK Testzone • Install LCG-2.6.0 • Investigate dCache at RAL PPD, then install at the other sites, starting with Birmingham • Possible new hardware purchases • Start some inter-site performance tests (see the sketch below)
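One way the inter-site performance tests might start is a timed GridFTP third-party copy between two storage elements. The sketch below assumes that; the SURLs are placeholders and a ~1 GB test file is assumed to have been staged at the source.

    import subprocess
    import time

    # Placeholder SURLs for two SouthGrid storage elements (not real endpoints)
    SRC = "gsiftp://se.site-a.example.ac.uk/pnfs/site-a/data/dteam/1gb-test-file"
    DST = "gsiftp://se.site-b.example.ac.uk/pnfs/site-b/data/dteam/1gb-test-file"
    SIZE_MB = 1024  # assumed size of the pre-staged test file

    start = time.time()
    # Third-party GridFTP copy driven from the UI; requires a valid grid proxy
    subprocess.run(["globus-url-copy", SRC, DST], check=True)
    elapsed = time.time() - start
    print(f"Transferred {SIZE_MB} MB in {elapsed:.1f} s ({SIZE_MB / elapsed:.1f} MB/s)")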
Future Upgrades • All sites now have good installation infrastructure, which should ensure rapid upgrades. • Sharing resources: Cambridge is integrated with its local cluster; Birmingham and Oxford are planning to integrate. • Consolidating current clusters: upgrades at Oxford will now be in early 2006, once the new computer room is ready. • Birmingham to get a share of the new SRIF-funded (£2M) eScience cluster. • Bristol hopes to get a share of a new eScience cluster. • RAL PPD will upgrade at the same time as the RAL Tier 1.
SouthGrid Status • The SouthGrid technical meeting held in May continued to focus sites on rapid upgrades. All sites have migrated to Scientific Linux and are running the latest release. • There are continuing manpower issues at Bristol, but these will ease shortly when the new systems administrator starts in August. • Oxford will be able to expand resources once the new computer room is built; SRIF funding for this has been obtained. • Oxford's new sysadmin is in place, allowing the T2C more time to coordinate! • Yves Coppens is providing valuable help across SouthGrid. • Bristol will be online within the week, and its resources will expand rapidly once the new sysadmin is in place.