USATLAS and STAR Grid Computing Facility Overview

This document provides an overview of the RHIC/USATLAS and STAR Grid Computing Facility at Brookhaven National Laboratory, covering the hardware and software configuration, the gatekeepers, the computational clusters, the user and account model, ongoing grid R&D projects, and future plans.

Presentation Transcript


  1. RHIC/USATLAS Grid Computing Facility Overview. Dantong Yu, Brookhaven National Lab.

  2. Overview
  • US ATLAS computing facility overview.
  • STAR computing facility overview.
  • Future plans.

  3. BNL USATLAS Grid Configuration
  [Architecture diagram. Components shown: HPSS with the amds mover; the giis01 information server; LSF (Condor) servers 1 and 2; the AFS server (aafs); Globus Replica Catalog and GDMP server; the GridFTP node aftpexp00 with Globus client, moving data at 70 MB/s; the gatekeeper and job manager on atlas00; 17 TB of disk; grid jobs and grid job requests arriving over the Internet.]
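  In the diagram above, grid job requests enter through the Globus gatekeeper, whose job manager hands work to LSF or Condor. A minimal sketch of exercising that path from a client host, assuming the Globus 2.x client tools and a valid proxy (grid-proxy-init); the "jobmanager-lsf" service name is an assumption based on the LSF setup described on the next slide:

    # Minimal sketch: run a test job through the USATLAS gatekeeper.
    # Assumes the Globus 2.x client tools are installed and a valid grid
    # proxy exists; "jobmanager-lsf" is an assumed job-manager name.
    import subprocess

    contact = "gremlin.usatlas.bnl.gov/jobmanager-lsf"  # gatekeeper contact string
    result = subprocess.run(
        ["globus-job-run", contact, "/bin/hostname"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # prints the worker node that ran the job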

  4. BNL USATLAS site status
  • Hardware and software configuration; principal and specialized gatekeepers:
    • gremlin.usatlas.bnl.gov:
      • Dual PII 550, 18 GB local disks, Fast Ethernet.
      • Red Hat 7.2, Globus 2.0 beta suite, Condor and LSF software, BU ATLAS packages.
    • spider.usatlas.bnl.gov:
      • Dual PIII 700, 36 GB RAID disks, Gigabit network connection.
      • Red Hat 7.2, Globus 2.0 suite, Globus Replica Catalog, patched GridFTP that supports stable parallel data transfer (see the transfer sketch after this list).
      • GDMP 3.0.11.
    • aftpexp.bnl.gov:
      • PIII 800, 72 GB Cheetah SCSI disk, dual Gigabit network connection.
      • Red Hat 7.2, Globus Replica Catalog, patched GridFTP.
    • giis01.usatlas.bnl.gov:
      • BNL MDS site organization server; backup USATLAS GIIS server.
      • Participates in the iVDGL testbed.
      • BNL VO server.
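  The patched GridFTP server on spider supports parallel streams, which is what makes wide-area transfers stable. A minimal sketch of pulling a file from it with globus-url-copy, assuming a valid proxy; the source path, stream count, and buffer size are illustrative values, not site settings:

    # Minimal sketch: parallel GridFTP transfer from the spider node.
    # The source path, stream count, and TCP buffer size below are
    # illustrative assumptions, not actual site settings.
    import subprocess

    src = "gsiftp://spider.usatlas.bnl.gov/data/sample.root"  # hypothetical path
    dst = "file:///tmp/sample.root"
    subprocess.run(
        ["globus-url-copy",
         "-p", "4",             # four parallel TCP streams
         "-tcp-bs", "1048576",  # 1 MB TCP buffer per stream
         src, dst],
        check=True,
    )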

  5. Continued
  • Computational cluster (type, size, queues):
    • LSF: two worker nodes (dual 700 MHz, 9 GB); can be scaled up to 50 worker nodes.
    • Condor: two worker nodes (dual 700 MHz, 9 GB).
  • User and account model: each user has an NIS account. Some grid users are mapped to the grid_a group account; others are mapped to their local accounts (see the grid-mapfile sketch after this list).
  • Software environment: http://www.acf.bnl.gov/UserInfo/Facilities/Grid/bnl-atlas-pacman-installation.html
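  The mapping between grid identities and local accounts described above is conventionally kept in the Globus grid-mapfile, one quoted certificate DN per line followed by the local account name. A minimal sketch of inspecting such a file, assuming the default Globus path; the format is standard but the entries shown in the comment are made up:

    # Minimal sketch: report which grid users map to the shared grid_a
    # group account and which map to personal NIS accounts.
    # Each grid-mapfile line looks like:
    #   "/O=Grid/O=Globus/OU=bnl.gov/CN=Some User" grid_a
    # The path below is the Globus default and an assumption here.
    import shlex

    with open("/etc/grid-security/grid-mapfile") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            dn, account = shlex.split(line)  # DN is quoted; account is bare
            kind = "group account" if account == "grid_a" else "local account"
            print(f"{dn} -> {account} ({kind})")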

  6. STAR Grid Configuration
  [Architecture diagram. Components shown: HPSS with the rmds mover; the giis01 information server; LSF and Condor; the GridFTP node rftpexp00; the stargrid node with Globus Replica Catalog, GDMP server, and Globus client; the STAR gatekeeper and job manager; the AFS server; 70 MB/s GridFTP; HRM; the NFS server and disk cabinets; grid jobs and grid job requests arriving over the Internet.]

  7. BNL STAR site status
  • Hardware and software configuration; principal and specialized gatekeepers:
    • stargrid01.rcf.bnl.gov:
      • Dual PII 450, 62 GB local disks, Fast Ethernet.
      • Red Hat 6.2, Globus 2.0 suite, Condor and LSF software.
      • HRM software.
    • stargrid02.rcf.bnl.gov:
      • Dual PIII 1.4 GHz, 146 GB RAID disks, dual Gigabit network connection.
      • Red Hat 7.3, Globus 2.0 suite, Globus Replica Catalog 2.1, GDMP 3.0.11.
      • Patched GridFTP 1.0 that supports stable parallel data transfer.
      • HRM software.
    • rftpexp.bnl.gov:
      • PIII 800, 68 GB SCSI disk, dual Gigabit network connection.
      • Red Hat 7.1, patched GridFTP 2.1.
    • giis01.usatlas.bnl.gov:
      • BNL MDS site organization server (MDS 2.0, 2.1); see the LDAP query sketch after this list.
      • Participates in the iVDGL testbed.
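  giis01 publishes resource information via MDS, which is plain LDAP underneath; MDS 2.x answers anonymous queries on port 2135 by default. A minimal sketch of querying it with OpenLDAP's ldapsearch from Python; the port and the base DN "mds-vo-name=local, o=grid" are the conventional MDS defaults and are assumptions here:

    # Minimal sketch: query the giis01 information server over LDAP.
    # MDS 2.x serves anonymous LDAP; port 2135 and the base DN are the
    # conventional defaults and may differ at the site.
    import subprocess

    result = subprocess.run(
        ["ldapsearch", "-x",  # anonymous simple bind
         "-H", "ldap://giis01.usatlas.bnl.gov:2135",
         "-b", "mds-vo-name=local, o=grid",
         "(objectclass=*)"],  # dump everything registered under the site
        capture_output=True, text=True,
    )
    print(result.stdout)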

  8. Current and Future Work
  • Grid-related R&D projects the site (and its people) participate in or supply resources to:
    • Deploy HRM and SRM (storage resource management), which can be used to replicate data from one HPSS system to another. (Dantong)
    • HPSS-enabled GridFTP: enhance GridFTP to copy files from/to HPSS. (Dantong)
    • Facility (grid) monitoring: use home-grown monitoring tools and Ganglia to monitor the local fabric (http://130.199.81.28/ganglia/) and integrate it into MDS; see the gmond polling sketch after this list. (Jason Smith and Dantong Yu)
    • Network research: network performance monitoring and tuning, working with different network tools: iperf, netperf, and the Network Weather Service.
  • Near-future plans for upgrades and projects: deploy VDT 1.1.2, grow the grid LSF and Condor pools to all available ATLAS computing nodes, and continue the grid monitoring project.
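  On the monitoring item above: Ganglia's gmond daemon dumps its full cluster state as XML to any client that connects to its TCP port (8649 by default), which is what makes integration with MDS or a home-grown collector straightforward. A minimal sketch of such a poll; the monitored host name and the chosen metric are illustrative:

    # Minimal sketch: poll a Ganglia gmond daemon and print one metric
    # per host. gmond writes its whole state as XML on connect; the
    # host name below is hypothetical and 8649 is the default port.
    import socket
    import xml.etree.ElementTree as ET

    def poll_gmond(host, port=8649):
        chunks = []
        with socket.create_connection((host, port), timeout=10) as sock:
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return ET.fromstring(b"".join(chunks))

    root = poll_gmond("gmond.rcf.bnl.gov")  # hypothetical monitoring host
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            if metric.get("NAME") == "load_one":  # 1-minute load average
                print(host.get("NAME"), metric.get("VAL"))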
