
A Design for KCAF for CDF Experiment





Presentation Transcript


  1. November 8-9, 2002, The International Workshop on HEP Data Grid. A Design for KCAF for CDF Experiment. Kihyeon Cho (CHEP, Kyungpook National University) and Jysoo Lee (KISTI, Supercomputing Center)

  2. Contents • CHEP and Fermilab CDF • CDF Data/Analysis Flow • CDF Grid Road Map (bottom-up method) • Step 1. CAF (Central Analysis Farm) • Step 2. DCAF (DeCentralized Analysis Farm) • Step 3. Grid • KCAF (DeCentralized Analysis Farm in Korea) • Future Plan

  3. CHEP and Fermilab CDF [Map: the Grid connection between Fermilab in the USA and KCAF (the DCAF in Korea) at CHEP, Korea]

  4. Network between CHEP and overseas [Diagram: PCs and servers at CHEP connect through an IBM 8271 switch and the CHEP gigabit switch to a KOREN L3 switch (group 3) and a C6509; the wide-area links run over APII to FNAL (45 Mbps), over TEIN to CERN (10 Mbps), and over APII to KEK (2 x 1 Gbps), each delivered on Gigabit Ethernet]

  5. Booster p source Main Injector and Recycler Overview of Fermilab CDF Fixed Target Experiment D0

  6. CDF Collaboration [Map: member national labs, research labs, and universities across North America, Europe, and Asia] • Totals: 11 countries, 52 institutions, 525 physicists • Korea: Center for High Energy Physics (Kyungpook National University), Seoul National University, SungKyunKwan University

  7. Production Farm

  8. CDF Computer Resource: CAF Stage 1 at FCC (Feynman Computer Center) • Current configuration: Stage 1 • 63 dual-CPU worker nodes • Roughly 160 GHz total, compared to 38 GHz for fcdfsgi2 and 8 GHz for cdfsga (Run I) • 7 fileservers for physics (~50% full), 5 for data handling, 2 for development, 2 for user scratch
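As a quick sanity check on these figures, the sketch below infers the per-CPU clock speed; that per-CPU number is derived here, not stated on the slide.

    # Back-of-envelope check of the CAF Stage 1 numbers quoted above.
    nodes = 63            # dual-CPU worker nodes (from the slide)
    cpus = nodes * 2      # 126 CPUs in total
    total_ghz = 160       # aggregate capacity quoted on the slide

    per_cpu_ghz = total_ghz / cpus    # ~1.27 GHz per CPU (inferred, not stated)
    vs_run1 = total_ghz / 8           # ~20x the 8 GHz cdfsga (Run I) machine
    print(f"{per_cpu_ghz:.2f} GHz per CPU, {vs_run1:.0f}x Run I capacity")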

  9. CAF Design • Design considerations: • Submit jobs from 'anywhere' • Job output can be: sent directly to the desktop, stored on the CAF for later retrieval, or used as input to a subsequent job
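A minimal sketch of how a submission interface could route output along those three paths; every name below (Job, submit, the mode strings) is a hypothetical illustration, not the actual CAF client API.

    # Hypothetical sketch of CAF-style submission with the three output
    # routes listed above; names are illustrative, not the real CAF client.
    from dataclasses import dataclass

    @dataclass
    class Job:
        executable: str
        output_mode: str      # "desktop" | "caf_store" | "chain"
        destination: str      # copy target, CAF path, or next job's input

    def submit(job: Job) -> None:
        if job.output_mode == "desktop":
            print(f"run {job.executable}; send output to {job.destination}")
        elif job.output_mode == "caf_store":
            print(f"run {job.executable}; keep output on CAF at {job.destination}")
        elif job.output_mode == "chain":
            print(f"run {job.executable}; pipe output into {job.destination}")
        else:
            raise ValueError(f"unknown output mode: {job.output_mode}")

    # Submitted 'from anywhere': results come straight back to the desktop.
    submit(Job("myAnalysis.exe", "desktop", "user@desktop.knu.ac.kr:results/"))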

  10. DCAF/Grid at CDF Experiment • Requirement in 2005 • 200 simultaneous users will analyze 10^7 events of a secondary data set per day • ~700 TB of disk and ~5 THz of CPU are needed by the end of FY'05 • The current CAF (Central Analysis Farm) is not enough: • Limited resources and space at FCC • A network problem at FCC would cut off all users at once • We therefore need DCAF (DeCentralized Analysis Farm) and Grid • DCAF serves users in a region and/or around the world • Korea, Toronto, Karlsruhe, … • We call the DCAF in Korea "KCAF"
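A back-of-envelope reading of that requirement; the per-event CPU cost and event size below are assumed round numbers chosen for illustration, not figures from the slide.

    # Back-of-envelope check of the FY'05 requirement quoted above.
    users = 200                 # simultaneous analyzers (from the slide)
    events_per_user_day = 1e7   # events analyzed per user per day (from the slide)
    seconds_per_day = 86_400

    # Assumed round numbers, for illustration only (not from the slide):
    ghz_sec_per_event = 0.2     # CPU cost of analyzing one event
    kb_per_event = 10.0         # size of one secondary-dataset event

    total_events_per_day = users * events_per_user_day   # 2e9 events/day
    cpu_thz = total_events_per_day * ghz_sec_per_event / seconds_per_day / 1e3
    events_in_700tb = 700e12 / (kb_per_event * 1e3)

    print(f"sustained CPU: ~{cpu_thz:.1f} THz (slide quotes ~5 THz)")
    print(f"700 TB holds ~{events_in_700tb:.1e} events at {kb_per_event:.0f} kB each")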

  11. DCAF for CDF Experiment [Map: the CAF in the USA linked over APII (Asia Pacific Information Infrastructure) to DCAFs in Karlsruhe and Toronto and to KCAF, the DeCentralized Analysis Farm in Korea, at the Center for HEP (CHEP), Kyungpook National University, Daegu, Korea]

  12. Proposed Schemes for the Final Goal [Diagram: CAF and DCAF in Korea (KCAF)]

  13. The Road Map for the Goal • Step 1. Build an MC production farm using KCAF • First, we are constructing a 12-CPU test bed for KCAF • Once the policy inside CHEP is settled (another test bed is reserved for EDG and iVDGL), we will decide how many of this year's planned 140 CPUs go to the actual MC production farm • Step 2. Handle real data • Extend KCAF to a real-data handling system using SAM (Sequential data Access via Meta-data), GridFTP, etc., once the real-data handling system settles down (see the transfer sketch after this slide) • Step 3. Final goal of CDF Grid • Gridification of KCAF in connection with EDG, iVDGL, and CDF Grid
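For Step 2, here is a minimal transfer sketch assuming the Globus Toolkit GridFTP client, globus-url-copy, is installed with valid grid credentials; the host names and file paths are hypothetical placeholders.

    # Sketch: pull one raw-data file from Fermilab to the KCAF buffer over
    # GridFTP. globus-url-copy is the standard Globus Toolkit client; the
    # source and destination below are hypothetical placeholders.
    import subprocess

    SRC = "gsiftp://fcdfsam.fnal.gov/cdfen/raw/run123456.root"  # hypothetical
    DST = "file:///kcaf/buffer/raw/run123456.root"              # hypothetical

    result = subprocess.run(["globus-url-copy", SRC, DST],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"transfer failed: {result.stderr.strip()}")
    print(f"copied {SRC} -> {DST}")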

  14. The Road Map in Detail • Software • CDF software • FBSNG for batch jobs • Kerberos • Client: login from Korea to Fermilab • Server: login from Fermilab to Korea • KDC: client and server trust • Data Handling System • Glasgow, Italy, and Yale already use the Data Handling System • A SAM station needs to be installed on one PC worker node • OS: Fermilab Red Hat Linux 7.3 with kernel 2.4.18 • A fundamental license problem for the DB
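A minimal client-side sketch of the Kerberos setup, assuming the standard MIT Kerberos command-line tools (kinit, klist); the principal name is a hypothetical placeholder.

    # Sketch: obtain a Kerberos ticket before logging in from Korea to
    # Fermilab. kinit/klist are the standard MIT Kerberos tools; the
    # principal below is a hypothetical placeholder. Cross-realm (KDC)
    # trust is what lets logins work in both directions.
    import subprocess

    PRINCIPAL = "user@FNAL.GOV"  # hypothetical principal

    subprocess.run(["kinit", PRINCIPAL], check=True)  # prompts for password
    subprocess.run(["klist"], check=True)             # confirm cached ticket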

  15. A Design of KCAF for CDF (Fermilab) Grid [Diagram. FCC (CAF) side: STKen tape store, fcdfsam, a stager, a 1 TB buffer at FCC, dCache, and CDFen (raw data); calibration data flows by rcp, bulk data crosses the WAN by rcp, GridFTP, or bbftp, and a technical-request path links the two sites. CHEP (KCAF) side: a user desktop running the CAF GUI / ICAF GUI, the KCAF head node, the KCAF cluster with its own stager and dCache, a SAM station (smaster, FSS, stager) at KCAF, local copies by rcp/cp, and an ICAF FTP server serving remote desktops.]
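A sketch of the staged data path this design implies; every function here is a hypothetical stand-in for the real stager, dCache, and SAM components, not their actual interfaces.

    # Sketch of the FCC -> CHEP path in the design above: stage a file from
    # STKen tape into the 1 TB FCC buffer, copy it across the WAN (rcp,
    # bbftp, or GridFTP), then register it with the KCAF-side SAM station.
    # All functions are hypothetical stand-ins, not real component APIs.

    def stage_from_tape(name: str) -> str:
        """Ask the FCC stager/dCache to pull a file from tape to the buffer."""
        return f"/fcc/buffer/{name}"

    def wan_copy(buffer_path: str) -> str:
        """Ship the buffered file to CHEP (rcp, bbftp, or GridFTP)."""
        return buffer_path.replace("/fcc/buffer/", "/chep/kcaf/incoming/")

    def register_at_kcaf(local_path: str) -> None:
        """Announce the file to the KCAF SAM station (smaster, FSS, stager)."""
        print(f"registered {local_path} with the KCAF SAM station")

    register_at_kcaf(wan_copy(stage_from_tape("run123456.root")))  # hypothetical file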

  16. Future Plan (This Year) • A Linux cluster of about 140 CPUs will be built at CHEP by the end of this year • We are constructing KCAF for CDF at CHEP, Kyungpook National University • We contribute 1 TB of hard disk now, plus 4 TB in December, to CDF as the network buffer between CHEP and Fermilab • The DCAF demonstration will be shown at SC2002 on November 16, 2002
