
HENP Networks and Grids for Global Science

Harvey B. Newman, California Institute of Technology. 3rd International Data Grid Workshop, Daegu, Korea, August 26, 2004.


Presentation Transcript


  1. HENP Networks and Grids for Global Science. Harvey B. Newman, California Institute of Technology. 3rd International Data Grid Workshop, Daegu, Korea, August 26, 2004

  2. Challenges for Global HENP Experiments. BaBar/D0 Example - 2004: 500+ Physicists, 100+ Institutes, 35+ Countries. LHC Example - 2007: 5000+ Physicists, 250+ Institutes, 60+ Countries. Major Challenges (Shared with Other Fields) • Worldwide Communication and Collaboration • Managing Globally Distributed Computing & Data Resources • Cooperative Software Development and Data Analysis

  3. Large Hadron Collider (LHC) at CERN, Geneva: 2007 Start • pp collisions at √s = 14 TeV, L = 10^34 cm^-2 s^-1 • 27 km Tunnel in Switzerland & France • First Beams: Summer 2007; Physics Runs: from Fall 2007 • Experiments: ATLAS and CMS (pp, general purpose; also heavy ions), TOTEM, ALICE (heavy ions), LHCb (B-physics) • Physics: Higgs, SUSY, Quark-Gluon Plasma, CP Violation, … the Unexpected

  4. Challenges of Next Generation Science in the Information Age • Flagship Applications • High Energy & Nuclear Physics, AstroPhysics Sky Surveys: TByte to PByte “block” transfers at 1-10+ Gbps • Fusion Energy: Time Critical Burst-Data Distribution; Distributed Plasma Simulations, Visualization and Analysis; Preparations for the Fusion Energy Experiment • eVLBI: Many real time data streams at 1-10 Gbps • BioInformatics, Clinical Imaging: GByte images on demand • Provide results with rapid turnaround, coordinating large but limited computing and data handling resources, over networks of varying capability in different world regions • Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, quantifiable high performance • Petabytes of complex data explored and analyzed by 1000s of globally dispersed scientists, in hundreds of teams

  5. LHC Data Grid Hierarchy: Developed at Caltech. CERN/Outside Resource Ratio ~1:2; Tier0/(Sum of Tier1s)/(Sum of Tier2s) ~1:1:1. [Diagram: Online System (Experiment, ~PByte/sec) feeds the CERN Center, Tier 0+1 (PBs of Disk; Tape Robot), at ~100-1500 MBytes/sec; Tier 1 centers (FNAL, IN2P3, INFN, RAL) at 10-40 Gbps; Tier 2 centers at ~10 Gbps; Tier 3 (Institutes) at ~10 Gbps; physics data cache at 1 to 10 Gbps; Tier 4 (Workstations).] Tens of Petabytes by 2007-8; an Exabyte ~5-7 years later. Emerging Vision: A Richly Structured, Global Dynamic System
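
A rough scale check on the Tier 0 figures above (an illustrative back-of-the-envelope sketch; the ~10^7 seconds of effective data taking per year is a common HEP rule of thumb assumed here, not a number from the slide):

    # Rough consistency check of the quoted Tier 0 ingest rate vs. yearly volume.
    ingest_mb_per_s = 1000                      # mid-range of the 100-1500 MBytes/sec above
    live_seconds_per_year = 1e7                 # assumed effective running time per year
    petabytes_per_year = ingest_mb_per_s * 1e6 * live_seconds_per_year / 1e15
    print(f"~{petabytes_per_year:.0f} PB/year")   # ~10 PB/year, i.e. tens of PB by 2007-8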

  6. ICFA and Global Networks for Collaborative Science • National and International Networks, with sufficient (rapidly increasing) capacity and seamless end-to-end capability, are essential for • The daily conduct of collaborative work in both experiment and theory • Experiment development & construction on a global scale • Grid systems supporting analysis involving physicists in all world regions • The conception, design and implementation of next generation facilities as “global networks” • “Collaborations on this scale would never have been attempted, if they could not rely on excellent networks”

  7. History of Bandwidth Usage – One Large Network; One Large Research Site • ESnet Accepted Traffic 1/90 – 1/04: Exponential Growth Since ’92; Annual Rate Increased from 1.7X to 2.0X Per Year In the Last 5 Years • SLAC Traffic ~300 Mbps (at the ESnet Limit); Growth in Steps: ~10X per 4 Years; Projected: ~2 Terabits/s by ~2014

  8. Int’l Network Bandwidth on Major Links for HENP: US-CERN Example • Rate of Progress >> Moore’s Law • 9.6 kbps Analog (1985) • 64-256 kbps Digital (1989-1994) [X 7 – 27] • 1.5 Mbps Shared (1990-3; IBM) [X 160] • 2-4 Mbps (1996-1998) [X 200-400] • 12-20 Mbps (1999-2000) [X 1.2k-2k] • 155-310 Mbps (2001-2) [X 16k – 32k] • 622 Mbps (2002-3) [X 65k] • 2.5 Gbps (2003-4) [X 250k] • 10 Gbps (2005) [X 1M] • 4x10 Gbps or 40 Gbps (2007-8) [X 4M] • A factor of ~1M bandwidth improvement over 1985-2005 (a factor of ~5k during 1995-2005) • A prime enabler of major HENP programs • HENP has become a leading applications driver, and also a co-developer of global networks
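
The bracketed factors are each link speed divided by the 9.6 kbps analog baseline of 1985; a quick illustrative check of a few of them:

    # Bandwidth improvement factors relative to the 9.6 kbps analog link of 1985.
    baseline_bps = 9.6e3
    links_bps = {"622 Mbps (2002-3)": 622e6, "2.5 Gbps (2003-4)": 2.5e9,
                 "10 Gbps (2005)": 10e9, "40 Gbps (2007-8)": 40e9}
    for name, rate in links_bps.items():
        print(f"{name}: x{rate / baseline_bps:,.0f}")
    # 622 Mbps -> ~65k, 2.5 Gbps -> ~260k, 10 Gbps -> ~1M, 40 Gbps -> ~4M,
    # in line with the factors quoted on the slide.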

  9. Internet Growth in the World At Large • Amsterdam Internet Exchange Point Example (11.08.04): 5-Minute Max ~30 Gbps; Average ~20 Gbps • Some Annual Growth Spurts, Typically In Summer-Fall • The Rate of HENP Network Usage Growth (~100% Per Year) is Similar to the World at Large

  10. Internet2 Land Speed Record (LSR) • Judged on the product of transfer speed and end-to-end distance, using standard Internet (TCP/IP) protocols • IPv6 record: 4.0 Gbps between Geneva and Phoenix (SC2003) • IPv4 multi-stream record with Windows & Linux: 6.6 Gbps between Caltech and CERN (16 kkm; “Grand Tour d’Abilene”), June 2004; exceeded 100 Petabit-m/sec • Single stream of 7.5 Gbps X 16 kkm with Linux achieved in July • Concentrate now on reliable Terabyte-scale file transfers • Note system issues: CPU, PCI-X bus, NIC, I/O controllers, drivers • LSR history: IPv4 single stream, in Petabit-meters (10^15 bit*meter) • Monitoring of the Abilene traffic in LA during the June 2004 record run • http://www.guinnessworldrecords.com/
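
For concreteness, the record metric is just throughput times distance; a small illustrative calculation using the 16,000 km path length quoted above:

    # Land Speed Record metric: throughput (bit/s) x end-to-end distance (m).
    def petabit_meters_per_sec(gbps, km):
        return gbps * 1e9 * km * 1e3 / 1e15

    print(petabit_meters_per_sec(6.6, 16000))   # multi-stream record: ~105.6 Pb*m/s (>100)
    print(petabit_meters_per_sec(7.5, 16000))   # single-stream run:   ~120 Pb*m/s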

  11. Evolving Quantitative Science Requirements for Networks (DOE High Perf. Network Workshop)

  12. HENP Lambda Grids: Fibers for Physics • Problem: Extract “Small” Data Subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte Data Stores • Survivability of the HENP Global Grid System, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time • Example: take 800 seconds to complete the transaction. Then: 1 TB requires a net throughput of 10 Gbps; 10 TB requires 100 Gbps; 100 TB requires 1000 Gbps (the capacity of a fiber today) • Summary: Providing switching of 10 Gbps wavelengths within ~2-4 years, and Terabit switching within 5-8 years, would enable “Petascale Grids with Terabyte transactions”, to fully realize the discovery potential of major HENP programs, as well as other data-intensive research.
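
The throughput figures follow directly from required rate = 8 x (transaction size) / (completion time); a quick check of the 800-second example:

    # Required network throughput to move a transaction of a given size in 800 s.
    def required_gbps(size_tb, seconds=800):
        bits = size_tb * 1e12 * 8            # 1 TB taken as 10^12 bytes
        return bits / seconds / 1e9

    for size_tb in (1, 10, 100):
        print(f"{size_tb:>4} TB -> {required_gbps(size_tb):,.0f} Gbps")
    # 1 TB -> 10 Gbps, 10 TB -> 100 Gbps, 100 TB -> 1,000 Gbps, as in the example above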

  13. SCIC in 2003-2004 (http://cern.ch/icfa-scic) Three 2004 Reports, Presented to ICFA in February • Main Report: “Networking for HENP” [H. Newman et al.]; includes brief updates on Monitoring, the Digital Divide and Advanced Technologies [*] • A World Network Overview (with 27 Appendices): Status and Plans for the Next Few Years of National & Regional Networks, and Optical Network Initiatives • Monitoring Working Group Report [L. Cottrell] • Digital Divide in Russia [V. Ilyin] • August 2004 Update Reports at the SCIC Web Site (see http://icfa-scic.web.cern.ch/ICFA-SCIC/documents.htm): Asia Pacific, Latin America, GLORIAD (US-Ru-Ko-China), Brazil, Korea, etc.

  14. SCIC Main Conclusion for 2003, Setting the Tone for 2004 • The disparity among regions in HENP could increase even more sharply, as we learn to use advanced networks effectively, and we develop dynamic Grid systems in the most “favored” regions • We must take action, and work to close the Digital Divide • To make physicists from all world regions full partners in their experiments, and in the process of discovery • This is essential for the health of our global experimental collaborations, our plans for future projects, and our field.

  15. ICFA Report: Networks for HENP, General Conclusions (2) • Reliable, high end-to-end performance of networked applications such as large file transfers and Data Grids is required. Achieving this requires: • End-to-end monitoring extending to all regions serving our community. A coherent approach to monitoring that allows physicists throughout our community to extract clear information is required. • Upgrading campus infrastructures. These are still not designed to support Gbps data transfers in most HEP centers. One reason for under-utilization of national and int’l backbones is the lack of bandwidth to end-user groups in the campus. • Removing local, last mile, and nat’l and int’l bottlenecks end-to-end, whether technical or political in origin. While national and international backbones have reached 2.5 to 10 Gbps speeds in many countries, the bandwidths across borders, the countryside or the city may be much less. This problem is very widespread in our community, with examples stretching from the Asia Pacific to Latin America to the Northeastern U.S. Root causes vary, from lack of local infrastructure to unfavorable pricing policies.

  16. ICFA Report (2/2004) Update: Main Trends Continue, Some Accelerate • The current generation of 2.5-10 Gbps network backbones and major int’l links arrived in the last 2-3 years [US + Europe + Japan; now Korea and China] • Capability: 4 to hundreds of times greater; much faster than Moore’s Law • Proliferation of 10G links across the Atlantic now; use of multiple 10G links (e.g. US-CERN) will begin along major paths by Fall 2005 • Direct result of falling network prices: $0.5 – 1M per year for 10G • Ability to fully use long 10G paths with TCP continues to advance: 7.5 Gbps X 16 kkm (August 2004; see the window-sizing sketch below) • Technological progress driving equipment costs in end-systems lower • “Commoditization” of Gbit Ethernet (GbE) ~complete ($20-50 per port); 10 GbE commoditization (e.g. < $2K per NIC with TOE) underway • Some regions (US, Europe) moving to owned or leased dark fiber • Emergence of the “Hybrid” Network Model: GNEW2004; UltraLight, GLIF • Grid-based analysis demands end-to-end high performance & management • The rapid rate of progress is confined mostly to the US, Europe, Japan and Korea, as well as the major transatlantic routes; this threatens to cause the Digital Divide to become a Chasm
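
Filling such a long, fast path with a single TCP stream requires a window on the order of the bandwidth-delay product, which is why advanced TCP stacks and careful end-system tuning matter here. A rough illustration (the ~16,000 km one-way path and the ~2/3 c propagation speed in fiber are assumptions for the estimate, not figures from the slide):

    # Bandwidth-delay product for a 10 Gbps path of roughly 16,000 km one way.
    path_km = 16000
    rtt_s = 2 * path_km * 1e3 / 2e8            # light in fiber ~2/3 c -> RTT ~0.16 s
    bdp_bits = 10e9 * rtt_s                    # bits in flight needed to keep the pipe full
    print(f"RTT ~{rtt_s*1e3:.0f} ms, required TCP window ~{bdp_bits/8/1e6:.0f} MB")
    # ~200 MB of in-flight data -- far beyond default TCP window sizes.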

  17. Work on the Digital Divide: Several Perspectives • Work on Policies and/or Pricing: pk, in, br, cn, SE Europe, … • Find ways to work with vendors, NRENs, and/or governments • Exploit model cases: e.g. Poland, Slovakia, Czech Republic • Inter-Regional Projects • GLORIAD, Russia-China-US Optical Ring • South America: CHEPREO (US-Brazil); EU CLARA Project • Virtual SILK Highway Project (DESY): FSU satellite links • Workshops and Tutorials/Training Sessions • For example: Digital Divide and HEPGrid Workshop, UERJ Rio, February 2004; next in Daegu, May 2005 • Help with Modernizing the Infrastructure • Design, Commissioning, Development • Tools for Effective Use: Monitoring, Collaboration • Participate in Standards Development; Open Tools • Advanced TCP stacks; Grid systems

  18. Grid and Network Workshop at CERN, March 15-16, 2004. CONCLUDING STATEMENT: "Following the 1st International Grid Networking Workshop (GNEW2004) that was held at CERN and co-organized by CERN/DataTAG, DANTE, ESnet, Internet2 & TERENA, there is a wide consensus that hybrid network services capable of offering both packet- and circuit/lambda-switching, as well as highly advanced performance measurements and a new generation of distributed system software, will be required in order to support emerging data intensive Grid applications, such as High Energy Physics, Astrophysics, Climate and Supernova modeling, Genomics and Proteomics, requiring 10-100 Gbps and up over wide areas." • WORKSHOP GOALS • Share and challenge the lessons learned by nat’l and international projects in the past three years • Share the current state of network engineering and infrastructure and its likely evolution in the near future • Examine our understanding of the networking needs of Grid applications (e.g., see the ICFA-SCIC reports) • Develop a vision of how network engineering and infrastructure will (or should) support Grid computing needs in the next three years.

  19. National Lambda Rail (NLR) [Map: fiber routes and 15808 terminal, regen or OADM sites at SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, OLG, DEN, OGD, KAN, DAL, STR, NAS, CHI, CLE, PIT, WDC, RAL, ATL, JAC, WAL, NYC, BOS] • Transition beginning now to optical, multi-wavelength, community owned or leased “dark fiber” networks for R&E (also nl, de, pl, cz, jp; 18 US States) • NLR coming up now • Initially 4 10G wavelengths • Northern route operation by 4Q04 • Internet2 HOPI Initiative (w/HEP) • To 40 10G waves in future

  20. JGN2: Japan Gigabit Network (4/04 – 3/08): 20 Gbps Backbone, 6 Optical Cross-Connects. [Map and legend: core network nodes at Sapporo, Sendai, Kanazawa, Nagano, Otemachi (Tokyo), Nagoya, Osaka, Okayama, Kochi, Fukuoka and Okinawa, plus NICT centers (Koganei Headquarters, Tsukuba Research Center, Keihanna Human Info-Communications Research Center, Kita Kyushu IT Open Laboratory) and a link to the USA; 20 Gbps, 10 Gbps and 1 Gbps links, optical testbeds, and access points. 10G access points include the Ishikawa Hi-tech Exchange Center, Kyoto University, Osaka University, Kyushu University, Tokyo University and the NICT Kashima Space Research Center; roughly forty further universities, prefectural centers, regional IXs and NICT laboratories nationwide connect at 1 Gbps or 100 Mbps.]

  21. APAN-KR : KREONET/KREONet2 II

  22. UltraLight Collaboration: http://ultralight.caltech.edu • Caltech, UF, FIU, UMich, SLAC, FNAL, MIT/Haystack, CERN, UERJ (Rio), NLR, CENIC, UCAID, Translight, UKLight, Netherlight, UvA, UCLondon, KEK, Taiwan • Cisco, Level(3) • Integrated hybrid experimental network, leveraging Transatlantic R&D network partnerships; packet-switched + dynamic optical paths • 10 GbE across the US and the Atlantic: NLR, DataTAG, TransLight, NetherLight, UKLight, etc.; extensions to Japan, Taiwan, Brazil • End-to-end monitoring; real-time tracking and optimization; dynamic bandwidth provisioning • Agent-based services spanning all layers of the system, from the optical cross-connects to the applications.

  23. GLIF: Global Lambda Integrated Facility • “GLIF is a World Scale Lambda based Lab for Application and Middleware development, where Grid applications ride on dynamically configured networks based on optical wavelengths ... coexisting with more traditional packet-switched network traffic.” • 4th GLIF Workshop: Nottingham, UK, Sept. 2004 • 10 Gbps wavelengths for R&E network development are proliferating, across continents and oceans

  24. PROGRESS in SE Europe (Sk, Pl, Cz, Hu, …) • 1660 km of dark fiber; CWDM links, up to 112 km, at 1 to 4 Gbps (GbE) • August 2002: first NREN in Europe to establish an int’l GbE dark fiber link, to Austria; April 2003: to the Czech Republic • Planning a 10 Gbps backbone; dark fiber link to Poland this year.

  25. Dark Fiber in Eastern Europe. Poland: PIONIER Network • 2650 km of fiber connecting 16 MANs; 5200 km and 21 MANs by 2005 • Supports: Computational Grids and Domain-Specific Grids • Digital Libraries • Interactive TV • Add’l fibers for e-Regional initiatives

  26. The Advantage of Dark Fiber: CESNET Case Study (Czech Republic) • 2513 km of leased fibers (since 1999) • Case study result: wavelength service vs. fiber lease shows cost savings of 50-70% over 4 years for long 2.5G or 10G links

  27. ICFA/SCIC Network Monitoring Prepared by Les Cottrell, SLAC, for ICFA www.slac.stanford.edu/grp/scs/net/talk03/icfa-aug04.ppt

  28. PingER: World View from SLAC • Now monitoring 650 sites in 115 countries • In the last 9 months: several sites added in Russia (GLORIAD); many hosts in Africa (27 of 54 countries); monitoring sites in Pakistan and Brazil • C. Asia, Russia, SE Europe, L. America, M. East, China: 4-5 years behind; India, Africa: 7 years behind • The view from CERN confirms this picture • Important for policy makers
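
PingER-style monitoring rests on simple ICMP echo (ping) measurements of round-trip time and packet loss between monitoring hosts and remote sites. Below is a minimal, heavily simplified probe in that spirit; it is not PingER's actual code, and it assumes a Linux-style ping utility and output format:

    # Minimal RTT / packet-loss probe in the spirit of PingER (illustrative only).
    import re
    import subprocess

    def probe(host, count=10):
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        loss = re.search(r"([\d.]+)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)     # min/avg/max line, take the average
        return (float(loss.group(1)) if loss else None,
                float(rtt.group(1)) if rtt else None)

    # Example: loss_pct, avg_rtt_ms = probe("www.cern.ch")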

  29. AmPath: Research Networking in Latin America, August 2004 • The only countries with research network connectivity now in Latin America: Argentina, Brazil, Chile, Mexico, Venezuela • AmPath provided connectivity for some South American countries • New São Paulo-Miami link at 622 Mbps starting this month • New: CLARA (funded by the EU), a regional network connecting 19 countries: Argentina, Bolivia, Brasil, Chile, Colombia, Costa Rica, Cuba, Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Uruguay, Venezuela • 155 Mbps backbone with 10-45 Mbps spurs; 4 Mbps satellite to Cuba; 622 Mbps to Europe • Also NSF proposals to connect at 2.5G to the US

  30. HEPGRID (CMS) in Brazil • HEPGRID-CMS/BRAZIL is a project to build a Grid that: at the regional level will include CBPF, UFRJ, UFRGS, UFBA, UERJ & UNESP; at the international level will be integrated with the CMS Grid based at CERN, with focal points including iVDGL/Grid3 and bilateral projects with the Caltech group • [Diagram: CERN (T0+T1, online systems) linked at 2.5-10 Gbps to T1 centers in Italy, France, Germany and the USA; a 622 Mbps link to Brazil; the UERJ Regional Tier2 Center and UNESP/USP (SPRACE, working T2); Gigabit links to T3 sites (CBPF, UERJ, UFBA, UFRJ, UFRGS) and T4 individual machines. UERJ: T2 to T1, growing from 100 to 500 nodes; plus T2s growing to 100 nodes.]

  31. Latin America: Science Areas Interested in Improving Connectivity (by Country). Networks and Grids: The Potential to Spark a New Era of Science in the Region

  32. Asia Pacific Academic Network Connectivity: APAN Status, July 2004 • [Map of current and planned link capacities among JP, KR, CN, TW, HK, IN, TH, PH, VN, MY, SG, ID, LK, AU and RU, and to the US and Europe, ranging from ~2 Mbps to ~21 Gbps; legend distinguishes access points, exchange points, current status and 2004 plans] • Connectivity to the US from JP, KR, AU is advancing rapidly; progress within the region, and to Europe, is much slower • Better North/South linkages within Asia are needed: a JP-SG link at 155 Mbps in 2005 is proposed to NSF by CIREN; an upgrade of the JP-TH link from 2 Mbps to 45 Mbps in 2004 is being studied; CIREN is studying an extension to India


  35. APAN Recommendations (at the July 2004 Meeting in Cairns, Australia) • Central issues for APAN this decade • Stronger linkages between applications and infrastructure - neither can exist independently • Stronger application and infrastructure linkages among APAN members • Continuing focus on APAN as an organization that represents infrastructure interests in Asia • Closer connection between APAN, the infrastructure & applications organization, and regional political organizations (e.g. APEC, ASEAN) • New issues demand attention • Application measurement, particularly end-to-end network performance measurement, is increasingly critical (deterministic networking) • Security must now be a consideration for every application and every network.

  36. KR-US/CA Transpacific Connection • Participation in global-scale lambda networking • Two STM-4 circuits (1.2G total): KR-CA-US • Global lambda networking: North America, Europe, Asia Pacific, etc. • [Diagram: KREONET/SuperSIReN and the APII-testbed/KREONet2 in Korea connected over 2 x STM-4 via CA*net4 to StarLight (Chicago) and PacWave (Seattle)]

  37. New Record!!! 916 Mbps from CHEP to Caltech (22/06/’04). Path: KNU/Korea - KOREN - TransPAC (G/H-Japan) - USA; Max. 947.3 Mbps
Subject: UDP test on KOREN-TransPAC-Caltech
Date: Tue, 22 Jun 2004 13:47:25 +0900
From: "Kihwan Kwon" <kihwan@bh.knu.ac.kr>  To: <son@knu.ac.kr>

    [root@sul Iperf]# ./iperf -c socrates.cacr.caltech.edu -u -b 1000m
    ------------------------------------------------------------
    Client connecting to socrates.cacr.caltech.edu, UDP port 5001
    Sending 1470 byte datagrams; UDP buffer size: 64.0 KByte (default)
    ------------------------------------------------------------
    [  5] local 155.230.20.20 port 33036 connected with 131.215.144.227
    [ ID] Interval           Transfer     Bandwidth
    [  5] 0.0-2595.2 sec     277 GBytes   916 Mbits/sec

  38. GLORIAD: Global Ring Network for Advanced Applications Development • OC3 circuits Moscow-Chicago-Beijing since January 2004 • OC3 circuit Moscow-Beijing July 2004 (completes the ring) • Korea (KISTI) joining US, Russia, China as a full partner in GLORIAD • Plans developing for a Central Asian extension (w/Kyrgyz Government) • Rapid traffic growth, with heaviest US use from DOE (FermiLab), NASA, NOAA, NIH and universities (UMD, IU, UCB, UNC, UMN, PSU, Harvard, Stanford, Wash., Oregon, 250+ others); >5 TBytes now transferred monthly via GLORIAD to the US, Russia, China • Aug. 8, 2004: P.K. Young, Korean IST Advisor to the President, announces: Korea joining GLORIAD; TEIN gradually to 10G, connected to GLORIAD; the Asia Pacific Info. Infrastructure (1G) will be a backup net to GLORIAD • GLORIAD 5-year proposal pending (with US NSF) for expansion: 2.5G Moscow-Amsterdam-Chicago-Seattle-Hong Kong-Pusan-Beijing circuits early 2005; 10G ring around the northern hemisphere 2007; multiple wavelength service 2009, providing hybrid circuit-switched (primarily Ethernet) and routed services

  39. Internet in China (J.P. Wu, APAN July 2004) • Users by access type: Total 68M; Wireline 23.4M; Dial Up 45.0M; ISDN 4.9M; Broadband 9.8M • Internet users in China: from 6.8 Million to 78 Million within 6 months • IP Addresses: 32M (1 A + 233 B + 146 C) • Backbone: 2.5-10G DWDM + Router • International links: 20G • Exchange Points: >30G (BJ, SH, GZ) • Last Miles: Ethernet, WLAN, ADSL, CTV, CDMA, ISDN, GPRS, Dial-up • Need IPv6

  40. China: CERNET Update • 1995: 64K nationwide backbone connecting 8 cities, 100 universities • 1998: 2M nationwide backbone connecting 20 cities, 300 universities • 2000: own dark fiber crossing 30+ major cities and 30,000 kilometers • 2001: CERNET DWDM/SDH network finished • 2001: 2.5G/155M backbone connecting 36 cities, 800 universities • 2003: 1300+ universities and institutes, over 15 million users

  41. CERNET2 and Key Technologies • CERNET 2: Next Generation Education and Research Network in China • CERNET 2 Backbone connecting 15-20 GigaPOPs at 2.5G-10Gbps (I2-like Model) • Connecting 200 Universities and 100+ Research Institutes at 1Gbps-10Gbps • Native IPv6 and Lambda Networking • Support/Deployment of the following technologies: • E2E performance monitoring • Middleware and Advanced Applications • Multicast

  42. AFRICA: Key Trends (M. Jensen and P. Hamilton Infrastructure Report, March 2004) • Growth in traffic and lack of infrastructure; predominance of satellite, but these satellites are heavily subscribed • Int’l links: only ~1% of traffic on links is for Internet connections; most Internet traffic (for ~80% of countries) goes via satellite • Flourishing grey market for Internet & VOIP traffic using VSAT dishes • Many regional fiber projects in the “planning phase” (some languished in the past); only links from South Africa to Namibia and Botswana done so far • Int’l fiber project: SAT-3/WASC/SAFE cable from South Africa to Portugal along the west coast of Africa • Supplied by Alcatel to a worldwide consortium of 35 carriers • 40 Gbps by mid-2003; heavily subscribed; ultimate capacity 120 Gbps • Extension to the interior mostly by satellite: < 1 Mbps to ~100 Mbps typical • Note: World Conference on Physics and Sustainable Development, 10/31 – 11/2/05 in Durban, South Africa; part of the World Year of Physics 2005. Sponsors: UNESCO, ICTP, IUPAP, APS, SAIP

  43. AFRICA: Nectar Net Initiative (W. Matthews, Georgia Tech) • Growing need to connect academic researchers, medical researchers & practitioners to many sites in Africa. Examples: • CDC & NIH: Global AIDS Project, Dept. of Parasitic Diseases, Nat’l Library of Medicine (Ghana, Nigeria) • Gates $50M HIV/AIDS Center in Botswana; project coordination at Harvard • Africa Monsoon (AMMA) Project, Dakar site [cf. East US hurricanes] • US Geological Survey: Global Spatial Data Infrastructure • Distance learning: Emory-Ibadan (Nigeria); Research Channel content • But Africa is hard: 11M sq. miles, 600M people, 54 countries, little telecommunications infrastructure • Approach: use the SAT-3/WASC cable (to Portugal), GEANT across Europe, the Amsterdam-NY link across the Atlantic, then peer with R&E networks such as Abilene in NYC • Cable landings in 8 West African countries and South Africa • Pragmatic approach to reach end points: VSAT satellite, ADSL, microwave, etc.

  44. Sample Bandwidth Costs for African Universities (Roy Steiner, Internet2 Workshop) • Bandwidth prices in Africa vary dramatically, and are in general many times what they could be if universities purchased in volume • Sample size of 26 universities; average cost shown is for VSAT service (quality, CIR, Rx, Tx not distinguished)

  45. Grid2003: An Operational Production Grid Since October 2003 • 27 sites (U.S., Korea) • 2300-2800 CPUs • 700-1100 concurrent jobs • Trillium: PPDG + GriPhyN + iVDGL (www.ivdgl.org/grid2003) • Prelude to the Open Science Grid: www.opensciencegrid.org

  46. HENP Data Grids, and Now Services-Oriented Grids • The original Computational and Data Grid concepts are largely stateless, open systems • Analogous to the Web • The classical Grid architecture had a number of implicit assumptions • The ability to locate and schedule suitable resources, within a tolerably short time (i.e. resource richness) • Short transactions with relatively simple failure modes • HENP Grids are Data Intensive & Resource-Constrained • Resource usage governed by local and global policies • Long transactions; some long queues • Analysis: 1000s of users competing for resources at dozens of sites: complex scheduling; management • HENP Stateful, End-to-end Monitored and Tracked Paradigm • Adopted in OGSA [Now WS Resource Framework]

  47. The Move to OGSA and then to Managed, Integrated Systems • [Evolution over time, with increasing functionality and standardization:] • Custom solutions • De facto standards: X.509, LDAP, FTP, … • Globus Toolkit (GGF: GridFTP, GSI) • Open Grid Services Architecture / Web Services Resource Framework: Web services + …; GGF: OGSI, … (+ OASIS, W3C); multiple implementations, including the Globus Toolkit • App-specific Services, ~Integrated Systems: stateful; managed

  48. Managing Global Systems: Dynamic Scalable Services Architecture. MonALISA: http://monalisa.cacr.caltech.edu • 24 X 7 operations, multiple organizations: Grid2003, US CMS, CMS-DC04, ALICE, STAR, VRVS, ABILENE; soon GEANT + GLORIAD • “Station Server” services-engines at sites host many “Dynamic Services” • Scales to thousands of service instances • Servers autodiscover and interconnect dynamically to form a robust fabric • Autonomous agents • + CLARENS: Web Services Fabric and Portal Architecture
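
The "autodiscover and interconnect dynamically" idea can be illustrated with a toy beacon in which each service periodically announces itself on a multicast group while listening for its peers. This is only a sketch of the general pattern, not MonALISA's actual (Java-based) discovery mechanism; the group address, port and message format below are invented for illustration:

    # Toy service autodiscovery via UDP multicast (illustrative pattern only).
    import socket, struct, threading, time

    GROUP, PORT = "239.1.2.3", 50000              # made-up multicast group and port

    def announce(name, interval=5):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
        while True:
            s.sendto(name.encode(), (GROUP, PORT))     # "I am here" beacon
            time.sleep(interval)

    def listen(peers):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = s.recvfrom(1024)
            peers[data.decode()] = addr[0]             # discovered peer -> its address

    peers = {}
    threading.Thread(target=listen, args=(peers,), daemon=True).start()
    threading.Thread(target=announce, args=("StationServer-A",), daemon=True).start()
    time.sleep(30)                                     # let the beacons run briefly
    print("Discovered peers:", peers)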

  49. Grid Analysis Environment: CLARENS Web Services Architecture • [Diagram: multiple Analysis Clients speak HTTP, SOAP and XML/RPC to the Grid Services Web Server; behind it sit the Scheduler, Catalogs (Metadata, Virtual Data, Replica), Fully-Abstract, Partially-Abstract and Fully-Concrete Planners, Applications, Data Management, Monitoring, the Grid Execution Priority Manager and the Grid-Wide Execution Service] • Analysis Clients talk standard protocols to the CLARENS “Grid Services Web Server”, with a simple Web service API • The secure Clarens portal hides the complexity of the Grid Services from the client • Key features: Global Scheduler, Catalogs, Monitoring, and Grid-wide Execution service; Clarens servers form a Global Peer-to-Peer Network
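
Since the clients use standard protocols such as XML-RPC over HTTP, a client-side stub can be very small. The sketch below is purely illustrative: the endpoint URL and method name are invented, and a real Clarens deployment also involved X.509 certificate-based authentication, which is omitted here:

    # Minimal XML-RPC client sketch for a Clarens-style "Grid Services Web Server".
    # The endpoint and the service method are hypothetical, for illustration only.
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("https://gae.example.org:8443/clarens")
    try:
        # e.g. ask a (hypothetical) catalog service for the files of one dataset
        files = server.catalog.query("dataset = 'higgs_4mu_2004'")
        print(files)
    except xmlrpc.client.Fault as fault:
        print("service returned a fault:", fault)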
