
Networks and Grids for High Energy Physics and Global e-Science, and the Digital Divide

Standing Committee on Inter-regional Connectivity. Harvey B. Newman, California Institute of Technology. ICFA SCIC Report, IHEPCCC Meeting, January 10, 2007.


Presentation Transcript


  1. Standing Committee on Inter-regional Connectivity. Networks and Grids for High Energy Physics and Global e-Science, and the Digital Divide. Harvey B. Newman, California Institute of Technology. ICFA SCIC Report, IHEPCCC Meeting, January 10, 2007.

  2. The LHC Data "Grid Hierarchy" Evolved: MONARC to the DISUN, ATLAS & CMS Models
• CERN/Outside resource ratio ~1:4; T0:(T1):(T2) ~1:2:2; ~40% of resources in Tier2s
• US T1s and T2s connect to US LHCNet PoPs
• [Diagram: Online system, GEANT2 + NRENs, USLHCNet + ESnet at 10-40 Gbps; CC-IN2P3 and BNL Tier1s at 10 Gbps; UltraLight/DISUN]
• Outside/CERN ratio larger; expanded role of Tier1s & Tier2s: greater reliance on networks
• Emerging vision: a richly structured, global dynamic system

  3. "Onslaught of the LHC": 0.2 to 1.1 PBytes/month over 2 months (Apr.-June 2006). [Chart: data volume in Petabytes per month, scale 0 to 1.0]

  4. SC4 (2006): CMS PhEDEx tool used to transfer 1-2 PBytes/month for 5 months. [Charts: transfer volumes by destination and by source; FNAL and CERN labeled]

  5. UCSD Tier2 at 200-300 MB/sec [chart scale: 1 GByte/sec]. Recently other US Tier2s at 250-300 MB/sec; Fermilab is working to bring Tier2s in Europe "up to speed".

  6. Caltech/CERN & HEP at SC2006: Petascale Transfers for Physics
• ~200 CPUs, 56 10GE switch ports, 50 10GE NICs, 100 TB disk
• Research partners: FNAL, BNL, UF, UM, ESnet, NLR, FLR, Internet2, AWave, SCInet, Qwest, UERJ, UNESP, KNU, KISTI
• Corporate partners: Cisco, HP, Neterion, Myricom, DataDirect, BlueArc, NIMBUS
• New disk-speed WAN transport applications for science (FDT, LStore)

  7. New capability level: 40-70 Gbps per rack of low-cost 1U servers. FDT: Fast Data Transport, results 11/14-11/15/06
• Efficient data transfers: reading and writing at disk speed over WANs (with TCP) for the first time
• Highly portable: runs on all major platforms
• Based on an asynchronous, multithreaded system, using the Java NIO libraries
• Streams a dataset (list of files) continuously, from a managed pool of buffers in kernel space, through an open TCP socket
• Smooth data flow from each disk to/from the network; no protocol start-phase between files
• Stable disk-to-disk flows Tampa-Caltech: stepping up to 10-to-10 and 8-to-8 1U server pairs gave 9 + 7 = 16 Gbps, then solid overnight, using one 10G link
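To make the "continuous stream through one open TCP socket" idea concrete, here is a minimal, hypothetical Java NIO sketch (not the actual FDT code): it sends a list of files back-to-back over a single socket using zero-copy transferTo(), so the data moves through kernel-space buffers and there is no per-file protocol exchange. The receiver host, port, and file paths are placeholders.

```java
// Minimal sketch (not the FDT implementation): stream a list of files back-to-back
// through one already-open TCP socket with zero-copy transferTo(), so no per-file
// protocol start-phase occurs between files.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class ContinuousFileStreamer {

    /** Send every file in the dataset over the same socket, one after another. */
    public static void streamDataset(List<Path> files, SocketChannel socket) throws IOException {
        for (Path file : files) {
            try (FileChannel in = FileChannel.open(file, StandardOpenOption.READ)) {
                long size = in.size();
                long sent = 0;
                // transferTo() may send less than requested; loop until the file is done.
                while (sent < size) {
                    sent += in.transferTo(sent, size - sent, socket);
                }
            }
            // The next file starts immediately: no handshake between files.
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical receiver address and file list, for illustration only.
        SocketChannel socket = SocketChannel.open(new InetSocketAddress("receiver.example.org", 9000));
        streamDataset(List.of(Paths.get("/data/file1.dat"), Paths.get("/data/file2.dat")), socket);
        socket.close();
    }
}
```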

  8. LHCNet, ESnet Plan 2006-2009: 20-80 Gbps US-CERN, ESnet MANs, IRNC
• US-LHCNet: wavelength triangle to a NY-CHI-GVA-AMS quadrangle; 2007-10: 30, 40, 60, 80 Gbps
• US-LHCNet Data Network: 3 to 8 x 10 Gbps US-CERN; ESnet MANs to FNAL & BNL; dark fiber to FNAL
• NSF/IRNC circuit; GVA-AMS connection via SURFnet or GEANT2
• [Map: ESnet SDN core 30-50G (Science Data Network core, 40-60 Gbps circuit transport), production IP ESnet core ≥10 Gbps, metropolitan area rings (10-30 Gb/s), major DOE Office of Science sites, high-speed cross-connects with Internet2/Abilene, and international links (GEANT2, SURFnet, IN2P3, CERN, Japan, Australia, Asia-Pacific) at hubs including SEA, SNV, DEN, ALB, SDG, ELP, CHI, NYC, DC, ATL]

  9. HEP Major Links: Bandwidth Roadmap in Gbps. Moderating a trend: from >~1000X to ~100X bandwidth growth per decade. The roadmap may be modified once 40-100 Gbps channels appear. Note the role of other networks across the Atlantic, notably GEANT2.

  10. ESnet4 [W. Johnston]
• Core networks: 50-60 Gbps by 2009-2010, 200-600 Gbps by 2011-2012
• Production IP core (10 Gbps); SDN core (20-30-40-50 Gbps); MANs (20-60 Gbps) or backbone loops for site access; international connections
• Core network fiber path is ~14,000 miles / 24,000 km
• High-speed cross-connects with Internet2/Abilene; primary DOE Labs; IP core hubs, SDN hubs, possible hubs
• [Map: IP core and Science Data Network core spanning Seattle, Sunnyvale, LA, San Diego, Boise, Denver, Albuquerque, El Paso, Houston, Kansas City, Tulsa, Chicago, Cleveland, Atlanta, Jacksonville, New York, Boston, and Washington DC (segment lengths of 1625 miles / 2545 km and 2700 miles / 4300 km shown); international connections to Canada (CANARIE), Europe (GEANT), CERN (30+ Gbps) via USLHCNet, GLORIAD (Russia and China), Asia-Pacific, Australia, and South America (AMPATH)]

  11. LHCOPN: Overlay T0-T1 Network (CERN-NA-EU) [E. Martelli]
• [Diagram: the Tier0-Tier1 optical private network linking CERN (AS513) to the Tier1 centres: IN2P3 (AS789) via Renater lightpaths, RAL via UKlight, GridKa (AS680) via DFN lightpaths (with links to CNAF and SARA), CNAF (AS137) via GARR, PIC (AS766) via RedIRIS lightpaths, SARA (AS1126) via SURFnet and Netherlight, NORDUgrid via NORDUnet (link to GridKa), TRIUMF via CANARIE, BNL (AS43) and FNAL (AS3152) via US LHCNet and ESnet, and ASCC (AS9264) via ASnet; GEANT2 lightpaths and GEANT2 IP also shown]
• Legend distinguishes main paths, backup paths, T1-T1 paths, the L1/L2 network, the L3 network, and Tier1 sites

  12. Next Generation LHCNet: Add Optical Circuit-Oriented Services. Based on CIENA "Core Director" optical multiplexers (also used by Internet2); GEANT2: GFP, VCAT & LCAS on Alcatel
• Robust fallback at the optical layer
• Circuit-oriented services: guaranteed-bandwidth Ethernet Private Line (EPL)
• New standards-based software: VCAT/LCAS virtual, dynamic channels

  13. Internet2's "NewNet" Backbone: Level(3) footprint; Infinera 10 x 10G core; CIENA optical muxes
• Initial deployment: 10 x 10 Gbps wavelengths over the footprint
• First-round maximum capacity: 80 x 10 Gbps wavelengths; expandable
• Scalability: potential migration to 40 Gbps or 100 Gbps capability
• Reliability: carrier-class standard assurances for wavelengths
• The community will transition to NewNet from now, over a period of 15 months
• Paralleled by initiatives in: nl, ca, jp, uk, kr; pl, cz, sk, pt, ei, gr, hu, si, lu, no, is, dk ... + >30 US states

  14. ICFA Standing Committee on Interregional Connectivity (SCIC)
• Created in July 1998 in Vancouver, following ICFA-NTF
• CHARGE: make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe
• As part of the process of developing these recommendations, the committee should: monitor traffic on the world's networks; keep track of technology developments; periodically review forecasts of future bandwidth needs; and provide early warning of potential problems
• Main focus since 2002: the Digital Divide in the HEP community

  15. SCIC in 2005-2006 (http://cern.ch/icfa-scic). Three 2006 reports; update for 2007 soon: rapid progress, deepening Digital Divide
• Main Report: "Networking for HENP" [H. Newman, et al.]; includes updates on the Digital Divide and world network status, plus brief updates on monitoring and advanced technologies
• 29 Appendices: a world network overview; status and plans for the next few years of national & regional networks, HEP labs, & optical net initiatives
• Monitoring Working Group Report [L. Cottrell]
• Also see: TERENA (www.terena.nl) 2005 and 2006 Compendiums, an in-depth annual survey on R&E networks in Europe; http://internetworldstats.com for worldwide Internet use; SCIC 2003 Digital Divide Report [A. Santoro et al.]

  16. ICFA Report 2006 Update: Main Trends Deepen and Accelerate
• The current generation of 10 Gbps network backbones and major int'l links arrived in 2001-5 in the US, Europe, Japan, Korea; now China
• Bandwidth growth: from 4 to 2500 times in 5 years; >> Moore's Law
• Rapid spread of "dark fiber" and DWDM: the emergence of continental, national, state & metro "hybrid" networks in many nations
• Cost-effective 10G or N x 10G backbones, complemented by point-to-point "light-paths" for "data intensive science", notably HEP
• First large-scale 40G project: CANARIE (Canada): 72 waves and ROADMs
• Proliferation of 10G links across the Atlantic & Pacific; use of multiple 10G links (e.g. US-CERN) along major paths began in Fall 2005
• On track for ~10 x 10G networking for LHC, in production by 2007-8
• Technology evolution continues to drive performance higher and equipment costs lower: commoditization of Gigabit and now 10-Gigabit Ethernet on servers; use of new busses (PCI Express) in PCs and network interfaces in 2006; improved Linux kernel for high-speed data transport; multi-CPUs
• 2007 outlook: continued growth in bandwidth deployment & use

  17. Transition to Community Owned or Operated Optical Infrastructures. National Lambda Rail example: NLR, www.nlr.net
• Each link to 32 x 10G; cost recovery model
• Supports: Cisco Research Wave, UltraScience Net, Atlantic & Pacific Wave; initiatives with HEP
• A network of networks: WaveNet (point-to-point lambdas), FrameNet (Ethernet-based services), PacketNet (IP routed nets)

  18. GÉANT2, November 2006: Dark Fiber Connections Among 16 Countries: Austria, Belgium, Bosnia-Herzegovina, Czech Republic, Denmark, France, Germany, Hungary, Ireland, Italy, Netherlands, Slovakia, Slovenia, Spain, Switzerland, United Kingdom. Multi-wavelength core + 0.6-10G loops.

  19. Internet Growth in the World At Large: Amsterdam Internet Exchange Point, 1/09/07. Traffic doubled (to a 226 Gbps peak) in under 1 year.
• [Chart: 5-minute maximum and average traffic, with the scale running to 200 G]
• Some annual growth spurts, typically in summer-fall; an "acceleration" last summer
• The rate of HENP network usage growth (80-100+% per year) is matched by the growth of traffic in the world at large

  20. Work on the Digital Divide from Several Perspectives
• Share information: monitoring, tracking bandwidth progress; dark fiber projects & pricing; model cases: Poland, Slovakia, Brazil, Czech Republic, China ...
• Encourage access to dark fiber
• Encourage, and work on, inter-regional projects: GLORIAD, the Russia-China-Korea-US-Europe optical ring; Latin America: CHEPREO/WHREN (US-Brazil), RedCLARA; Mediterranean: EUMEDConnect; Asia-Pacific: TEIN2; India link to the US, Japan and Europe
• Technical help with modernizing the infrastructure: provide tools for effective use (data transport, monitoring, collaboration); design, commissioning, development
• Raise awareness: locally, regionally & globally; Digital Divide workshops; diplomatic events: WSIS, RSIS, bilateral (e.g. US-India)

  21. SCIC Monitoring WG: PingER (also IEPM-BW) [R. Cottrell]. Monitoring & remote sites (1/06)
• Measurements from 1995 on; reports link reliability & quality
• Countries monitored contain 90% of the world population and 99% of Internet users
• 3700 monitor-remote site pairs; 35 monitors in 14 countries, including Capetown, Rawalpindi, Bangalore; 1000+ remote sites in 120 countries
• New countries: N. America (2), Latin America (18), Europe (25), Balkans (9), Africa (31), Mid East (5), Central Asia (4), South Asia (5), East Asia (4), SE Asia (6), Russia including Belarus & Ukraine (3), China (1) and Oceania (5)

  22. SCIC Monitoring WG: Throughput Improvements 1995-2006. 40% annual improvement, a factor of ~10 per 7 years. Progress, but the Digital Divide is mostly maintained.
• Years behind Europe: 6 yrs: Russia, Latin America; 7 yrs: Mid-East, SE Asia; 10 yrs: South Asia; 11 yrs: Central Asia; 12 yrs: Africa
• India, Central Asia, and Africa are in danger of falling even farther behind
• Bandwidth of TCP < MSS/(RTT*Sqrt(Loss)) [Mathis et al., Computer Communication Review 27(3), July 1997]
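As a quick, illustrative check of why loss and round-trip time dominate long-haul TCP throughput, the sketch below evaluates the Mathis bound quoted above for two assumed paths (the MSS, RTT, and loss figures are examples, not measurements from the report).

```java
// Worked example of the Mathis et al. bound quoted above: BW < MSS / (RTT * sqrt(loss)).
// All input values are illustrative assumptions.
public class MathisBound {

    /** TCP throughput ceiling in bits per second for a given MSS, RTT and loss rate. */
    static double mathisLimitBps(int mssBytes, double rttSeconds, double lossRate) {
        return (mssBytes * 8.0) / (rttSeconds * Math.sqrt(lossRate));
    }

    public static void main(String[] args) {
        // 1460-byte MSS, 200 ms RTT, 0.1% packet loss: ceiling ~1.8 Mbps,
        // no matter how fat the underlying link is.
        System.out.printf("Long lossy path:  ~%.1f Mbps%n",
                mathisLimitBps(1460, 0.200, 0.001) / 1e6);

        // Same MSS on a short, clean path (20 ms RTT, 0.001% loss): ~185 Mbps.
        System.out.printf("Short clean path: ~%.1f Mbps%n",
                mathisLimitBps(1460, 0.020, 0.00001) / 1e6);
    }
}
```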

  23. SCIC Digital Divide Workshops and Panels
• 2002-2005: an effective way to raise awareness of the problems, and discuss approaches and opportunities for solutions with national and regional communities, and government officials
• ICFA Digital Divide Workshops: Rio 2/2004; Daegu 5/2005
• CERN & Internet2 workshops on R&E networks in Africa
• February 2006, CHEP06 Mumbai: Digital Divide panel, network demos, & workshop [SCIC, TIFR, CDAC, Internet2, Caltech]: "Moving India into the Global Community Through Advanced Networking"
• October 9-15, 2006: ICFA Digital Divide Workshops in Cracow & Sinaia
• April 14-17, 2007: "Bridging the Digital Divide": sessions at the APS Meeting in Jacksonville, sponsored by the Forum for International Physics

  24. International ICFA Workshop on HEP Networking, Grids, and Digital Divide Issues for Global e-Science (http://chep.knu.ac.kr/HEPDG2005), May 23-27, 2005, Daegu, Korea. Dongchul Son, Center for High Energy Physics; Harvey Newman, California Institute of Technology
• Workshop missions: review the status and outlook, and focus on issues in data-intensive Grid computing, inter-regional connectivity and Grid-enabled analysis for high energy physics
• Relate these to the key problem of the Digital Divide
• Promote awareness of these issues in various regions, focusing on the Asia Pacific, Latin America, Russia, and Africa
• Develop approaches to eliminate the Divide, and
• Help ensure that the basic requirements for global collaboration are met, related to all of these aspects

  25. International ICFA Workshop on HEP Networking, Grid and Digital Divide Issues for Global E-Science. National Academy of Arts and Sciences, Cracow, October 9-11, 2006. http://icfaddw06.ifj.edu.pl/index.html

  26. Sinaia, Romania, October 13-18, 2006. http://niham.nipne.ro/events2006/

  27. Highest Bandwidth Link in European NRENs' Infrastructure: The Trend to Dark Fiber
• [Chart: highest-bandwidth backbone link (0.01G to 10.0G scale) versus percentage of dark fiber in the backbone, per NREN: 80-100% [7], 50-80% [2], 5-50% [4], <5% [12], or no data; dk, is, nl, pl, cz, ch, it, no, pt, si, sk, ei and lu among those shown]
• NRENs with dark fiber can deploy light paths, to support separate communities and/or large applications. Up to 100X gain in some cases, at moderate cost.
• New with >50% in '06: de, gr, se; az, bl, sb/mn. More planned.
• Source: TERENA, www.terena.nl

  28. SLOVAK Academic Network, May 2006: All Switched Ethernet. http://www.sanet.sk/en/index.shtm [T. Weis]
• 1660 km of dark fiber CWDM links; 120 km cross-border dark fiber at a cost of 4k per month (1 GE, 2/16/05)
• August 2002: dark fiber link to Austria; April 2003: dark fiber link to the Czech Republic; 2004: dark fiber link to Poland
• 10 GbE cross-border dark fiber to Austria & Czech Republic (11/2006); 8 x 10G over 224 km with nothing in-line shown
• 2500x growth over 2002-2006

  29. Czech Republic: CESNET2. 2500 km of leased dark fiber (since 1999). 1 GbE light-paths in CzechLight; 1 GbE to Slovakia; 1 GbE to Poland. 2005-6: 32-wavelength software-configurable DWDM ring + more 10GE connections installed.

  30. Poland: PIONIER 20G + 10G Cross-Border Dark Fiber Network (Q4 2006)
• 6000 km of owned fiber; multi-lambda core; 21 academic MANs; 5 HPCCs. Moved to 20G on all major links.
• Cross-border dark fibers: 20G to Germany (DFN 2x10 Gb/s); 10G each to the Czech Republic and Slovakia (CESNET, SANET 2x10 Gb/s); moved to connect all neighbors at 10G in 2006
• External links: 20G to GEANT2 (10+10 Gb/s); 10G to Internet2; GEANT2/Internet 7.5 Gb/s; Internet 5 Gb/s; BASNET 155 Mb/s; CBDF 10 Gb/s (1-lambda and 2-lambda links)
• [Map: MANs at Gdańsk, Koszalin, Szczecin, Olsztyn, Białystok, Bydgoszcz, Toruń, Poznań, Warszawa, Zielona Góra, Łódź, Radom, Wrocław, Częstochowa, Kielce, Puławy, Opole, Lublin, Katowice, Rzeszów, Kraków, Bielsko-Biała]
• CBDF in Europe: 8 links now, including the CCIN2P3 link to CERN; 12 more links planned in the near future [source: TERENA]

  31. Romania: RoEduNet topology. 155 Mbps inter-city; 2 x 622 Mbps to GEANT [N. Tapus]
• Connects 610 institutions to GEANT: 38 universities, 32 research institutes, 500 colleges & high schools, 40 others
• RoGrid plans for 2006: 10G experimental link UPB-RoEduNet; upgrade 3-4 local centers to 2.5G
• 2007 plan: dark fiber infrastructure with 10G light-paths (help from Caltech and CERN)

  32. Brazil: RNP2 Next-Generation Backbone. New vs. old: a factor of 70 to 300 in bandwidth [M. Stanton]
• 2006: buildout of dark fiber nets in 27 cities with RNP PoPs underway
• 200 institutions connected at 1 GbE in 2006-7 (well advanced)
• 2.5G (to 10G) WHREN (NSF/RNP) link to the US; 622M link to GEANT
• Plan: extend to the Northwest; dark fiber across the Amazon jungle to Manaus

  33. The President of India collaborating with the US, CERN, and Slovakia via VRVS/EVO, coincident with data transfers of ~500 Mbps: 15 TBytes to/from India in 2 days.

  34. Mumbai-Japan-US Links: TIFR to Japan Connectivity, International IPLC (4 x STM-1)
• TIFR link to Japan, onward to US & Europe; loaned link from VSNL at CHEP06
• End-to-end bandwidth 4 x 155 Mbps on the SeMeWe3 cable; the goal is to move to 10 Gbps on SeMeWe4
• Sparked planning for a next-generation R&E network in India
• Partners: Caltech, TIFR, CDAC, JGN2, World Bank, IEEAF, Internet2, VSNL
• [Diagram: path from TIFR Mumbai (+ onward to US, Europe) over TTML and VSNL dark fibre and STM-16 rings (Express Towers, Prabhadevi POP) to the VSNL Chennai POP and landing stations, then via the TIC and EAC cables through the Singapore landing station to the EAC Tokyo backhaul (STM-64 ring) and the NTT Otemachi building in Japan; interface types include OC-12 / STM-4 (Juniper M10 with STM-4 interface, Foundry BI15000 with OC-12 interface)]

  35. The HEP Community: Network Progress and Impact
• The national, continental and transoceanic networks used by HEP and other fields of data-intensive science are moving to the N x 10G range; the growth rate is much faster than Moore's Law
• 40-100G tests; Canada moving to the first N x 40G network
• "Dark fiber"-based, hybrid networks, owned and/or operated by the R&E community, are emerging and fostering rapid progress in a growing list of nations: ca, nl, us, jp, kr; pl, cz, fr, br, no, cn, pt, ie, gr, sk, si, ...
• HEP is learning to use long-range networks effectively: 7-10 Gbps TCP flows over 10-30 kkm; 151 Gbps record
• Fast Data Transport Java application: 1-1.8 Gbps per 1U node, disk to disk; i.e. 40-70 Gbps per 40U rack

  36. Working to Close the Digital Divide, for Science, Education and Economic Development
• HEP groups in the US, EU, Japan, Korea, Brazil, and Russia are working with int'l R&E networks, advanced net projects, and Grid organizations; helping by:
• Monitoring connectivity worldwide to/from HEP groups and other sites (SLAC's IEPM project)
• Co-developing and deploying next-generation optical nets, monitoring and management systems
• Developing high-throughput tools and systems
• Adapting the tools & best practices for broad use in the science and Grid communities
• Providing education and training in state-of-the-art technologies & methods
• A long road ahead remains: Eastern Europe, Central & SE Asia, India, Pakistan, Africa

  37. Extra Slides Follow

  38. SCIC Main Focus Since 2002 • As we progress we are in danger of leaving the communities in the less-favored regions of the world behind • We must Work to Close the Digital Divide • To make physicists from all world regions full partners in the scientific discoveries • This is essential for the health of our global collaborations, and our field

  39. Digital Divide Illustrated by Network Infrastructures: TERENA Core Capacity (source: www.terena.nl)
• Core capacity goes up in leaps: 1 to 2 to N x 10 Gbps; 1-2.5 to 10 Gbps; 0.6-1 to 2.5 Gbps
• SE Europe, Mediterranean, FSU, Mid East: slower progress with older technologies (10-622 Mbps). The Digital Divide will not be closed by ~2007.
• [Chart: current core capacity per NREN, on a scale from 1 Mbps to 20 Gbps and N x 10G lambdas]

  40. SURFnet6 in the Netherlands [K. Neggers]: 5300 km of owned dark fiber. Optical layer: 5 rings, up to 72 wavelengths. Support for HEP, radioastronomers, and medical research.

  41. 4 Years Ago: 4 Mbps was the highest bandwidth link in Slovakia

  42. HENP Bandwidth Roadmap for Major Links (in Gbps): updated 12/06. Continuing trend: ~400 times bandwidth growth per decade, paralleled by the ESnet roadmap for data-intensive sciences.

  43. Internet2 Land Speed Records & SC2003-2005 Records
• IPv4 multi-stream record: 6.86 Gbps x 27 kkm, Nov 2004
• PCI-X 2.0: 9.3 Gbps Caltech-StarLight, Dec 2005
• PCI Express: 9.8 Gbps Caltech-Sunnyvale, July 2006
• Concentrate now on reliable Terabyte-scale file transfers; disk-to-disk marks: 536 MBytes/sec (Windows), 500 MBytes/sec (Linux)
• System issues: PCI bus, network interfaces, disk I/O controllers, Linux kernel, CPU
• SC2003-5: 23, 101, 151 Gbps; SC2006: FDT application, stable disk-to-disk at 16+ Gbps on one 10G link
• [Chart: Internet2 LSRs, throughput in Petabit-m/sec; blue = HEP; annotation: 7.2 Gbps x 20.7 kkm]
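The LSR "throughput" metric on this chart is a bandwidth-distance product. The short calculation below (illustrative, using only the figures quoted on this slide) converts the marks into petabit-meters per second.

```java
// Convert the Land Speed Record figures quoted above into the LSR metric,
// a bandwidth-distance product expressed in petabit-meters per second.
public class LandSpeedRecordMetric {

    /** gbps: throughput in Gbit/s; kkm: path length in thousands of km. */
    static double petabitMetersPerSec(double gbps, double kkm) {
        double bitsPerSec = gbps * 1e9;
        double meters = kkm * 1e6;            // 1 kkm = 1000 km = 1,000,000 m
        return bitsPerSec * meters / 1e15;    // scale to petabit-meters/second
    }

    public static void main(String[] args) {
        System.out.printf("7.2 Gbps x 20.7 kkm -> ~%.0f Pbit-m/s%n", petabitMetersPerSec(7.2, 20.7));  // ~149
        System.out.printf("6.86 Gbps x 27 kkm  -> ~%.0f Pbit-m/s%n", petabitMetersPerSec(6.86, 27.0)); // ~185
    }
}
```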

  44. Fast Data Transport across the WAN: solid 10.0 Gbps; "12G + 12G"

  45. FDT – Fast Data Transport: A New Application for Efficient Data Transfers
• Capable of reading and writing at disk speed over wide area networks (with standard TCP) for the first time
• Highly portable and easy to use: runs on all major platforms
• Based on an asynchronous, flexible multithreaded system, using the Java NIO libraries, that:
• Streams a dataset (list of files) continuously, from a managed pool of buffers in kernel space, through an open TCP socket
• Ensures a smooth flow of data from each disk; no protocol start phase between files
• Uses independent threads to read and write on each physical device
• Transfers data in parallel on multiple TCP streams, when necessary
• Uses appropriate-sized buffers for disk I/O and for the network
• Restores the files from buffers asynchronously
• Resumes a file transfer session without loss, when needed
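The threading pattern described in the bullets above (independent reader threads per physical device, a bounded pool of buffers, and writers draining onto parallel TCP streams) can be sketched roughly as follows. This is an illustrative simplification, not FDT's actual code: it allocates a fresh buffer per chunk rather than reusing a managed pool, and the files, sockets, and buffer size are assumptions.

```java
// Illustrative sketch (not the real FDT implementation) of the pattern above:
// one reader task per physical disk fills a bounded queue of direct buffers,
// while writer tasks drain the queue onto parallel TCP streams.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelDiskToNet {
    // Bounded queue: readers block when the network is the bottleneck, and vice versa.
    private final BlockingQueue<ByteBuffer> filled = new ArrayBlockingQueue<>(64);

    /** One task per disk: read sequentially into fixed-size buffers. */
    Runnable reader(Path fileOnThisDisk) {
        return () -> {
            try (FileChannel in = FileChannel.open(fileOnThisDisk, StandardOpenOption.READ)) {
                while (true) {
                    ByteBuffer buf = ByteBuffer.allocateDirect(4 * 1024 * 1024); // 4 MB chunks (assumed size)
                    if (in.read(buf) <= 0) break;   // end of file
                    buf.flip();
                    filled.put(buf);                // hand off to a writer thread
                }
            } catch (IOException | InterruptedException e) {
                throw new RuntimeException(e);
            }
        };
    }

    /** One task per TCP stream: drain buffers onto the socket. */
    Runnable writer(SocketChannel socket) {
        return () -> {
            try {
                while (true) {
                    ByteBuffer buf = filled.take();
                    while (buf.hasRemaining()) {
                        socket.write(buf);
                    }
                }
            } catch (IOException | InterruptedException e) {
                throw new RuntimeException(e);
            }
        };
    }

    /** Wire up one reader per disk and one writer per parallel TCP stream. */
    void start(List<Path> oneFilePerDisk, List<SocketChannel> parallelStreams) {
        ExecutorService pool = Executors.newCachedThreadPool();
        oneFilePerDisk.forEach(f -> pool.submit(reader(f)));
        parallelStreams.forEach(s -> pool.submit(writer(s)));
    }
}
```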

  46. FDT Test Results, 11/14-11/15
• Memory to memory (/dev/zero to /dev/null), using two 1U systems with Myrinet 10GbE PCI Express NIC cards
• Tampa-Caltech (RTT 103 msec): 10.0 Gbps, stable indefinitely
• Long-range WAN path (CERN - Chicago - New York - Chicago - CERN VLAN, RTT 240 msec): ~8.5 Gbps, reaching 10.0 Gbps overnight
• Disk to disk: performs very close to the limit of the disk or network speed. A 1U disk server at CERN sending data to a 1U server at Caltech (each with 4 SATA disks): ~0.85 TB/hr per rack unit, i.e. ~9 GBytes/sec per rack
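A quick unit check of the per-rack figure quoted above, scaling the measured per-1U rate to a full rack (the 40 x 1U packing is the same assumption used elsewhere in this talk):

```java
// Scale the measured ~0.85 TB/hour per 1U server to a 40-server rack.
public class RackRateCheck {
    public static void main(String[] args) {
        double tbPerHourPerU = 0.85;
        double gbPerSecPerU = tbPerHourPerU * 1000.0 / 3600.0;   // ~0.24 GB/s per 1U server
        double gbPerSecPerRack = gbPerSecPerU * 40;              // 40 x 1U servers per rack
        System.out.printf("%.2f GB/s per 1U server, ~%.0f GB/s per rack%n",
                gbPerSecPerU, gbPerSecPerRack);                  // prints ~0.24 and ~9
    }
}
```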

  47. FDT Test Results (2), 11/14-11/15
• Stable disk-to-disk flows Tampa-Caltech: stepping up to 10-to-10 and 8-to-8 1U server pairs gave 9 + 7 = 16 Gbps, then solid overnight
• Cisco 6509E counters: 16 Gbps disk traffic and 13+ Gbps FLR memory traffic; maxing out the 20 Gbps Etherchannel (802.3ad) between our two Cisco switches

  48. L-Store: File System Interface to Global Storage
• Provides a file system interface to (globally) distributed storage devices ("depots"); parallelism for high performance and reliability
• Uses IBP (from UTenn) for the data transfer & storage service: a generic, high-performance, wide-area-capable storage virtualization service with transport plug-in support
• Write: break the file into blocks and upload the blocks simultaneously to multiple depots (reverse for reads)
• Multiple metadata servers increase performance & fault tolerance
• L-Store supports beyond-RAID6-equivalent encoding of stored files for reliability and fault tolerance
• SC06 goal: 4 GBytes/sec from 20-30 clients at the Caltech booth to ~30 depots at the Vanderbilt booth, across the WAN (FLR)
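A rough sketch of the write path described above, breaking a file into blocks and uploading the blocks concurrently to several depots, is shown below. The Depot interface, round-robin placement, and block size are hypothetical stand-ins: this is not L-Store's or IBP's real API, and the beyond-RAID6 encoding is omitted.

```java
// Hedged sketch of block-striped writes across depots (not L-Store's actual code).
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StripedUpload {

    /** Hypothetical depot interface standing in for an IBP-style storage service. */
    interface Depot {
        void store(String fileId, int blockIndex, byte[] block) throws IOException;
    }

    /** Split the file into blocks and upload them concurrently, round-robin across depots. */
    static void write(String fileId, String path, List<Depot> depots, int blockSize)
            throws IOException, InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(depots.size());
        List<Future<?>> uploads = new ArrayList<>();
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            long length = file.length();
            for (int i = 0; i * (long) blockSize < length; i++) {
                byte[] block = new byte[(int) Math.min(blockSize, length - i * (long) blockSize)];
                file.seek(i * (long) blockSize);
                file.readFully(block);
                Depot target = depots.get(i % depots.size()); // simple round-robin striping
                int index = i;
                uploads.add(pool.submit(() -> { target.store(fileId, index, block); return null; }));
            }
        }
        for (Future<?> f : uploads) f.get();   // wait for all blocks; erasure coding omitted
        pool.shutdown();
    }
}
```

A read would fetch the blocks from the depots in parallel and reassemble them in order; the metadata servers mentioned above would record which depot holds which block.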

  49. L-Store Performance [chart: ~3 GB/s sustained over 30 minutes]
• Multiple simultaneous writes to 24 depots; each depot is a 3 TB disk server in a 1U case
• 30 clients on separate systems uploading files
• The rate has scaled linearly as depots are added: 3 GBytes/sec so far; continuing to add
• A REDDnet deployment of 167 depots can sustain 25 GB/s

  50. Science Network Requirements Aggregation Summary [W. Johnston]: ESnet immediate requirements
