High Energy & Nuclear Physics (HENP) SIG • October 4th 2011 – Fall Member Meeting • Jason Zurawski, Internet2 Research Liaison
Agenda • Group Name/Future Meetings • LHCONE • DYNES • SC11 Planning • AOB
Group Name/Future Meetings • “HENP SIG” is too hard for people to interpret when scanning the agenda • “Physics SIG”? • “Science SIG” – more inclusive … • Others? • Alternate proposal – do we need an ‘LHC BoF’ focused on network support topics?
Agenda • Group Name/Future Meetings • LHCONE • DYNES • SC11 Planning • AOB
“Joe’s Solution” • Two “issues” identified at the DC meeting as needing particular attention: • Multiple paths across the Atlantic • Resiliency • Agreed to have the architecture group work out a solution
LHCONE Status • LHCONE is a response to the changing dynamic of data movement in the LHC environment. • It is composed of multiple parts: • North America, Transatlantic Links, Europe • Others? • It is expected to be composed of multiple services • Multipoint service • Point-to-point service • Monitoring service
LHCONE Multipoint Service • Initially created as a shared Layer 2 domain. • Uses 2 VLANs (2000 and 3000) on separate transatlantic routes in order to avoid loops. • Enables up to 25G on the transatlantic routes for LHC traffic. • Use of dual paths provides redundancy.
LHCONE Point-to-Point Service • Planned point-to-point service • Suggestion: Build on the efforts of DYNES and the DICE-Dynamic service • DICE-Dynamic service being rolled out by ESnet, GÉANT, Internet2, and USLHCnet • Remaining issues being worked out • Planned commencement of service: October 2011 • Built on OSCARS (ESnet, Internet2, USLHCnet) and AUTOBAHN (GÉANT), using the IDC protocol (see the sketch below)
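As an illustration only, the sketch below shows the kind of information an inter-domain circuit request carries in this model: the two edge ports, a VLAN tag, a guaranteed bandwidth, and a start/end time. The field names, URNs, and the submit step are hypothetical placeholders, not the actual OSCARS/AUTOBAHN API.

```python
# Hypothetical sketch of an IDC-style point-to-point reservation.
# Field names, URNs, and submit_reservation() are illustrative only.
import time

def build_circuit_request(src_urn, dst_urn, vlan, bandwidth_mbps, duration_s):
    """Assemble an illustrative inter-domain circuit request."""
    now = int(time.time())
    return {
        "source":      src_urn,          # edge port of the requesting site
        "destination": dst_urn,          # edge port of the remote site
        "vlan":        vlan,             # VLAN tag carried end to end
        "bandwidth":   bandwidth_mbps,   # guaranteed rate in Mbps
        "start_time":  now,              # start immediately
        "end_time":    now + duration_s, # tear down automatically
    }

def submit_reservation(request):
    """Stub: a real deployment would hand this to the local IDC, which
    negotiates the multi-domain path with its peers (OSCARS/AUTOBAHN)."""
    print("Would submit to IDC:", request)

if __name__ == "__main__":
    req = build_circuit_request(
        src_urn="urn:ogf:network:example-campus:fdt-host",    # placeholder URNs
        dst_urn="urn:ogf:network:example-tier2:border-port",
        vlan=3000, bandwidth_mbps=1000, duration_s=4 * 3600)
    submit_reservation(req)
```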
LHCONE Monitoring Service • Planned monitoring service • Suggestion: Build on the efforts of DYNES and the DICE-Diagnostic service • DICE-Diagnostic service being rolled out by ESnet, GÉANT, and Internet2 • Remaining issues being worked out • Planned commencement of service: October 2011 • Built on perfSONAR (see the sketch below)
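A minimal sketch of the kind of active measurement this service builds on: driving a throughput test between two perfSONAR measurement hosts with the bwctl tool from the perfSONAR-PS suite. The hostnames are placeholders, and the flags shown should be checked against the installed bwctl version.

```python
# Sketch: run a bwctl-mediated throughput test between two perfSONAR hosts.
# Hostnames are placeholders; verify flags against the deployed bwctl version.
import subprocess

def run_bwctl(sender, receiver, seconds=30):
    """Run a bwctl-mediated iperf test from `sender` to `receiver`."""
    cmd = [
        "bwctl",
        "-s", sender,        # sending test host
        "-c", receiver,      # receiving ("catching") test host
        "-t", str(seconds),  # test duration in seconds
        "-T", "iperf",       # underlying throughput tool
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    # Placeholder hosts: a campus perfSONAR node and an exchange-point node.
    run_bwctl("ps-bandwidth.example-campus.edu", "ps-bandwidth.example-gole.net")
```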
LHCONE Pilot (Late Sept 2011) • Diagram: Mian Usman, DANTE, LHCONE technical proposal v2.0
LHCONE Pilot • Domains interconnected through Layer 2 switches • Two VLANs (nominal IDs: 3000, 2000) • VLAN 2000 configured on the GÉANT/ACE transatlantic segment • VLAN 3000 configured on the US LHCNet transatlantic segment • Allows use of both transatlantic segments and provides transatlantic resiliency • 2 route servers per VLAN • Each connecting site peers with all 4 route servers (see the sketch below) • Keeping in mind this is a “now” solution; it does not scale well to more transatlantic paths • Continued charge to the Architecture group
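A small sketch of the peering model described above: with two route servers on each of the two VLANs, every connecting site configures four BGP sessions. The AS number and route-server hostnames are placeholders, not the pilot's actual values.

```python
# Sketch: enumerate the BGP sessions one site needs in the pilot
# (2 VLANs x 2 route servers = 4 sessions). All values are placeholders.
ROUTE_SERVERS = {
    2000: ["rs1-vlan2000.example.net", "rs2-vlan2000.example.net"],
    3000: ["rs1-vlan3000.example.net", "rs2-vlan3000.example.net"],
}

def peering_plan(site_name, site_asn):
    """List the (VLAN, route server) sessions the site must configure."""
    sessions = []
    for vlan, servers in ROUTE_SERVERS.items():
        for rs in servers:
            sessions.append({"site": site_name, "asn": site_asn,
                             "vlan": vlan, "route_server": rs})
    return sessions

if __name__ == "__main__":
    for s in peering_plan("ExampleTier2", 64512):  # private ASN placeholder
        print(f"VLAN {s['vlan']}: peer with {s['route_server']} (local AS {s['asn']})")
```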
Internet2 (NA) – New York Status • VLANs 2000 and 3000 for the multipoint service are configured. • Transatlantic routes, Internet2, and CANARIE are all participating in the shared VLAN service. • A new switch will be installed at MAN LAN in October. • Will enable a new connection from BNL • Peering with the Univ of Toronto through the CANARIE link to MAN LAN is complete • End sites that have direct connections to MAN LAN are: • MIT • BNL • BU/Harvard
LHCONE (NA) – Chicago • VLANs for the multipoint service were configured on 9/23. • Configuration was corrected shortly thereafter to prevent a routing loop • Testing on the link can start any time. • Status of the FNAL Cisco: • Resource constraints on the Chicago router have prevented this from happening. • Port availability is the issue. • End Sites • See the diagram from this summer
MAN LAN • New York Exchange Point • Ciena Core Director and Cisco 6513 • Current connections on the Core Director: • 11 OC-192s • 9 1G ports • Current connections on the 6513: • 16 10G Ethernet ports • 7 1G Ethernet ports
MAN LAN Roadmap • Switch upgrade: • A Brocade MLXe-16 was purchased with: • 24 10G ports • 24 1G ports • 2 100G ports • Internet2 and ESnet will be connected at 100G. • The Brocade will allow landing transatlantic circuits of greater than 10G. • An IDC for dynamic circuits will be installed. • Complies with the GLIF GOLE definition
MAN LAN Services • MAN LAN is an Open Exchange Point. • 1 Gbps, 10 Gbps, and 100 Gbps interfaces on the Brocade switch. • 40 Gbps could be available by 2012. • Dedicated VLANs can be mapped through for Layer 2 connectivity beyond the Ethernet switch. • With the Brocade, higher-layer services are possible should there be a need. • This would include OpenFlow being enabled on the Brocade. • Dynamic services via an IDC. • perfSONAR-PS instrumentation.
WIX • WIX = Washington DC International Exchange Point • Joint project being developed by MAX and Internet2, to be transferred to MAX to manage once in operation. • WIX is a state-of-the-art international peering exchange facility, located at the Level 3 POP in McLean, VA, designed to serve research and education networks. • WIX is architected to meet the diverse needs of different networks. • Initially, the WIX facility will hold 4 racks, expandable to 12 racks as needed. • Bulk cables between the existing MAX and Internet2 suites will also be in place. • WIX is implemented with a Ciena Core Director and a Brocade MLXe-16.
WIX Roadmap • Grow the connections to existing Exchange Points. • Expand the facility with “above the net” capabilities located in the suite. • Allows for easy access both domestically and internationally • Grow the number of transatlantic links to ensure adequate connectivity as well as diversity.
WIX Services • Dedicated VLANs between participants for traffic exchange at Layer 2. • WIX will be an Open Exchange Point. • Access to Dynamic Circuit Networks such as Internet2 ION. • With the Brocade, there exists the possibility of higher-layer services, should there be a need. • Possibility of OpenFlow being enabled on the Brocade • 1 Gbps, 10 Gbps, and 100 Gbps interfaces are available on the Brocade switch. • 40 Gbps could be available by 2012. • perfSONAR instrumentation
Agenda • Group Name/Future Meetings • LHCONE • DYNES • SC11 Planning • AOB
DYNES Hardware • Inter-domain Controller (IDC) Server and Software • The IDC creates virtual LANs (VLANs) dynamically between the FDT server, the local campus, and the wide area network • The IDC software is based on the OSCARS and DRAGON software, which is packaged together as the DCN Software Suite (DCNSS) • The DCNSS version correlates to stable, tested versions of OSCARS. The current version of DCNSS is v0.5.4. • Initial DYNES deployments will include both DCNSSv0.6 and DCNSSv0.5.4 virtual machines • Currently Xen-based • Looking into KVM for future releases • A Dell R410 1U server has been chosen, running CentOS 5.x
DYNES Hardware • Fast Data Transfer (FDT) server • The FDT server connects to the disk array via the SAS controller and runs the FDT software (see the sketch below) • The FDT server also hosts the DYNES Agent (DA) software • The standard FDT server will be a Dell R510 server with a dual-port Intel X520 DA NIC. This server will use a PCIe Gen 2.0 x8 card along with 12 disks for storage. • DYNES Ethernet switch options: • Dell PC6248 (48 1GE ports, 4 10GE-capable ports (SFP+, CX4, or optical)) • Dell PC8024F (24 10GE SFP+ ports, 4 “combo” ports supporting CX4 or optical)
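For illustration, a minimal sketch of kicking off a disk-to-disk transfer from the FDT server once a circuit is up, by wrapping the FDT command-line client. The hostnames, paths, and jar location are placeholders, and the fdt.jar options shown (-c destination host, -d destination directory) should be verified against the FDT release in use.

```python
# Sketch: launch an FDT disk-to-disk transfer via the FDT command-line client.
# Hosts, paths, and jar location are placeholders; check flags against the FDT docs.
import subprocess

def fdt_transfer(dest_host, dest_dir, files, fdt_jar="/opt/fdt/fdt.jar"):
    """Push `files` to `dest_host:dest_dir` using the FDT client."""
    cmd = ["java", "-jar", fdt_jar, "-c", dest_host, "-d", dest_dir] + files
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    fdt_transfer(
        dest_host="fdt.example-tier2.edu",            # remote DYNES FDT server
        dest_dir="/storage/incoming",
        files=["/storage/export/dataset-001.root"],   # example payload
    )
```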
Our Choices • http://www.internet2.edu/ion/hardware.html • IDC • Dell R410 1U Server • Dual 2.4 GHz Xeon (64-bit), 16G RAM, 500G HD • http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/R410-Spec-Sheet.pdf • FDT • Dell R510 2U Server • Dual 2.4 GHz Xeon (64-bit), 24G RAM, 300G main storage, 12TB through RAID • http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/R510-Spec-Sheet.pdf • Switch • Dell PC8024F or Dell PC6248 • 10G vs 1G sites; copper ports and SFP+; optics on a site-by-site basis • http://www.dell.com/downloads/global/products/pwcnt/en/PC_6200Series_proof1.pdf • http://www.dell.com/downloads/global/products/pwcnt/en/switch-powerconnect-8024f-spec.pdf
Phase 3 Group A Members • AMPATH • Mid-Atlantic Crossroads (MAX) • The Johns Hopkins University (JHU) • Mid‐Atlantic Gigapop in Philadelphia for Internet2 (MAGPI)* • Rutgers (via NJEdge) • University of Delaware • Southern Crossroads (SOX) • Vanderbilt University • CENIC* • California Institute of Technology (Caltech) • MREN* • University of Michigan (via MERIT and CIC OmniPoP) • Note: USLHCNet will also be connected to the DYNES Instrument via a peering relationship with DYNES • * temporary configuration of static VLANs until a future group
Phase 3 Group B Members • Mid‐Atlantic Gigapop in Philadelphia for Internet2 (MAGPI) • University of Pennsylvania • Metropolitan Research and Education Network (MREN) • Indiana University (via I-Light and CIC OmniPoP) • University of Wisconsin Madison (via BOREAS and CIC OmniPoP) • University of Illinois at Urbana‐Champaign (via CIC OmniPoP) • The University of Chicago (via CIC OmniPoP) • Lonestar Education And Research Network (LEARN) • Southern Methodist University (SMU) • Texas Tech University • University of Houston • Rice University • The University of Texas at Dallas • The University of Texas at Arlington • Florida International University (Connected through FLR)
Phase 3 Group C Members • Front Range GigaPop (FRGP) • University of Colorado Boulder • Northern Crossroads (NoX) • Boston University • Harvard University • Tufts University • CENIC** • University of California, San Diego • University of California, Santa Cruz • CIC OmniPoP*** • The University of Iowa (via BOREAS) • Great Plains Network (GPN)*** • The University of Oklahoma (via OneNet) • The University of Nebraska‐Lincoln • ** deploying their own dynamic infrastructure • *** static-configuration based
Agenda • Group Name/Future Meetings • LHCONE • DYNES • SC11 Planning • AOB
It’s the Most Wonderful Time of the Year • SC11 is ~1 month out • What’s brewing? • LHCONE Demo • Internet2, GÉANT, and end sites in the US and Europe (UMich and CNAF initially targeted; any US end site is open to get connected) • The idea is to show “real” applications and use of the new network • DYNES Demo • Booths (Internet2, Caltech, Vanderbilt) • External deployments (Group A and some Group B) • External to DYNES (CERN, SPRACE, HEPGrid)
It’s the Most Wonderful Time of the Year • What’s brewing? • 100G Capabilities • ESnet/Internet2 coast-to-coast 100G network • Lots of other demos using this • SRS (SCinet Research Sandbox) • Demonstration of high-speed capabilities • Lots of entries • Use of OpenFlow devices • Speakers at the Internet2 Booth • CIOs from campus/federal installations • Scientists • Networking experts
Agenda • Group Name/Future Meetings • LHCONE • DYNES • SC11 Planning • AOB
AOB • UF Lustre work? • MWT2 Upgrades?
High Energy & Nuclear Physics (HENP) SIG October 4th 2011 – Fall Member Meeting Jason Zurawski - Internet2 Research Liaison For more information, visit http://www.internet2.edu/science