
FEDERICA



Presentation Transcript


  1. SUNET TREFpunkt 20, May 14, 2009. FEDERICA: Federated E-infrastructure Dedicated to European Researchers Innovating in Computing network Architectures. Björn Rhoads, KTH/CSC

  2. Agenda • Overview (at a Glance and Goals) • Network (Layout, PoPs and Hardware) • http://www.fp7-federica.eu/

  3. FEDERICA at a Glance • What: European Commission co-funded project in its 7th Framework Programme, in the area “Capacities - Research Infrastructures” • 3.7 M€ EC contribution, 5.2 M€ budget, 461 man-months • When: 1 January 2008 - 30 June 2010 (30 months) • Who: 20 partners, stakeholders in network operations & research: • 11 National Research and Education Networks, DANTE (GÉANT2), TERENA, 4 universities, Juniper Networks, 1 SME, 1 research centre - Coordinator: GARR (Italian NREN) • Where: Europe-wide shared infrastructure, based on NREN & GÉANT2 facilities, open to external connections

  4. FEDERICA partners – National Research & Education Networks (11) • CESNET Czech Rep. • DFN Germany • FCCN Portugal • GARR (coordinator) Italy • GRNET Greece • HEAnet Ireland • NIIF/HUNGARNET Hungary • NORDUnet Nordic countries • PSNC Poland • Red.es Spain • SWITCH Switzerland • Small Enterprise: Martel Consulting Switzerland • NREN Organizations: TERENA The Netherlands, DANTE United Kingdom

  5. FEDERICA partners – Universities & Research Centers • i2CAT Spain • KTH Sweden • ICCS (NTUA) Greece • UPC Spain • PoliTO Italy • System Vendor: Juniper Networks Ireland

  6. FEDERICA Goals Summary • Forum and support for researchers/projects on the “Future Internet” • Support of experimental activities to validate theoretical concepts, scenarios, architectures, control & management solutions; users have full control of their virtual slice • Provide a European-scale, network- and system-agnostic e-infrastructure, deployed in phases; provide its operation, maintenance and on-demand configuration • Validate and gather experimental information for the next generation of research networking, also through basic tool validation • Dissemination and fostering cooperation between NRENs and the user community • Contribution to standards in the form of requirements and experience

  7. FEDERICA Goals – Out of Scope • Extended research, e.g. advanced optical technology developments • Development and support of Grid applications • Offer computing power • Offer transit capacity

  8. FEDERICA substrate

  9. FEDERICA substrate • The substrate is configured as a single domain • Makes it easier to interoperate with remote networks and users • Own IP space and AS number • Public AS number: 47630 • Public IPv4 block: 194.132.52.0/23 • IPv6 block: 2001:760:3801::/48 • Currently full Internet peering through 4 NRENs • GARR, PSNC, CESNET and DFN • fp7-federica.eu registered • Access granted only to users
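
To put the address blocks above in perspective, here is a minimal sketch using only Python's standard-library ipaddress module. The prefixes are the ones quoted on the slide; the /27 split into per-PoP subnets is purely an illustrative assumption, not the project's actual addressing plan.

    import ipaddress

    # Public address blocks assigned to the FEDERICA substrate (from the slide).
    ipv4_block = ipaddress.ip_network("194.132.52.0/23")
    ipv6_block = ipaddress.ip_network("2001:760:3801::/48")

    # The /23 holds 512 IPv4 addresses; carving it into /27s (an assumed,
    # illustrative split) would give 16 small per-PoP subnets.
    print(ipv4_block.num_addresses)                      # 512
    print(len(list(ipv4_block.subnets(new_prefix=27))))  # 16

    # The /48 leaves 16 bits of subnetting space, i.e. 65536 possible /64 LANs.
    print(2 ** (64 - ipv6_block.prefixlen))              # 65536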

  10. Network Topology, version 8.4 • Diagram of the substrate: core and non-core PoPs in DE, PL, IE, IT, CZ, CH, HU, PT, ES and GR, plus i2CAT and KTH (reached via NORDUnet/SUNET); link types are 1 GbE VLAN or L2 MPLS, 1 GbE over GÉANT2 (GN2+), and 1 GbE to be determined

  11. Network Topology • FEDERICA in GÉANT infrastructure

  12. Typical Core PoP Infrastructure • Core PoP architecture: • 2x virtualization servers • 1x additional server • Juniper MX 480 • Connections to the GÉANT PoP • BGP peering enabled with the local NREN infrastructure • Optional non-GÉANT connections through local infrastructure

  13. Juniper Core switch • Core PoPs are equipped with a Juniper MX 480 with the following configuration: • 6 FPC slots, 40 Gbps throughput each • JUNOS OS • DPC combines packet forwarding and Ethernet interfaces on a single board • Switch Control Board (SCB) – allows remote management of the box hardware (powers cards on/off, controls clocking, system reset and rebooting, booting, and monitors and controls system functions including fan speed, board power status, PDM status and control) • 4x AC power supplies • L2/L3 support • MPLS support • Logical router capabilities

  14. Juniper Core switch • Each Juniper has a DPC card with: • 40 x 1 GbE SFP interfaces • 4 Packet Forwarding Engines (10 Gbps capacity each) • IPv4/IPv6 support • L2/L3 support • IEEE 802.3ad link aggregation support • Firewall filters • BGP, OSPF support • MPLS support • Packet mirroring • IEEE 802.1Q VLAN support • VPLS and VPN support • DPCE-R-40GE-SFP in CESNET and GARR • DPCE-R-Q-40GE-SFP (with enhanced queuing) in PSNC and DFN

  15. Non-core PoP Infrastructure • Non-core PoPs are less restricted regarding neighbor connectivity • Only one server is obligatory at a non-core PoP • The router is replaced by a less powerful switch

  16. Non-core PoP switches • Non-core PoPs will be equipped with Juniper EX3200 switches • 24 x 10/100/1000Base-T ports • plus 4x SFP uplinks • Due to the large number of sites connecting to RedIRIS, there are not enough SFP ports in an EX3200 chassis, so RedIRIS will be equipped with two EX4200 switches stacked together.

  17. V-node equipment • Server configuration: • 2x AMD Opteron 1.9 GHz quad-core • Up to 64 GB RAM in 8 DIMM slots per processor • 16-32 GB RAM installed • 3x 10/100/1000Base-T interfaces • 1x 10/100/1000Base-T eLOM interface • Serial DB-9 port • Two 500 GB SATA II hard disks • DVD-ROM • RAID controller • 1U form factor • 2x dual-port PCI-E 10/100/1000Base-T cards (7 + 1 eLOM 10/100/1000Base-T interfaces in total)

  18. Access to VMware and virtual machines • Users access their slices via an SSH console • For each slice a Virtual Slice Management Node is created, which acts as a proxy between the FEDERICA infrastructure and the Internet
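
The slides only state that SSH access goes through this per-slice management node. As an illustration of what such a proxied login could look like, here is a minimal sketch using the paramiko library; the host names, user name and key path are hypothetical.

    import os
    import paramiko

    # Hypothetical names: the public Slice Management Node and a virtual node
    # inside the slice (only the management node is reachable from the Internet).
    PROXY_HOST = "slice42-mgmt.fp7-federica.eu"   # assumed name, not from the slides
    TARGET_HOST = "10.0.42.11"                    # assumed private address of a slice node
    USER = "researcher"
    KEY = os.path.expanduser("~/.ssh/id_rsa")

    # 1) SSH to the management node, which sits between the slice and the Internet.
    proxy = paramiko.SSHClient()
    proxy.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    proxy.connect(PROXY_HOST, username=USER, key_filename=KEY)

    # 2) Open a TCP channel through the proxy to the slice node's SSH port ...
    channel = proxy.get_transport().open_channel(
        "direct-tcpip", (TARGET_HOST, 22), ("127.0.0.1", 0))

    # 3) ... and run the second SSH session over that channel.
    target = paramiko.SSHClient()
    target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    target.connect(TARGET_HOST, username=USER, key_filename=KEY, sock=channel)

    stdin, stdout, stderr = target.exec_command("uname -a")
    print(stdout.read().decode())

    target.close()
    proxy.close()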

  19. Slice access

  20. NOC Operations • Contact with users primarily over email • federica-noc@fp7-federica.eu • Adopted RT (Request Tracker) to keep track of cases • Open-source solution • Central repository for logs from equipment • Dispatcher duty rotates among the partners in SA2.1 • KTH and FCCN are the main contributors • The substrate is manually configured • Terminal sessions to routers • VMware Infrastructure Client for managing v-nodes • If the number of nodes grows, we might move to VMware vCenter • Evaluating tools to aid in slice creation • Coordination point for the DANTE and NREN NOCs
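
Since cases reach the NOC by email and RT creates tickets from incoming mail, a user-side request can be as simple as the sketch below (Python standard library only). The sender address and SMTP relay are assumptions; only the NOC address comes from the slide.

    import smtplib
    from email.message import EmailMessage

    # Compose a plain-text request; RT turns the incoming mail into a ticket.
    msg = EmailMessage()
    msg["From"] = "researcher@example.org"          # hypothetical sender
    msg["To"] = "federica-noc@fp7-federica.eu"      # NOC address from the slide
    msg["Subject"] = "Slice request: 3 v-nodes at PSNC and CESNET"
    msg.set_content(
        "Please create a slice with three virtual nodes and a 1 GbE virtual link\n"
        "between PSNC and CESNET, with full user control of the node configuration."
    )

    # Send via the user's own outgoing mail relay (hostname is an assumption).
    with smtplib.SMTP("smtp.example.org") as smtp:
        smtp.send_message(msg)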

  21. Monitoring • Example of monitoring information for a single slice

  22. Eudemo slice – test path host4 - host9 • Diagram of the demo slice: virtual nodes (numbered 1-15) at PSNC (poz.pl), CESNET (pra.cz), DFN (erl.de), GARR (mil.it) and KTH, with the test path running from host4 to host9
