
Enabling Data Intensive Applications with Advanced Optical Technologies


Presentation Transcript


  1. Enabling Data Intensive Applications with Advanced Optical Technologies Joe Mambretti, Director (j-mambretti@northwestern.edu), International Center for Advanced Internet Research (www.icair.org); Director, Metropolitan Research and Education Network (www.mren.org); Partner, StarLight/STAR TAP; PI, OMNInet (www.icair.org/omninet). iGrid 2005, University of California, San Diego, Sept. 26-30, 2005

  2. Introduction to iCAIR: Accelerating Leading-Edge Innovation and Enhanced Global Communications through Advanced Internet Technologies, in Partnership with the Global Community • Creation and Early Implementation of Advanced Networking Technologies for the Next Generation Internet: All-Optical Networks, Terascale Networks • Advanced Applications, Middleware, Large-Scale Infrastructure, NG Optical Networks and Testbeds, Public Policy Studies and Forums Related to NG Networks

  3. iGrid 2005, The Global Lambda Integrated Facility: "World of Tomorrow," September 26-30, 2005, University of California, San Diego, California Institute for Telecommunications and Information Technology [Cal-(IT)2], United States • Co-Organizers: Tom DeFanti, Maxine Brown

  4. Enabling Applications With Advanced Controllable Optical Transport • Flexibility and Control (Not Simply "Bit Blasting") • Providing Applications With Direct Control of Core Resources, Including at Layer 1 and Layer 2, Nationally and Internationally • AMROEBA-EA: Distributed Computational Astrophysics Modeling • DataWave: Ultra-High-Performance File Transfer Enabled by Dynamic Lightpaths (Parallel Optical Data Transport) • LightForce: High-Performance Data Multicast Enabled by Dynamic Lightpaths • Exploring Remote and Distributed Data Using Teraflows • International 10 Gb Line-Speed Security • Virtual Machine Turntable • Multiple OptIPuter Applications

  5. LambdaGrid Control Plane Paradigm Shift • Traditional Provider Services: Invisible, Static Resources; Centralized Management; Invisible Nodes and Elements; Hierarchical, Centrally Controlled, Fairly Static; Limited Functionality and Flexibility • LambdaGrid Services: Distributed Devices, Dynamic Services; Visible & Accessible Resources, Integrated As Required By Apps; Unlimited Functionality and Flexibility • Ref: OptIPuter Backplane Project, UCLP

  6. A Next Generation Architecture: Distributed Facility Enabling Many Types of Networks/Services [Diagram: many environments (virtual organizations, real organizations, sensors, labs, government agencies, global applications, financial organizations, large-scale system control, control plane) mapped onto specialized networks, e.g., FinancialNet, SensorNet, HPCNet, TransLight, Commodity Internet, Intelligent Power Grid Control, R&DNet, GovNet, MedNet, RFIDNet, PrivNet, BioNet, MediaGridNet, International Gaming Fabric]

  7. ODIN Architecture [Diagram: high-performance applications (HP-PPFS, HP-APP2, HP-APP3, HP-APP4) access virtual services (VS) over TCP; previously OGSA/OGSI, soon OGSA/OASIS WSRF] • ODIN Server: Creates/Deletes Lightpaths, Status Inquiry (see the sketch below) • Access Policy (AAA), Process Registration • Lambda Routing: Topology Discovery, DB of Physical Links; Create New Path, Optimize Path Selection; Traffic Engineering; Constraint-Based Routing; O-UNI Interworking and Control Integration; Path Selection, Protection/Restoration Tool (GMPLS) • GMPLS Tools (with CR-LDP): Lightpath Signaling for I-NNI; Attribute Designation, e.g., Uni/Bidirectional; Lightpath Labeling; Link Group Designations • System Manager: Discovery, Config, Communicate, Interlink, Stop/Start Module, Resource Balance, Interface Adjustments, Process Instantiation, Monitoring • Discovery/Resource Manager (Incl. Link Groups, Addresses), OSM, ConfDB, UNI-N • Data Plane: Physical Processing, Monitoring and Adjustment; Control Channel Monitoring, Physical Fault Detection, Isolation, Adjustment, Connection Validation, etc.
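
To make the service abstraction above concrete, here is a minimal, self-contained Python sketch of an ODIN-style lightpath lifecycle (create, query status, delete). The class and method names are hypothetical stand-ins: the real ODIN server, its wire protocol, and its AAA/GMPLS integration are not modeled here.

```python
# Illustrative sketch only: an in-memory stand-in for an ODIN-style
# lightpath service. Names are hypothetical, not the actual ODIN API.
import itertools

class OdinClientSketch:
    """Models the create/status/delete lifecycle an application would drive."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._lightpaths = {}  # lightpath id -> attributes

    def create_lightpath(self, src, dst, bidirectional=True):
        # A real implementation would do topology discovery,
        # constraint-based routing, AAA checks, and GMPLS/CR-LDP signaling.
        lp_id = next(self._ids)
        self._lightpaths[lp_id] = {"src": src, "dst": dst,
                                   "bidirectional": bidirectional,
                                   "state": "UP"}
        return lp_id

    def status(self, lp_id):
        return self._lightpaths.get(lp_id, {"state": "UNKNOWN"})

    def delete_lightpath(self, lp_id):
        self._lightpaths.pop(lp_id, None)

odin = OdinClientSketch()
lp = odin.create_lightpath("EVL/UIC", "StarLight")
print(odin.status(lp))      # state: UP
odin.delete_lightpath(lp)
print(odin.status(lp))      # state: UNKNOWN
```

The essential point is who calls the interface: the application itself drives the create and delete operations, rather than requesting them from a centralized operator.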

  8. New: Intelligent Application Signaling* • Client Layer Control Plane: Communications Service Layer; Policy-Based Access Control; Client Message Receiver; Signal Transmission; Data Plane Controller; Data Plane Monitor (IAS Server) • Optical Layer Control Plane: UNI and I-UNI Controllers Driving the Optical Layer Switched Traffic (Data) Plane [Diagram: client data plane and server attached through client interfaces (CI) to the client layer traffic plane] • Multiservice: Unicast, Bidirectional, Multicast, Burst Switching (a request sketch follows below) • *Also Control Signaling, et al.
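
As a rough illustration of what an application-to-IAS request might carry, the sketch below defines a hypothetical message covering the multiservice options named on the slide. All field names are assumptions for illustration; the slide does not specify the actual IAS message format.

```python
# Hypothetical shape of a client request to an IAS server.
# Field names are illustrative assumptions, not the real message format.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class PathRequestSketch:
    client_id: str
    service: str            # "unicast" | "bidirectional" | "multicast" | "burst"
    src_endpoint: str
    dst_endpoints: list = field(default_factory=list)  # several for multicast
    bandwidth_gbps: float = 1.0

request = PathRequestSketch(
    client_id="hp-app2",
    service="multicast",
    src_endpoint="starlight-ome-1",
    dst_endpoints=["uic-evl-1", "nu-tech-1"],
)
print(json.dumps(asdict(request), indent=2))  # what the IAS server might receive
```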

  9. Multilayer Control Planes and Optical Packet Switching [Diagram: edge device clusters and edge routers attached to a mesh of optical packet routers performing optical routing] • Ubiquitous Management Plane: Access, Engineering, Restoration, Performance, Resource Use Audits • Ubiquitous Control Plane: Provisioning, Wavelength Assignment (see the first-fit sketch below), Wavelength Routing • Data Plane: Optical Transport; Optical Layer, Switched Lightpaths
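
The wavelength assignment function named on this slide is commonly implemented with simple heuristics. Below is a first-fit sketch under the wavelength-continuity constraint (the same lambda must be free on every link of the path); the link/lambda model is a simplification for illustration, not the testbed's actual control code.

```python
# First-fit wavelength assignment sketch: choose the lowest-indexed lambda
# that is free on every link of the path (wavelength continuity).
def first_fit_lambda(path_links, in_use, num_lambdas=8):
    for lam in range(num_lambdas):
        if all(lam not in in_use.get(link, set()) for link in path_links):
            for link in path_links:
                in_use.setdefault(link, set()).add(lam)  # reserve on each link
            return lam
    return None  # blocked: no continuous wavelength available

usage = {}
print(first_fit_lambda(["NWUEN-1", "NWUEN-5"], usage))  # -> 0
print(first_fit_lambda(["NWUEN-5", "NWUEN-6"], usage))  # -> 1 (0 busy on NWUEN-5)
```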

  10. [Diagram: grid and computer clusters (each node = 1 GE; tens, hundreds, or thousands of nodes) attached via GE and 10GE links to cluster switches (CSW, ASW) and DWDM links; multiwavelength fiber carries multiple lambdas per fiber; IEEE 802.3 10GE serial LAN PHY interfaces, e.g., 15xx nm] • Near-Term Potential for 10 G Electrical to Backplane; Longer-Term Potential for Driving Light to Backplane via Si, New Polymers • N*N*N Multiwavelength Optical Amplifier; Optical and Lambda Monitors for Wavelength Precision, etc. • Power Spectral Density Processor (Source + Measured PSD) • Multiple Optical Impairment Issues, Including Accumulations

  11. OMNInet Network Configuration [Diagram: four photonic nodes, Northwestern (TECH/NU-E), EVL/UIC, LAC/UIC, and S. Federal, interconnected over NWUEN-1 through NWUEN-9 fiber spans with Optera Metro 5200 OFAs, Optera 5200 10 Gb/s transponders (TSPR), 8600 switches, and 10/100/GigE-attached clusters; fiber in use and not in use marked] • 8x8x8-Lambda Scalable Photonic Switch • Trunk Side: 10 G WDM; OFA on All Trunks • Initial Config: 10 Lambdas (All GigE) • 10 GbE WAN PHY Interfaces at 1310 nm; S. Federal 10GE LAN PHY (Dec 03) • Campus Fiber (4-16 Pairs) • StarLight Interconnect With Other Research Networks; Link to Ca*Net 4

  12. DOT Sites, I-WIRE, and OMNInet [Map: 2- to 18-pair fiber links among Starlight (NU-Chicago), Qwest 455 N. Cityfront, UC Gleacher 450 N. Cityfront, UIC, UIUC/NCSA, McLeodUSA 151/155 N. Michigan, Doral Plaza, Level(3) 111 N. Canal, Illinois Century Network, James R. Thompson Ctr/City Hall/State of IL Bldg, UChicago, IIT, and Argonne] • All DOT Links Here = GE • Because of StarLight Renovation, This Cluster Is at iCAIR • Argonne Not Yet Part of Testbed; Some Links Not Yet Provisioned

  13. Chicago [map]

  14. OMNInet • The OMNInet Testbed Is Developing New Architectural Designs for Communication Services Based on Dynamically Provisioned Lightpaths, Supported by Agile Optical Networks • This Research Is Investigating New Architecture and Technologies for L1-L2, While Also Exploring New Complementary L3 and L4 Methods • This Research Is Creating Fundamentally New Methods for Agile Optical Transport, Enabling Migration From Legacy Architectures, Especially Those Oriented to Centralized Management and Control • The OMNInet Testbed Reduces Hierarchical Layers and Implements Highly Distributed Controls, e.g., Enabling Applications To Provision Lightpaths Dynamically • Since 2001, the Testbed Has Had No SONET Components; OOO Switches at the Core Have Supported 24 Individually Addressable Lightpaths Among 4 Core Nodes • Next: Integration of SONET-Less Optical Transport With SONET Switching • Through Various Research Projects, the Testbed Has Been Extended to Sites Nationally and Internationally

  15. OMNInet Key Themes and Issues • A Key Goal Is Enhancing Service Layer Abstractions and Enabling Direct Manipulation of Core Optical Resources • Major Improvements Over Centralized Control of Core Resources Via Highly Distributed Control • Decentralization: Applications Can Directly Control Lightpaths • Advanced Dynamic Lightpath Provisioning Based on Controllable, Deterministic Optical Networks • Increased Integration Between Edge and Core Infrastructure • Agile Solid-State Components (e.g., CMOS-Based, PIC-Based) • Availability of Cost-Effective Fiber and DWDM Equipment Provides for Highly Disruptive Price/Capability Ratios

  16. Some Results • Almost All Lightpaths Had Minimal to No Packet Loss • In a Number of Tests, Large-Scale Data Streams Were Transported For Many Hours With No Packet Losses (Measured) • Measured Performance of Various Provisioning Processes • More Than 1000 Successful Lightpath Setup/Teardown Operations (a timing sketch follows below) • No Optical Component Failures; Several Electronic Component Failures • Multiple Successful Demonstrations of New Service/Technology Capabilities, Including New Provider Services and New Internal Optical Transport Capabilities • For Some Traffic, SONET/Routers Not Required (Would Have Been a Performance Barrier); For Some Traffic, a Multi-Service Approach • Exceptional Grid Application Results: Extremely High Performance • Created and Successfully Demonstrated Multiple Times a Basic Control/Management Plane Architectural Model and Prototype Implementation • Demonstrated Utility of Dynamic Lightpath Switching to High-Performance Applications • Created "Optical Dynamic Intelligent Network" (ODIN) Service Layer Architecture • Created Lightpath Control Protocol • Demonstrated the Potential of Photonic Data Services, Wavelength Switching, Layer 1 Security • Demonstrated that Many Emerging Technologies Are Ready for Production (e.g., GMPLS Can Be a Basis for Production Services)
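
A sketch of how setup/teardown performance figures like those above could be harvested: time many create/delete cycles and report aggregate statistics. The stub service here is a placeholder standing in for a real control-plane client; real measurements would of course include signaling and switching latency.

```python
# Timing harness sketch for lightpath setup/teardown measurements.
# _StubService is a placeholder; point `service` at a real client instead.
import statistics
import time

class _StubService:
    def create_lightpath(self, src, dst):
        return 1
    def delete_lightpath(self, lp_id):
        pass

def measure_cycles(service, n=1000):
    durations = []
    for _ in range(n):
        t0 = time.perf_counter()
        lp = service.create_lightpath("node-a", "node-b")
        service.delete_lightpath(lp)
        durations.append(time.perf_counter() - t0)
    return statistics.mean(durations), max(durations)

mean_s, worst_s = measure_cycles(_StubService())
print(f"mean {mean_s * 1e6:.1f} us, worst {worst_s * 1e6:.1f} us per cycle")
```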

  17. OptIPuter • The OptIPuter Meets Precise Needs of Applications vs. Today's Environments, Whose Centralized Management and Infrastructure Restrictions Compromise Applications • The OptIPuter Enables Creation of Dynamic Distributed Virtual Computers • Assumes Ubiquitous Lightpaths • Resources Include Optical Networking Components: Dynamic Lightpaths, Supported by Deterministic Next Generation Optical Networks • For the OptIPuter, the "Network" Is a Large-Scale Distributed System Bus with a Distributed Control Architecture: a "Backplane" Based on Dynamically Provisioned Datapaths (sketched below) • The OptIPuter Addresses the Needs of Extremely Large-Scale Sustained Data Flows, Even Those Exhibiting Dynamic, Unpredictable Behaviors • New Architecture, Methods, and Technologies at All Levels, L1-L7
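
One way to read the "network as backplane" idea in code: an application brackets a large flow with datapath allocation and release, treating the lightpath like any other schedulable resource. The provision/release calls below are hypothetical placeholders, not an actual OptIPuter API.

```python
# Sketch of application-scoped datapath allocation ("network as backplane").
# The provisioning here is a print-statement placeholder, not a real API.
from contextlib import contextmanager

@contextmanager
def datapath(src, dst, gbps):
    path_id = f"{src}->{dst}@{gbps}G"  # stand-in for a real provisioning call
    print("provisioned", path_id)
    try:
        yield path_id
    finally:
        print("released", path_id)     # teardown when the flow completes

with datapath("ucsd-cluster", "starlight-cluster", gbps=10) as p:
    print("large-scale transfer runs here, pinned to", p)
```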

  18. AMROEBA-EA • The AMROEBA-EA Project Was Established to Investigate the Potential for Conducting Data-Intensive ENZO Simulations On a Large-Scale, Distributed Infrastructure Based on Dynamic Lightpath Provisioning. • This Project Is Investigating New Mechanisms That Allow ENZO Processes to Utilize Additional Resources, Including Those at Remote Locations World-Wide.

  19. AMROEBA-EA and AMR-ENZO • AMROEBA-EA: An Adaptive Mesh Refinement Optical Enzo Backplane Architecture Enabled Application • AMR-ENZO Is Used for Computational Astrophysics Modeling • AMR-ENZO Is Used To Create Many Types of Cosmological Structure Formation Simulations • Originally Created By Greg Bryan Under the Supervision of Michael Norman While at NCSA • AMR-ENZO Has Been Parallelized Using the MPI Message-Passing Library (see the mpi4py sketch below) • AMR-ENZO Can Run On Any Shared- or Distributed-Memory Parallel Supercomputer or Compute Cluster • AMROEBA-EA Shows How These Types of Applications Can Utilize Distributed Computational Resources And Lightpath Switching
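
Since the slide notes AMR-ENZO is parallelized with MPI, here is a minimal mpi4py sketch of that general pattern: each rank advances its own slice of the mesh and exchanges boundary data with a neighbor. It illustrates the message-passing structure only and is not ENZO's actual decomposition or solver.

```python
# Minimal mpi4py halo-exchange sketch (run: mpiexec -n 4 python amr_sketch.py).
# Toy update rule; illustrates the MPI structure, not ENZO's physics.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.random.rand(64)      # this rank's slice of the mesh
right = (rank + 1) % size       # periodic neighbor topology
left = (rank - 1) % size

for step in range(10):
    # Halo exchange: send our rightmost cell, receive the left neighbor's.
    halo = comm.sendrecv(local[-1], dest=right, source=left)
    local[0] = 0.5 * (local[0] + halo)   # toy boundary smoothing
    local *= 0.99                        # stand-in for the local physics step

total = comm.reduce(local.sum(), op=MPI.SUM, root=0)
if rank == 0:
    print("global sum after 10 steps:", total)
```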

  20. Visualization Source Code: Mike Norman, UCSD

  21. Source Code: Mike Norman, UCSD

  22. Overall Networking Plan [Diagram: dedicated lightpaths from San Diego (iGRID, UCSD) over CENIC/Pacific Wave to Seattle, across NLR to Chicago (StarLight), and on to NetherLight and the University of Amsterdam; Route B] • 4 x 1 Gbps Paths + One Control Channel • 4 Dedicated Paths

  23. AMROEBA Network Topology [Diagram: SURFnet/University of Amsterdam (UvA VanGogh Grid Clusters) and iCAIR DOT Grid Clusters connected through L2 switches and an OME to an L3 (GbE) core at StarLight, with visualization and control links to the iGRID conference demonstration]

  24. Summary Optical Services: Baseline + 5 Years

  25. Summary Optical Technologies: Baseline + 5 Years
