
Byungil Jeong

Visualcasting: Scalable Real-Time Image Distribution in Ultra-High Resolution Display Environments. Byungil Jeong, Electronic Visualization Laboratory, University of Illinois at Chicago.


Presentation Transcript


  1. Visualcasting: Scalable Real-Time Image Distribution in Ultra-High Resolution Display Environments. Byungil Jeong, Electronic Visualization Laboratory, University of Illinois at Chicago

  2. Introduction • Data-intensive domains rely on Grid technology and visualization. • The need for an infrastructure to support collaborative work has grown dramatically.

  3. Scalable Adaptive Graphics Environment (SAGE) • SAGE is specialized middleware for real-time streaming of extremely high-resolution graphics and high-definition video. • The streams travel from remote clusters to scalable display walls over ultra-high-speed networks (tens of gigabits per second). • Multiple applications (multitasking) • Desktop management: window move, resize, and overlap • Scalable to LambdaVision: an 11x5 tiled display with 100-megapixel resolution

  4. Proposed Solution • A fundamental requirement of high-resolution collaborative visualization systems is multicasting of visualizations. • Visualcasting: a scalable real-time image multicasting service for ultra-high resolution display environments. • SAGE Bridge: a high-speed bridging system that distributes pixel data received from rendering clusters to each endpoint. • It is deployed on a high-performance PC cluster equipped with 10-gigabit network interfaces. • It incrementally allocates bridge nodes as the number of endpoints increases. • It accounts for the heterogeneity of endpoints: different display resolutions, computing power, and network bandwidth.

  5. Current and Proposed Models • Current model: the latest SAGE prototype • Proposed model: Visualcasting

  6. Visualcasting Pipeline [Diagram: the sending side (overloaded) performs rendering, duplication, and partitioning before streaming to the display endpoints]

  7. Introducing SAGE Bridge [Diagram: the sending side renders a 4-megapixel image and streams it at 1 Gbps to the SAGE Bridge, which duplicates and partitions the pixels and streams 10 megapixels at 10 Gbps to the display endpoints]

  8. Major Contribution and Research Questions • Extending SAGE to support scalable real-time image multicasting for tiled displays. • Enabling distant collaboration with multiple endpoints. • How can simultaneous data distribution to multiple receivers be scaled arbitrarily? • What parameters define ‘arbitrarily’? • How do those parameters affect distribution performance? • If multiple approaches are possible, how should the most appropriate one be chosen based on those parameters?

  9. Prior Accomplishments • Designed and implemented a prototype of SAGE • Current architecture • Dynamic pixel stream reconfiguration • Pixel block based streaming • Achieved results • Publications:
  Jeong, B., Renambot, L., et al., “High-Performance Dynamic Graphics Streaming for Scalable Adaptive Graphics Environment,” accepted by Supercomputing 2006.
  Leigh, J., Renambot, L., Johnson, A., Jeong, B., et al., “The Global Lambda Visualization Facility: An International Ultra-High-Definition Wide-Area Visualization Collaboratory,” Journal of Future Generation Computer Systems, Volume 22, Issue 8, October 2006, pp. 964-971.
  Renambot, L., Jeong, B., et al., “Collaborative Visualization using High-Resolution Tiled Displays,” CHI 06 Workshop on Information Visualization and Interaction Techniques for Collaboration Across Multiple Displays, April 2006.
  Jeong, B., Jagodic, R., et al., “Scalable Graphics Architecture for High-Resolution Displays,” IEEE InfoVis Workshop on Using Large, High-Resolution Displays for Information Visualization, October 2005.
  Renambot, L., Rao, A., Singh, R., Jeong, B., et al., “SAGE: the Scalable Adaptive Graphics Environment,” WACE 2004, September 2004.

  10. Current SAGE Architecture [Diagram: UI clients and applications (App1-App3, each using SAIL) connect to the FreeSpace Manager, which exchanges SAGE messages with the SAGE Receivers driving the tiled display; pixel streams and a synchronization channel link SAIL to the receivers] SAIL: SAGE Application Interface Library

  11. Dynamic Pixel Stream Reconfiguration • Initial phase: establish network connections • Configuration phase: configure the streams • Streaming phase: stream pixels

  12. Pixel Block Based Streaming [Diagram: image-frame streaming vs. pixel-block streaming, each from a streamer to the SAGE display] • SAGE Bridge needs pixel block based streaming. • Pixel block partitioning is independent of window layouts. • Control information (such as a new window layout) is incorporated into the stream.
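The layout-independent partitioning above can be sketched as a simple tiling of the frame into fixed-size rectangles. This is an illustrative sketch, not SAGE's actual API; the function name and block geometry are assumptions.

```python
# Hypothetical sketch: partition an image frame into fixed-size pixel blocks,
# independent of any display window layout. Edge blocks may be smaller.

def partition_into_blocks(width, height, block_w, block_h):
    """Return (x, y, w, h) rectangles that exactly cover the frame."""
    blocks = []
    for y in range(0, height, block_h):
        for x in range(0, width, block_w):
            blocks.append((x, y,
                           min(block_w, width - x),
                           min(block_h, height - y)))
    return blocks

blocks = partition_into_blocks(1024, 768, 256, 256)
```

Because the blocks tile the whole frame regardless of where windows sit, a bridge can reassign any subset of blocks to any endpoint without consulting the sender's window layout.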

  13. Achieved Results • Scientific visualization at tens of megapixels with interactive frame rates. • 12 rendering nodes, 1 GigE, LAN, UDP: 11.2 Gbps, no packet loss. • 10 rendering nodes, 10 Gbps WAN (CaveWAVE), UDP, real application: 9.0 Gbps, at most 1% packet loss. • Pixel block based streaming: the basis of Visualcasting. • Successful international demonstrations at iGrid 2005 and SC05. • Used by international collaborators. [Figure: network paths and bandwidth used during the iGrid demonstration]

  14. Prior Related Work • Scalable Graphics Engine (SGE, IBM): a hardware frame buffer for parallel computers with sixteen 1 GigE inputs, four dual-link DVI outputs, and 16 megapixels. • WireGL (Humphreys): sort-first parallel rendering for tiled displays. • Chromium (Humphreys), Aura (Germans): distributing visualizations to and from cluster-driven tiled displays. • Distributed Multi-head X11 (XDMX, Martin): an X server for a tiled display, supporting Chromium and serial applications. • TeraVision (Singh/EVL): scalable, platform-independent, high-resolution video streaming over WAN.

  15. Comparison with Other Approaches [Table comparing Visualcasting with the approaches above]

  16. Proposed SAGE Architecture • Multiple Free Space Managers • Each node in the SAGE Bridge cluster is assigned a sub-image (a group of pixel blocks). • The FSManagers control the SAGE Bridge. • The SAGE Bridge controls SAIL.

  17. Application Launch Procedure • An FSManager no longer launches an application. • A SAGE UI has information about all the FSManagers. • For the second endpoint, the first FSManager directs the SAGE Bridge to connect to the second FSManager.

  18. How to Arbitrarily Scale Simultaneous Data Distribution to Multiple Receivers? • Incremental bridge node allocation: if the initially allocated nodes become overloaded, the SAGE Bridge allocates additional nodes for the Visualcasting session. • SAIL must re-partition images to account for the newly added nodes and for load balancing. • New network connections and reconfiguration of existing streams are needed. • What are the conditions for adding or removing SAGE Bridge nodes? • How can jitter on existing streams be minimized? • If no additional node is available: request SAIL to down-sample or compress pixel blocks.
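One possible allocation condition for the incremental scheme above is to add a node before any NIC saturates. This is only a sketch under assumed numbers: the per-node capacity, the headroom factor, and the function name are illustrative, not measured SAGE Bridge behavior.

```python
import math

# Assumptions, not SAGE Bridge constants:
NODE_CAPACITY_GBPS = 10.0   # one 10-gigabit NIC per bridge node
HEADROOM = 0.8              # allocate a new node before a NIC saturates

def nodes_needed(total_stream_gbps):
    """Minimum bridge nodes so no node carries more than HEADROOM * capacity."""
    return max(1, math.ceil(total_stream_gbps / (NODE_CAPACITY_GBPS * HEADROOM)))
```

Under these assumptions a 9 Gbps session would trigger allocation of a second node, since one node's 8 Gbps effective budget is exceeded.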

  19. How to Decide the Most Appropriate Approach Based on the Parameters?

  20. Partial Indirect Distribution • Intended to be an optimal solution: a combination of the advantages of the other approaches • Less re-routed traffic (direct distribution, A) • Less hindrance to the display process (local bridge, B) • Load balancing (whole indirect distribution, C) • Local reconfiguration (B, C) • Bridge node selection strategy: (1) include the nodes currently displaying the application; (2) exclude the nodes heavily used by other applications; (3) preferably include the nodes adjacent to those selected by (1); (4) preferably avoid changing the node set.
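The four-rule selection strategy above can be read as a scoring problem. The sketch below is a toy interpretation: the weights, data structures, and function name are assumptions chosen to make the rules' priorities explicit, not the thesis's actual algorithm.

```python
# Toy scoring version of the four-rule bridge-node selection strategy.
# Hard rules (1, 2) get large weights; soft preferences (3, 4) get small ones.

def select_bridge_nodes(nodes, displaying, heavily_used, adjacency,
                        current_set, k):
    """Pick k nodes by descending score; ties broken by node name."""
    def score(n):
        s = 0
        if n in displaying:   s += 4   # rule 1: include displaying nodes
        if n in heavily_used: s -= 8   # rule 2: exclude heavily loaded nodes
        if any(a in displaying for a in adjacency.get(n, ())):
            s += 2                     # rule 3: prefer neighbors of rule-1 nodes
        if n in current_set:  s += 1   # rule 4: avoid changing the node set
        return s
    return sorted(sorted(nodes), key=score, reverse=True)[:k]

chosen = select_bridge_nodes(
    nodes=["a", "b", "c", "d"],
    displaying={"a"}, heavily_used={"b"},
    adjacency={"c": ["a"]}, current_set={"d"}, k=2)
```

With these weights a displaying node always beats a merely adjacent one, and a heavily loaded node is chosen only when nothing else remains.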

  21. What Parameters Define ‘Arbitrarily’? • Heterogeneous display resolution and bandwidth • Adapt the rendering resolution to the smallest tiled display. • The SAGE Bridge down-samples pixel blocks for low-resolution displays. • The SAGE Bridge compresses pixel blocks for endpoints with low network bandwidth and high display resolution. • Heterogeneous computing power • The global data transfer rate may drop to the data consumption rate of the slowest endpoint. • Down-sample pixel blocks or drop frames for slow endpoints.
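Down-sampling a pixel block for a lower-resolution endpoint, as described above, can be as simple as nearest-neighbor subsampling. This is a minimal sketch on a flat row-major buffer; the function name and representation are illustrative assumptions, not SAGE code.

```python
# Minimal nearest-neighbor down-sampling of one pixel block:
# keep every `factor`-th pixel in each dimension.

def downsample_block(pixels, width, height, factor):
    """pixels: flat row-major list, one entry per pixel.
    Returns (smaller_pixels, new_width, new_height)."""
    out = []
    for y in range(0, height, factor):
        for x in range(0, width, factor):
            out.append(pixels[y * width + x])
    return out, width // factor, height // factor

small, w, h = downsample_block(list(range(16)), 4, 4, 2)
```

A factor of 2 cuts the bandwidth needed for that endpoint to a quarter, which is the lever the bridge would use when no extra node is available.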

  22. How Do Those Parameters Affect Performance? - Pixel Block Size - • Small pixel blocks: increase flexibility in image partitioning, but increase network API calls on the sending side and OpenGL API calls on the display side. • Large pixel blocks: increase network overhead due to the indivisible pixel block assumption. • Solutions: find an optimal pixel block size; aggregate pixel blocks before network transfer and before downloading to the graphics card; allow exceptions to the indivisible pixel block assumption.
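The aggregation solution above amounts to greedy packing of small block payloads into larger send buffers, trading per-call overhead for buffer copies. A minimal sketch, with assumed names and a byte-string payload model:

```python
# Greedily pack small pixel-block payloads into buffers of at most
# max_bytes each, so one network send carries many blocks.

def aggregate_blocks(blocks, max_bytes):
    buffers, current = [], bytearray()
    for payload in blocks:
        # Flush when the next payload would overflow the current buffer.
        if current and len(current) + len(payload) > max_bytes:
            buffers.append(bytes(current))
            current = bytearray()
        current += payload
    if current:
        buffers.append(bytes(current))
    return buffers

packed = aggregate_blocks([b"aa", b"bb", b"cc"], 4)
```

An oversized single payload still goes out alone; the cap only limits how many small blocks share one send.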

  23. How Do Those Parameters Affect Performance? - Network Protocol Interfaces - • Pixel blocks generated from application images are streamed using blocking network sends. • Blocking sends slow down reliable streaming over wide-area networks. • Non-blocking sends using LambdaStream can improve performance. • This requires an interface to check whether pixel block buffers have been completely transferred. • Another interface is needed to request the necessary bandwidth and return the available bandwidth.
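LambdaStream's API is not reproduced here; the sketch below only illustrates the blocking vs. non-blocking distinction with a plain socket pair. The point is that a non-blocking send returns as soon as the kernel buffer fills, letting the streamer work on other pixel blocks instead of stalling.

```python
import socket

def nonblocking_send_all(sock, data):
    """Queue as much of data as possible without blocking; return bytes sent."""
    sock.setblocking(False)
    sent = 0
    while sent < len(data):
        try:
            sent += sock.send(data[sent:])
        except BlockingIOError:
            break  # kernel buffer full: caller can stream other blocks meanwhile
    return sent

a, b = socket.socketpair()
payload = b"pixelblock" * 10
n = nonblocking_send_all(a, payload)
```

The "check if buffers are completely transferred" interface mentioned above corresponds here to comparing the returned count against the payload length and retrying the remainder later.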

  24. How Do Those Parameters Affect Performance? - Pixel Stream Compression - • Typical bandwidth utilization of SAGE applications at EVL: serial application, 60~70% (1 GigE); parallel application, 20~90% (10 Gbps WAN). • Pixel block compression is a good solution: the SAGE Bridge can distribute compressed pixel blocks without decompressing them. • It increases the load on SAIL and the SAGE Display, decreases the load on the SAGE Bridge, and increases the scalability of the SAGE Bridge. • Candidate compression techniques: run-length encoding, RGB-to-YUV color transform, DXT compressed textures, wavelet transform.

  25. Comparison with Multicasting Approaches • IP multicast and reliable layered multicast are applicable only to multicast-enabled networks. • No intermediate pixel data processing. • The source image must be partitioned considering the window layouts of all endpoints. • The number of available multicast addresses limits the number of partitions. • The overhead incurred by multicast group membership changes increases window operation delay. • Multicasting over 10-gigabit networks is very expensive.

  26. Metric for Success and Timeline • Metric for success: scalability, achievable bandwidth, and latency; how successfully the approach supports heterogeneous endpoints. • Timeline (August 2006 to June 2007): preliminary exam, first implementation, SC demo preparation, CG&A paper, scalable version, HPDC paper, full functionality, JPDC paper, thesis writing, defense preparation.

  27. Experiment Plan and Equipment Needed • The local test bed consists of a 28-node cluster, a 10-gigabit switch, and a four-node SAGE Bridge. • The SAGE Bridge will be moved to StarLight for real-world tests. • Possible endpoints: Univ. of Michigan, Calit2/UCSD, SARA in Amsterdam, and KISTI in Korea.

  28. Conclusion • I propose Visualcasting, a scalable real-time image distribution service for ultra-high resolution display environments. • Visualcasting extends SAGE to support distant collaboration with multiple endpoints. • Its success will be measured by scalability, achievable bandwidth, and latency as it supports heterogeneous endpoints.
