Mellanox Connectivity Solutions for Scalable HPC
Highest Performing, Most Efficient End-to-End Connectivity for Servers and Storage
April 2010
Connectivity Solutions for Efficient Computing
• Mellanox Technologies is a leading supplier of high-performance connectivity solutions for servers and storage
• End-to-end InfiniBand solutions
• 10GbE and Low Latency Ethernet (LLE) NIC solutions
Leading connectivity solution provider for servers and storage
Complete End-to-End Highest Performance Solution
• Cluster software management
• HPC application accelerations: CORE-Direct, GPU-Direct
• Networking efficiency/scalability: adaptive routing, QoS, congestion control
• Servers and storage high-speed connectivity: adapters, switches/gateways, cables, ICs
Mellanox’s InfiniBand Leadership
• Highest performance: highest throughput, lowest latency, host message rate, CPU availability
• Converged network: Virtual Protocol Interconnect, hardware bridging, auto sensing
• Efficient, scalable and flexible networking: congestion control, adaptive routing, advanced QoS, multiple topologies
• Efficient use of CPUs and GPUs: GPU-Direct, CORE-Direct, transport offload
ConnectX-2 Virtual Protocol Interconnect
• Applications (App1 … AppX) share a consolidated application programming interface
• Networking protocols: TCP/IP/UDP, Sockets
• Storage protocols: NFS, CIFS, iSCSI, NFS-RDMA, SRP, iSER, Fibre Channel, clustered storage
• Clustering protocols: MPI, DAPL, RDS, Sockets
• Management: SNMP, SMI-S, OpenView, Tivoli, BMC, Computer Associates
• Acceleration engines: networking, clustering, storage, virtualization, RDMA (a minimal RDMA sketch follows below)
• Ports: 10GigE / LLE and 10/20/40Gb/s InfiniBand
Any protocol over any convergence fabric
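To give a flavor of what the RDMA acceleration engines expose to software, here is a minimal sketch using the standard libibverbs C API (an illustration added here, not taken from the slide): it opens the first available HCA and registers a buffer so the adapter can read and write it directly, without CPU copies. Real applications normally reach this layer through MPI, iSER, SRP, or the sockets stacks listed above.

/* Minimal sketch, assuming libibverbs is installed; error handling abbreviated. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **dev_list = ibv_get_device_list(NULL);
    if (!dev_list || !dev_list[0]) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);   /* e.g. a ConnectX HCA */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                     /* protection domain */

    size_t len = 4096;
    void *buf = malloc(len);
    /* Registration pins the memory and gives the HCA keys for direct access */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}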
Efficient Data Center Solutions
• 40Gb/s network: InfiniBand, Ethernet over IB, FC over IB, FC over Ethernet*
• Servers with adapters connect over 40G InfiniBand/FCoIB or 10G Ethernet/FCoE through switches and bridge gateways (IB to Eth, IB to FC, Eth to FC) to IB, Ethernet and FC storage (8G Fibre Channel, 10G Ethernet)
* via ecosystem products
Highest Performance
• Highest throughput: 40Gb/s node to node and 120Gb/s switch to switch; up to 50M MPI messages per second
• Lowest latency: 1µs MPI end-to-end, 0.9µs InfiniBand latency for RDMA operations, 100ns switch latency at 100% load
• True zero scalable latency: flat latency up to 256 cores per node
(A sketch of how MPI end-to-end latency is typically measured follows below.)
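The following is a minimal MPI ping-pong sketch of the kind commonly used to measure one-way end-to-end latency; it is an illustration added here, not a Mellanox benchmark.

/* Run with two ranks, e.g.: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char msg = 0;
    MPI_Status st;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)  /* half the round-trip time is the one-way latency */
        printf("one-way latency: %.2f us\n", (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}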
Efficient Use of CPUs and GPUs
[Diagram: two servers, each with GPU, GPU memory, chipset, CPU and system memory, connected through Mellanox InfiniBand]
• GPU-Direct
• Works with existing NVIDIA Tesla and Fermi products
• Enables the fastest GPU-to-GPU communications
• Eliminates the CPU copy and write process in system memory (see the staging sketch below)
• Reduces GPU-to-GPU communication time by 30%
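To make concrete which copy is being discussed, the sketch below (an illustration added here, using the CUDA runtime C API together with MPI) shows the conventional path: a GPU buffer is staged through pinned host memory before it can be handed to the InfiniBand HCA. With GPU-Direct, the CUDA and InfiniBand drivers can share the same pinned host region, removing the extra host-side copy step on this path.

/* Hedged sketch, not Mellanox/NVIDIA sample code. Compile with nvcc and an MPI
 * wrapper; requires CUDA and MPI installations and two ranks with GPUs. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = 1 << 20;           /* 1 MB payload */
    void *d_buf, *h_buf;
    cudaMalloc(&d_buf, n);              /* GPU memory */
    cudaMallocHost(&h_buf, n);          /* pinned host staging buffer */

    if (rank == 0) {
        cudaMemcpy(h_buf, d_buf, n, cudaMemcpyDeviceToHost);       /* GPU -> host */
        MPI_Send(h_buf, (int)n, MPI_BYTE, 1, 0, MPI_COMM_WORLD);   /* host -> wire */
    } else if (rank == 1) {
        MPI_Recv(h_buf, (int)n, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_buf, n, cudaMemcpyHostToDevice);       /* host -> GPU */
    }

    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}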
Efficient Use of CPUs and GPUs
• CORE-Direct (Collectives Offload Resource Engine)
• Collective communications are operations used for system synchronization, data broadcast or data gathering (illustrated in the sketch below)
• CORE-Direct performs the collective on the HCA instead of the CPU
• Eliminates system noise and jitter issues
• Increases the CPU cycles available for applications
• Transport offload
• Full transport offload maximizes CPU availability for user applications
• The only solution to achieve 40Gb/s throughput with ~5% CPU overhead
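A brief illustration of the kind of collective CORE-Direct targets, written as a plain MPI_Allreduce (an illustration added here; collectives offload itself is transparent to the application and is enabled in the MPI library and HCA, not in user code):

/* Every rank contributes a partial sum and all ranks receive the global result.
 * With collectives offload, the reduction and its message progression run on the
 * HCA, leaving the CPU free to keep computing. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;   /* stand-in for a locally computed partial result */
    double global = 0.0;

    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %.0f\n", size - 1, global);

    MPI_Finalize();
    return 0;
}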
Efficient, Scalable and Flexible Networking
• Congestion control
• Eliminates network congestion (hot spots) caused by many senders targeting a single receiver (the incast pattern sketched below)
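For reference, the many-to-one pattern in question looks like the following sketch (an illustration added here); without fabric-level congestion control, bursts like this can create a hot spot at the receiver's link and slow unrelated traffic.

/* Many-to-one ("incast") traffic pattern: all ranks send to rank 0 at once. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;                  /* 1M ints per sender */
    int *buf = calloc(n, sizeof(int));

    if (rank != 0) {
        MPI_Send(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD);   /* everyone targets rank 0 */
    } else {
        for (int src = 1; src < size; src++)
            MPI_Recv(buf, n, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 received from %d senders\n", size - 1);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}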
Efficient, Scalable and Flexible Networking
[Chart: 220-server-node system with Mellanox InfiniBand HCAs and switches]
• Adaptive routing
• Eliminates network congestion caused by point-to-point communications sharing the same network path
Efficient, Scalable and Flexible Networking
• Multiple topologies: fat-tree (non-blocking or oversubscribed), mesh, 3D torus, hybrid solutions (a sizing example follows below)
• Advanced Quality of Service: fine-grained QoS, consolidation without performance degradation
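As an illustrative fat-tree sizing example (not taken from the slide): a two-tier non-blocking fat-tree built from 36-port InfiniBand switches dedicates 18 ports of each leaf switch to nodes and 18 to uplinks, so 36 leaf switches plus 18 spine switches yield 36 × 18 = 648 non-blocking node ports. Oversubscribing the uplinks instead (for example 24 node ports to 12 uplinks, a 2:1 ratio) trades per-node bandwidth for more nodes per switch tier.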
Mellanox Performance/Scalability Advantage
• Weather simulations at scale with Mellanox end-to-end
Top100 Interconnect Share Over Time
• InfiniBand: the natural choice for large-scale computing
• All based on Mellanox InfiniBand technology
The 20 Most Efficient Top500 Systems
[Highlighted systems: #10 Jülich, #22 The Earth Simulator Center, #55 US Army Research Laboratory, #93 National Institute for Materials Science]
• InfiniBand solutions enable the most efficient systems in the Top500
• The only standard interconnect solution in the top 100 highest-utilization systems
Thank You HPC@mellanox.com