This week Mellanox Technologies, Ltd. announced ConnectX-5, what it calls the most advanced 10, 25, 40, 50, 56 and 100Gb/s InfiniBand and Ethernet intelligent adapter on the market. ConnectX-5 introduces several new features aimed at higher performance, future-proofing, and ROI, including smart offloading engines, and it is the first adapter compatible with both PCI Express 3.0 and 4.0.
The tremendous growth of data is creating new business potential, especially in areas such as real-time data processing for high performance computing (HPC), data analytics, machine learning, national security, and Internet of Things (IoT) applications. However, this same growth strains many aspects of storing and analyzing that data. Businesses will need fast interconnects, and they will look to companies such as Mellanox to provide intelligent interconnects that can execute data algorithms as data moves through the data center.
Key features and benefits include:
- Greater HPC performance with new Message Passing Interface (MPI) offloads, such as MPI Tag Matching and MPI AlltoAll operations, advanced dynamic routing, and new capabilities to perform various data algorithms
- The highest available message rate of 200 million messages per second, 33% higher than the previous-generation Mellanox ConnectX-4 adapter and nearly twice that of competing products
- The first interconnect adapter to support both PCI Express 3.0 and 4.0 connectivity options, including an integrated PCIe switch. For upcoming PCI Express 4.0-enabled systems, Mellanox states that ConnectX-5 will deliver an aggregated throughput of 200Gb/s
- New Accelerated Switching and Packet Processing (ASAP2) technology that enhances Open vSwitch (OVS) offloading, resulting in significantly higher data transfer performance without overloading the CPU. Together with native RDMA and RoCE support, ConnectX-5 will dramatically improve Cloud and NFV platform efficiency.
- New acceleration engines for NVM Express (NVMe) over Fabrics (NVMf), enabling end users to connect remote subsystems with flash appliances, leveraging RDMA technology to achieve faster application response times and better scalability across virtual data centers
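To give a sense of what the MPI Tag Matching offload relieves the host of, here is a simplified software model of the matching an MPI library performs in its receive path: posted receives and "unexpected" messages are searched by (source, tag), with wildcard support. This is a conceptual sketch, not Mellanox's implementation; all names are hypothetical, and ConnectX-5's contribution is moving this kind of search from host software into the adapter.

```python
# Simplified model of MPI tag matching, the bookkeeping that ConnectX-5
# can offload to the NIC. All names here are illustrative, not a real API.

ANY_SOURCE = -1  # stands in for MPI_ANY_SOURCE
ANY_TAG = -2     # stands in for MPI_ANY_TAG

class TagMatcher:
    def __init__(self):
        self.posted = []      # receives posted by the application: (source, tag)
        self.unexpected = []  # messages that arrived before a matching receive

    @staticmethod
    def _matches(recv, source, tag):
        r_src, r_tag = recv
        return r_src in (ANY_SOURCE, source) and r_tag in (ANY_TAG, tag)

    def post_recv(self, source, tag):
        """Post a receive; deliver a queued unexpected message if one matches."""
        for i, (m_src, m_tag, payload) in enumerate(self.unexpected):
            if self._matches((source, tag), m_src, m_tag):
                self.unexpected.pop(i)
                return payload
        self.posted.append((source, tag))
        return None

    def incoming(self, source, tag, payload):
        """Handle an arriving message; deliver it if a posted receive matches."""
        for i, recv in enumerate(self.posted):
            if self._matches(recv, source, tag):
                self.posted.pop(i)
                return payload
        # No posted receive matched: queue the message as unexpected.
        self.unexpected.append((source, tag, payload))
        return None
```

In software, this search runs on a host CPU core for every arriving message; performing it in adapter hardware is what lets the offload raise message rates without consuming CPU cycles.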