SDN dilemma: Linux kernel networking vs. kernel bypass


If we've learned anything in the technology business in the last 25 years, it would be to never underestimate the Linux kernel. Why, then, have so many networking companies been so eager to bypass the Linux kernel -- or more specifically, the Linux kernel networking stack? What could be so wrong with the networking packet arteries in the Linux kernel that motivates so many of us to bypass them?

There are two main reasons. First, the kernel networking stack is too slow -- and the problem is only getting worse with the adoption of higher-speed networking in servers and switches (10GbE, 25GbE, and 40GbE today, rising to 50GbE and 100GbE in the near future). Second, handling networking outside the kernel allows new technology to be plugged in without the need to change core Linux kernel code.

For those two reasons, and with the additional advantage that many kernel bypass technologies are open source and/or specified by standards bodies, the proponents of bypass solutions continue to push data center operators to adopt them.

Kernel bypass solutions

We have seen many kernel bypass solutions in the past, most notably RDMA (Remote Direct Memory Access), TOE (TCP Offload Engine), and OpenOnload. More recently, DPDK (Data Plane Development Kit) has been used in some applications to bypass the kernel, and then there are new emerging initiatives such as FD.io (Fast Data Input Output) based on VPP (Vector Packet Processing). More will likely emerge in the future.

Technologies like RDMA and TOE create a parallel stack in the kernel and solve the first problem (namely, the "kernel is too slow"), while OpenOnload, DPDK, and FD.io (based on VPP) move networking into Linux user space to address both the speed and the technology plug-in requirements. Building these technologies in Linux user space avoids the need for kernel changes, eliminating the extra effort of convincing the Linux kernel community of the bypass technologies' usefulness and of upstreaming them into the kernel.
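To make the user-space model concrete, here is a minimal sketch of the poll-mode receive loop at the heart of a typical DPDK application. It uses real DPDK APIs (rte_eal_init, rte_eth_rx_burst), but all port, queue, and memory-pool setup is elided and the processing step is a placeholder -- this illustrates the pattern, not a complete program.

```c
/* Minimal sketch of a DPDK poll-mode receive loop (illustrative only).
 * Port/queue configuration and mbuf pool creation are elided; see the
 * DPDK sample applications for complete setup code. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_debug.h>

#define BURST_SIZE 32

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer: hugepages, poll-mode
     * drivers, CPU cores. The NIC is driven entirely from user space;
     * the kernel networking stack never sees these packets. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    const uint16_t port = 0;  /* assumes port 0 is configured and started */
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        /* Busy-poll the NIC's RX ring directly -- no interrupts,
         * no system calls, no kernel stack traversal. */
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* Application-level packet processing would go here. */
            rte_pktmbuf_free(bufs[i]);
        }
    }
    return 0;
}
```

Note the unconditional for (;;) loop: a poll-mode design keeps one or more CPU cores spinning at full utilization whether or not traffic is flowing, which is exactly the core-count cost discussed below.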

Kernel bypass challenges

The challenges related to adopting parallel stacks outside of the kernel networking stack are obvious to data center operators challenged with scaling their infrastructure to a very large number of servers. With parallel networking stacks comes a seemingly endless list of security, manageability, robustness, hardware vendor lock-in, and protocol compatibility issues.

For example, there are implementations of Open vSwitch and OpenContrail that use DPDK as a kernel bypass approach. These DPDK implementations are constrained in two ways. First, it is difficult and sometimes impossible to evolve features rapidly and in lockstep with kernel-based open source software innovations. Second, although they can deliver the levels of performance and security needed by VMs and applications, doing so requires a significant number of x86 CPU cores, reducing the overall efficiency of the data center infrastructure.

Nonetheless, some data center operators who have perhaps a few hundred servers to manage and who run a single application, such as high-performance computing or high-frequency trading clusters, may find it practical to use such parallel kernel bypass stacks. The same applies to dedicated storage clusters.

But can the clogging of the kernel networking stack be fixed without resorting to parallel bypass stacks? Yes, it can. The right way to solve the two problems above is to accelerate the performance of the kernel networking stack transparently, using smart networking hardware, without any vendor lock-in.

Enter SmartNICs

SmartNICs seek to solve these problems without bypassing the kernel. SmartNICs are programmable NICs (network interface cards), enabling the vendors who provide such products to innovate server networking hardware at the speed of software -- a practical requirement in modern software-defined and NFV-enabled data center infrastructure.

Netronome SmartNICs provide both basic or traditional NIC features and the advanced features needed by cloud data center and telco service providers. These advanced features include the ability to offload rich networking functionality, such as that provided by the virtual switches and virtual routers used in software-defined networking environments and NFV-optimized compute servers. Offloading these compute-intensive networking functions to the SmartNIC brings higher levels of performance and security to VMs, increases the number of applications that can be delivered per server, and provides an overall boost in data center efficiency. SmartNIC features can evolve rapidly with open source networking innovations such as Open vSwitch, OpenStack, OpenContrail, and the IO Visor project's eBPF (extended Berkeley Packet Filter).

The benefits of deploying SmartNICs aren't limited to increased performance and a richer feature set. There are significant TCO savings as well, because SmartNICs can replace the traditional NICs used in servers. SmartNICs are priced competitively with traditional NICs and provide significant savings by freeing up valuable server CPU resources for VMs and applications, driving up server efficiency. Given that servers consume as much as 60 percent of total data center infrastructure costs, the ability to support greater workloads per server using SmartNICs promises significant savings.

Kernel bypass proponents like to argue that the server networking performance needed in SDN and NFV applications can be achieved using high-performance x86 CPU cores, and therefore traditional NICs are all that are needed. But in practical benchmarks and in real life, kernel bypass mechanisms might need as many as 24 CPU cores to get the required networking performance. That's practically consuming the entire server just for networking.

SmartNIC vendors are in full agreement that kernel networking performance is a real problem that will only get worse as operators build out data centers to meet the demands of ever-increasing numbers of mobile and IoT devices. But they don't believe that bypassing the operating system kernel solves the problem. Rather, networking in the Linux kernel needs to be reinvented, without resorting to implementations that result in parallel, redundant networking stacks, whether inside or outside the operating system.

SmartNICs address these challenges by offloading the kernel-based networking data path implementations that are available today and evolving rapidly in the wider Linux open source community. Linux kernel stack technologies such as eBPF and the traffic classifier (TC) hold the promise of allowing SmartNIC vendors like Netronome to stick with the Linux kernel networking stack while allowing data center operators to scale efficiently.
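For a sense of what that kernel-native path looks like, here is a minimal eBPF/XDP program in C -- a generic sketch, not Netronome's actual offload code. Compiled with clang's BPF target, it attaches at the driver's earliest receive hook, and on SmartNICs that support XDP offload, the same bytecode can run on the NIC itself rather than on the host CPU.

```c
/* Minimal eBPF/XDP packet filter (illustrative sketch). It parses the
 * Ethernet header and drops anything that is not IPv4, passing the rest
 * up to the unmodified kernel networking stack. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_ipv4_only(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;
    struct ethhdr *eth = data;

    /* The in-kernel verifier insists on explicit bounds checks
     * before any packet data is touched. */
    if ((void *)(eth + 1) > data_end)
        return XDP_DROP;

    /* Purely illustrative policy: drop every non-IPv4 frame. */
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_DROP;

    return XDP_PASS;  /* hand the packet to the normal kernel stack */
}

char _license[] SEC("license") = "GPL";
```

With iproute2, a command along the lines of ip link set dev eth0 xdp obj filter.o loads the program into the driver, and the xdpoffload mode requests that capable hardware execute it on the NIC itself (driver and device support vary, so treat the exact invocation as an assumption). Either way, the program lives inside the kernel's own toolchain, verifier, and security model -- no parallel stack required.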

The resounding recommendation from the Linux community has always been to avoid kernel bypass. Like all fundamental and simple ideas, it has held sway in the past, holds true today, and will remain true in the future.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to  newtechforum@infoworld.com .
