Where Could We First See SDN Success in the Network Operator Space?

Datetime: 2016-08-23 00:22:20          Topic: SDN

Software-defined networking has a lot of potential, but we’ve learned in our industry that vague potential doesn’t create a revolution.  More important than potential are the specific places where network operators see SDN creating a significant shift in capex.  There’s still a lot of variation to contend with regarding exactly how soon these places will be sown with the seeds of SDN revolution, but at least we know where things might happen.

The obvious SDN opportunity, the carrier cloud, is also the most problematic in timing and scope.  Operators five years ago were gaga over the cloud opportunity; it outstripped everything else in my surveys.  Today few operators see themselves as giants in public cloud computing, and most of those who see a role for cloud services think that role will develop out of something else, like IoT.  Offsetting the market uncertainty is the assurance that operators at least know what SDN technology in the cloud would look like.

Cloud computing demands virtualization on a large scale, and the more cloud applications, components, or features you host, the more virtual networks you need.  Technologies like containers make the SDN opportunity bigger by concentrating more workloads on the same servers.  Even non-cloud-service applications of cloud data centers, including NFV and IoT, would demand a host of virtual networks, and SDN breaks down the barriers created by L2/L3 virtualization through VLAN and VPN technology.
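
The VLAN barrier here is concrete: an 802.1Q tag carries a 12-bit VLAN ID (with two values reserved), so a shared L2 domain tops out at roughly 4,000 tenant segments, while overlay encapsulations such as VXLAN carry a 24-bit identifier.  A minimal sketch of the arithmetic:

```python
# Segment-ID space available to L2 virtualization schemes.
# 802.1Q reserves IDs 0 and 4095, so usable VLANs = 2**12 - 2.
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

usable_vlans = 2**VLAN_ID_BITS - 2   # 4094 segments per L2 domain
vxlan_vnis = 2**VXLAN_VNI_BITS       # about 16.7 million virtual networks

print(f"802.1Q VLANs per domain: {usable_vlans}")
print(f"VXLAN VNIs:              {vxlan_vnis}")
```

At carrier-cloud scale, thousands of tenants each wanting multiple virtual networks exhaust the 802.1Q space quickly, which is why simple VLAN-based segmentation needs replacing or augmenting.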

The problem here is that while carrier cloud demands SDN, it demands more than simple OpenFlow or vSwitches.  Google and Amazon, no slouches in the cloud, know that and have developed their own virtualization models.  Google's, called Andromeda, is an open architecture, and both are well-publicized, but the operators apparently didn't get the memo (or press release).  To do virtualization well, you need not only separate address spaces and subnets for the tenant applications, but also an architected gateway approach that includes address mapping between the virtual subnets and customer or other networks.  As a standard, SDN doesn't address this, though in theory it would not be difficult to add.
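
To make the gateway requirement concrete, here is an illustrative sketch (not any real SDN or cloud API; the table structure and names are invented) of the address-mapping function such a gateway would perform, letting two tenants reuse the same private subnet while remaining reachable from outside:

```python
# Hypothetical gateway address-mapping table: each tenant gets its own
# address space, so identical internal addresses don't clash, and the
# gateway translates them to distinct externally routable addresses.

gateway_map = {
    # (tenant_id, internal_address) -> external_address
    ("tenant-a", "10.0.0.5"): "203.0.113.10",
    ("tenant-b", "10.0.0.5"): "203.0.113.11",  # same internal IP, no clash
}

def to_external(tenant_id: str, internal_addr: str) -> str:
    """Map a tenant-private address to its external representation."""
    return gateway_map[(tenant_id, internal_addr)]

print(to_external("tenant-a", "10.0.0.5"))  # 203.0.113.10
print(to_external("tenant-b", "10.0.0.5"))  # 203.0.113.11
```

The point is that the mapping is per-tenant state the network must hold somewhere architected, which is exactly what the base SDN specifications leave out.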

The technical reason why data-center SDN is a low apple is that the technology of Ethernet needs augmenting because of forwarding-table bridging limitations and VLAN segmentation, and it’s easy to do that with SDN.  That means, logically, that the next place where operators see SDN deploying is in carrier Ethernet.  Some form of Ethernet-metro connectivity is intrinsic in carrier cloud because there’d surely be multiple data centers in a given metro area.  It’s easy to see how this could be adapted to provide carrier Ethernet services to businesses, and to provide a highly agile and virtualized Ethernet-based L2 substrate for other (largely IP) services.

The challenge with the business side of the carrier Ethernet driver is the opportunity scope.  First, business Ethernet services are site-connectivity services.  Globally there are probably on the order of six million candidate sites if you’re very forgiving on pricing assumptions, compared of course to billions of consumer broadband and mobile opportunities.  With consumer broadband making high-speed services (over 50 Mbps) available routinely, it’s harder to defend the need to connect all these sites with Ethernet, and if you look at the number of headquarters sites that do demand Ethernet you cut the site population by an order of magnitude or more.

On the plus side, the number of “Ethernet substrate” opportunities is growing as metro networks in particular get more complex and data-center interconnect (DCI) interest grows for cloud providers, content providers, and enterprises.  The Metro Ethernet Forum (MEF) wants to grow it more and faster through its “Third Network” concept, which aims to formalize the mechanisms that would let global Ethernet create a subnetwork on which agile connection-layer services would ride.  This is a good idea at many levels, but in the mission of being the underlayment of choice for virtual connection services, Ethernet has competition.

That competition comes from the third opportunity source: the “virtual wire.”  On the surface the virtual-wire notion isn’t much different from using SDN for carrier Ethernet in transport missions rather than as a retail service, but there are significant differences that could tip the scales.

SDN can fairly easily create featureless tunnels that mimic the behavior of a physical-layer or wire connection (hence the name).  If these virtual wires are used to build what is essentially private physical-layer infrastructure, they could groom optical bandwidth down to serviceable levels, segment networks so that they are truly private, and, when supplemented with virtual router instances, create VPNs or VLANs.  Those are the same missions Ethernet could support, but because virtual wires have no protocol at all, they demand less control-plane behavior and are simpler to operate.

One of the battlegrounds for the two possible WAN missions is likely to be mobile infrastructure.  Everyone in the vendor WAN SDN space has been pushing mobile applications, and in particular the virtualization of the Evolved Packet Core (EPC).  The fact is that at this point the overwhelming majority of these solutions are very simplistic; they don’t really take full advantage of the agility of SDN.  That means that there’s still time for somebody to put a truly agile strategy out there, one that takes a top-down approach to network virtualization.

The other battleground is SD-WAN.  It’s unlikely that SD-WAN will be based on carrier Ethernet whatever the goals of the MEF’s Third Network, simply because Internet tunneling is a big part of any viable model.  If we were to see a truly organized virtual-overlay-network tunnel and virtual node management approach emerge, we could see virtual connection layers take off and pull SDN with them.  There are some signs this could happen, but it’s not yet a fully developed trend.
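
The overlay model SD-WAN implies can be sketched very simply.  In this illustrative example (names and structure are my own, not any vendor's API), virtual nodes are joined by tunnels that may ride the Internet or private transport, and the overlay picks a live tunnel per destination without caring about the underlay:

```python
# Hypothetical SD-WAN overlay: tunnels over mixed underlays, selected
# per destination based on liveness rather than underlay type.

tunnels = [
    {"id": "t1", "to": "branch-1", "underlay": "internet", "up": True},
    {"id": "t2", "to": "branch-1", "underlay": "mpls",     "up": False},
    {"id": "t3", "to": "branch-2", "underlay": "internet", "up": True},
]

def pick_tunnel(dest):
    """Select any live tunnel to the destination, regardless of underlay."""
    for t in tunnels:
        if t["to"] == dest and t["up"]:
            return t["id"]
    return None  # no reachable path to this destination

print(pick_tunnel("branch-1"))  # the live Internet tunnel carries the traffic
```

The "organized tunnel and virtual node management" the trend needs is essentially a disciplined way of building and maintaining this table at scale, across operators and underlays.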

A common technical issue for these pathways to SDN to address is management, and one aspect of management is brought out by the Ethernet-versus-virtual-wire face-off.  Ethernet has management protocols that can exchange information on the L2 service state.  These mechanisms are lacking in virtual wires because there’s no real protocol.  Some would say that gives Ethernet an advantage, but the problem is that in order for an SDN implementation of Ethernet to deliver management data you’d have to derive it from something because the devices themselves (white boxes) are not intrinsically Ethernet devices.

Deriving management data in SDN is really a function of delivering service state from the only place where it’s known—the SDN controller.  In adaptive networks, devices exchange topology and status to manage forwarding, and those exchanges are explicitly targeted for elimination in SDN.  That’s fine in terms of making traffic engineering centralized and predictable, but it means that the topology/state data isn’t available for use in establishing the state of a service.  Sure, the central controller has even better data because it sees all and knows all, but that presents two questions.

First, does the controller really see and know all?  We don’t really have specifications to describe how service state, as a composite state derived from the conditions of multiple forwarding rules and perhaps paths, can be known or can be communicated.  In fact, many network problems might be expected to cut off portions of the network from the controller.  There are good solutions here, but they’re per-vendor.
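
The composition problem can be shown in a few lines.  In this hedged sketch, a service's state is derived from the states of every forwarding segment it rides, and "unknown" models the partition case where a device has been cut off from the controller:

```python
# Composing one service state from per-segment states, as an SDN
# controller would have to.  There is no standard specifying this;
# the three-state model here is purely illustrative.

def service_state(segment_states):
    """Worst-of composition: unknown beats down, down beats up."""
    if any(s == "unknown" for s in segment_states):
        return "unknown"  # controller can no longer see and know all
    if any(s == "down" for s in segment_states):
        return "down"
    return "up"

print(service_state(["up", "up", "up"]))       # up
print(service_state(["up", "down", "up"]))     # down
print(service_state(["up", "unknown", "up"]))  # unknown
```

Even this trivial model exposes the gap: once any segment reports "unknown," the controller cannot honestly assert service state, which is precisely the partition risk described above.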

Second, how is the data delivered to a user?  Today it’s done via control packets that are introduced at each access point, but is that also the strategy for the SDN age?  Simple control packets can’t just be forwarded back to the SDN controller for handling; you need their context, meaning at least the interface they originated from.  In any event, wouldn’t it be easier to have management data delivered from a centrally addressed repository?
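
The context requirement can be sketched too.  In this illustrative example (the event structure is invented, not a real controller API), a control packet punted to the controller is meaningless without at least the switch and interface it arrived on, because the same frame signifies different things at different access points:

```python
# Why a punted control packet needs metadata: the controller must know
# where the frame entered the network to resolve which service it
# belongs to.  Structure is hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class PacketIn:
    switch_id: str
    in_port: int    # without this, the frame cannot be interpreted
    payload: bytes

def classify(event: PacketIn) -> str:
    """Resolve which service access point a control frame belongs to."""
    return f"{event.switch_id}:port{event.in_port}"

evt = PacketIn(switch_id="sw-edge-3", in_port=12, payload=b"oam-frame")
print(classify(evt))  # sw-edge-3:port12
```

A centrally addressed management repository would avoid the punt-and-contextualize dance entirely, which is the argument the question above is making.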

You can see from this summary that the big problem with deployment of SDN technology by network operators is the lack of a simple but compelling business case.  Yes, we can find missions for SDN (just as we can find missions for NFV) but the missions are complicated by the fact that they smear across a variety of technical zones, most of which aren’t part of the SDN specifications.  Thus, SDN has to be integrated into a complete network context and we can’t do that yet because some of the pieces (on the business side, in technology, or both) are missing.

I think we’re going to see SDN applications beyond carrier cloud take the lead here, in part because the NFV-carrier-cloud dimension is more complicated to architect and justify and so might take longer to mature.  My bet is on somebody doing a really strong virtual-EPC architecture and then telling the story well, only because mobile infrastructure is already being upgraded massively and 5G could make that upgrade even bigger.  It’s easiest to change things fundamentally when you’re already committed to making fundamental changes.
