A software-defined network: is it an evolution or a revolution in networking? SDN has been hyped for several years, yet it has gained little traction outside of MSPs and Fortune 500 companies, with telcos adopting its cousin, SD-WAN. When, if ever, will the SDN meltwater reach the fertile plains of the SME?
For this, we really need to look to history.
Previously published on TVP Strategy (The Virtualization Practice)
One of the frustrations of SDN has always been the fact that if you ask six different people for a definition of SDN, you’ll get ten different answers, at least. This stems in part from the usual IT buzzword symptoms. When a system is used for competitive advantage, each company wants to define its own brand of “The Thing”—to try to “own” the thing and become the de facto standard for it. There is also a deeper issue with SDN, precisely because it is networking.
When we talk about “the network,” we often think of one thing: one set of interconnected computers. Sometimes we think of the internet: of many interconnected networks. In reality, there are many different networks that even the smallest of companies use every day now. Each of these has different needs, different solutions, and different flavours of SDN. Add into that public and hybrid cloud, and we have many, many networks in use. Some of these we have control over, but many of them we don’t. However, that doesn’t mean that SDN isn’t playing its part.
Many of these posts talk about network functions virtualization (NFV) rather than software-defined networking (SDN). NFV is a more specific subset of SDN, and it applies higher up the application stack. Whereas SDN is aimed at the network layers, NFV is aimed at manipulating the data. The idea of NFV is to take the functions that traditionally would be part of the network and move them into the compute stack. This move gives us capabilities we wouldn't have if those functions remained isolated from the compute. It also lets us move to a much simpler underlying network that can shift traffic around much more quickly. This article aims to examine the different parts that NFV encompasses and to discuss what we gain.
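To make the idea concrete, here is a minimal sketch, in Python, of what "moving a network function into the compute stack" looks like: a stateless packet filter, the kind of job a hardware firewall would traditionally do, expressed as ordinary application code. All names here (`Packet`, `firewall_vnf`, the blocked ports) are illustrative, not any real product's API.

```python
# Toy sketch of a virtualized network function (VNF): a stateless
# packet filter implemented in software rather than in dedicated
# network hardware. Names and values are purely illustrative.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

def firewall_vnf(packet, blocked_ports=frozenset({23, 445})):
    """Return True if the packet may pass, False if it is dropped."""
    return packet.dst_port not in blocked_ports

# Because the function lives in the compute stack, it can be scaled,
# migrated, or chained with other functions like any other software.
print(firewall_vnf(Packet("10.0.0.5", "10.0.0.9", 443)))  # True
print(firewall_vnf(Packet("10.0.0.5", "10.0.0.9", 23)))   # False
```

The point is not the filtering logic itself, which is trivial, but where it runs: once the function is software on a server, it inherits everything we already do with compute, such as scaling, migration, and service chaining.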
The first, simplest, and most obvious function to virtualize is the switch. At the most basic level, we can’t virtualize servers without also virtualizing their connectivity. While we could in theory pass all of the traffic for all of the virtual machines to an external switch directly, we would not be able to differentiate between the traffic on the way back to the VM without something inspecting the traffic. In effect, we must have half of a virtual switch, so we may as well have a full one. The virtual switch, then, gives us the ability to avoid hairpinning traffic between VMs in the same host out to an external switch. We can move VLAN tagging and some QoS functions into the server, meaning that top-of-rack switches don’t need to do this grunt work. Virtual switching is an integral part of all hypervisors.
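The "half a virtual switch" argument above can be sketched in a few lines of Python: something in the hypervisor must learn which VM sits behind which virtual port, or return traffic cannot be delivered to the right VM. This is a deliberately simplified, hypothetical model of MAC learning, not any hypervisor's actual implementation.

```python
# Minimal sketch of why the hypervisor needs at least half a virtual
# switch: it must inspect traffic to learn which MAC address lives on
# which virtual port, so return traffic reaches the correct VM.

class VirtualSwitch:
    def __init__(self):
        self.mac_table = {}  # MAC address -> virtual port

    def receive(self, frame, in_port):
        """Learn the source MAC, then decide where to forward."""
        src, dst = frame["src_mac"], frame["dst_mac"]
        self.mac_table[src] = in_port       # learning step
        if dst in self.mac_table:
            return self.mac_table[dst]      # known MAC: forward to one port
        return "flood"                      # unknown MAC: flood all ports

vswitch = VirtualSwitch()
# First frame: destination unknown, so the switch floods...
vswitch.receive({"src_mac": "aa:aa", "dst_mac": "bb:bb"}, in_port=1)
# ...but the reply is delivered straight to port 1, with no hairpin
# out to an external top-of-rack switch.
out = vswitch.receive({"src_mac": "bb:bb", "dst_mac": "aa:aa"}, in_port=2)
print(out)  # 1
```

Once this learning table exists in the host anyway, extending it into a full virtual switch, with VLAN tagging and basic QoS, is the natural next step, which is exactly why every hypervisor ships one.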
In networking, as in life, we often use the same terms to mean many different things. One of the biggest culprits of this in networking is “edge.” An edge device is usually considered to be a device that connects into a network in only one place. Traffic can flow from an edge device, or it can flow to an edge device, but it can never, ever flow through an edge device. I say never—that’s not entirely true, but I’ll get back to that later. In a campus network, the edge devices are things like users’ computers, laptops, and printers; mobile phones; and tablets.
In data centers, the edge devices are servers or, more than likely in the SDDC, virtual machines, or possibly containers. The exception to the rule about traffic not flowing through an edge device is the "edge router," which more often than not takes the form of a firewall: a perimeter firewall. If we consider north/south versus east/west traffic flows, north/south traffic flows move between the edge and the core, and east/west circumnavigates the network, to take the globe analogy a step further. This distinction becomes important as we look at the direction that networking has taken, and the direction I believe it will continue to take.
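The north/south versus east/west distinction can be reduced to a one-line rule: flows between tiers (edge to core) are north/south; flows between peers at the same tier (VM to VM, server to server) are east/west. The tiny classifier below illustrates that rule; the zone names are hypothetical labels, not a real taxonomy.

```python
# Illustration of north/south vs. east/west traffic classification.
# Zone names ("edge", "core", "server") are hypothetical labels.

def flow_direction(src_zone, dst_zone):
    """Classify a flow by the network tier of each endpoint."""
    if src_zone == dst_zone:
        return "east/west"    # peer-to-peer at the same tier
    return "north/south"      # crossing between edge and core

print(flow_direction("server", "server"))  # east/west  (VM to VM)
print(flow_direction("edge", "core"))      # north/south (client to data center)
```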
The big story of the last few weeks has been Dell’s $67B acquisition of EMC, and with it, VMware. This is big news for the industry—news that will have ramifications all over the software-defined data centre. One of the most interesting implications is how Dell will reconcile its own SDN strategy with VMware’s NSX vision. Do the two work together? VMware paid $1.2B for Nicira. With currently around 400 customers, as reported by VMware, and roughly one in four of those running in production, NSX is a relatively small but highly lucrative gem in the crown jewels of VMware. Dell will want to see something come from that aspect of this acquisition.
Since Dell acquired Force10 in 2011, it has had a stable of network offerings, though perhaps without quite the clout of the more focused network vendors. Dell holds 3 to 5 percent of the switching market, depending on whom you ask. Dell gives those enterprises that want it a one-stop shop, with switch and router options at every level, from unmanaged switches to line-rate modular chassis switches spanning the 40G and 100G space: options that neatly complement its server and storage offerings.