A software-defined network: is it an evolution or a revolution in networking? SDN has been hyped for several years, yet it has gained little real traction outside of MSPs and Fortune 500 companies, and telcos in the case of SD-WAN. When, if ever, will the SDN meltwater reach the fertile plains of the LME?
For this, we really need to look to history.
Previously published on TVP Strategy (The Virtualization Practice)
This post on reddit appears to intimate that VMware is closing its API for virtual switches to all parties, including its long-standing networking partner Cisco. When I first read the post, I thought the move was a retrograde step by VMware and another veiled dig at its ecosystem. The post links to an official blog post on the VMware site stating that moving forward, VMware “will have a single virtual switch strategy that focuses on two sets of native virtual switch offerings – VMware vSphere® Standard Switch and vSphere Distributed Switch™ for VMware vSphere, and the Open virtual switch (OVS).”
One of the frustrations of SDN has always been the fact that if you ask six different people for a definition of SDN, you’ll get ten different answers, at least. This stems in part from the usual IT buzzword symptoms. When a system is used for competitive advantage, each company wants to define its own brand of “The Thing”—to try to “own” the thing and become the de facto standard for it. There is also a deeper issue with SDN, precisely because it is networking.
When we talk about “the network,” we often think of one thing: one set of interconnected computers. Sometimes we think of the internet: of many interconnected networks. In reality, there are many different networks that even the smallest of companies use every day now. Each of these has different needs, different solutions, and different flavours of SDN. Add into that public and hybrid cloud, and we have many, many networks in use. Some of these we have control over, but many of them we don’t. However, that doesn’t mean that SDN isn’t playing its part.
It’s the end of the year, and a good time for thinking back. I’m thinking back to a dark past long ago, when physical servers ran server operating systems, and ran applications—when those servers plugged into a switch, and each endpoint was a single server. The network team could see every device, endpoint, or switch, and could trace packets from end to end. Network admins would tell you that those were Golden Days, when troubleshooting was easy and networks were simple. Then, ten or so years ago, along came server virtualization. All of a sudden there were multiple servers on any given endpoint, and worse, the servers would move between endpoints not only at will, but mid-flow. Troubleshooting became Hard, with a capital H.
Out of this came innovations such as VMware’s dvSwitch and the Cisco 1000V distributed vSwitch. These gave network admins the tools they required to push their traces deeper into the virtualization systems and to regain the end-to-end connectivity they desired. As time progressed, the ability to mirror flows and to extend technologies such as NetFlow into the hypervisor brought the VM world back into network admins’ view. As time advanced further, network functions virtualization (NFV) moved some of the functions of the network into the hypervisor, or into VMs, but the interaction between the flows remained fairly constant. The more recent developments of overlay/underlay networks have again pushed the end-to-end traffic flows into the twilight of tunnels (encrypted or not). The two-tier network model has made troubleshooting harder again, with layer 2 networks tunneled through layer 3 switch interconnects. Now Docker is throwing another spanner in the works.
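To make the overlay problem concrete, here is a minimal sketch of VXLAN encapsulation, the most common overlay technique. The 8-byte header layout follows RFC 7348; the inner frame bytes are invented for illustration. The point is that the tenant's layer-2 frame becomes opaque UDP payload, which is exactly why end-to-end traces now stop at the tunnel endpoint.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN


def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Wrap a layer-2 frame in a VXLAN header (RFC 7348).

    The 8-byte header is: a flags byte (0x08 = VNI present),
    3 reserved bytes, the 24-bit VNI, and a final reserved byte.
    On the wire this payload then travels inside UDP/IP between
    the two tunnel endpoints (VTEPs).
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must be a 24-bit value")
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame


def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags, vni_field = struct.unpack("!B3xI", packet[:8])
    if not flags & 0x08:
        raise ValueError("VNI-valid flag not set")
    return vni_field >> 8, packet[8:]


# A made-up inner Ethernet frame: the underlay only ever sees UDP
# traffic to port 4789 -- the tenant MAC addresses are invisible
# to a trace on the physical layer-3 fabric.
frame = bytes.fromhex("ffffffffffff0200deadbeef0806") + b"payload"
tunneled = vxlan_encap(frame, vni=5001)
vni, recovered = vxlan_decap(tunneled)
assert (vni, recovered) == (5001, frame)
```

The encrypted variants the text mentions (e.g. IPsec around the tunnel) only deepen the twilight: then even the VXLAN header is hidden from the underlay.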
An odd little title, I think you will agree, but consider this: Wham! had a hit with “Freedom” and Sam Cooke sang “Chain Gang,” and I think you can now see my thought process. This post is going to investigate not the technical capabilities of Cisco’s Application Centric Infrastructure (ACI), but rather what its market placement will mean to the software-defined networking (SDN) industry.
Firstly, Cisco’s Application Policy Infrastructure Controller (APIC), the brains behind its SDN product ACI, can only run on Cisco equipment. More importantly, APIC can only run on Nexus 9000 series switches. These are switches aimed squarely at the biggest environments.
So, what about those who have invested in Nexus 7000s and below? Well, up until the Cisco Live US conference, you were effectively legacy. However, Cisco recently stated that the APIC will now be able to overlay application workloads onto older (read Nexus-only) switches. The first switch to receive these policies will be the Nexus 1K; the catch is that a Nexus 9K series switch is still required. Not looking so inviting now, is it? What is important here is that it is an overlay.