Big Switch Networks, the Santa Clara–based software-defined networking company, has just released a new version of its Big Cloud Fabric product. Big Cloud Fabric, a software-defined networking product that has been on the market for over four years, is heavily integrated with VMware. For the uninitiated, its core pitch is that you can cut out proprietary networking gear: by using its software-based controller, coupled with low-cost white-box servers and switches, networks can be provisioned, orchestrated, and configured programmatically.
Out of the box, it has many advanced features. Unlike NSX, it has a real physical presence. Unlike ACI, it has a real virtual presence. It plays nicely with both. Its data layer can be deployed on Dell EMC and Edgecore Open Networking white boxes and on the HPE Altoline family of equipment. Its Big Monitoring Fabric product is a Womble of a product: it monitors “overlay, underlay—so your packets roam free.”
Role-based access can give VM admins and storage admins the ability to push VMs directly onto the network. Yes, you can do this with other products, but here there are no Band-Aids™ and no shoehorning of square pegs into round holes.
Previously Published on TVP Strategy (The Virtualization Practice)
A software-defined network: is it an evolution or a revolution in networking? The hype around SDN has been building for several years, but SDN has yet to gain much traction outside of MSPs and Fortune 500 companies, and SD-WAN outside of the telcos. When, if ever, will the SDN meltwater reach the fertile plains of the LME?
For this, we really need to look to history.
Previously Published on TVP Strategy (The Virtualization Practice)
It’s the end of the year, and a good time for thinking back. I’m thinking back to a dark past long ago, when physical servers ran server operating systems, and ran applications—when those servers plugged into a switch, and each endpoint was a single server. The network team could see every device, endpoint, or switch, and could trace packets from end to end. Network admins would tell you that those were Golden Days, when troubleshooting was easy and networks were simple. Then, ten or so years ago, along came server virtualization. All of a sudden there were multiple servers on any given endpoint, and worse, the servers would move between endpoints not only at will, but mid-flow. Troubleshooting became Hard, with a capital H.
Out of this came innovations such as VMware’s dvSwitch and the Cisco Nexus 1000V distributed vSwitch. These gave network admins the tools they required to push their traces deeper into the virtualization systems and to regain the end-to-end visibility they desired. As time progressed, the ability to mirror flows and to extend technologies such as NetFlow into the hypervisor brought the VM world back into network admins’ view. As time advanced further, network functions virtualization (NFV) moved some of the functions of the network into the hypervisor, or into VMs, but the interaction between the flows remained fairly constant. The more recent developments of overlay/underlay networks have again pushed the end-to-end traffic flows into the twilight of tunnels (encrypted or not). The two-tier network model has made troubleshooting harder again, with layer 2 networks tunneled through layer 3 switch interconnects. Now Docker is throwing another spanner in the works.
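The overlay/underlay split described above is easiest to see in the encapsulation itself. As a rough sketch (not from this article, and with illustrative function names of my own), here is the 8-byte VXLAN header from RFC 7348 that lets a layer-2 frame be tunneled across a layer-3 underlay; a real VTEP would further wrap the result in UDP (port 4789) and IP, which is exactly why the inner flow vanishes from the network admin's end-to-end trace:

```python
import struct

# Illustrative sketch of VXLAN encapsulation (RFC 7348).
# The 8-byte VXLAN header carries a 24-bit VXLAN Network
# Identifier (VNI) and wraps an inner layer-2 frame so it
# can ride over a layer-3 (UDP/IP) underlay.

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: VNI field is valid


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)


def encapsulate(inner_l2_frame: bytes, vni: int) -> bytes:
    """Prefix an inner Ethernet frame with a VXLAN header.

    A real VTEP would then add UDP (dst port 4789) and outer IP/Ethernet
    headers, hiding the inner frame from the underlay's point of view.
    """
    return vxlan_header(vni) + inner_l2_frame


# Dummy 14-byte inner Ethernet header, tunneled on VNI 5001.
packet = encapsulate(b"\x00" * 14, vni=5001)
```

The key point for troubleshooting is in those 8 bytes: the underlay switches forward on the outer headers only, so tools that cannot parse the VXLAN header see a UDP flow between tunnel endpoints, not the virtual-machine conversation inside it.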