One of the frustrations of SDN has always been the fact that if you ask six different people for a definition of SDN, you’ll get ten different answers, at least. This stems in part from the usual IT buzzword symptoms. When a system is used for competitive advantage, each company wants to define its own brand of “The Thing”—to try to “own” the thing and become the de facto standard for it. There is also a deeper issue with SDN, precisely because it is networking.
When we talk about “the network,” we often think of one thing: one set of interconnected computers. Sometimes we think of the internet: many interconnected networks. In reality, even the smallest of companies now uses many different networks every day. Each of these has different needs, different solutions, and different flavours of SDN. Add to that public and hybrid cloud, and we have many, many networks in use. Some of these we control, but many we don’t. That doesn’t mean, however, that SDN isn’t playing its part.
I’ve written before about the difficulty, as a user, of getting hold of VMware’s NSX, and about other problems with the release, but a small recap is in order. Founded in 2007, Nicira was bought by VMware in 2012 for its SDN platform. This platform deeply integrates the open VXLAN standard with vSphere’s vShield-like products, plus some other bits of magic, to yield a fully functioning microsegmentation system. Although NSX is available for OpenStack, too, VMware’s focus has always been on the vSphere implementation: using NSX, combined with some of the vShield products, to replace VMware’s own vCNS (vCloud Networking and Security). This $1 billion acquisition has now been with VMware for as long as Nicira existed as a company. By now, we would expect it to simply be another part of the VMware product line.
Many years ago, when VMware was a little-known start-up, one of the biggest factors in the growth of its hypervisor was the ability of systems administrators to get hold of the product and play with it. The trial licenses enabled the full product set, which was unusual at the time, and were simply time-limited. The VMTN subscription included non-production licenses for testing. This, combined with the previously unknown willingness of VMware staff to interact on the company’s forum, led to an immense community of enthusiasts who wanted to use the product and practically begged their bosses to bring it in.
I woke up one morning last week to find my wife capturing a Squirtle in the corner of the room. Not the real room, of course: just the augmented reality version that exists in Pokémon Go.
The idea of augmented reality, of using computers to superimpose extra information onto the real world, has existed for decades. William Gibson’s Neuromancer is famous not only for exploring these ideas but also for popularizing the term “cyberspace.” Right now, augmented reality is on the rise. Retailers are using it to let you see their products in place in your home before you buy them, simply by holding up a smartphone running their app. The Eurofighter Typhoon has a helmet-mounted display that gives the pilot real-time information no matter where they are looking. This is for all the world like something from Star Wars. With the technology maturing, its range of functions means that it can make its way into the home, not just into a few billion dollars’ worth of high-spec warplane.
Many of these posts talk about network functions virtualization (NFV) rather than software-defined networking (SDN). NFV is a more specific subset of SDN, applicable to a higher level of the application stack. Whereas SDN is aimed at the network layers, NFV is aimed at manipulating the data. The idea of NFV is to take functions that would traditionally be part of the network and move them into the compute stack. This move gives us capabilities that we wouldn’t have if the functions remained isolated from the compute. It also lets us move to a much simpler underlying network that can move traffic around much more quickly. This article examines the different parts that NFV encompasses and discusses what we gain.
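To make the idea concrete, here is a minimal, hypothetical sketch of one such function, a stateless firewall rule match, running as ordinary software in the compute stack rather than in a dedicated appliance. The rule table and packet shape are invented for illustration:

```python
# Hypothetical sketch of a network function (stateless firewall rule
# matching) implemented in ordinary compute, as NFV proposes, rather
# than in a dedicated hardware appliance. Rules and packet fields are
# invented for illustration only.

RULES = [
    {"action": "allow", "dst_port": 443},   # HTTPS in
    {"action": "allow", "dst_port": 22},    # SSH in
    {"action": "deny",  "dst_port": None},  # default deny (wildcard)
]

def filter_packet(packet):
    """Return the action of the first rule that matches the packet."""
    for rule in RULES:
        if rule["dst_port"] is None or rule["dst_port"] == packet["dst_port"]:
            return rule["action"]

print(filter_packet({"dst_port": 443}))  # allow
print(filter_packet({"dst_port": 80}))   # deny
```

Because the function is just software, it can be scaled, moved, and upgraded alongside the workloads it protects, which is precisely the gain NFV promises.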
The first, simplest, and most obvious function to virtualize is the switch. At the most basic level, we can’t virtualize servers without also virtualizing their connectivity. While we could in theory pass all of the traffic for all of the virtual machines to an external switch directly, we would not be able to differentiate between the traffic on the way back to the VM without something inspecting the traffic. In effect, we must have half of a virtual switch, so we may as well have a full one. The virtual switch, then, gives us the ability to avoid hairpinning traffic between VMs in the same host out to an external switch. We can move VLAN tagging and some QoS functions into the server, meaning that top-of-rack switches don’t need to do this grunt work. Virtual switching is an integral part of all hypervisors.
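The inspection the hypervisor must do anyway can be sketched as a toy MAC-learning switch. This is an illustrative model only, not any vendor’s implementation:

```python
# Toy model of the demultiplexing a virtual switch performs: learn which
# virtual port each source MAC lives on, then deliver returning traffic
# directly, hairpinning inside the host instead of via an external switch.

class VirtualSwitch:
    def __init__(self, ports):
        self.ports = set(ports)   # virtual ports: vNICs plus the uplink
        self.mac_table = {}       # learned MAC address -> port

    def forward(self, frame, in_port):
        """Learn the source MAC, then forward by destination MAC."""
        self.mac_table[frame["src"]] = in_port
        dst_port = self.mac_table.get(frame["dst"])
        if dst_port is not None:
            return {dst_port}              # known MAC: deliver in-host
        return self.ports - {in_port}      # unknown MAC: flood the rest

vs = VirtualSwitch(["vm1", "vm2", "uplink"])
print(vs.forward({"src": "aa:aa", "dst": "bb:bb"}, "vm1"))  # floods: {'vm2', 'uplink'}
print(vs.forward({"src": "bb:bb", "dst": "aa:aa"}, "vm2"))  # learned: {'vm1'}
```

The second frame never leaves the host: once both MACs are learned, VM-to-VM traffic stays on the server, which is the hairpinning saving described above.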
“Service provider” and “enterprise” are often seen as opposites in networking circles. (For the purposes of this article, “enterprise” means “business” rather than “large business.”) I’m fortunate to have worked closely with both service providers and enterprises. The contrast is indeed sharp. Service provider networks are the product to be sold; they need to be fast, responsive, and connected above all else. The way a service provider network consumes equipment and services is fundamentally different from the way an enterprise does. This has more of an impact on how network function virtualization (NFV) works for the service provider than it does for the enterprise. Of course, all service providers also have an enterprise network for their back-office functions.
VMware just released details about the latest version of NSX—6.2.2. What is interesting about this release is that it is the first to be split into tiers. The release pages detail the features in full, and although pricing doesn’t yet appear on the website, hopefully this will be a fully public release that doesn’t require jumping through hoops to get. Since VMware acquired Nicira in 2012, the NSX product has been a bit of a dark horse, kept well stabled and not allowed out to run free. The product has been available only to selected customers and partners, presumably ones with high-volume sales that would support a large amount of VMware employee time in each deployment.
Unlike VMware’s other products, and tellingly unlike vCNS (vCloud Networking and Security), NSX was a single SKU with an all-or-nothing, full-feature-set approach. With 6.2.2, this has changed. We are now looking at VMware’s standard three-tier approach. This could be a positive step. It gives customers options, including the ability to start small and grow into the full NSX product set as their needs change. It also keeps some of the complex service provider features out of the view of most customers, making the product less intimidating and, at the same time, making it less likely that customers feel they are paying for features they do not need.
When we think about network connections, our focus is usually on bandwidth. Bandwidth is the main metric in everyday use, as most connections are within a local site, where latency will be very low. There are a few specific instances in which this is not true and latency is the main target. High-performance computing (HPC) is one, and inter-site connections are usually another. As soon as connections touch the Internet, though, most thought of latency goes out the window. There are too many factors beyond the enterprise’s control, and usually latency is not the most important factor.
Bandwidth, as it relates to network connections, is the amount of throughput a connection can sustain: the number of bits per second that can be pushed through the interface. Modern data centres work in the realm of 10 Gb, 40 Gb, and even 100 Gb links, with some 1 Gb legacy links still around. Latency is the time it takes data to travel across the network. It is usually measured as round-trip time (RTT): the time a packet takes to get from source to destination and back. Within the data centre, latency is measured in milliseconds (ms) and is generally under 5 ms. Over the Internet, a good rule of thumb is 25 ms within a given country, 100 ms within a given continent, and 150 ms intercontinental. These figures sit close to the fundamental limit imposed by the speed of light. The final consideration, more esoteric than either of these, is the number of packets per second (PPS) an interface can process. This is something switches are rated on, and it tends to be in the millions of packets per second.
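A little arithmetic shows why each metric matters in its own right. The window and frame sizes below are illustrative defaults, not measurements from any particular link:

```python
# Back-of-the-envelope numbers for the three metrics: bandwidth,
# latency, and packets per second. Window and frame sizes are
# illustrative, not taken from any real deployment.

def tcp_throughput_bps(window_bytes, rtt_seconds):
    """A single TCP stream can carry at most one window per round trip."""
    return window_bytes * 8 / rtt_seconds

def line_rate_pps(link_bps, frame_bytes):
    """Ethernet adds 20 bytes of preamble and inter-frame gap per frame."""
    return link_bps / ((frame_bytes + 20) * 8)

# A classic 64 KiB window over a 150 ms intercontinental RTT:
print(tcp_throughput_bps(65536, 0.150) / 1e6)  # ~3.5 Mb/s: latency, not
                                               # bandwidth, is the ceiling
# Minimum-size 64-byte frames on a 10 Gb link:
print(line_rate_pps(10e9, 64) / 1e6)           # ~14.88 million packets/s
```

The first number shows why a fat intercontinental pipe can still feel slow to a single flow; the second shows why switches are rated in millions of packets per second.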
In the good old days (rose-tinted spectacles required), there was only one operating system in the stack. It took care of device drivers and file IO. There were many flavours of OS, depending on the period, from UNIX and Windows to OS/2 and MacOS, and many, many others. Over time, the selection of operating systems in the data centre narrowed to Linux and Windows (there are still holdouts for others, for various specific reasons, but Linux and Windows hold about 90% of the OS market). There are many flavours of Linux, but all an app developer in the enterprise really needs to know is which OS they are targeting. More and more, even that level is too low down for the app developer, who looks instead to the middleware to make the final decisions.
In the age of virtualization and cloud, deciding on an OS—which version of Windows, which flavour of Linux—is a matter of choosing à la carte from a menu of preconfigured options. This ability to choose the OS—the ability to run a VM at all—depends on the availability of a hypervisor: a sub-OS that runs the virtual machine, which in turn runs the OS.
A few days ago, Stevie Chambers tweeted about the evolution from mainframe to container: “Why is it a surprise that VMs will decline as things miniaturise? Mainframes → Intel → VMs → Containers, etc. Normal, I’d say.” By “Intel” here, I’m going to take Stevie to mean “rackmount servers.” I’m also going to assume that by “decline” he meant “decline in importance, or focus” rather than decline in raw numbers of units sold. It would be easy to argue that fewer rackmount servers have been sold in the last few years than would have been the case without virtualization, due to the consolidation of servers onto fewer, more powerful boxes. It is also arguable that virtualization has brought us options that would simply be unavailable without it, leading to a greater volume of sales. Either way, Intel’s profits seem to be doing OK.
I, like most in the modern IT industry, have spent most of my working life installing, configuring, and maintaining Microsoft products, ranging from Active Directory and Exchange through Terminal Services and SQL Server. Most of these products have had extra layers of third-party software on top (Citrix MetaFrame, anyone?) or blended in to make them work better. In many cases, they were not best-in-class products, although this has improved over time. Apache far outstrips IIS, and vSphere is still a good way ahead of Hyper-V, feature-wise. The gaps are closing, though, and Microsoft’s product set is maturing. Microsoft’s products have often been the more expensive option. There are numerous UNIX mail servers that outperform Exchange for raw message transport. However, there has always been one killer feature, one tie binding all of the systems together, that has made the Microsoft option the only option.