Big Switch Networks, the Santa Clara–based software-defined networking company, has just released a new version of its Big Cloud Fabric product. Big Cloud Fabric has been on the market for over four years and integrates heavily with VMware. For the uninitiated, its core pitch is that you can cut out proprietary networking gear: using Big Switch's software-based controller, coupled with low-cost white-box servers and switches, networks can be provisioned, orchestrated, and configured programmatically.
Out of the box, it has many advanced features. Unlike NSX, it has a real physical presence; unlike ACI, it has a real virtual presence; and it plays nicely with both. Its data layer can be deployed on Dell EMC and Edgecore Open Networking white boxes and on the HPE Altoline family of equipment. Its companion Big Monitoring Fabric product is the Womble of the lineup: it monitors "overlay, underlay—so your packets roam free."
Role-based access control can give VM admins and storage admins the ability to push VMs directly onto the network. Yes, you can do this with other products, but here there are no Band-Aids™ and no shoehorning of square pegs into round holes.
Previously Published on TVP Strategy (The Virtualization Practice)
Part 2a of this series concentrated on adding a local distributed storage solution to Hyper-V 2012 R2 and 2016 as well as to vSphere 6.0: DataCore Virtual SAN in the case of Hyper-V 2012 R2, Storage Spaces Direct with Hyper-V 2016, and VSAN 6.2 with vSphere 6.0. You can review that article here.
This article continues from that second article of the series and finishes the addition of a local distributed storage stack to XenServer and RHEV. Once again, our compute unit of choice is the Dell 730xd with two 10-core CPUs and 256 GB of RAM. As stated in the previous post, we need to add some local storage in each node. These compute nodes can, depending on the choices made during configuration, take up to twenty-four disk drives. For the purposes of this article, we are assuming that data locality is required for performance and that there is a need for an all-flash array. We chose to go with two 400 GB SLC drives for cache and four 800 GB MLC drives for capacity, giving a total raw capacity per node of 4 TB. There may be further hardware requirements depending on the chosen solutions for each hypervisor, but those will be called out in the relevant vendor sections.
This post will take that original premise and expand it to include storage with a view to moving the entire environment toward a software-defined data center.
Once again, our compute unit of choice is the Dell 730xd with two 10-core CPUs and 256 GB of RAM. Now, we need to add some local storage in each node. This compute node can, depending on the choices made during configuration, take up to twenty-four disk drives. For the purposes of this article, we assume that data locality is required for performance and that there is a need for an all-flash array. We have chosen to go with two 400 GB SLC drives for cache and four 800 GB MLC drives for capacity, giving a total raw capacity per node of 4 TB. There may be a requirement for further hardware, depending on the chosen solutions for each hypervisor, but that will be called out in the relevant vendor section. Due to the length of this article, we have split it into two sections. This post deals with the costs surrounding vSphere and Hyper-V.
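As a sanity check, the raw-capacity figure above works out as follows (a quick sketch using the vendor-style decimal gigabytes the article assumes):

```shell
# Per-node raw capacity from the drive mix described above (decimal GB)
cache=$((2 * 400))        # two 400 GB SLC cache drives
capacity=$((4 * 800))     # four 800 GB MLC capacity drives
total=$((cache + capacity))
echo "${total} GB raw per node"   # 4000 GB, i.e. 4 TB raw
```

Note that this is raw capacity only: in VSAN, for example, the cache-tier drives do not contribute to the datastore's usable capacity, so the usable figure will be lower still once protection overheads are applied.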
On February 10, 2016, VMware announced VSAN v6.2. This is the fourth generation of its flagship software-defined storage (SDS) product to be released. At the time of the release, VMware announced that it had more than 3,000 customers running the product; that is quite a number.
Now, to me, it seems wrong for this release to carry a minor version number, as there is a slew of new features, some of which are more than worthy of a major release cycle. I will examine the major ones in this article.
And I still stand by this remark. Building and configuring a new VSAN is simple, even if you have to spend most of a morning in the LSI BIOS of four machines, configuring several single-disk RAID 0 groups and their associated vDisks, and then manually marking your SSDs as such via ESXCLI.
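For readers who have not been through that last step, manually tagging a device as SSD looks roughly like the following (a sketch based on VMware's documented SATP claim-rule approach; the `naa.` identifier below is a placeholder for your own single-disk RAID 0 vDisk, and the commands run on the ESXi host itself):

```shell
# List local devices and note the "Is SSD" flag for each
esxcli storage core device list

# Tag the vDisk as SSD via a claim rule (replace the naa. ID with your own)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
  --device=naa.xxxxxxxxxxxxxxxx --option="enable_ssd"

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim --device=naa.xxxxxxxxxxxxxxxx
```

Single-disk RAID 0 vDisks are needed here because many LSI controllers hide the underlying drive from ESXi, which is also why the SSD flag has to be set by hand.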