If you have installed a Linux guest recently then you might have noticed that things are changing. See also this blog post “Open-VM-Tools The Future of VMware Tools for Linux“. Trying to install the normal VMware Tools will tell you that it can’t and that you should use open-vm-tools instead. But there are numerous complaints about screen and desktop integration in the forums. So what is going on? Sven found the answer and I will tell you what it is if you read on below.
This is the start of the series digging into the blueprint for the VCP Foundation Exam. This post will deal with “Objective 1.1 Identify vSphere Architecture and Solutions for a given use case”. Let’s get started.
There are essentially 11 editions of vSphere available today, although the comparison on the website only lists 10, and it is debatable whether the last one I have included here should be considered part of vSphere at all. I’ve included it, though, because it is the base on which the rest is built, and it’s good to know it exists. There are a lot of acronyms in this table; most of them we will dig into later.
| Edition | Features |
|---|---|
| Standard | The base vSphere edition: vMotion, svMotion, HA, DP, FT, vShield Endpoint, vSphere Replication, Hot Add, vVols, Storage Policy Based Management, Content Library, Storage APIs |
| Enterprise | Standard plus: Reliable Memory, Big Data Extensions, Virtual Serial Port Concentrator, DRS, SRM |
| Enterprise Plus | Enterprise plus: sDRS, SIOC, NIOC, SR-IOV, Flash Read Cache, NVIDIA GRID vGPU, dvSwitch, Host Profiles, Auto Deploy |
| Standard with Operations Management | Standard plus: Operations Visibility and Management, Performance Monitoring and Predictive Analytics, Capacity Management and Optimization, Change, Configuration and Compliance Management, including vSphere Security Hardening |
| Enterprise with Operations Management | Enterprise plus: Operations Visibility and Management, Performance Monitoring and Predictive Analytics, Capacity Management and Optimization, Change, Configuration and Compliance Management, including vSphere Security Hardening |
| Enterprise Plus with Operations Management | Enterprise Plus plus: Operations Visibility and Management, Performance Monitoring and Predictive Analytics, Capacity Management and Optimization, Change, Configuration and Compliance Management, including vSphere Security Hardening |
| Remote Office/Branch Office Standard | Adds VM capacity into an existing Std, Ent or Ent+ system. Packs of 25 VMs. Feature set roughly equivalent to Std. |
| Remote Office/Branch Office Advanced | Adds VM capacity into an existing Std, Ent or Ent+ system. Packs of 25 VMs. Feature set roughly equivalent to Ent+. |
| Essentials Standard | For very small enterprises. Cut-down vCenter (vCenter Server Essentials), up to 3 servers with 2 CPUs each. |
| Essentials Advanced | Essentials Std plus: vMotion, HA, DP, vShield Endpoint, vSphere Replication. |
| ESXi Hypervisor Free | Basic hypervisor. No central management. No advanced features. |
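The comparison above lends itself to a simple lookup: pick the cheapest edition whose feature set covers the customer's requirements. A minimal sketch in Python (the feature sets are abbreviated from the table, and the helper function is purely hypothetical, not a VMware tool):

```python
# Hypothetical edition picker. Feature names are simplified from the table
# above; the list is ordered cheapest to most expensive.
EDITIONS = [
    ("Standard", {"vMotion", "svMotion", "HA", "FT", "vVols", "Content Library"}),
    ("Enterprise", {"vMotion", "svMotion", "HA", "FT", "vVols", "Content Library",
                    "DRS", "SRM", "Reliable Memory"}),
    ("Enterprise Plus", {"vMotion", "svMotion", "HA", "FT", "vVols", "Content Library",
                         "DRS", "SRM", "Reliable Memory",
                         "dvSwitch", "sDRS", "SIOC", "NIOC", "Host Profiles"}),
]

def pick_edition(required):
    """Return the first (cheapest) edition whose features cover 'required'."""
    for name, features in EDITIONS:
        if required <= features:  # subset test
            return name
    return None  # no listed edition covers the requirements

print(pick_edition({"HA", "vMotion"}))    # Standard
print(pick_edition({"DRS"}))              # Enterprise
print(pick_edition({"dvSwitch", "DRS"}))  # Enterprise Plus
```

This is the same reasoning we will use at the end of the post: work out which features the customer actually needs, then map that to the lowest edition that provides them.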
These editions break down into five basic categories:
In addition to the vSphere system, which gives you the ability to virtualise, there are the VMware add-on products that extend the functionality.
There are a few ways we can design our VMware infrastructure depending upon the constraints. These start simple, and get more complex, but the added complexity often has distinct benefits. For any given customer, a solution will usually fit broadly into one of these schemes, but I have seen situations where more than one has been implemented.
This is the only solution we can use for the ESXi Free Hypervisor. There can be external storage, but this is not necessary. In this case we use a single ESXi host with no vCenter.
This gives us the benefits of consolidating physical servers onto a single host and better resource utilisation.
This system is harder to manage with multiple hosts, and does not scale well. There are no advanced features such as live migrations.
I have used this in an instance where I needed a couple of low utilisation VMs at multiple sites, but didn’t need to manage them often, or worry about fail-over.
This is the solution introduced in the Essentials product line, and the simplest of Full Fat vSphere deployments. Here we introduce vCenter and Shared Storage, to gain the advantages of live migration and manageability. The image below shows the architecture. Note that vCenter is shown as a Floating VM. This is because it can be either contained on one of the hosts (usual) or on a bare-metal server (unusual). vCenter is also available as a Windows application, or as a Virtual Appliance.
This solution is more scalable than the first solution we discussed, but the limit of 64 hosts per cluster means that it doesn’t scale as well as the final architecture we will look at.
By including Management (i.e. vCenter) and usually DMZ (de-militarised zone, or “unsafe”) traffic in the cluster, we have a single failure domain, where failure of a host or compromise of a single network affects the whole system.
This is the standard SME solution that most businesses start out with. The constraints are loose enough that this is a good fit for a large number of clients.
This is the most scalable system available. It is used for cloud environments and large deployments, or when VDI is introduced.
In this system the servers doing the work (Compute) are in dedicated clusters. The servers doing management and DMZ traffic get clusters dedicated to them. Servers holding VDI user sessions get dedicated clusters. There are usually multiple vCenter servers: one serving the Management cluster, one serving the compute clusters, and one serving the VDI clusters. This level of segregation makes the system very scalable. Adding in new compute capacity is a modular process. The separate clusters also become separate failure domains. Finally, delegation of admin work is easier and more secure, so VDI admins can be kept away from Compute admin privileges and vice versa.
The downside to this architecture is its complexity.
The final architecture we will look at runs parallel to the others. It is possible to have multiple vCenters running in different data centres, and now to vMotion between them. This is new in vSphere 6.0. This means that vCenter traffic can be kept local to a DC and not transported across the WAN.
Along with the usual slew of performance and scalability improvements, vSphere 6 has introduced new solutions that allow a wide range of systems that were not possible before. These are detailed below.
A range of security enhancements have been made to vSphere, with the addition of account lockout and password complexity rules.
Gives Horizon View the ability to use hardware GPUs for guest VMs. This means that VDI sessions can benefit from full GPU acceleration for graphics-intensive workloads. Access is either time-sliced, similar to how ESXi grants access to the host CPU, or direct 1 VM to 1 GPU, bypassing the hypervisor.
As well as having the option of a Windows install or an Appliance install, the vCenter Appliance in vSphere 6 brings with it two different architectures. The first – Embedded – runs all services on a single machine. The second – External – runs the PSC and vCenter roles on separate machines. This allows for more flexibility and scalability. It also makes it easier to upgrade where other services, such as NSX or Horizon, use the PSC.
Linked mode is now automatic if two vCenter servers are connected to the same PSC. This makes set up and maintenance much easier.
vMotion between data centres is now possible, so long as the connection supports an RTT (Round Trip Time) of 150 ms or less. vMotion between different vCenters is also available. This also allows a path to upgrade seamlessly from the Windows-based vCenter to the Appliance.
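The 150 ms constraint can be expressed as a trivial check. A sketch only: the threshold comes from the support limit above, while the sampling logic and function name are my own invention for illustration:

```python
LONG_DISTANCE_VMOTION_MAX_RTT_MS = 150  # vSphere 6 support limit

def vmotion_rtt_ok(rtt_samples_ms, max_rtt_ms=LONG_DISTANCE_VMOTION_MAX_RTT_MS):
    """True if the worst observed RTT between the two sites stays within
    the long-distance vMotion limit. Using the worst case is deliberately
    conservative: a single slow sample means the link may not qualify."""
    return max(rtt_samples_ms) <= max_rtt_ms

print(vmotion_rtt_ok([20, 35, 90]))   # True  - well within the limit
print(vmotion_rtt_ok([20, 180]))      # False - one sample exceeds 150 ms
```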
The content library keeps a synchronised library of ISOs, updates and Templates making automated deployment much easier, and critically, centrally managed.
Virtual Volumes, or vVols, allow fine-grained control of the storage underlying VMs. They allow the use of per-VM storage and make snapshotting and other management tasks easier. They also allow the underlying storage to advertise capabilities which vCenter can then take advantage of. This is done through the vSphere APIs for Storage Awareness (VASA).
This has been a long blog post, and if you have stuck with it to the end, well done! It should have served to give you the tools you need to answer the final item in this section. Determining the edition required depends on the customer requirements. Are they small enough that Essentials, with its three-host limit, is suitable? Do they need dvSwitch, and so Enterprise Plus licensing? If you have the rest of this post covered, this section should be a breeze.
With the release of vSphere 6, VMware have updated the exam structure as normal. This time there are a couple of interesting (to me at least) changes.
The first is to bring the VCP-NV more in line with the other VCP exams. It now has a consistent structure with the DCV (Data Centre), CMA (Cloud) and DTM (Desktop) variants, with the same requirements (except for the additional “Cisco Certified” route, which bypasses the course requirement; this looks like it will stay until the end of January 2016). With a foundation exam, it now tests some general vSphere knowledge as well as the NSX side.
I was tinkering around with XenServer the other day. I can hear you saying “is that still a thing?” Well, it is, but that is not what I am going to talk about today. Time for a tangent shift. I thought I would have a look for a third-party switch for XenServer, but it seems that XenServer is a third-rate citizen in this space: there is no Cisco Nexus 1000V available for XenServer, even though Cisco previewed it at Citrix Synergy Barcelona in 2012.
This is my final post in the NSX Packet Walks series. So far I have discussed only so-called “East/West” traffic. That is traffic which is moving from one VM, or physical machine, in our network to another. This traffic will never leave the datacenter, and in a small system will in many cases never leave the same rack.
In the traditional network, traffic would be separated by purpose onto different VLANs, and would all be funneled towards the network core to be routed. North-bound traffic (i.e. traffic leaving the network) would then be routed to a physical firewall, before leaving the network via an edge router. South-bound (i.e. traffic entering the network) would traverse in the opposite direction.
This has the very obvious disadvantage that for traffic to reach the servers, the correct VLANs must be in place, and the correct firewall rules must have been implemented at the edge. Historically the network and security teams would have each handled that, and requests that involved a new subnet would take a long time while those teams processed the request.
As we’ve seen, internally we have, for the most part, removed the need for VLANs that span outside of our compute clusters. All of our East/West traffic is handled by Distributed Routers. The first, and most obvious, step for North/South traffic is then to utilise the DLR’s ability to perform dynamic routing to pass traffic to a physical router as the next hop.
Using OSPF or BGP would mean that the next hop router learns of our internal networks as and when we create them. The downside is that we still need to pass the VLAN that the physical router is connected to into every compute host.
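The "learns of our internal networks as and when we create them" behaviour is the core of dynamic routing, and can be modelled with a toy two-router example. This is not a real routing protocol implementation, just a single-hop illustration of route advertisement; every class and method name here is invented:

```python
class Router:
    """Toy router: holds a route table and tells its direct peers
    about any network it originates (single hop only, no re-advertising)."""

    def __init__(self, name):
        self.name = name
        self.routes = set()
        self.neighbors = []

    def peer_with(self, other):
        """Establish a peering session and exchange existing routes,
        roughly what happens when a BGP/OSPF adjacency comes up."""
        self.neighbors.append(other)
        other.neighbors.append(self)
        for prefix in list(self.routes):
            other.learn(prefix)
        for prefix in list(other.routes):
            self.learn(prefix)

    def advertise(self, prefix):
        """Originate a new network and announce it to every peer."""
        self.routes.add(prefix)
        for neighbor in self.neighbors:
            neighbor.learn(prefix)

    def learn(self, prefix):
        self.routes.add(prefix)

# The DLR peers with the physical next-hop router; when we create a new
# logical network, the physical router learns about it automatically.
dlr = Router("dlr")
physical = Router("physical-next-hop")
dlr.peer_with(physical)
dlr.advertise("172.16.10.0/24")
print("172.16.10.0/24" in physical.routes)  # True - no manual config needed
```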
The next option we could come up with would be to put a VM performing routing in the Edge Rack. We could then have dynamic routing updates from this VM to the DLR, and from this VM to the next hop router.
As this VM is in the edge rack, the external VLAN only needs to be passed into the hosts in the Edge Rack.
The biggest constraint here is pushing all of the North/South traffic through the edge rack, and the vulnerability of the NSX Edge VM. If the Edge VM fails, we would lose all North/South traffic. This has been alleviated by VMware by allowing multiple Edge VMs.
This VM is called the NSX Edge Services Gateway; it is an evolution of the vShield Edge that was first part of vCloud Director, and later vCNS.
The Edge services gateway can have up to 10 internal, up-link or trunk interfaces. This combines with the “Edge Router” which we have so far referred to as the Distributed Logical Router (DLR) which can have up to 8 up-links and 1,000 internal interfaces. In essence, a given Edge services gateway can connect to multiple external networks, or multiple DLRs (or both) and a given DLR can utilise multiple Edge Services Gateways for load balancing and resilience.
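Those interface limits can be captured in a small model. This is my own sketch, not a VMware API; the limits (10 interfaces per ESG, 8 up-links and 1,000 internal interfaces per DLR) come from the text above:

```python
class EdgeServicesGateway:
    MAX_INTERFACES = 10  # internal, up-link and trunk interfaces combined

    def __init__(self):
        self.interfaces = []

    def add_interface(self, name):
        if len(self.interfaces) >= self.MAX_INTERFACES:
            raise ValueError("ESG supports at most 10 interfaces")
        self.interfaces.append(name)


class DistributedLogicalRouter:
    MAX_UPLINKS = 8       # connections towards Edge Services Gateways
    MAX_INTERNAL = 1000   # connections towards logical switches

    def __init__(self):
        self.uplinks = []
        self.internal = []

    def add_uplink(self, esg_name):
        if len(self.uplinks) >= self.MAX_UPLINKS:
            raise ValueError("DLR supports at most 8 up-links")
        self.uplinks.append(esg_name)

    def add_internal(self, logical_switch):
        if len(self.internal) >= self.MAX_INTERNAL:
            raise ValueError("DLR supports at most 1,000 internal interfaces")
        self.internal.append(logical_switch)
```

The asymmetry in the model mirrors the design: many logical switches fan in to one DLR, while a handful of up-links fan out to the gateways.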
The figure below (taken from the VMware NSX Design Guide version 2.1 (fig 41)) shows the logical and physical networks we will be thinking about.
In the top part of the figure we can see the green circle with Arrows, which represents the combination of the DLR and Edge Services Gateway, is connected to both of the logical switches, and also to the up-link to the L3 network. We can envisage how there could be other up-links to a WAN, or DMZ (or even multiple DMZs), or to other L3 networks if we had multiple ISPs etc. These up-links come from the pool of 10 links in the Edge Services Gateway. The logical Switches connect to the DLR which can connect to up to 1,000 logical switches.
Connectivity between the DLR and the Edge is through a transit network.
It is possible to configure BGP or OSPF between the Edge Services Gateway and the DLR. This means that we can have multiple Edge Services Gateways (up to 8) connected to a given DLR, which can use ECMP (Equal Cost Multi-Pathing) to spread the North/South traffic load over the multiple gateways, and also gives resilience. This is very much an Active/Active setup.
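ECMP works by hashing a flow's identifying fields onto one of the equal-cost next hops, so that every packet of a given flow takes the same path and is not reordered. A toy sketch of the idea (the actual hash the DLR uses is not specified here; SHA-256 over the 5-tuple is purely illustrative):

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, protocol, gateways):
    """Pick one of the equal-cost gateways for a flow.

    Hashing the 5-tuple keeps all packets of a flow on the same gateway,
    while different flows spread across the available gateways."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(gateways)
    return gateways[index]

gateways = [f"edge-{n}" for n in range(1, 9)]  # up to 8 ECMP gateways per DLR
hop = ecmp_next_hop("10.0.0.5", "8.8.8.8", 44321, 443, "tcp", gateways)
# The same flow always maps to the same gateway:
assert hop == ecmp_next_hop("10.0.0.5", "8.8.8.8", 44321, 443, "tcp", gateways)
```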
The alternative is to deploy the Edge Services Gateway as an HA pair. This gives an Active/Passive setup whereby if one Edge fails, the other takes over within a few seconds. This is used when the Active/Active option above is not possible, because other Edge services such as Load Balancing, NAT and the Edge Firewall are in use.
Of course, we can have multiple layers of Edge Services gateways if necessary, with HA Pairs running NAT close to the logical switches, and ECMP aggregating the traffic outbound.
This ends our short series on NSX and Packet flows. Although the later posts have become much more generic and less about how the packets actually move, that to some extent is precisely the point of NSX. We gain the ability to think much more logically about our whole datacenter network, with almost no reliance on physical hardware. We can micro-segment traffic so that only the allowed VMs see it, regardless of where they are running. We can connect to existing networks and migrate slowly and seamlessly into NSX. We can even plug our internet transit directly into hosts and bypass physical firewall and routing devices.
Today, Atlantis Computing moves into the hardware market with a new hyperconverged solution, HyperScale. HyperScale is based on the company’s flagship product, USX. Technically, this solution is not a revolution, but it is an evolution on Atlantis Computing’s part. This is the first time it has delivered an end-to-end bespoke solution that tightly couples certified hardware with its flagship USX product. More to the point, unlike most new entrants into this space, Atlantis has entered straight in with a full product set, multiple-hypervisor support, and three OEM deals. This is in addition to its own Supermicro-based in-house appliance. What’s more, HyperScale has a starting price that does not set your teeth on edge.
This is the fourth post in the NSX Packet Walks series. You probably want to start at the first post.
Up to now we have focused on the traffic from one VM to another somewhere within the NSX system, as well as how traffic moves between physical hosts. But what if your data centre isn’t 100% virtualised? Can you still use NSX? What are the constraints? This post will look at these questions.