Aug 17

VCP Foundation Objective 1.1 Identify vSphere Architecture and Solutions

This is the start of the series digging into the blueprint for the VCP Foundation Exam. This post will deal with “Objective 1.1 Identify vSphere Architecture and Solutions for a given use case”. Let’s get started.

Identify available vSphere editions and features

There are essentially 11 editions of vSphere available today, although the comparison on the website only lists 10, and it is debatable whether the last one I have included here should be considered part of vSphere at all. I've included it though, because it is the base on which the rest is built, and it's good to know it exists. There are a lot of acronyms in this table, most of which we will dig into later.

vSphere Edition – Description
Standard – The base vSphere edition: vMotion, svMotion, HA, DP, FT, vShield Endpoint, vSphere Replication, Hot Add, vVols, Storage Policy Based Management, Content Library, Storage APIs
Enterprise – Standard plus: Reliable Memory, Big Data Extensions, virtual serial port concentrator, DRS, SRM
Enterprise Plus – Enterprise plus: sDRS, SIOC, NIOC, SR-IOV, flash read cache, NVIDIA GRID vGPU, dvSwitch, host profiles, Auto Deploy
Standard with Operations Management – Standard plus: Operations Visibility and Management, Performance Monitoring and Predictive Analytics, Capacity Management and Optimization, Change, Configuration and Compliance Management, including vSphere Security Hardening
Enterprise with Operations Management – Enterprise plus: the same Operations Management feature set as above
Enterprise Plus with Operations Management – Enterprise Plus plus: the same Operations Management feature set as above
Remote Office/Branch Office Standard – Adds VM capacity to an existing Std, Ent or Ent+ system. Sold in packs of 25 VMs. Feature set roughly equivalent to Standard.
Remote Office/Branch Office Advanced – Adds VM capacity to an existing Std, Ent or Ent+ system. Sold in packs of 25 VMs. Feature set roughly equivalent to Enterprise Plus.
Essentials – For very small enterprises. Cut-down vCenter (vCenter Server Essentials), up to 3 servers with 2 CPUs each.
Essentials Plus – Essentials plus: vMotion, HA, DP, vShield Endpoint, vSphere Replication.
ESXi Free Hypervisor – Basic hypervisor. No central management. No advanced features.

These editions break down into five basic categories:

  1. The hypervisor – not really a vSphere edition at all, and unable to connect to vCenter server. Included for completeness.
  2. Essentials – A reduced feature set, only usable on up to three hosts, designed for the SMB. Upgrade capacity is limited.
  3. ROBO (Remote Office/Branch Office) – Designed to add hosts in remote locations to an existing vSphere installation.
  4. vSphere – The baseline for medium to large enterprise. A nice upgrade path from fewer to more features by licensing. Most additional products assume this as a base. Most documentation assumes this edition set.
  5. vSphere with Operations Management – Basically a way to purchase vSphere along with the vRealize Suite to gain orchestration, insight and automation.
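
As a practical aside, the edition actually in use can be read straight out of the licensing API. Below is a minimal pyVmomi sketch – the vCenter name and credentials are placeholders, and this is just one way of checking – that lists the assigned licences and their edition keys.

```python
# Minimal sketch: list the licence editions assigned in a vCenter inventory.
# Hostname and credentials below are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect

context = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    for lic in content.licenseManager.licenses:
        # editionKey distinguishes e.g. Standard from Enterprise Plus
        print(lic.name, lic.editionKey, "used:", lic.used, "total:", lic.total)
finally:
    Disconnect(si)
```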

Identify the various data centre solutions that interact with vSphere (Horizon, SRM, etc.)

In addition to the vSphere system, which gives you the ability to virtualise, there are VMware add-on products that extend the functionality.

  • Horizon extends vSphere into the Virtual Desktop domain.
  • Site Recovery Manager (SRM) gives active/passive DR capabilities, with the ability to fail your virtual infrastructure to a remote location.
  • vRealize gives operations management and insight, along with orchestration.
  • vCloud Suite gives the ability to create multi-tenant private clouds.
  • NSX gives fine-grained network virtualisation with distributed routing and firewalling, along with data protection.
  • VSAN moves storage closer to compute by implementing a virtual SAN in your ESXi hosts.
  • AirWatch allows enterprise mobility and builds on Horizon.

Explain ESXi and vCenter Server architectures

There are a few ways we can design our VMware infrastructure depending upon the constraints. These start simple, and get more complex, but the added complexity often has distinct benefits. For any given customer, a solution will usually fit broadly into one of these schemes, but I have seen situations where more than one has been implemented.

ESXi Standalone

This is the only solution we can use for the ESXi Free Hypervisor. There can be external storage, but this is not necessary. In this case we use a single ESXi host with no vCenter.

ESXi Architecture

This gives us the benefits of consolidating physical servers onto a single host and better resource utilisation.

This system is harder to manage with multiple hosts, and does not scale well. There are no advanced features such as live migrations.

I have used this in an instance where I needed a couple of low utilisation VMs at multiple sites, but didn’t need to manage them often, or worry about fail-over.
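
Even without vCenter, a standalone host still exposes the full vSphere API, so it can be scripted against directly. The sketch below (host name and credentials are placeholders) connects straight to an ESXi host with pyVmomi and lists its VMs.

```python
# Minimal sketch: talk to a standalone ESXi host directly (no vCenter).
# Host name and credentials are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="esxi01.example.local", user="root",
                  pwd="changeme", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```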

Single Cluster

This is the solution introduced in the Essentials product line, and the simplest of full-fat vSphere deployments. Here we introduce vCenter and shared storage to gain the advantages of live migration and manageability. The image below shows the architecture. Note that vCenter is shown as a floating VM. This is because it can either run as a VM on one of the hosts (usual) or on a bare-metal server (unusual). vCenter is also available as a Windows application or as a virtual appliance.

vSphere Architecture

This solution is more scalable than the first solution we discussed, but the limit of 64 hosts per cluster means that it doesn't scale as well as the final architecture we will look at.

By including management (i.e. vCenter) and usually DMZ (demilitarised zone, or "unsafe") traffic in the same cluster, we have a single failure domain, where the failure of a host or the compromise of a single network affects the whole system.

This is the standard SME solution that most businesses start out with. The constraints are loose enough that this is a good fit for a large number of clients.
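
To make the structure concrete, here is a rough pyVmomi sketch (connection details are placeholders again) that walks the vCenter inventory and prints each cluster with its member hosts.

```python
# Minimal sketch: enumerate clusters and their member hosts via vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        print("Cluster:", cluster.name)
        for host in cluster.host:
            print("  Host:", host.name)
    view.Destroy()
finally:
    Disconnect(si)
```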

Many specialised clusters

This is the most scalable system available. It is used for cloud environments and large deployments, or when VDI is introduced.

Enterprise Architecture

In this system the servers doing the work (compute) are in dedicated clusters. The servers handling management and DMZ traffic get clusters dedicated to them. Servers holding VDI user sessions get dedicated clusters. There are usually multiple vCenter Servers: one serving the management cluster, one serving the compute clusters, and one serving the VDI clusters. This level of segregation makes the system very scalable. Adding in new compute capacity is a modular process. The separate clusters also become separate failure domains. Finally, delegation of admin work is easier and more secure, so VDI admins can be kept away from compute admin privileges and vice versa.

The downside to this architecture is its complexity.

Multiple vCenter systems

The final architecture we will look at runs in parallel with the others. It is possible to have multiple vCenter Servers running in different data centres and, new in vSphere 6.0, to vMotion between them. This means that vCenter traffic can be kept local to a DC and not transported across the WAN.
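
Under the covers the cross-vCenter move is still a RelocateVM_Task call, just with a ServiceLocator pointing at the destination vCenter. The outline below is only a sketch: every value in it is a placeholder, and the destination host, resource pool and folder objects would have to be looked up on the target vCenter first.

```python
# Outline only: cross-vCenter vMotion in vSphere 6 uses RelocateVM_Task with a
# ServiceLocator describing the destination vCenter. All values below are
# placeholders; dst_host, dst_pool and dst_folder must be looked up on the
# destination vCenter before this spec can be submitted.
from pyVmomi import vim

def build_xvc_relocate_spec(dst_host, dst_pool, dst_folder):
    locator = vim.ServiceLocator(
        instanceUuid="DESTINATION-VCENTER-INSTANCE-UUID",
        url="https://vcenter-b.example.local",
        sslThumbprint="AA:BB:CC:...",  # destination vCenter certificate thumbprint
        credential=vim.ServiceLocatorNamePassword(
            username="administrator@vsphere.local",
            password="changeme"))
    spec = vim.vm.RelocateSpec()
    spec.service = locator   # this is what makes the move cross-vCenter
    spec.host = dst_host     # vim.HostSystem on the destination
    spec.pool = dst_pool     # vim.ResourcePool on the destination
    spec.folder = dst_folder # vim.Folder (VM folder) on the destination
    return spec

# task = vm.RelocateVM_Task(spec=build_xvc_relocate_spec(dst_host, dst_pool, dst_folder))
```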

Identify new solutions offered in the current version

Along with the usual slew of performance and scalability improvements, vSphere 6 has introduced new solutions that enable a range of designs that were not possible before. These are detailed below.

ESXi Security Enhancements

A range of security enhancements has been made to ESXi, with the addition of account lockout and password complexity rules for local accounts.
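
These behaviours are surfaced as host advanced settings, so they can be audited or set in bulk. The sketch below is a rough illustration only: the option keys shown (Security.AccountLockFailures, Security.AccountUnlockTime) are the ESXi 6.x advanced settings as I understand them, and host is assumed to be a vim.HostSystem from an existing pyVmomi session.

```python
# Rough sketch: tighten ESXi account lockout behaviour via host advanced settings.
# "host" is assumed to be a vim.HostSystem from an existing pyVmomi session, and
# the option keys/values are based on the ESXi 6.x advanced settings. Depending
# on the pyVmomi version, integer values may need an explicit type to match the
# option's declared type.
from pyVmomi import vim

def harden_host(host):
    opt_mgr = host.configManager.advancedOption
    opt_mgr.UpdateOptions(changedValue=[
        # lock an account after 5 failed logins...
        vim.option.OptionValue(key="Security.AccountLockFailures", value=5),
        # ...and keep it locked for 15 minutes
        vim.option.OptionValue(key="Security.AccountUnlockTime", value=900),
    ])
```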

NVIDIA GRID Support

This gives Horizon View the ability to use hardware GPUs for guest VMs, meaning VDI sessions can benefit from full GPU acceleration for graphics-intensive workloads. Access is either to a shared GPU in a time-sliced fashion, similar to how ESXi grants access to the host CPU, or in a direct one-VM-to-one-GPU fashion that bypasses the hypervisor.

vCenter Server Architecture Changes

As well as having the option of a Windows install or an appliance install, vCenter in vSphere 6 brings with it two different architectures. The first – Embedded – runs all services on a single machine. The second – External – runs the PSC and vCenter roles on separate machines. This allows for more flexibility and scalability, and also makes it easier to upgrade where there are other services using the PSC, such as NSX or Horizon.

Enhanced Linked Mode

Linked Mode is now automatic if two vCenter Servers are connected to the same PSC. This makes setup and maintenance much easier.

vSphere vMotion

vMotion between data centres is now possible, so long as the connection supports an RTT (round-trip time) of 150 ms or less. vMotion between different vCenter Servers is also available, which also provides a path to move seamlessly from a Windows-based vCenter to the appliance.

Multi-site Content Library

The Content Library keeps a synchronised library of ISOs, updates and templates across sites, making automated deployment much easier and, critically, centrally managed.

Virtual Volumes

Virtual Volumes, or vVols, allow fine-grained control of the storage underlying VMs. They allow the use of per-VM storage and make snapshotting and other management tasks easier. They also allow the underlying storage to advertise capabilities which vCenter can then take advantage of. This is done through the vSphere APIs for Storage Awareness (VASA).
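
From the vSphere API's point of view, a vVol container simply appears as another datastore type, so they are easy to spot in the inventory. A small sketch, assuming an existing pyVmomi connection si as in the earlier examples:

```python
# Small sketch: flag which datastores are backed by Virtual Volumes.
# Assumes an existing pyVmomi ServiceInstance "si" as in the earlier examples.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # summary.type is reported as "VVOL" for Virtual Volume datastores
    print(ds.name, ds.summary.type)
view.Destroy()
```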

Determine appropriate vSphere edition based on customer requirements

This has been a long blog post, and if you have stuck with it to the end, well done! It should have served to give you the tools you need to answer the final item in this section. Determining the edition required depends on the customer requirements. Are they small enough that Essentials, with its three-host limit, is suitable? Do they need the dvSwitch, and therefore Enterprise Plus licensing? If you have the rest of this post covered, this section should be a breeze.

Aug 10

VCP Foundation Exam

With the release of vSphere 6, VMware have updated the exam structure as normal. This time there are a couple of interesting (to me at least) changes.

The first is to bring the VCP-NV more in line with the other VCP exams. It now has a consistent structure with the DCV (Data Centre), CMA (Cloud) and DTM (Desktop) variants, with the same requirements (except for the additional "Cisco Certified" route, which bypasses the course requirement and looks like it will stay until the end of January 2016), and, with a foundation exam, it tests some general vSphere knowledge as well as the NSX side.

Jul 08

Choices We Have Plenty: Your Guide to Virtual Switches

I was tinkering around with XenServer the other day. I know, I can hear you saying "is that a thing?" Well, it is, but this is not what I am going to talk about today. Time for a tangent shift. I thought I would have a look for a third-party switch for XenServer, but it seems that XenServer is a third-rate citizen in this space, as there is no Cisco Nexus 1000V available for XenServer, even though Cisco previewed it at Citrix Synergy Barcelona in 2012.


Jul 03

NSX Packet Walks – North/South Traffic

This is my final post in the NSX Packet Walks series. So far I have discussed only so-called "East/West" traffic: that is, traffic moving from one VM, or physical machine, in our network to another. This traffic will never leave the data centre and, in a small system or small NSX deployment, will in a lot of cases never even leave the same rack.

Traditional Network Design

In the traditional network, traffic would be separated by purpose onto different VLANs, and would all be funneled towards the network core to be routed. North-bound traffic (i.e. traffic leaving the network) would then be routed to a physical firewall, before leaving the network via an edge router. South-bound (i.e. traffic entering the network) would traverse in the opposite direction.

This has the very obvious disadvantage that for traffic to reach the servers, the correct VLANs must be in place, and the correct firewall rules must have been implemented at the edge. Historically the network and security teams would have each handled that, and requests that involved a new subnet would take a long time while those teams processed the request.

Virtualised Network – Physical Next Hop

As we've seen, internally we have, for the most part, removed the need for VLANs that span outside of our compute clusters. All of our East/West traffic is handled by Distributed Routers. The first, and most obvious, step for North/South traffic is then to utilise the DLR's ability to perform dynamic routing and pass traffic to a physical router as the next hop.

Using OSPF or BGP would mean that the next-hop router learns of our internal networks as and when we create them. The downside to this is that we still need to extend the VLAN that the physical router is connected to across all of the compute nodes.

Virtualised Network – Edge Router

The next option we could come up with would be to put a VM performing routing in the Edge Rack. We could then have dynamic routing updates from this VM to the DLR, and from this VM to the next hop router.

As this VM is in the edge rack, the external VLAN only needs to be passed into the hosts in the Edge Rack.

The biggest constraints here are pushing all of the North/South traffic through the edge rack, and the vulnerability of the NSX Edge VM: if the Edge VM fails, we lose all North/South traffic. VMware has alleviated this by allowing multiple Edge VMs.

This VM is called the NSX Edge Services Gateway; it is an evolution of the vShield Edge that was first part of vCloud Director, and later vCNS.

The Edge Services Gateway can have up to 10 internal, up-link or trunk interfaces. This combines with the "edge router" which we have so far referred to as the Distributed Logical Router (DLR), which can have up to 8 up-links and 1,000 internal interfaces. In essence, a given Edge Services Gateway can connect to multiple external networks, or multiple DLRs (or both), and a given DLR can utilise multiple Edge Services Gateways for load balancing and resilience.
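
Both the Edge Services Gateways and the DLR control VMs are deployed and inspected through the NSX Manager REST API rather than through vCenter directly. As a rough illustration (the manager address and credentials are placeholders, and in practice you would parse the XML response rather than just print it), listing the deployed edges looks something like this:

```python
# Rough sketch: list the NSX Edges (ESGs and DLR control VMs) known to NSX Manager.
# The manager address and credentials are placeholders; the response is XML.
import requests

resp = requests.get("https://nsxmanager.example.local/api/4.0/edges",
                    auth=("admin", "changeme"),
                    verify=False)  # lab only; verify certificates in production
resp.raise_for_status()
print(resp.text)
```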

Connectivity

The figure below (taken from the VMware NSX Design Guide version 2.1, fig. 41) shows the logical and physical networks we will be thinking about.

NSX Edge Services Gateway

In the top part of the figure we can see that the green circle with arrows, which represents the combination of the DLR and Edge Services Gateway, is connected to both of the logical switches, and also to the up-link to the L3 network. We can envisage how there could be other up-links to a WAN or DMZ (or even multiple DMZs), or to other L3 networks if we had multiple ISPs, etc. These up-links come from the pool of 10 links in the Edge Services Gateway. The logical switches connect to the DLR, which can connect to up to 1,000 logical switches.

Connectivity between the DLR and the Edge is through a transit network.

Resilience

It is possible to configure BGP or OSPF between the Edge Services Gateway and the DLR. This means that we can have multiple Edge Services Gateways (up to 8) with connections to a given DLR, which can use ECMP (Equal-Cost Multi-Pathing) to spread the North/South traffic load over the multiple gateways and also give resilience. This is very much an Active/Active setup.

The alternative is to deploy the Edge Services Gateway as an HA pair. This gives an Active/Passive setup whereby, if one Edge fails, the other takes over within a few seconds. This is used when the Active/Active option above is not possible, because other Edge services such as load balancing, NAT and the Edge firewall are in use.

Of course, we can have multiple layers of Edge Services Gateways if necessary, with HA pairs running NAT close to the logical switches, and ECMP aggregating the traffic outbound.

Conclusion

This ends our short series on NSX and Packet flows. Although the later posts have become much more generic and less about how the packets actually move, that to some extent is precisely the point of NSX. We gain the ability to think much more logically about our whole datacenter network, with almost no reliance on physical hardware. We can micro-segment traffic so that only the allowed VMs see it, regardless of where they are running. We can connect to existing networks and migrate slowly and seamlessly into NSX. We can even plug our internet transit directly into hosts and bypass physical firewall and routing devices.

May 05

Atlantis Computing Releases HyperScale

Today, Atlantis Computing moves into the hardware market with a new hyperconverged solution, HyperScale. HyperScale is based on the company’s flagship product, USX. Technically, this solution is not a revolution, but it is an evolution on Atlantis Computing’s part. This is the first time it has delivered an end-to-end bespoke solution that tightly couples certified hardware with its flagship USX product. More to the point, unlike most new entrants into this space, Atlantis has entered straight in with a full product set, multiple-hypervisor support, and three OEM deals. This is in addition to its own Supermicro-based in-house appliance. What’s more, HyperScale has a starting price that does not set your teeth on edge.


May 05

NSX Packet Walks – VLAN Bridge

This is the fourth post in the NSX Packet Walks series. You probably want to start at the first post.

Up to now we have focused on the traffic from one VM to another VM somewhere within the NSX system, as well as how traffic moves between physical hosts. But what if your data centre isn't 100% virtualised? Can you still use NSX? What are the constraints? This post will look at these questions.

Apr 20

NSX Packet Walks – Replication Modes

This is the third post in the NSX Packet Walks series. You probably want to start at the first post or the second post.

Up to now we have focused on the traffic from one VM to another VM somewhere within the NSX system. I have skipped over the way traffic is distributed between hosts, and I haven’t discussed the ways that traffic can leave the NSX system. The next three posts will cover these topics.

This post will focus on the different methodologies available in NSX for dealing with BUM traffic, that is, Broadcast, Unknown Unicast and Multicast traffic. This encompasses all traffic that is not addressed to a single known destination. In all three of these cases the traffic may need to reach anything up to all of the physical hosts in the cluster, and NSX has a few different ways of handling it.

Apr 16

Managing and Monitoring Performance in SDN / NFV

We have all drunk the Kool-Aid. Software-defined networking (SDN), network functions virtualization (NFV), or both will save the world. They decouple us from the shackles of legacy networks to allow a utopia of business-driven requirements to freely flow, delivering value and freeing the network, application, storage, and infrastructure teams to have weekends off and time with their families.


Apr 15

Beware, All You Other Clouds: Odin Wants His Cloud Back

Who would have thought it? Parallels, the developer of the Mac-based hosted virtualization product, had a service provider business. As of March 24, 2015, Parallels has split it off from its core business of selling hosted virtualization to Mac users and marketed it as “Odin.” Yes, Odin, the Norse god, king of Asgard. At first glance, this might seem slightly pretentious for a service provider.


Apr 14

NSX Packet Walks Continued

This post is a continuation of the last post, where I discussed a very simple virtual network and a very simple VXLAN environment. If you haven't already, you will want to read that post first. In this post I'm going to step it up a gear and introduce a virtual distributed router to the mix. This is a key part of NSX and of the ability to create virtual networks on the fly without waiting for external provisioning. It also makes for an interesting thought process when the "router" is distributed across all hosts.
