Category: Network Virtualization

NSX Packet Walks – North/South Traffic

This is my final post in the NSX Packet Walks series. So far I have discussed only so-called "East/West" traffic, that is, traffic moving from one VM or physical machine in our network to another. This traffic never leaves the datacenter, and in a small system it may never even leave the rack.

Traditional Network Design

In the traditional network, traffic would be separated by purpose onto different VLANs and funneled towards the network core to be routed. North-bound traffic (i.e. traffic leaving the network) would then be routed through a physical firewall before leaving the network via an edge router. South-bound traffic (i.e. traffic entering the network) would traverse the same path in the opposite direction.

This has the very obvious disadvantage that, for traffic to reach the servers, the correct VLANs must be in place and the correct firewall rules must have been implemented at the edge. Historically the network and security teams would each have handled their part, and any request that involved a new subnet could take a long time while those teams processed it.

Virtualised Network – Physical Next Hop

As we’ve seen, internally we have largely removed the need for VLANs that span beyond our compute clusters; all of our East/West traffic is handled by Distributed Logical Routers. The first, and most obvious, step towards handling North/South traffic is therefore to use the DLR’s ability to perform dynamic routing and pass traffic to a physical router as the next hop.

Using OSPF or BGP means the next-hop router learns of our internal networks as and when we create them. The downside is that the VLAN the physical router sits on still has to be presented to all of the compute nodes.
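As a rough illustration of what that dynamic routing configuration involves, the sketch below pushes a BGP neighbour (the physical next hop) to an Edge/DLR via the NSX Manager REST API. The manager address, edge ID, AS numbers and the exact XML schema here are assumptions for illustration only; check the NSX for vSphere API guide for your version before relying on any of it.

```python
# Illustrative sketch only: adds a BGP neighbour (the physical next-hop
# router) to a DLR/Edge via the NSX Manager REST API. The manager address,
# edge ID, AS numbers and XML element names are hypothetical placeholders
# and may differ between NSX versions.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical NSX Manager
EDGE_ID = "edge-1"                              # hypothetical edge identifier

bgp_config = """<bgp>
  <enabled>true</enabled>
  <localAS>65001</localAS>
  <bgpNeighbours>
    <bgpNeighbour>
      <ipAddress>10.0.0.1</ipAddress>   <!-- physical next-hop router -->
      <remoteAS>65000</remoteAS>
    </bgpNeighbour>
  </bgpNeighbours>
</bgp>"""

resp = requests.put(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/routing/config/bgp",
    data=bgp_config,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()
```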

Virtualised Network – Edge Router

The next option is to place a VM that performs routing in the edge rack. We can then run dynamic routing updates from this VM to the DLR, and from this VM to the next-hop router.

As this VM is in the edge rack, the external VLAN only needs to be passed to the hosts in that rack.

The biggest constraints here are that all of the North/South traffic is pushed through the edge rack, and that the NSX Edge VM becomes a single point of failure: if the Edge VM fails, we lose all North/South traffic. VMware alleviates this by allowing multiple Edge VMs.

This VM is called the NSX Edge Services Gateway; it is an evolution of the vShield Edge that was first part of vCloud Director and later vCNS.

The Edge Services Gateway can have up to 10 internal, uplink or trunk interfaces. It combines with the Distributed Logical Router (DLR) we have been discussing so far, which can have up to 8 uplinks and 1,000 internal interfaces. In essence, a given Edge Services Gateway can connect to multiple external networks, or to multiple DLRs (or both), and a given DLR can use multiple Edge Services Gateways for load balancing and resilience.

Connectivity

The figure below, taken from the VMware NSX Design Guide version 2.1 (fig. 41), shows the logical and physical networks we will be thinking about.

NSX Edge Services Gateway

In the top part of the figure, the green circle with arrows, which represents the combination of the DLR and the Edge Services Gateway, is connected to both of the logical switches and also to the uplink to the L3 network. We can envisage other uplinks to a WAN, a DMZ (or even multiple DMZs), or to further L3 networks if we had multiple ISPs, and so on. These uplinks come from the pool of 10 interfaces on the Edge Services Gateway. The logical switches connect to the DLR, which can connect to up to 1,000 of them.

Connectivity between the DLR and the Edge is through a transit network.
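To make the transit network concrete, here is a tiny sketch with made-up addressing: a small /29 logical switch carrying only the DLR's uplink interface and the Edge's internal interface. The subnet and addresses are purely illustrative.

```python
# Purely illustrative transit-network addressing between the DLR and the
# Edge Services Gateway. The subnet and interface addresses are made up;
# the point is simply that both devices sit on one small logical switch.
import ipaddress

transit = ipaddress.ip_network("192.168.10.0/29")

interfaces = {
    "DLR uplink": ipaddress.ip_address("192.168.10.1"),
    "ESG internal": ipaddress.ip_address("192.168.10.2"),
}

for name, addr in interfaces.items():
    assert addr in transit, f"{name} is not on the transit segment"
    print(f"{name}: {addr} on {transit}")
```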

Resilience

It is possible to configure BGP or OSPF between the Edge Services Gateway and the DLR. This means we can have multiple Edge Services Gateways (up to 8) connected to a given DLR, and use ECMP (Equal Cost Multi-Pathing) to spread the North/South traffic load over those gateways while also gaining resilience. This is very much an Active/Active setup.
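To picture how ECMP spreads the load, the toy sketch below hashes each flow's 5-tuple to pick one of the active gateways. This is not NSX's actual hashing algorithm, just the general principle: a given flow always takes the same path, while the aggregate traffic is spread across all of the Edges.

```python
# Toy illustration of ECMP next-hop selection: hash a flow's 5-tuple and
# use the result to pick one of the (up to 8) Edge Services Gateways.
import hashlib

edges = ["esg-1", "esg-2", "esg-3", "esg-4"]  # hypothetical active edges

def pick_edge(src_ip, dst_ip, proto, src_port, dst_port):
    """Deterministically map one flow to one edge gateway."""
    key = f"{src_ip}-{dst_ip}-{proto}-{src_port}-{dst_port}".encode()
    digest = hashlib.md5(key).digest()
    return edges[int.from_bytes(digest[:4], "big") % len(edges)]

# Every packet of a given flow takes the same edge; different flows spread out.
print(pick_edge("172.16.10.11", "8.8.8.8", "tcp", 49152, 443))
print(pick_edge("172.16.10.12", "8.8.4.4", "udp", 53000, 53))
```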

The alternative is to deploy the Edge Services Gateway as an HA pair. This gives an Active/Passive setup whereby, if one Edge fails, the other takes over within a few seconds. It is used when the Active/Active option above is not possible because we are using the other Edge services that are available, such as load balancing, NAT and the Edge firewall.

Of course, we can have multiple layers of Edge Services Gateways if necessary, with HA pairs running NAT close to the logical switches and an ECMP layer aggregating the traffic outbound.

Conclusion

This ends our short series on NSX and Packet flows. Although the later posts have become much more generic and less about how the packets actually move, that to some extent is precisely the point of NSX. We gain the ability to think much more logically about our whole datacenter network, with almost no reliance on physical hardware. We can micro-segment traffic so that only the allowed VMs see it, regardless of where they are running. We can connect to existing networks and migrate slowly and seamlessly into NSX. We can even plug our internet transit directly into hosts and bypass physical firewall and routing devices.

NSX Packet Walks – VLAN Bridge

This is the fourth post in the NSX Packet Walks series. You probably want to start at the first post.

Up to now we have focused on the traffic from one VM to another VM somewhere within the NSX system, as well as how traffic moves between physical hosts. But what if your data centre isn’t 100% virtualised? Can you still use NSX? What are the constraints? This post looks at those questions. Continue reading “NSX Packet Walks – VLAN Bridge”

NSX Packet Walks – Replication Modes

This is the third post in the NSX Packet Walks series. You probably want to start at the first post or the second post.

Up to now we have focused on the traffic from one VM to another VM somewhere within the NSX system. I have skipped over the way traffic is distributed between hosts, and I haven’t discussed the ways that traffic can leave the NSX system. The next three posts will cover these topics.

This post will focus on the different methodologies available in NSX for dealing with BUM traffic, that is Broadcast, Unknown Unicast and Multicast traffic. This encompasses all traffic that isn’t addressed to a single known destination. In all three cases the traffic may need to reach anything up to every physical host in the cluster, and NSX has a few different ways of handling that. Continue reading “NSX Packet Walks – Replication Modes”

NSX Packet Walks Continued

This post is a continuation of the last post where I discussed a very simple virtual network, and a very simple VXLAN environment. If you haven’t already, you will want to read that post first. In this post I’m going to step it up a gear and introduce a virtual distributed router to the mix. This is a key part of NSX and the ability to create virtual networks on the fly without waiting for provisioning externally. It also makes for an interesting thought process when the “router” is distributed across all hosts. Continue reading “NSX Packet Walks Continued”

My VCPN610 Experience

This morning I took the VCPN610 Exam according to plan. What didn’t go according to plan was getting a score of 290 when I needed 300 to pass. So near, and yet so very very far.

This one was quite an expensive learning experience for me, so I need to make the most of it and learn what I can.

Lesson the First: VCP exams are hard. I’ve done Cisco CCNA exams and MS MCSA/MCSE exams. I’d put this exam well above the level of the MCSA, probably a bit above the CCNA, and probably alongside the MCSE. The exam goes quite deep, and is broader than I expected.

Lesson the Second: Time is quite tight. I’m used to getting out of exams well before the end; finishing in 30–50% of the allocated time isn’t unusual for me, even on the harder ones. I’m blessed that English is my first language, and that I’ve sat enough exams through school and uni to just get on with it. This exam took 75% of my time. I had the option to review questions (I wasn’t sure beforehand whether I would), but I didn’t have enough time to review them all properly; I’d barely have managed to re-read them. Which leads to:

Lesson the Third: Next time, note the questions you are unsure of! You can review one question at a time and jump about. Use it! Many questions you just know the answer to; many others could use some thought. Mark and return.

Finally: There are some areas I really need to look at in some more depth. Things that took more thinking about than they should for me:

  • QoS – where and how it gets applied so it works across the physical and virtual networks.
  • The actual GUI process of adding in a logical network.
  • Where do the controllers live?
  • Packet walks for simple (one logical network, two logical network and distributed router) networks.
  • Service Composer
  • Upgrade Paths from vCNS and old versions.

I don’t want to just pass next time. I want a good solid score. I am capable of this, now to get it done.

Cisco ACI: Is It Wham! or Sam Cooke?

An odd little title, I think you will agree, but consider this: Wham! had a hit with “Freedom” and Sam Cooke sang “Chain Gang,” and I think you can now see my thought process. This post is going to investigate not the technical capabilities of Cisco’s Application Centric Infrastructure (ACI), but rather what its market placement will mean to the software-defined networking (SDN) industry.

Firstly, Cisco’s Application Policy Infrastructure Controller (APIC), the brains behind its SDN product ACI, can only run on Cisco equipment. More importantly, ACI only works with Nexus 9000 series switches. These are:

  • Not cheap
  • Only for the biggest environments.

So, what about those who have invested in Nexus 7000s and below? Well, up until the Cisco Live US conference, you were effectively legacy. However, Cisco recently stated that the APIC will now be able to overlay application workloads to older (read Nexus-only) switches. The first switch that will be able to have these policies is the Nexus 1K; the only issue is that a Nexus 9K series switch is still needed. Not looking so inviting now, is it? What is important here is that it is an overlay.

read more

Software-Defined-Networking (SDN): What is it?

SDN is getting a lot of hype at the moment. Coupled with its kissing cousin, network virtualization, it is all the buzz. So what exactly is it? At its most basic level, SDN is an approach to networking in which the control plane is decoupled from hardware and given over to a software controller.
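As a purely illustrative toy model of that split, the sketch below has a central “controller” computing the forwarding policy and pushing simple match/action entries down to otherwise dumb “switches”. It is not tied to any real controller’s API.

```python
# Toy model of SDN's control/data-plane split: a central controller computes
# forwarding decisions and installs match/action rules into switches, which
# then forward purely on their local flow tables. Illustrative only.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}            # match (dst prefix) -> action (out port)

    def install_flow(self, dst_prefix, out_port):
        self.flow_table[dst_prefix] = out_port

    def forward(self, dst_ip):
        for prefix, port in self.flow_table.items():
            if dst_ip.startswith(prefix):
                return f"{self.name}: {dst_ip} -> port {port}"
        return f"{self.name}: {dst_ip} -> punt to controller"

class Controller:
    """Central control plane: knows the policy, programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, dst_prefix, out_port):
        for sw in self.switches:
            sw.install_flow(dst_prefix, out_port)

sw1, sw2 = Switch("sw1"), Switch("sw2")
Controller([sw1, sw2]).apply_policy("10.0.1.", out_port=3)
print(sw1.forward("10.0.1.25"))    # matches the installed flow
print(sw2.forward("192.168.1.5"))  # no flow -> would ask the controller
```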

To read the full post click here