
It is all a question of which path to take.

Planning storage is a simple thing: you go to your Storage Admins and say, "I need x amount of LUNs of this size, please, for my ESX servers", and they say, "No, we only do xGB-sized LUNs", or they breathe through their teeth like a motor mechanic or plumber and say, "Storage doesn't grow on trees you know, we don't have much left, are you sure you really need all that space?", and so on.

But I digress. 🙂

The majority of people know and understand that you can have a maximum of 256 LUNs per ESX host. Yes, that is correct, 256; at the 2TB-per-LUN VMFS limit, that means your cluster can have almost half a petabyte of storage attached to it. :O

However, the fact is that reaching that maximum will be quite difficult; I shudder to think of the reboot times due to HBA enumeration.

The limit that you will most likely reach is “Maximum Paths”; this limit is set at 1024 per server.

"Wow, that seems a large number," you say, "how will I ever reach that?" Unfortunately, the answer is: quite easily.

Here is a quick formula to work out your Max Path / LUN numbers:

Max Paths per server / (number of HBAs * number of ports per HBA * number of connections to the fabric from the SAN) = maximum LUNs per host.

So, a quick example: if you have hosts with two dual-port HBAs that connect to a SAN with four uplink connections (about average nowadays), your formula will read:

1024 / (2 * 2 * 4) = 64
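
To make that arithmetic concrete, here is a minimal sketch in Python; the function and constant names are mine, not from any VMware tool:

```python
# A minimal sketch of the path/LUN arithmetic above; names are illustrative.

MAX_PATHS_PER_SERVER = 1024  # vSphere 4.x per-server path maximum

def max_luns(hbas: int, ports_per_hba: int, san_uplinks: int) -> int:
    """Each LUN consumes (HBAs x ports per HBA x SAN fabric uplinks) paths,
    so the per-server path ceiling caps how many LUNs a host can attach."""
    paths_per_lun = hbas * ports_per_hba * san_uplinks
    return MAX_PATHS_PER_SERVER // paths_per_lun

print(max_luns(2, 2, 4))  # two dual-port HBAs, four SAN uplinks -> 64
```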

This means that the maximum number of LUNs that can be attached to a single ESX host is 64. That is not an insignificant number; however, and here is the crux of the issue, 64 LUNs can be very quickly eaten up if you are dealing with MSCS clusters that run across boxes. Remember, there is a requirement that your cluster shared storage be on RDMs.

So, for example, if your cluster has several high-IO resource groups, you would create an RDM for each mapped drive; you can see where I am going now.
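
Purely as an illustration of how fast the budget disappears (the counts below are invented for the example, not taken from the design in question):

```python
# Hypothetical tally: cross-box MSCS clusters eating into the 64-LUN budget.
vmfs_datastores = 10        # ordinary VMFS LUNs for general VM storage
mscs_clusters = 4           # cross-box MSCS clusters on this host
rdms_per_cluster = 1 + 5    # a quorum disk plus one RDM per mapped drive

luns_used = vmfs_datastores + mscs_clusters * rdms_per_cluster
print(f"{luns_used} of 64 LUNs consumed")  # 34 of 64 gone already
```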

Now couple this with a requirement for point-in-time backups of groups of servers. Whoooooo, what is this and where did this requirement come from? Well, obviously it came in via the back door, over the afternoon drinking session 😀

You know the sort of conversation: my boss was talking to your boss about the conversation he overheard the SAN guys having, about consistent pools of storage that can be snapshotted so the backups are kept in point-in-time synchronicity, and the next thing you know it has become a requirement (all post design sign-off, of course).

So out comes the fag paper, and the design starts. But wait, where is the Virtualisation TA? Oh, he is on leave, he will be back on Monday. OK, we need to get this sorted, but remember to run it by him on Monday (emmmm, guess what, yep, they never did).

So the "Technical" meeting starts, and before long there are another 20 RDMs that need to be created. This goes along fine until the Virtualisation team requests some new storage for their VMFS partitions, 2 of the 5 LUNs they have requested fail to attach, and you discover that the request would have taken you to 66 LUNs, two past the 64-LUN ceiling.

The quick and dirty solution is to pull 2 of the connections out of the back of each of the ESX servers, reducing the number of paths per LUN from 16 to 8 (and doubling the LUN ceiling from 64 to 128).
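
Reusing the earlier sketch, the effect of pulling those cables looks like this:

```python
# Half the ports cabled means half the paths per LUN, and twice the LUN ceiling.
print(max_luns(2, 2, 4))  # four ports cabled: 16 paths per LUN -> 64 LUNs
print(max_luns(2, 1, 4))  # two cables pulled:  8 paths per LUN -> 128 LUNs
```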

And then you finally start the proper “Technical Review” meeting.

Well, that was my Doh moment for yesterday. I wonder what today will bring :S


6 comments

1 ping


  1. This came up in a mini-defence I did with one of the VCDX guys – he was very much an advocate of single-initiator zoning, for this very reason: you don't actually need 64 paths per LUN.

    1. You can only have a maximum of 32 paths to a LUN per server 😉 what sort of VCDX was he? LOL

    • PiroNet on January 21, 2011 at 11:41 am

    Real life experience, I like it 🙂

    @Chris
    Configuration maximums for 4.1 (http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf)

    Number of paths to a LUN: 32
    Number of total paths on a server: 1024
    Number of HBAs of any type: 8
    HBA ports: 16

    @Tom
    I've seen customers work around this issue many times by creating a new cluster within their VMware Datacenter, thus 'resetting' the number of paths. Often this workaround comes with another one: a 'shared' datastore presented to a single host in every cluster, a kind of jump host for moving VMs around across all the Datastores.

    Cheers,
    Didier

  2. I do realise that storage people are very happy using LUNs, and to them, and to many vArchitects too, this is the only storage you could possibly ever want.

    I do still wonder why people do not consider NFS for storage: no LUNs, zoning, initiator groups, carving, locking, etc. You can still use iSCSI/FC if you must, for RDM-like connectivity from within a guest OS or using a VM RDM with maybe additional uplinks, but why would you bother having all your OS data on a LUN when it just creates so much more administration?

    NFS is so much simpler; your limiting factor for storage is never the size of the pipe, it is always your IOPS. Use 10GbE and your bandwidth is bigger than you will ever need.

    Please, storage and VM people, look at NFS and breathe a sigh of relief!

    P.S. I don’t work for any storage vendor so am only commenting on this as a user.

    • Peter on January 24, 2011 at 6:10 am

    How is 2*2*4/1024 = 64? When I went to school it would equal 0.015625.

    1. Yeah, good point 😀 That should be 1024/(2*2*4); I will amend the formula. Thanks for bringing that to my attention.

  1. […] This post was mentioned on Twitter by jonathanmedd and tom_howarth, PlanetVM Net. PlanetVM Net said: New Post on PlanetVM.NET http://tinyurl.com/6yqtzhf Doh […]

