This page was exported from phaq
[ http://phaq.phunsites.net ] Export date: Thu Apr 25 15:59:13 2024 / +0000 GMT
So you've got one of these nice, decent Cisco Unified Computing Systems (UCS) connected to a multi-terabyte storage system? One of the nasty things to get done is SAN zoning. Unfortunately, Cisco's UCS doesn't help much with getting this done automagically on the MDS SAN switches. This is, however, no big deal to automate using a little script magic.

Let's have a quick look at the topology. In a typical UCS environment, all chassis are connected redundantly to two fabric interconnect switches (let's call them fabric interconnect A and B). Each blade server usually gets two virtual FC host bus adapters: fc0 connected to fabric A, fc1 connected to fabric B. For the "A" side, we use VLAN 4001 on the UCS to carry vsan 101 originating from fc0. On the "B" side, VLAN 4002 is used to carry vsan 102 originating from fc1. While fabric interconnect A is connected to the MDS "A" switch, fabric interconnect B is connected to the MDS "B" switch respectively. Since there are two storage systems, each of which has two independent storage controllers, each of the MDS switches is cross-connected to every storage controller. This is illustrated in the drawing below:

Diving into deeper detail, let's check out the end-to-end connectivity from a single blade server to the storage systems. As far as communication in a SAN is concerned, we talk strictly point-to-point here. So for each host bus adapter (fc0 and fc1), we need to add four SAN zones, as there are four point-to-point connections per HBA. Let's look at the illustration:

Considering the first illustration, where fc0 is connected through fabric A and MDS "A" and fc1 is connected through fabric B and MDS "B", we can summarize the following SAN zones to be created:
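Using hypothetical names for the storage ports (two systems, storage1 and storage2, each with controllers a and b), the eight zones per blade server break down like this:

```
MDS "A", vsan 101:             MDS "B", vsan 102:
  fc0 <-> storage1-ctrl-a        fc1 <-> storage1-ctrl-a
  fc0 <-> storage1-ctrl-b        fc1 <-> storage1-ctrl-b
  fc0 <-> storage2-ctrl-a        fc1 <-> storage2-ctrl-a
  fc0 <-> storage2-ctrl-b        fc1 <-> storage2-ctrl-b
```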
In SAN terminology, a "zone" depicts one particular end-to-end connection from the initiator (the HBA) to the target (the storage controller). The end points are determined by their world-wide port names (WWPN), a "MAC address like" ID. Multiple zones in turn build up a so-called "zoneset", which binds all the particular zones to a given VSAN. In terms of the example above, on every MDS switch we need to add the corresponding FC HBA to the device-alias database, add four zones per HBA (remember, four end points on the storage controllers) and make them members of the VSAN's zoneset. Here's what this would look like on MDS "A" for fc0:
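A sketch of the MDS "A" configuration, using hypothetical names throughout (the blade's service profile "esx-01", the storage aliases from above, a zoneset called "vsan101-zoneset", and a made-up WWPN from the default UCS pool prefix) - substitute your own values:

```
MDS-A# configure terminal
MDS-A(config)# device-alias database
MDS-A(config-device-alias-db)# device-alias name esx-01-fc0 pwwn 20:00:00:25:b5:0a:00:0f
MDS-A(config-device-alias-db)# exit
MDS-A(config)# zone name esx-01-fc0_storage1-ctrl-a vsan 101
MDS-A(config-zone)# member device-alias esx-01-fc0
MDS-A(config-zone)# member device-alias storage1-ctrl-a
MDS-A(config-zone)# zone name esx-01-fc0_storage1-ctrl-b vsan 101
MDS-A(config-zone)# member device-alias esx-01-fc0
MDS-A(config-zone)# member device-alias storage1-ctrl-b
MDS-A(config-zone)# zone name esx-01-fc0_storage2-ctrl-a vsan 101
MDS-A(config-zone)# member device-alias esx-01-fc0
MDS-A(config-zone)# member device-alias storage2-ctrl-a
MDS-A(config-zone)# zone name esx-01-fc0_storage2-ctrl-b vsan 101
MDS-A(config-zone)# member device-alias esx-01-fc0
MDS-A(config-zone)# member device-alias storage2-ctrl-b
MDS-A(config-zone)# zoneset name vsan101-zoneset vsan 101
MDS-A(config-zoneset)# member esx-01-fc0_storage1-ctrl-a
MDS-A(config-zoneset)# member esx-01-fc0_storage1-ctrl-b
MDS-A(config-zoneset)# member esx-01-fc0_storage2-ctrl-a
MDS-A(config-zoneset)# member esx-01-fc0_storage2-ctrl-b
```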
On MDS "B" this would look like this:
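The same pattern applies, only with fc1, vsan 102 and a hypothetical "vsan102-zoneset" - abbreviated here, again with made-up names and WWPN:

```
MDS-B# configure terminal
MDS-B(config)# device-alias database
MDS-B(config-device-alias-db)# device-alias name esx-01-fc1 pwwn 20:00:00:25:b5:0b:00:0f
MDS-B(config-device-alias-db)# exit
MDS-B(config)# zone name esx-01-fc1_storage1-ctrl-a vsan 102
MDS-B(config-zone)# member device-alias esx-01-fc1
MDS-B(config-zone)# member device-alias storage1-ctrl-a
  ... three more zones for storage1-ctrl-b, storage2-ctrl-a, storage2-ctrl-b ...
MDS-B(config)# zoneset name vsan102-zoneset vsan 102
MDS-B(config-zoneset)# member esx-01-fc1_storage1-ctrl-a
  ... and the three remaining zone members ...
```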
After all, that's not too complicated to understand, is it? Well, at least it's fairly simple to configure SAN zoning on the MDS switches via the CLI once you get used to it. However, as there is a lot of fiddling with hosts, HBAs, WWPNs and other abstract stuff, there's a lot of typing and/or copy-pasting involved, which leaves plenty of room for mistakes. As such, I've written a little shell script which eases this process by parsing the UCS configuration to build up the SAN zoning configuration. This is a fairly straightforward process, since the only thing needed is the FC HBA WWPNs assigned within the UCS. To get these, log in to either of the two UCS fabric interconnects and run this command:
This command will filter for assigned world-wide port-names used for fibre-channel interfaces. You should come up with something like this:
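As an illustration only - fabricated sample data with an assumed column layout (vHBA, fabric, WWPN, service profile); your UCS firmware may print the columns differently:

```
fc0    A    20:00:00:25:b5:0a:00:0f    esx-01
fc1    B    20:00:00:25:b5:0b:00:0f    esx-01
```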
My example conforms to the given scenario, with one node carrying two FC HBAs. As for the output, the fourth column shows the name of the service profile assigned to the UCS blade. So, following up on the aforementioned configuration, this can be used to build descriptive names in the SAN zoning configuration.

To work with my script later on, please save your retrieved list to a temporary directory under the name "wwnlist". Please do not use my list from the example above - it won't be usable for you, as your configuration is very, very likely to be completely different from mine.

Now download my script and extract it to the same location where you saved the "wwnlist" file. Then open the "create_zones.sh" script in your favorite text editor and change the variables to reflect your environment. The important sections are marked red in the screenshot. For your convenience and for testing purposes, I've added some fake data. But, really, you WILL NEED to change these in each and every case to get a working configuration. After you have done so, run the script:
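While the script itself isn't reproduced here, a minimal sketch of the idea behind it - read wwnlist lines and emit the zoning commands - could look like this. Everything in it is an assumption: the real script's variables, target aliases and wwnlist column layout will differ.

```shell
#!/bin/sh
# Sketch of the zone-generation loop in the spirit of create_zones.sh.
# Assumed wwnlist columns: <vhba> <fabric> <wwpn> <service-profile>.

generate_zones_a() {
    VSAN_A=101                                  # fake sample value - adapt
    TARGETS_A="storage1-ctrl-a storage1-ctrl-b storage2-ctrl-a storage2-ctrl-b"

    # The real script reads the "wwnlist" file from its own directory;
    # fabricated sample lines are embedded here so the sketch runs standalone.
    while read -r vhba fabric wwpn profile; do
        [ "$vhba" = "fc0" ] || continue         # fabric "A" side only here
        echo "device-alias name ${profile}-${vhba} pwwn ${wwpn}"
        for target in $TARGETS_A; do
            echo "zone name ${profile}-${vhba}_${target} vsan ${VSAN_A}"
            echo "  member device-alias ${profile}-${vhba}"
            echo "  member device-alias ${target}"
        done
    done <<'EOF'
fc0 A 20:00:00:25:b5:0a:00:0f esx-01
fc1 B 20:00:00:25:b5:0b:00:0f esx-01
EOF
}

generate_zones_a
```

A second, near-identical loop would then emit the fc1/vsan 102 side for MDS "B".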
If you receive an error message stating that "wwnlist" was not found ("cat: wwnlist: No such file or directory"), make sure that you stored your wwnlist file in the same directory as the script itself. I didn't do much error checking on this ;-) Upon a successful run, the script will produce two blocks of output, one for each of the two MDS switches. You can simply copy-paste these into your MDS switches.
Please note that you may need to redistribute the changed zoneset manually after importing the configuration. The script does not take care of this! If I had to do it on my switches, it would look like this:
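Assuming the hypothetical zoneset names used earlier (vsan101-zoneset and vsan102-zoneset - yours will differ), re-activating the zonesets distributes the changes across each fabric:

```
MDS-A# configure terminal
MDS-A(config)# zoneset activate name vsan101-zoneset vsan 101

MDS-B# configure terminal
MDS-B(config)# zoneset activate name vsan102-zoneset vsan 102
```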