Cisco Nexus VDC Introduction
Having recently deployed a pair of Nexus 7009 switches with multiple VDCs, I thought I would put together a short post as a brief introduction. When our group began looking at using Nexus with VDCs, we had a bit of a challenge getting up to speed: although we understood the concept, we did not have a clear picture of how it differed from our Cisco Catalyst 6513s. There are good articles on the internet describing how vPCs work and the gotchas, but we did not find anything really simple that showed what a VDC looks like when you are logged onto the switch. NX-OS, the operating system that runs on the Nexus line of switches, has been out for several years now, but I have found - through many interviews for experienced network engineers - that few Cisco admins have actually used it (and when asked about its key selling features, most have little to no knowledge of what they are and how they work). Those who have used it have usually done so on the Nexus 5000 series, which does not support VDCs.
So where do we start? VDC stands for "Virtual Device Context" (not Virtual Data Center, as many virtualization aficionados assume). It is a feature which must be enabled on a supporting device - such as a Cisco Nexus 7000 series switch - and it allows you to represent the physical device as multiple logical devices. You might be familiar with the concept if you have used device contexts on Cisco ASA firewalls. In plain English, this means that you can make one physical switch look and behave like many switches. Why might this be useful? Well, if you find yourself deploying many physical switches for separate isolated networks, you may benefit from the ability to consolidate those many switches onto fewer devices. If you consider that many switch deployments are actually switch-pair deployments, this benefit is even greater, as you can consolidate several pairs of switches onto fewer pairs of switches.
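As a rough sketch of what this looks like in practice, here is how an additional VDC might be created from the default VDC (the VDC name is an example, and exact syntax can vary by NX-OS release and license):

```
switch# configure terminal
switch(config)# vdc Prod-VDC       ! creates a new VDC (name is hypothetical)
switch(config-vdc)# exit
switch(config)# exit
switch# show vdc                   ! lists all VDCs and their state
```

Once created, the new VDC has no usable ports until interfaces are allocated to it, which I will touch on later.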
Here is an image from the Cisco site showing what this logical separation looks like:
There is a limit to how many virtual device contexts you can have on a switch (determined by license type, code level, and supervisor card model), but generally speaking Sup1 cards can have up to 4 VDCs while Sup2E cards can have up to 8. A potential performance benefit of using larger VDC-capable switches is that instead of purchasing many lower-end switches, you can buy a pair of higher-end switches and modules and get increased performance overall. To compare with virtualization on the compute side of the house, consider that the performance of one big beefy switch will likely be greater than that of many lower-end models. The details may vary depending on the switch models you look at, but this generally holds true until you hit the next bottleneck. Note that we are talking in terms of network throughput, not distributed computing.
A couple of notes about VDC licensing:
- The license required to be able to use virtual device contexts is the Advanced License.
- If you want to use FCoE you need to dedicate one of the VDCs to it and license it with a separate Storage license associated with a specific line card.
- Nexus licensing can be overwhelming so be sure to discuss your plans with Cisco or your reseller (There are 9 different license levels!).
- More information can be found on the Cisco web site (near the bottom of the page)
I am going to keep this post brief and stick to simply showing you what the interface status looks like (since that is what I wanted to see - I'm a visual learner), but there is much, much more. The Cisco web site has lots of diagrams such as the one above. Each VDC is a separate logical switch with its own routing tables, enabled features, running processes, routing protocols, etc., down to the physical interfaces assigned to the VDC. You can even re-use the same IP addresses in different VDCs if you want to (though it can get confusing, so you might want to avoid this if possible). Routing traffic from one VDC to another even requires that you run a cable between two ports, one assigned to each VDC. Do note, however, that only one copy of the software runs on a VDC-capable Nexus switch; you cannot run different NX-OS versions in each VDC.
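Because each VDC keeps its own feature and process state, a feature can be enabled in one VDC and completely absent in another. A quick sketch of what that looks like (using the VDC name from the example later in this post):

```
Admin-VDC# switchto vdc VDC3
VDC3# configure terminal
VDC3(config)# feature ospf             ! OSPF process runs only in this VDC
VDC3(config)# feature interface-vlan   ! SVIs enabled here, not in the other VDCs
```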
From the Cisco web site:
The Cisco NX-OS software supports VDCs, which partition a single physical device into multiple logical devices that provide fault isolation, management isolation, address allocation isolation, service differentiation domains, and adaptive resource management.
To demonstrate the different VDCs here is an example of logging into one of our Nexus 7009s (some items modified for obvious reasons). For reference this unit is running NX-OS 6.1.2 with a sup-1 supervisor card.
The commands typed are in blue, the output is in green. In this scenario I am logging in through the console port via a serial switch on the network.
User Access Verification
login: [enter username]
Password: [enter password]
The standard Cisco copyright information is returned followed by the prompt:
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. [snip for brevity]
Admin-VDC#
Assuming that one logs in via the admin VDC management IP address or through the console port, you land at a prompt inside the admin VDC. If you look at the port status, you can see that no ports are assigned to this VDC other than the mgmt0 port. The mgmt0 port is special in that it can be used by the other VDCs, but each instance is virtually separate: each VDC's mgmt0 has a different MAC address (incremented by one per VDC) and a different IP address.
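To illustrate, each VDC's mgmt0 is addressed independently, and its default route lives in that VDC's management VRF. A sketch using documentation-range addresses (not our real ones):

```
VDC3# configure terminal
VDC3(config)# interface mgmt 0
VDC3(config-if)# ip address 192.0.2.13/24       ! unique address per VDC
VDC3(config-if)# exit
VDC3(config)# vrf context management
VDC3(config-vrf)# ip route 0.0.0.0/0 192.0.2.1  ! default route for this VDC's mgmt0
```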
Admin-VDC#sh int status
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
Now let's say that I want to administer another VDC where I actually have an active configuration. The switchto vdc [vdc name] command will allow me to change VDCs.
Admin-VDC#switchto vdc VDC3
Now if I issue the same command to see port status, you can see that the output is different (port list truncated). This time the output includes the ports that have been assigned to this VDC.
VDC3#sh int status
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
Eth3/25 Force10 MXL A2 por connected trunk full 10G DWDM-GBIC-5
Eth3/26 Force10 MXL A2 por connected trunk full 10G DWDM-GBIC-5
Eth3/27 Force10 MXL B2 por connected trunk full 10G DWDM-GBIC-5
Eth3/28 Force10 MXL B2 por connected trunk full 10G DWDM-GBIC-5
Eth3/29 Force10 MXL A2 por connected trunk full 10G DWDM-GBIC-5
Eth3/30 Force10 MXL A2 por connected trunk full 10G DWDM-GBIC-5
Eth3/31 Force10 MXL B2 por connected trunk full 10G DWDM-GBIC-5
Eth3/32 Force10 MXL B2 por connected trunk full 10G DWDM-GBIC-5
Eth3/33 -- sfpAbsent 1 auto auto --
Eth3/34 -- sfpAbsent 1 auto auto --
Eth3/35 -- sfpAbsent 1 auto auto --
Eth3/36 Trunk to Cisco3750 connected trunk full 1000 1000base-T
Eth3/37 Trunk to Cisco3750 connected trunk full 1000 1000base-SX
Eth3/38 Trunk to Cisco3750 connected trunk full 1000 1000base-SX
Eth3/39 Reserved Port sfpAbsent trunk auto auto --
Eth3/40 -- disabled 1 auto auto 1000base-T
Eth3/41 Half of vPC connected 1 full 10G SFP-H10GB-C
Eth3/42 Half of vPC connected 1 full 10G SFP-H10GB-C
Eth3/43 Physical intfc for connected trunk full 10G SFP-H10GB-C
Eth3/44 Physical intfc for connected trunk full 10G SFP-H10GB-C
Po301 -- connected 1 full 10G --
Po302 "vPC to Force10 MX connected trunk full 10G --
Po303 "vPC to Force10 MX connected trunk full 10G --
Po304 "vPC to Force10 MX connected trunk full 10G --
Po305 "vPC to Force10 MX connected trunk full 10G --
Po500 vPC Peer Link connected trunk full 10G --
Eth171/1/1 -- inactive 300 auto auto --
Eth171/1/2 -- disabled 1 auto auto --
Eth171/1/3 -- disabled 1 auto auto --
Eth171/1/4 -- disabled 1 auto auto --
Eth171/1/5 -- disabled 1 auto auto --
Eth171/1/6 -- disabled 1 auto auto --
Eth171/1/7 -- disabled 1 auto auto --
Eth171/1/8 -- disabled 1 auto auto --
Eth171/1/9 -- disabled 1 auto auto --
Eth171/1/10 -- disabled 1 auto auto --
Eth171/1/11 -- disabled 1 auto auto --
Eth171/1/12 -- disabled 1 auto auto --
Eth171/1/13 -- disabled 1 auto auto --
As you can see from the port descriptions, I have a number of other switches connected to this VDC over 10 Gb connections, mostly in port channels using LACP. You can also see the vPC peer link for the VDC. When planning port counts, remember that you need to configure a peer link for each VDC; you don't want to find out during the initial implementation that you don't have enough ports. I would recommend planning for a few extra ports, as there are scenarios where vPC and routing on the Nexus may require that you allocate additional ports - but I digress (and it's hard not to where vPCs and routing are concerned).
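For reference, a peer link like the Po500 shown above is configured separately in each VDC, roughly like this (the domain ID, member ports, and keepalive addresses here are examples, not our actual values):

```
VDC3(config)# feature vpc
VDC3(config)# vpc domain 50
VDC3(config-vpc-domain)# peer-keepalive destination 192.0.2.2 source 192.0.2.1
VDC3(config-vpc-domain)# exit
VDC3(config)# interface ethernet 3/43-44        ! two ports consumed per VDC peer link
VDC3(config-if-range)# switchport
VDC3(config-if-range)# switchport mode trunk
VDC3(config-if-range)# channel-group 500 mode active
VDC3(config-if-range)# exit
VDC3(config)# interface port-channel 500
VDC3(config-if)# vpc peer-link
```

This is why peer links eat into your port budget quickly: every VDC pair needs its own set of member ports, plus ports for the keepalive path.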
So what do I mean when I say that the output above shows the ports assigned to this VDC? To put it simply, when using VDCs the physical switch is virtually carved up into separate switches. Each of these switches, however, needs to have physical interfaces assigned to it in order to interface (see what I did there?) with the outside world. In the output above you can see that the list shows e3/25 to e3/44, but not ports e3/1-24 or e3/45-48, as they are assigned to another VDC. The break in the port numbers stems from the fact that ports must be allocated to VDCs in blocks that correspond to an ASIC boundary; we assigned 1 through 24 and 45 through 48 to another VDC. Keep this in mind when planning port counts: in the case of F2 modules, ports must be carved out in blocks of 4. Other modules may have different boundaries, which you will need to be aware of when planning capacity. If you need to assign one interface to a VDC for a specific purpose, you will end up having to assign four in my example.
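Interface allocation itself is done from the default/admin VDC, and on an F2 module the allocation has to cover the whole 4-port group. A sketch using the port block from this post (note that moving an interface to another VDC erases its existing configuration):

```
Admin-VDC# configure terminal
Admin-VDC(config)# vdc VDC3
Admin-VDC(config-vdc)# allocate interface ethernet 3/25-28   ! a full 4-port ASIC group
```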
Well, this was a very basic introduction to Nexus VDCs, just to show a little of what it looks like from an interface-status perspective for someone who has never seen it before. If you're looking at the Nexus line, I urge you to read up on it very carefully and have some discussions with your Cisco rep or trusted reseller. I cannot do the topic justice in a few paragraphs (even an entire book I purchased on the topic doesn't). It is easy to assume that NX-OS is very similar to IOS, and while much of the syntax is similar, there are a lot of important differences that can surface as you begin implementation. The sales pitch regarding similarities is really about making existing IOS users comfortable with trying something new. Pay particular attention to vPC functionality, as wrapping your head around it and getting a good understanding of how it works can take time. An excellent primer on how routing works on the Nexus 7000 when using vPC is available on Brad Hedlund's web site. And while there are several vPC best practice lists, a couple that I like to reference are the one at NetCraftsmen and one on the Cisco Support Community web site. Until next time.