VirtualBacon

Dell Force10 MXL – First impressions

Posted on February 27, 2013

I recently began a virtual infrastructure refresh which included the purchase of a couple of Dell M1000e blade chassis. The solution had a number of networking options to choose from, including the new Force10 MXL 10/40 Gb Ethernet switches, which I chose. If you have not seen these switches before, they are essentially the Force10 S4810 switch in a form factor made to fit the Dell blade chassis. The announcement for the switch can be found on the Dell web site, but here are a few of the highlights:

  • 2 line rate fixed 40 GbE ports (up to 6 when using 2 FlexIO modules, which can be broken out into up to 24 10 GbE ports using 4x 10 GbE breakout cables)
  • Up to 32 line rate 10 GbE KR ports (server facing when using quarter-height blades)
  • Switch fabric capacity: 1.28 Tbps
  • Forwarding capacity: 960 Mpps
  • MAC addresses: 128K
  • IPv4 routes: 16K
  • Line rate L2 switching
  • Line rate L3 routing
  • Ability to stack up to 6 MXLs using 40 GbE QSFP+ ports (see the stacking sketch below)
  • Converged Networking using FIP snooping (it is not an FCF so you still need separate FCoE switches)
  • IEEE DCB compliant
  • Scalable and flexible with the availability of different FlexIO modules
  • 2 GB RAM
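On stacking: I have not stacked mine yet, but the FTOS documentation describes it as mapping a QSFP+ port group to a stack group on each unit and then reloading. Here is a minimal sketch, with the unit and stack-group numbers as placeholders rather than anything from my environment:

MXL1#configure
MXL1(conf)#stack-unit 0 stack-group 0
MXL1(conf)#exit
MXL1#write memory
MXL1#reload

Once the members come back up they elect a master and the stack is managed as a single switch.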
Dell Force10 MXL

According to the specs this is a very capable switch, and every feature it supports is fully licensed at purchase, so there is nothing extra to buy later. While some shops might be better served by a port aggregator or another switch option with a lot of gigabit ports, the 10/40 GbE switch fit the requirements of my project quite well. Moving from multiple 1 GbE links to 10 GbE, using 40 GbE QSFP+ to 4x 10 GbE SFP+ breakout cables for uplinks, combining that with VDCs (virtual device contexts) on the Nexus, and continuing to convert physical servers to virtual machines would give us 10 Gb speeds at the host, several times that in uplink capacity, and a large reduction in switch ports, letting us decommission a good number of switches. In short, we would achieve greater bandwidth across all of our host-to-network links while eliminating over 100 switch ports across several pairs of switches.

If you haven't seen the Dell M1000e blade chassis yet, there are 6 IO module slots in the back which can take up to 6 switches. Each half of the back of the chassis has 3 fabrics: A, B, and C. In my configuration 4 of those slots - fabrics A and B - are populated with Dell Force10 MXL switches. The switches in each fabric are redundant to each other, and each connects to a separate VDC on a Cisco Nexus 7009. Each blade server in the chassis has 2 dual-port 10 Gb mezzanine cards, one card connected to fabric A and one connected to fabric B. Fabric C is used for the Fibre Channel storage network.

A requirement of the network and virtual infrastructure refresh project was to add 10 Gb networking. We were already running Cisco switches at the core, so we were pretty sure we would be sticking with Cisco; our track record with them was very good and we had no compelling reason to change. Speaking of Cisco, an interesting option if we had been deploying Nexus 5Ks (and now 6Ks) - which we were not - might have been the Cisco Nexus B22DELL Blade Fabric Extender. That essentially places a Nexus remote line card in the chassis fabric slot, which can ease network administration. You would have 2:1 oversubscription, but that would be acceptable in many shops that would not saturate 80 Gbps worth of uplink (8x 10 GbE SFP+). We went with 7Ks, which were not supported at the time, and I am not sure whether that has changed yet. There are also some differences (vPC related) between connecting FEXes to Nexus 5Ks and 6Ks compared to 7Ks.

Back to my deployment: In doing research for the refresh options I came across the document Deploying the Dell Force 10 MXL on a Cisco Nexus Network which showed pretty much what I was thinking of doing. The document described 2 Nexus switches connected in a vPC, with downstream MLAG connections to the MXL switches.

The example described in the document is this:

Nexus-to-MXL example topology (diagram from the deployment guide)

As you can see, the diagram depicts two Cisco Nexus switches in a vPC, and the left side shows an MLAG from the MXL switch to the Nexus switches, which is what I was planning on configuring. The document helped speed things up as I was not familiar with FTOS, the Force10 operating system. That said, many of the FTOS commands are quite similar to Cisco IOS commands; the differences I have noticed mainly relate to protocol and VLAN configuration (ports are assigned to VLANs as opposed to VLANs being assigned to ports). The protocol configuration is actually quite close to the way it is done in NX-OS on the Nexus switches. Overall the differences were not that pronounced, and working in the OS is very similar to what I remember from working on Foundry Networks and Extreme Networks switches long ago.

I won't go into the switch-level configuration details, but the document I link to above has excellent examples, much of which you can copy and paste almost verbatim.
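To give a flavour of the uplink side, though, here is a rough idea of what the MXL end of the port-channel toward the Nexus vPC pair looks like in FTOS. The interface, port-channel number and prompts are approximated for illustration; the deployment guide has the authoritative configuration:

MXL1#configure
MXL1(conf)#interface fortyGigE 0/33
MXL1(conf-if-fo-0/33)#no shutdown
MXL1(conf-if-fo-0/33)#port-channel-protocol lacp
MXL1(conf-if-fo-0/33-lacp)#port-channel 10 mode active
MXL1(conf-if-fo-0/33-lacp)#exit
MXL1(conf-if-fo-0/33)#exit
MXL1(conf)#interface port-channel 10
MXL1(conf-if-po-10)#switchport
MXL1(conf-if-po-10)#no shutdown
MXL1(conf-if-po-10)#exit
MXL1(conf)#exit

On the Nexus side the corresponding port-channel is a vPC member, so both Nexus switches present themselves to the MXL as a single LACP partner.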

Here are a couple of examples from the deployment guide which illustrate the difference in configuring port VLAN membership. The port is first enabled for Layer 2 switching, and its VLAN membership is then assigned from the VLAN interface rather than on the port itself:

MXL1#configure
MXL1(conf)#interface tengigabitethernet 0/1
MXL1(conf-if-te-0/1)#switchport
MXL1(conf-if-te-0/1)#spanning-tree pvst edge-port
MXL1(conf-if-te-0/1)#exit
MXL1(conf)#exit
MXL1#

MXL2#configure
MXL2(conf)#interface vlan 11
MXL2(conf-if-vl-11)#tagged tengigabitethernet 0/1
MXL2(conf-if-vl-11)#no shutdown
MXL2(conf-if-vl-11)#exit
MXL2(conf)#exit
MXL2#
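For comparison, here is roughly how the same VLAN membership is expressed in NX-OS on the Cisco side, where the VLAN is assigned on the port rather than the port on the VLAN (the interface and VLAN numbers are just for illustration):

NEXUS1# configure terminal
NEXUS1(config)# interface ethernet 1/1
NEXUS1(config-if)# switchport
NEXUS1(config-if)# switchport mode trunk
NEXUS1(config-if)# switchport trunk allowed vlan 11
NEXUS1(config-if)# exit
NEXUS1(config)# exit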

So far I have found the MXL switches easy to use and configure. The console can be reached through an SSH session to the chassis and then typing connect switch-[slot number] (like connect switch-a1), or there is a USB Type A console port on the blade itself if you want to use a serial console switch (or connect directly, for that matter). There is also a USB port for a storage device. The construction is solid and working with them is pretty painless. I am using them with a demo Force10 S4810 as we have not yet completed our Nexus cut-over, and I will perform some network performance tests and post more information as I have it. Anecdotally, my initial vMotion tests have been favorable, but I have not collected any metrics. Time will tell whether these switches prove to be as stable and reliable as the Cisco switches, and how flexible the configuration remains as we continue using them. If my experience with other enterprise-class switches is any indication there should not be any major issues, but like I said, time will tell. I'm only a little less than 3 months into this gear.
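For reference, getting onto a switch console via the chassis ends up being just two steps; the CMC hostname and credentials below are placeholders for whatever your environment uses:

# SSH to the M1000e Chassis Management Controller
ssh root@m1000e-cmc

# then, at the CMC command line, attach to the switch in fabric A, slot 1
connect switch-a1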

40 Gb QSFP+ to 4x 10 Gb SFP+ breakout cable
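Putting one of the fixed 40 GbE ports into 4x 10 GbE mode for use with one of these cables is a per-port setting that takes effect after a reload. A minimal sketch going from the FTOS documentation, with the unit and port numbers as placeholders:

MXL1#configure
MXL1(conf)#stack-unit 0 port 33 portmode quad
MXL1(conf)#exit
MXL1#write memory
MXL1#reload

After the reload the port should show up as four tengigabitethernet interfaces that can be configured individually.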

Posted by Peter
