VirtualBacon

A Look at the Brocade 6510 Fibre Channel Switch

Posted on September 29, 2013

Part of a recent refresh project included upgrading the Fibre Channel storage network. I was running Cisco MDS 9124 switches at 4 Gb and they were working just fine, but I knew that with newer blade servers and greater server consolidation ratios I would need more bandwidth per port. I was pleased with the Cisco switches, but since I was putting Brocade M5424 switches in the blade enclosures, I figured it would probably not be a bad idea to replace the rack-mount Fibre Channel switches with Brocade switches as well.

Initial research into Brocade switches paired with Cisco MDS configurations seemed to indicate that an ISL between the two would not be supported by either vendor, and running a single 8 Gb link between two FC switches hosting multiple active front-end array ports was not going to cut it as the load increased over time. I wanted a model that would meet our needs for the next 3 to 4 years. The Brocade 5100 or 5300 might have been sufficient, but the newer 6505 and 6510 models could provide up to 16 Gb capability, so I selected the 6510 for its greater port capacity and as insurance. I also did not like the idea of purchasing a model which might soon be end-of-sale.

Here are some specs for the Brocade 6510 Fibre Channel switch from the product page:

Fibre Channel ports: Switch mode (default): 24-, 36-, and 48-port configurations (12-port increments through Ports on Demand [PoD] licenses); E, F, M, D, and EX ports
Access Gateway default port mapping: 40 F_Ports, 8 N_Ports
Scalability: Full fabric architecture with a maximum of 239 switches
Certified maximum: 6,000 active nodes; 56 switches, 19 hops in Brocade Fabric OS® fabrics; 31 switches; larger fabrics certified as required
Performance: Auto-sensing of 2, 4, 8, and 16 Gbps port speeds; 10 Gbps optionally programmable to fixed port speed
ISL trunking: Frame-based trunking with up to eight 16 Gbps ports per ISL trunk, for up to 128 Gbps per trunk; exchange-based load balancing across ISLs with DPS is included in Brocade Fabric OS; there is no limit to the number of trunk groups that can be configured on the switch
Aggregate bandwidth: 768 Gbps end-to-end full duplex
Maximum fabric latency: 700 ns for locally switched ports; encryption/compression adds 5.5 µs per node; Forward Error Correction (FEC), enabled by default, adds 400 ns between E_Ports
Maximum frame size: 2112-byte payload
Frame buffers: 8192, dynamically allocated

 

My previous setup included two separate fabrics of 3 switches each; each edge switch was 1 hop away from the switch hosting the front-end ports of the array controllers and was connected to it with a 4-port ISL (16 Gb). This was a smaller setup which grew as the number of ports for server connections increased. With the move towards blade servers, however, the number of ports required would decrease, since we would be virtualizing some of the currently connected servers and decommissioning 21 VM hosts that were using a combined 42 FC switch ports. We also would not be losing as many ports to ISLs.
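
For anyone following the arithmetic, here is a quick sketch of that port budget in Python. The 4 Gb port speed, the 4-port ISLs, and the 42 ports used by the 21 decommissioned hosts are from above; the assumption that each old fabric carried two such ISLs (one per edge switch) is mine, for illustration.

```python
# Quick sketch of the old fabric's port arithmetic. The 4 Gb port speed,
# the 4-port ISLs, and the 21 hosts using a combined 42 switch ports come
# from the post; the two-ISLs-per-fabric layout is an assumption.

OLD_PORT_GBPS = 4                       # Cisco MDS 9124 port speed
isl_bandwidth = 4 * OLD_PORT_GBPS       # 4-port ISL -> 16 Gb per hop

# Ports consumed by ISLs per old fabric: two edge switches, each tied back
# with a 4-port ISL, counting both ends of every link.
isl_ports_per_fabric = 2 * 4 * 2

# Host-facing ports freed by retiring the 21 rack-mount hosts
# (2 HBA ports per host, spread across the two fabrics).
freed_ports = 21 * 2

print(f"Old ISL bandwidth per hop      : {isl_bandwidth} Gb")
print(f"Old ISL ports burned per fabric: {isl_ports_per_fabric}")
print(f"Ports freed by consolidation   : {freed_ports}")
```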

The new setup could be sustained on two fabrics, each running on 1 rack-mount switch (two fabrics for redundancy), and each switch would have a couple of two-port ISLs downstream to the Fibre Channel switches in the blade enclosures. I only needed a port bandwidth of 8 Gbps, so I purchased 8 Gbps SFP+s; they were cheaper than the 16 Gbps SFP+s, which would likely not be needed for a couple of years. I rather liked the idea of being able to increase throughput without having to replace the switch, while saving some money upfront by running at 8 Gbps, which my calculations showed was what we needed, with room to grow. The ability to upgrade by simply swapping out SFP+ modules as needed was insurance. The alternative would have meant purchasing switches with 8 Gbps ports and having to add or replace hardware if our needs grew enough to require additional capacity before the next refresh cycle. I wish I could have done the same with the FC switches in the blade enclosures, but unfortunately the 6500 blade switches were not yet available when we purchased. Mapping different internal HBA ports to separate external ports and using NPIV would, however, solve the problem if needed.
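
The sizing calculation itself is nothing fancy; a back-of-the-envelope version is below. The host count, per-host peak throughput, and number of active array front-end ports in the sketch are placeholder assumptions, not our actual measured figures.

```python
# Back-of-the-envelope check of 8 Gbps vs. 16 Gbps SFP+ capacity on one
# fabric's array-facing ports. Host count, per-host peak throughput, and
# active array front-end port count are placeholder assumptions;
# substitute your own figures from esxtop/array statistics.

def required_gbps(hosts: int, peak_mb_per_s: float) -> float:
    """Aggregate bandwidth needed in Gbps (MB/s x 8 bits / 1000)."""
    return hosts * peak_mb_per_s * 8 / 1000

blade_hosts   = 16   # assumed hosts behind this fabric
peak_mb_per_s = 50   # assumed sustained peak MB/s per host
array_ports   = 4    # assumed active array front-end ports per fabric

need = required_gbps(blade_hosts, peak_mb_per_s)
for sfp_gbps in (8, 16):
    capacity = array_ports * sfp_gbps
    print(f"{sfp_gbps:>2} Gbps SFP+: capacity {capacity} Gbps, "
          f"estimated need {need:.1f} Gbps, headroom {capacity - need:.1f} Gbps")
```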

Rack mounting the switches was standard. The switches are 1U in height and seem a little deeper than the Cisco MDS 9124. This is not an issue, just different, but it is something to consider if you have 9124s in a Telco rack close to a wall or in a shallow rack. I have worked in more than one environment where a Telco rack close to the wall was all too common. I am also not a fan of the rails the switch comes with. They are the same as those that came with the MDS, but after using newer click rails for HP, Dell, and IBM servers, one learns to dislike the older-style rails even more. I'd pay a few dollars more for better tool-free rails.

 

Initial impressions compared to Cisco MDS 9124:

Unlike the MDS 9124, the Brocade M5424 and 6510 switches cannot be used in regular (switch) mode and NPIV (Access Gateway) mode at the same time. Learning this after purchase was problematic, and entirely my fault for not having thought to check. Because our array controllers had been set up in what is called Legacy Mode many years ago, converting them to Virtual Ports would not happen without taking an outage; technically a non-service-affecting conversion was possible, but the array vendor would not do it due to the perceived risk. Not willing to take risks with primary storage either, I chose to upgrade our storage array in parallel rather than in place, which would let me migrate data and disks over to the new array over a period of time, using Storage vMotion for VMs and replication between the arrays. This meant the new controllers could be set up in Virtual Port mode using NPIV and plugged into the 6510s running in Access Gateway mode (Brocade's term for NPIV mode).
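
If Access Gateway mode is new to you, the idea is simply that the switch stops being a fabric switch and instead presents its host-facing F_Ports to the upstream fabric through a handful of N_Ports (the 6510 default maps 40 F_Ports onto 8 N_Ports, per the specs above). The toy sketch below only illustrates that kind of mapping; it is not Brocade's actual mapping logic, and the port numbering is an assumption.

```python
# Toy illustration of an Access Gateway port map: host-facing F_Ports are
# spread across fabric-facing N_Ports (the 6510 default maps 40 F_Ports
# onto 8 N_Ports). Round-robin for simplicity; this is NOT Brocade's actual
# mapping logic, and numbering the N_Ports after the F_Ports is an assumption.

from collections import defaultdict

def ag_port_map(f_ports: int, n_ports: int) -> dict:
    mapping = defaultdict(list)
    for f in range(f_ports):
        n = f_ports + (f % n_ports)   # N_Ports assumed to follow the F_Ports
        mapping[n].append(f)
    return dict(mapping)

for n_port, f_list in ag_port_map(40, 8).items():
    print(f"N_Port {n_port} carries F_Ports {f_list}")
```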

Setting up and managing the switch is pretty straightforward. Unlike the MDS, I did not have to install any software on a server to manage the switch; simply navigating to the switch IP in a browser launches a lean Java client that performs all the necessary tasks. This is nice considering that I don't go into the switches very often, typically only when adding new servers, which will be infrequent with the downstream blade servers plugged into chassis FC switches. Ongoing monitoring is done through SNMP polling and graphing in Cacti, while config changes are captured by SolarWinds NCM.
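
There is nothing exotic about the monitoring side either. A minimal stand-alone version of the kind of SNMP poll that feeds Cacti might look like the sketch below, assuming SNMPv2c, a placeholder IP and community string, and the generic IF-MIB 64-bit octet counters; your OIDs and pysnmp version may differ.

```python
# Minimal stand-alone SNMP poll of a switch port's traffic counters, similar
# in spirit to what Cacti graphs. The IP, community string, and interface
# index are placeholders, and this reads the generic IF-MIB 64-bit octet
# counters rather than anything Brocade-specific.
# Requires the classic pysnmp package: pip install pysnmp

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

SWITCH_IP = "192.0.2.10"   # placeholder management IP
COMMUNITY = "public"       # placeholder read-only community (SNMPv2c)
IF_INDEX = 1               # placeholder interface index for the FC port

oids = [
    ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", IF_INDEX)),
    ObjectType(ObjectIdentity("IF-MIB", "ifHCOutOctets", IF_INDEX)),
]

error_indication, error_status, _, var_binds = next(
    getCmd(SnmpEngine(), CommunityData(COMMUNITY, mpModel=1),
           UdpTransportTarget((SWITCH_IP, 161)), ContextData(), *oids)
)

if error_indication or error_status:
    print("SNMP poll failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```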

Overall I like the switches so far. After 8 months I have found that they are simple, reliable, and get the job done. The ability to run in switch mode and Access Gateway mode at the same time would make them better; this may not be an issue for you, but it was for me, at least initially. I also wish I could ISL between Brocade and Cisco switches, but neither vendor has any interest in supporting that, since it would make it easier for customers to migrate from one platform to the other.

Need to know more? Check out the Brocade 6510 product page and the Brocade 6510 FAQ.


 

Posted by Peter
