VirtualBacon

Latest Experience Selecting Blade Servers

Posted on February 13, 2013

I have been looking at blade servers for several years now but was never really enthusiastic about the options available.

Sure, the concept was good: greater density, simpler cabling (if done right), fewer cables overall, easier provisioning and replacement of servers, etc.

Some other parts of the story were not so good: price (anywhere from 9 to 14 rack-server replacements to break even, depending on the configuration), network density (1 Gb network connections no longer seem that fast when you pack so many servers into a chassis, especially if your rack servers already have multiple 1-gigabit connections), power density (you run out of rack power long before you fill the rack), and heat density (you know, what the power turns into).

Financially, a big obstacle was that break-even point. Like many shops we were used to purchasing servers on a project basis, which meant buying one or two servers here and there, maybe 4 or 5 for a larger project. Purchasing an enclosure for the first two blades makes sense when you consider that you break even at about 9 servers and can fit 16 blades in a chassis (e.g. Dell and HP; 8 for Cisco and 14 for IBM, depending on the form factor). Selling management on the high up-front cost, however, was another story, especially when everyone was used to buying servers for specific projects. After all, you buy all the supporting infrastructure up front: the chassis (not usually a large expense), the power supplies, the Ethernet switches (or port aggregators, fabric extenders, and others), the storage switches (if you have Fibre Channel storage, for example), and perhaps even software licenses (such as special adapter features or per-port switch licenses). As we were looking at virtualizing close to 80 older physical servers (after already being 75% virtual), refreshing our virtual infrastructure on blade servers and consolidating most of those older machines as virtual servers on new hardware easily won out over replacing all of those physical rack-mount servers with new rack-mount servers.
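As a rough illustration of the break-even math, here is a small sketch. The prices are made-up placeholders chosen only so the numbers land near the roughly 9-server figure mentioned above; they are not actual vendor quotes.

```python
# Toy break-even comparison: blades (with up-front chassis infrastructure)
# versus buying equivalent rack servers. All prices are hypothetical.
RACK_SERVER_COST = 8_000      # per rack server, fully configured
BLADE_COST = 6_000            # per blade, excluding chassis infrastructure
CHASSIS_INFRA_COST = 18_000   # chassis, power supplies, switches, licenses
BLADES_PER_CHASSIS = 16       # e.g. Dell/HP half-height; 8 for Cisco, 14 for IBM

def total_cost(n_servers: int, blades: bool) -> int:
    """Total hardware cost for n servers, bought as blades or as rack servers."""
    if not blades:
        return n_servers * RACK_SERVER_COST
    chassis_needed = -(-n_servers // BLADES_PER_CHASSIS)  # ceiling division
    return chassis_needed * CHASSIS_INFRA_COST + n_servers * BLADE_COST

# First server count at which blades are no more expensive than rack servers.
break_even = next(n for n in range(1, 65)
                  if total_cost(n, blades=True) <= total_cost(n, blades=False))
print(f"Break-even at roughly {break_even} servers with these placeholder prices")
```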

For our project we looked at 3 vendors: Cisco, Dell, and HP. We had our reasons for picking these three: HP was our incumbent server vendor, Cisco was our incumbent network vendor, and Dell was our incumbent storage vendor (through their purchase of Compellent). We were not going to bring in any vendor we were not already working with. As far as the technology went, I was fairly comfortable with what all three vendors offered. I liked Cisco's network-centric approach to managing servers. I was comfortable with HP, as I have worked with HP servers for over 12 years (starting with the Compaq line), though I had some doubts about Virtual Connect (perhaps due to a lack of familiarity). And I was comfortable with the Dell servers after taking a fresh look at them following a poor experience over 6 years ago (and after working with a test unit for a few weeks recently, before putting anything into production).

Here is a quick list of my impressions of what I reviewed. The perceived likes and dislikes of each vendor's approach to blades can differ depending on the design factors for the business solution.

Cisco (UCS B-Series)

Liked:

  • Network-centric approach to server management.
  • Simplified cabling.
  • PowerShell scripting interface (a rough sketch of scripting against the UCS management API appears after this list).
  • Management of multiple chassis done through a single interface (on the Fabric Interconnects, or FIs for short). We spent some time in a lab demo and were impressed with how it worked, though we were limited in what we were allowed to do (because of the demo environment, not the tool).
  • The ability to set up a vPC with our planned new Nexus core.
  • Familiarity with Cisco as a vendor.
  • Pretty good track record with Cisco Support over the years, once past Tier 1 level.
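As an aside on the scripting point above: the official tooling is PowerShell (UCS PowerTool), but underneath it sits an XML API, and a rough sketch of driving that API from Python might look like the following. The hostname and credentials are placeholders, and the method and class names (aaaLogin, configResolveClass, computeBlade) come from my reading of Cisco's XML API documentation, so verify them before relying on this.

```python
# Minimal sketch of querying Cisco UCS Manager via its XML API.
# Hostname/credentials are placeholders; verify method names against Cisco docs.
import requests
import xml.etree.ElementTree as ET

UCSM_URL = "https://ucs-manager.example.com/nuova"   # hypothetical UCS Manager

def ucs_call(body: str) -> ET.Element:
    """POST an XML API request and return the parsed response root element."""
    resp = requests.post(UCSM_URL, data=body, verify=False, timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.text)

# Log in and grab the session cookie from the response.
login = ucs_call('<aaaLogin inName="admin" inPassword="secret" />')
cookie = login.get("outCookie")

# List the blades UCS Manager knows about across all chassis.
blades = ucs_call(f'<configResolveClass cookie="{cookie}" classId="computeBlade" '
                  'inHierarchical="false" />')
for blade in blades.iter("computeBlade"):
    print(blade.get("dn"), blade.get("model"), blade.get("operState"))

# Release the session when done.
ucs_call(f'<aaaLogout inCookie="{cookie}" />')
```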

Did not like:

  • Only 8 half-width blades per chassis. I know they say that the cost is not in the chassis, but once you add the cost of the chassis, the power supplies, the FEXes, and SMARTnet, those costs still add up. Per chassis. I was purchasing 14 blades up front with plans to buy another 10 later. So I would need 3 chassis, and a fourth one if I needed just one extra blade over what I planned for. I also understand the logic behind this decision as it was explained to me by Cisco (surrounding power), but I want more in-rack and in-chassis density.
  • All traffic between blades in the same chassis has to go up to the fabric interconnects and back down as there is no switching in the chassis. Many people would not take issue with this but it just doesn't seem right to me.
  • While there would be two fabric interconnects, a network misconfiguration could potentially take out the entire blade infrastructure across all of the chassis.
  • The need to purchase additional FI port licenses to connect more cables between the chassis and the fabric interconnects.
  • No in-chassis storage options (more on this later).
  • Cost when compared to competing quotes. After accounting for planned growth and future projects, the quotes showed a significant premium over a competing solution. Getting quotes updated was also time consuming (days), and every change caused all discounted component pricing to change as well. We could not just lock in prices for certain pieces of the solution, as each change changed everything.

Summary: This was a bummer because overall I liked Cisco's approach and would have liked to go with them but cost was too much of an issue.

Dell (PowerEdge Blade Server)

Liked:

  • After Dell purchased Compellent, our storage array vendor, we worried that they would ruin a great thing. They didn't, and that meant something.
  • Vendor relationship. We already worked with Dell Compellent in regard to our Compellent storage arrays. We also had a long-standing relationship with our rep from back when she was with a different vendor, and we worked pretty well together. Overall the team was very responsive to our needs during this long and arduous process, and made it easy to get quotes whose discounted prices did not change all over the place every time a single component was changed (ahem, other vendors).
  • Quality. Not that the other vendors do not have high-quality hardware, but taking another look at Dell servers after a poor experience with them over 7 years ago, these are totally different machines. These servers appear to be made of high-quality materials and contain innovations that the older lines, built with off-the-shelf parts, did not have.
  • Ability to put 24 memory modules (DIMMs) into a single blade, which meant being able to populate with room to grow. Not a unique feature, but not available from all vendors.
  • Redundant SD cards for the ESXi install. Others have a single slot; these have two in a RAID 1 mirror. No SPoF (single point of failure), and no hard drives needed. That should make a big difference in heat density when considering a chassis full of blades.
  • Flexible adapter options. There was a pretty wide variety of options for the mezzanine slots, not just 2 or 3.
  • Availability of quarter height blades. If you have not seen these they are an even smaller form factor than half height blades and up to 32 of them fit in a single chassis. Pretty cool. I'll probably get a sleeve for each chassis to fit 4 of these for certain use cases.
  • Availability of in-chassis storage. While we may not go this route, it is nice to have the option to put a hybrid SSD/SAS shared storage array in just two half-height slots, such as for VDI. This brings high-performing shared storage close to the hosts without any additional cabling. Being a Dell Compellent shop, there are also some interesting future possibilities regarding data migration of tiered storage between the different arrays.
  • Availability of reasonably priced 10/40 Gb non-blocking full featured Layer 3 switches within the chassis.
  • Cost. Dell was aggressive out of the gate, they gave us their best price up front, and we did not have to haggle like we felt we had to with the other vendors. With the other vendors it felt like I had to go back and forth ten times to shave off a few dollars each time.

Did not like:

  • Lack of familiarity with the servers. Not having worked with Dell servers recently, there was a sense of risk associated with moving to a server vendor we had little experience with. This discomfort was somewhat mitigated by working with a demo unit, and by the fact that we would be running ESXi on HCL-compliant hardware, which greatly reduced the risk. And we are still talking about a top 3 server vendor in the world, not a smaller company.
  • Lack of familiarity with the networking. We elected to put Force10 switches for Ethernet in the chassis to provide 10Gb/40Gb networking. The pricing was very competitive, we had heard good things about Force10 switches, and I have worked with a similar switch interface before (think Foundry Networks and Extreme Networks). But it was new and different. We are a Cisco shop and elected to stay with Cisco at the core, and we are comfortable with that. I was eager to try out these MXL switches and was pleased with what I saw. Only time will tell, just as it did with every other switch vendor I have worked with.
  • Lack of familiarity with the management tools. After working with IBM, Compaq, and HP servers for so long iDRAC seemed completely foreign. After working with it for a while on the demo unit though getting around was pretty easy. We are still learning the different ways of doing things compared to how we are used to doing things on our current server population, but it's getting a little easier every day.

Summary: Seemingly good quality products (time will tell), lots of vendor attention, easy to get demo units, and most competitive quotes (without having to pull teeth).

HP (ProLiant BladeSystem)

Liked:

  • Familiarity with HP servers.
  • Good track record overall with hardware quality.
  • HP ProLiant G8 manageability features are quite nice. (Although all the vendors seem to be improving, with on-chip management tools).
  • Ability to install storage within the chassis.
  • Incumbent server vendor so easy to go with this selection.

Did not like:

  • While we did not have too many problems with HP Support, when we did need them the quality of service was increasingly poor, and that often coincided with our worst issues.
  • Vendor relationship. We have been an HP server shop for years yet we have almost never met with our HP reps. We had to ask around to see if anyone knew who our current rep was as previous contacts were no longer there (but we found some of them at Cisco and Dell).
  • Virtual Connect. While I am sure that this works for many people, I am not yet sold on it. Having to select between Flex-10 and FlexFabric was confusing even after asking for clarification on multiple occasions, as was the limit of being able to carve Flex-10 and FlexFabric connections only 4 ways (what if I need 5? Then I would fall back to Network I/O Control in the vDS anyway; a toy comparison of the two approaches follows this list). Granted, I am still unclear about the details of how this works, despite asking on multiple occasions. After going through the quotes it also seemed that a lot of money was going towards software or feature licensing instead of towards hardware. Overall, VC comprised a large portion of the cost of the solution, larger than the networking portion of the other solutions we looked at.
  • Cost. The quoted cost of the HP solution was the highest of the 3 vendors in a year when green dollars were the bottom line. The upsell on reduced OPEX had less impact this year, and I have to wonder whether, being the incumbent, HP felt less of a need to compete, since they didn't.
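To make the carve-versus-shares point above concrete, here is a toy calculation. The traffic names and numbers are invented for illustration; this is just the arithmetic behind fixed partitioning versus shares-based division, not HP's or VMware's implementation.

```python
# Contrast a fixed 4-way carve of a 10 Gb port (the Flex-10/FlexFabric model
# as I understand it) with shares-based division under contention (the idea
# behind vSphere Network I/O Control). All numbers are made up.
LINK_GBPS = 10.0

# Fixed carving: at most four partitions, each with a hard allocation (sums to <= 10).
flex_nics = {"mgmt": 1.0, "vmotion": 3.0, "vm": 4.0, "storage": 2.0}

# Shares-based: any number of traffic types; bandwidth splits proportionally
# only when the link is actually contended.
shares = {"mgmt": 10, "vmotion": 30, "vm": 40, "storage": 20, "replication": 20}

def shares_allocation(shares: dict, link_gbps: float) -> dict:
    """Worst-case (fully contended) bandwidth each traffic type is guaranteed."""
    total = sum(shares.values())
    return {name: round(link_gbps * s / total, 2) for name, s in shares.items()}

print("Fixed 4-way carve:      ", flex_nics)
print("Shares under contention:", shares_allocation(shares, LINK_GBPS))
# With shares, a fifth traffic type simply gets its proportional slice;
# with a fixed 4-way carve there is nowhere to put it.
```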

Summary: It would have been easy to go with HP, but the confluence of a poor (almost non-existent) vendor relationship, a poor support record when it mattered most, and high cost on the initial quotes made us look elsewhere. It might have been possible to work this out over time, but our rep didn't start paying attention until the 13th hour, when it was too late. We were tired after months of research and negotiations, and never received the final best-price quotes that were promised to us. I still like HP servers, but they did not make it this time around. Bummer.

 

I want to point out that while I made the choice to go with Dell this time around, that does not mean that the competing vendors do not have good products. On the contrary I believe that all 3 vendors have very good products, and that it was the confluence of business, design, budget, and people factors that led me to make the decision that I did. At another point in time the decision might have been different. I know that many people, myself included, have strong preferences and loyalties about certain things but that brings up the question: Does that mean that you would not grant other options an objective evaluation (as much as is possible anyway)?

I will say that going into this process I did not think that I would come out where I did. Quite the opposite, actually. But over several months of comparing solutions, working with vendors, building relationships, achieving varying levels of comfort with different approaches and technologies, thinking about planned projects throughout the coming year or two, and working to best meet our budget goals, I ended up where I did. I did not pick the most competitive bid out of hand, however; it was selected based on the totality of factors. I even passed the data upstream to the Director level for an unbiased evaluation and had my decision reinforced. Yes, there was also a lot of math, comparison tables, and more, and maybe one day I will sanitize those tables and update this page. But I doubt it.

Being the type of person who likes to do it all I would like to implement and use all 3 vendors' solutions, but alas the reality of budget constraints and the need to standardize where it makes sense prevails. In the end we just need to make the best business and technical decisions that we can with the information that we have at the time, and within the constraints that we have. I hope that this information is of use to some of you. If you are looking for more information specifically related to blade servers I recommend taking a look at Blades Made Simple where Kevin Houston offers an in-depth look at different vendors' blade solutions.

 

Posted by Peter

Comments
  1. I too am going through the exact same process at the moment with the exact same vendors, and have found the exact same pros/cons for all vendors even after over a year. We're an HP shop at the moment and have tried to look for ways to continue down the same path, but at the moment Dell are quoting half price for 2 chassis (one for our DR as well).

    HP support is still the same; we've gone to an independent provider for this and it has improved significantly.

  2. We’ve been an HP Partner for more than 10 years and spend thousands of dollars every year in marketing campaigns to win new customers. To read about your experience with HP representation absolutely astounds me. I realize it happens, I’m just sorry we couldn’t have been involved, as HP not only has a great product, their pricing in my experience has been extremely aggressive, and I’ve not lost an opportunity to Dell on price in recent memory. I wish you well with the Dell, they have a product that is certainly competitive with any of the manufacturers.


