Boosting Storage Performance with PernixData

Posted on September 16, 2013

Following up on a previous post, I have long looked for ways to boost storage performance in my virtual environment without having to purchase two to three times more capacity than required just to meet performance needs. While my arrays generally work well, the issue is more one of predictability. Most of the time the arrays perform great and things zip along just fine, but occasionally read or write latency will briefly spike to noticeable levels due to a sudden change in one or more workloads (such as a poorly written database query), data backups, rebuilds after failed disks are replaced, and so on. While there are a number of options out there, the ones that appeal to me most are software-based and permit more linear scaling of compute, memory, and storage performance. I've had my eye on a few and plan to test several of them, but the one that really caught my attention is PernixData FVP (Flash Virtualization Platform). This product stands out for several reasons:

1. It doesn't use a virtual appliance.

Running an additional workload on a host, or even on each host, is not the end of the world, but having to guarantee its performance by applying reservations is problematic. I don't want the VM to ever be starved for resources, nor do I want it to potentially starve other VMs. Setting reservations also impacts HA capacity. Imagine what reserving the equivalent of two processors and 16 GB of RAM for a virtual appliance - potentially on each host in the cluster, depending on the solution - might do to your fail-over capability. VM/VA solutions also need to be clustered if they are to provide the availability expected of shared storage, and on top of that they must be patched, have VMware Tools upgraded on occasion, and be rebooted, with the potential to hang or crash, and more.

2. Host installation is as simple as installing a small VIB.

Installation consists of a single small VIB file, which can easily be turned into an update pushed out to all hosts via VMware Update Manager. While the host has to be put into maintenance mode for installation, no reboot is required. This is no more impactful than routine host patching, and VMs can simply be vMotioned off to other hosts. The command-line installation on the first host I tested took me only a couple of minutes on my first exposure to the software.
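For reference, the manual install on a single host can be sketched roughly as follows. The bundle path and file name here are hypothetical placeholders, not PernixData's actual file names; use the offline bundle the vendor supplies:

```shell
# Evacuate VMs (vMotion them off), then enter maintenance mode
esxcli system maintenanceMode set --enable true

# Install the host extension from an offline bundle (hypothetical path/name)
esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-host-extension.zip

# No reboot required; exit maintenance mode and bring VMs back
esxcli system maintenanceMode set --enable false
```

The same bundle can instead be attached to an Update Manager baseline so the cluster remediation workflow handles maintenance mode and rollout for you.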

3. Management is integrated with vCenter.

While this is not a unique feature, some other products require a separate management web site or application. The management console is an executable and does require a SQL Server database, though you can use an existing instance or even install SQL Server Express Edition. The application is used to configure the system and collect analytics, but once configured the system itself continues to run without it, much like hosts continue to run without their vCenter server.

4. This one is huge - it supports write-back caching.

Most storage acceleration products provide only read caching, because it is the easiest to do. Handling fail-over is not as complicated with read caching alone, since losing the cache simply means the data has to be re-read to warm a cache on a different host. In write-through mode, writes are not acknowledged back to the application until they have been committed to the back-end storage system; this is the way writes are normally done. In write-back mode, writes are acknowledged as soon as they reach flash, and PernixData has developed a clustered solution that can replicate those writes to one or more peer hosts to prevent data loss in the case of a host outage.
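The difference between the two modes can be sketched in a few lines of Python. This is a minimal illustration of the general technique under my own naming, not PernixData's implementation or API; the class and method names are hypothetical:

```python
class HostCache:
    """Toy model of a host-side flash cache with two write policies."""

    def __init__(self, backend, peers=(), mode="write-through"):
        self.cache = {}            # local flash, modeled as a dict
        self.dirty = set()         # blocks not yet committed to the array
        self.backend = backend     # shared dict standing in for the array
        self.peers = list(peers)   # replica caches on other hosts
        self.mode = mode

    def write(self, key, value):
        self.cache[key] = value
        if self.mode == "write-through":
            # Acknowledge only after the back-end array has the data.
            self.backend[key] = value
        else:
            # Write-back: acknowledge once local flash and the peer
            # replicas hold the data; the array is updated later.
            self.dirty.add(key)
            for peer in self.peers:
                peer.cache[key] = value
        return "ack"

    def read(self, key):
        # Serve from flash when possible; fall through to the array.
        if key not in self.cache:
            self.cache[key] = self.backend[key]
        return self.cache[key]

    def destage(self):
        # Background flush of dirty blocks to the back-end array.
        for key in list(self.dirty):
            self.backend[key] = self.cache[key]
            self.dirty.discard(key)
```

In write-back mode the application sees flash latency on every write, and the peer replica is what makes losing a host survivable: the surviving copy can still be destaged to the array.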

5. No configuration changes are required for the VMs.

No new virtual switch port groups are created, no new datastores are created, VMs do not have to map an RDM or have a client (aka agent) installed, they do not need to be vMotioned to newly presented storage, no upgrading the virtual hardware, no editing the vmx file, nothing. The solution is completely transparent to the virtual servers. You simply select the VMs or datastores to be accelerated on a new vCenter client tab and you're done.

6. The company was co-founded by Satyam Vaghani, sometimes referred to as "Mr. VMFS", a man who truly understands the file system - since he created it.

The mindset and philosophy of the simplicity and robustness that you know from the VMFS file system is evident in FVP. Setup is quick and simple, and the interface is clean and intuitive. There is little to configure and few knobs to turn as the solution works "out of the box".

What is PernixData FVP?


PernixData FVP is a software solution that virtualizes server-side flash storage via a scale-out architecture. What this means is that increasing performance is as simple as adding another FVP-enabled host to the cluster. Ideally the new host would have local flash storage of its own to be virtualized, but interestingly this also works without local flash as long as writes are being done in write-through mode.

Another key differentiator of this solution compared to others is that no changes are required in the virtual environment. The software runs as a kernel module at the hypervisor layer and is completely transparent to the VMs. If it weren't for the extra tab in the vCenter client you would not even know it was there (though you need the tab to configure it [note: maybe not if you use the included PowerCLI command set]). An enormous amount of work must have gone into ensuring that the software can do everything it does within the finite resources available to the hypervisor. This low profile is in stark contrast to solutions that require a virtual appliance with resource reservations that must be co-scheduled with other virtual machines.

In looking at the various solutions out there, this one certainly addresses many of the concerns I raised in my post asking "Is it time for a new data tier?". The solution currently supports any iSCSI, FC, and FCoE storage on the VMware HCL, with NFS support on the roadmap.

More information is available on the PernixData web site and in their Storage Field Day video.

Update Sept. 2013: I have since purchased this product for a production cluster of 8 hosts. While read caching alone showed some improvement, it is write-back caching that really improved the feel of guests and their applications. Watch for a follow-up post with screenshots and graphs from real-world production enterprise workloads.


Posted by Peter
