VFCache – Man, am I conflicted over this announcement. On the one hand, I applaud it. Here you have a market leader addressing a trend (host-based flash cache) with a lot of potential for customers. That’s great. That’s what you want to see out of your vendors. On the other hand, if I net out the actual product (and EMC people, I stand to be corrected, but this is all I could find in the way of technical detail), I come up with this:
If you have a FC-only SAN (any SAN; no unique EMC array value here), non-virtualized, high random read, small working set application where cache coherency/data integrity isn’t a concern, then a proprietary VFCache card (limit one per server) is for you!
Wow - there’s lowering the bar for market entry and then there’s just laying it on the ground so you can step over it. Even with all of the app hype in EMC’s presentation, I was hard-pressed to come up with a good use case.
I even got a good chuckle with the Thunder pre-announcement. In a rare vendor triple lutz, EMC announced VFCache in paragraph one and pretty much gutted it with the Thunder announcement in paragraph two. That had to be a new land speed record for obsolescence. If not obsolescence, it will be interesting to see how EMC stitches this all together in the coming year. But, it’s pretty clear that there wasn’t a lot of “there” there today.
Now – all that said – I still like the announcement. I’m not crazy about a low/no-value v1.0 product as a conversation starter but, there is something to be said for having that conversation. With all of the big brains running around inside NetApp, I sometimes wish we wouldn’t play things as close to the vest as we do. Almost a year ago to the day, NetApp previewed its project Mercury at FAST ’11. Chris Mellor picked up on it in The Register. Beyond a few mentions here and there, you didn’t see a lot of press on Mercury from NetApp; not a lot of chest thumping even as it turned into a hot topic for customers. I will say if you want to hear the latest-greatest on Mercury, you can ask your local sales team for an NDA meeting. We’ve been sharing the info under NDA and as I’m sure EMC, Fusion I/O and others can attest, it resonates very well.
Another interesting facet to the EMC announcement is the central role that caching is taking in its AST strategy. Let’s face it, FastCache was meant to remedy the glacial data movement issue of FAST (and, quite frankly, was a reaction to NetApp’s Flash Cache). However, once you’ve plugged into a caching strategy, it’s easy to see the logical next step: moving an intelligent cache closer to the point of attack. We talked about the inevitability of a caching strategy in the blog Why VST Makes Sense and the next logical steps in The Next Step in Virtual Storage Tiering. There's no question that intelligent host-based caching is a great next step and a logical extension of a VST strategy. (Just wondering: how long will it be before EMC adopts VST as a strategy?)
I actually think there is a balance that can be struck here. I do think there’s value in promoting your ideas on how to best solve customer problems. From that standpoint, I perfectly understand the EMC announcement. But, I also think there’s value in delivering a solution that has practical value to a customer. What’s practical about the VST strategy? Well, the great thing about caching is it just works. You don’t have to worry about whether caching works across protocols or whether it supports advanced application features. You wouldn’t even have to worry about which cache card, necessarily. Flash is hardware. Hardware commoditizes, and in the eyes of the customer this should be a good thing. The key to a VST strategy - just in case EMC is looking for some messaging as it ventures down the caching path - is flexibility. It's a consumer (vs. vendor) driven model. It would be a brave new world for EMC but, as we have said before, one that is deeply embedded in the NetApp DNA. For more detail on how Mercury plays a role in the VST strategy, give your NetApp sales team a call. Chances are, they'll bring it up for you.
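To ground the "caching just works" point: the sweet spot for a host-side read cache is exactly the workload described earlier, repeat random reads over a small hot working set, with writes passed straight through so the array stays authoritative and coherency never becomes a worry. Here's a toy write-through LRU read cache in Python that illustrates the behavior. All names are hypothetical; this is a minimal sketch of the general technique, not the actual VFCache or Mercury design.

```python
from collections import OrderedDict

class HostReadCache:
    """Minimal write-through read cache keyed by block address.

    Illustrative only: a real host flash cache also handles persistence,
    multi-host coherency, and working sets far larger than memory.
    """

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store      # stand-in for the array: addr -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()        # LRU order: least-recent first
        self.hits = self.misses = 0

    def read(self, addr):
        if addr in self.cache:
            self.hits += 1
            self.cache.move_to_end(addr)  # refresh LRU position
            return self.cache[addr]
        self.misses += 1
        data = self.backing[addr]         # miss: fetch from the array
        self.cache[addr] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used block
        return data

    def write(self, addr, data):
        self.backing[addr] = data         # write-through: array is authoritative
        if addr in self.cache:
            self.cache[addr] = data       # keep any cached copy coherent

# A hot working set that fits in cache gets served locally after one pass.
store = {i: f"block-{i}" for i in range(100)}
cache = HostReadCache(store, capacity_blocks=8)
for _ in range(3):
    for addr in (1, 2, 3, 4):             # 4-block hot set, read 3 times
        cache.read(addr)
# first pass misses all 4 blocks; the next two passes hit all 8 reads
```

The write-through choice is what makes this "just work" regardless of protocol: because the array always holds the current data, the cache can be dropped, invalidated, or moved at any time without risking integrity.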