
Virtualutopia


New site:  http://vmmaverick.com

 

Please update all RSS subscriptions to http://vmmaverick.com/feed/

 

The operation to migrate all content is complete, and any new posts will be made to the new site.  Efforts will be made to push a feed to the communities site here so we can engage with one another either way.  Regardless, I am looking forward to contributing further to the virtualization community at large.

 

--Scott

VMworld is in full swing on Day 2, and the Hands on Lab program is well underway with people looking to walk home with knowledge and skills earned through fingers on the keyboard.  This year, VMware was kind enough to offer NetApp, Cisco and EMC an opportunity to participate in the program.  As you might imagine, we were all excited about the opportunity and dedicated some of our best resources to it.

 

That said, I'd like to share with you some of the current stats mid-way through the event.  Of course, there are two and a half days or so left, so these numbers are likely to change.  If you click on the image below, you'll see that we are doing well as the top partner lab and rank #12 based on all of the labs deployed as of 3:13pm on 8/30.

 

HoL.png

A very special thanks goes out to the staff: Jim Weingarten, Julian Cates, Peter Learmonth, Nick Howell, Brian Schofer and Lisa Crewe for all of the hard work to showcase a killer NetApp product demonstration set.  The labs turned out great, and people have walked away very excited about what they learned and how the pieces fit together.

 

VMworld 2011 is turning out to be an amazing show!!

Well, v0dgeball 2011 has come and gone, and I must say that all teams present put up an incredible show of skill and sportsmanship.  As you'll recall, I posted earlier that this event was a fundraiser with money going to the Wounded Warrior Project.  I can say that as a former member of the US Armed Forces, I do very much appreciate the generosity that was at the heart of this competition.  That said, I'd like to extend the spirit of giving by offering an opportunity for you to be part of our team.  If you would be interested in purchasing one of our team shirts, I'll charge you cost + $5.00.  The $5.00 will be donated to the same program as v0dgeball.  If you're interested, please complete the following form:

 

Here is what I need from you: please complete and send me an email by clicking here before Sept 7th, 2011: Email Scott

 

I'll confirm the final price and we can make payment arrangements via PayPal.  I'll drop a follow-up post on the blog once the money has been collected for a final contribution count.

 

Thanks for the continued support - Go NetApp v0dgeballers!

It is win or go home. No questions. You have to bring your best or you will have to wait for next year.  And of course the winner will remind you until then.

My love for this game was cemented last year when I watched the friendly competition between NetApp and EMC as the favored underdog storage vendor was down by one. There was very little time left.

Inbound play. Last second throw by the NetApp right wing. The release… A red blur over mid-court… direct hit as the stomach of the EMC player reverberates in a slow ripple effect from the shock… WIN!

If you read that sentence slowly you will get the feeling I felt as the ball bounced around on the floor. After the last EMC player was finally knocked out, the audience and the NetApp team jumped up off the bench and, with a vicious fist pump, screamed, “V0dgeball Baby!” The commentators even shared in the jubilation:

Cotton McKnight: Do you believe in unlikelihoods? NetApp’s shocking the v0dgeball world and upsetting EMC in the championship match!

Pepper Brooks: Unbelievable!

Cotton McKnight: Ladies and gentlemen, I have been to the Great Wall of China, I have seen the Pyramids of Egypt, I’ve even witnessed a grown man [redacted] a camel. But never in all my years as a sportscaster have I witnessed something as improbable, as impossible, as what I’ve witnessed here today!

This is one of my favorite moments in v0dgeball, and I expect nothing less from the 2011 v0dgeball tournament in Las Vegas, NV.

. . .

Stories are an integral part of sports. There is no competition that is not made more enjoyable and satisfying when the stories of the individuals and the program are known. If you’ve engaged in the story of an athlete or a team, you know the different forms that a story can take. The Star Athlete suffers from a triple-digit fever, but turns in the performance of his life. The juggernaut v0dgeball program loses their right-winger; can they still dominate? The longest v0dgeball Series drought in v0dgeball history – will it end this year?

Perhaps the most compelling stories in sports, though, are those that center around the desire, determination, and faith of a group of individuals that by all accounts has no business succeeding in their chosen arena. Think Rocky. Think Hoosiers. Think Miracle. These Cinderella stories capture the hearts of anyone who listens, sports fan or not, because of the transcendent nature of the story they tell. This is not just about athletics, this is about being human. It’s about that indelible part of the human soul that burns to be part of something larger than itself. And the longer the odds are in the beginning, the more compelling the journey to glory becomes.

Enter NetApp’s Dominant v0dgeball Team: The Crazy Nasty-Ass Honey Badgers.

hb_small.png

While NetApp may be known for its wildly successful storage and software products, there is perhaps no team in the IT industry more prepared to be the next great Cinderella sports story than the Honey Badgers. As for humble beginnings, this rag-tag collection of virtualization zealots has not played together before.  Can their passion for virtualization transcend the v0dgeball courts and bring a blazing victory and all of its bragging rights home to NetApp? Only time will tell, but the odds seem insurmountable, and building a strong v0dgeball pedigree takes a determined team.

In v0dgeball, as in all sports, success breeds success. Talented players go to winning programs. Without a successful past, history-changing players rarely fall into a team’s lap. Without wins, a program must be built from the ground up to be competitive, and that takes a special kind of leader. A program builder. A captain capable of creating a total far greater than the sum of its parts.

Enter Vaughn Stewart.

Vaughn came to NetApp in January of 2000, and in 2010 began building that program. In the last 2 years Vaughn has made an unbelievable run at recruiting the best players from NetApp throughout the continental United States.  This, according to Vaughn, is the first priority and key ingredient to creating a winning program. Vaughn's grueling recruiting efforts took him to the backwaters of Louisiana to capture the wild Cajun dodgeballer, Julian Cates; to the barren north of Alaska for the "right-wing yeti" we know as Joel Kaufman; to Brownsville, TX to grab the panhandler-turned-guard Jim Weingarten; to Detroit, MI to convince 80's hair-band lead Michael Slisinger to come out of retirement; to knocking on the door of a secluded SoCal monastery to cash in on the favor owed to him by Adam Fore; and finally to rescuing Seth Forgosh from a table collapsed by the 5.4 earthquake that rocked the east coast.

Not all was success, though. Early in the recruiting process, the once-famed Nick "that1guynick" Howell, that bad-boy from the south who dominated the dodgeball courts at South Charlotte Elementary (before Charlotte PD asked him to leave), suffered a broken nail and was red-shirted for the game.  He'll be there in body, but I worry that the break in that nail may have reached his spirit, and that, my friends, is the essence of a true player.

As captain of the Honey Badgers and NetApp’s v0dgeball program, Vaughn even took his sister team, the vBallers, a struggling program in its own right, and built them into a powerful “B” team. He earned captain-of-the-year honors in 2010 and took his team to the v0dgeball finals.

“Vaughn is a captain that has had success at every stop along his career path,” Arkansas Gazette Athletic Analyst Larry “Courtside” Williams has said. “His ability to develop and take multiple programs to unprecedented levels is no fluke. He is a program builder and his record speaks for itself. Vaughn values the importance of virtualization and applies it to his philosophy of teaching on the court, making him an excellent addition to NetApp’s v0dgeball program.”

The history of NetApp’s v0dgeball program is about to become just that: history. They’re finished taking abuse from the wicked stepsisters, nay-sayers and competitors, and they’re preparing to climb up out of the musty, dank cellar of the Las Vegas Sports Center. You can wait until they’ve become royalty to pay attention, but then you’ll miss entering into – and becoming part of – the story as it unfolds.

VMworld 2011 is upon us and the excitement around the event continues to build.  Like many vendors, NetApp has quite a lot going on, and I thought it would be very beneficial to offer you a set of CliffsNotes to make sure you don't miss anything.  With that, here's your TOC:

 

TOC: Everything NetApp

 

It's Not All Work

Put Your Hands On It Without Violating HR Policies

What If I Wanna Hear Your Take

What Can I See At The Booth

Thanks For All Of This, But I'm In It For The Prizes

 

It's Not All Work

 

NetApp, in conjunction with EMC, Cisco, VMware, VCE, and others, is participating in the 2nd annual v0dgeball event.  This year, the teams have chosen the Wounded Warrior Project as the recipient of the money we raise at the event.  The details about the event are here, and if you don't have anything to do on Sunday between 7pm and 10pm, please come out and support some friendly competition and an amazing charity.  For NetApp, we will be sponsoring 2 teams -- The NetApp HoneyBadgers and The NetApp vBallers:

 

Shirt_draft1_Front_blue.png Friea_Shirt_draft1_Front_blue.png

The HoneyBadgers include: Vaughn Stewart (Captain), Julian Cates, Scott Baker, Adam Fore, Joel Kaufman, Michael Slisinger, Jim Weingarten, Seth Forgosh, and Nick Howell (Red Shirt).

The vBallers include: Friea Berg (Captain), Nick DeRose, Lisa Crewe, Chris Gebhardt, Rob McDonald, Heather Sutherland, Nick Triantos and Chris Wells.

 

return to top

 

Put Your Hands On It Without Violating HR Policies

 

NetApp, EMC and Cisco were invited by VMware to provide a "pod" for the VMworld Hands On Lab.  The team worked some long nights to pull this off, and as a result, I'd encourage you to take time out of your VMworld schedule and use the HoL program to gain some immediate insight on the integrations, storage efficiencies and steady-state infrastructure analysis tools to improve your virtualization initiatives.  The lab highlights our Virtual Storage Console, the Insight Balance product, and our plug-ins, and it's all presented in a day-in-the-life scenario of an administrator who is asked to make decisions as workloads and requirements change.  Consider your schedule and stop by.  Besides, if you are a prize collector, attending the HoL will get you a special pin used by our prize patrol team.  Located in the Sands Exhibit Hall D, the HoL hours are: Sun 2-8pm, Mon 8am-3pm, Tue 10am-10pm, Wed 8am-6pm, and Thu 8am-4pm.

 

What If I Wanna Hear Your Take

 

NetApp has a series of speaking sessions throughout the event.  Some are traditional, some support other customers/partners, and some are panels.  Here are the details where key members of the NetApp team will be participating in speaking engagements:

 

Session ID | Day | Time | Title | NetApp Speaker
---------- | --- | ---- | ----- | --------------
SPO3987 | Mon | 10am - 11am | Sponsor Breakout: Seven Corners and NetApp Accelerate the Journey to the Cloud Together | Spencer Sells
VSP1956 | Mon | 1pm - 2pm | The ESXi Quiz Show | Vaughn Stewart
SEC2942 | Mon | 2pm - 3pm | Building Trusted Clouds: Proof Points, Not Promises | Ravi Kumar
SPO3988 | Tue | 10:30am - 11:30am | Sponsor Breakout: Oak Hills and NetApp Do the Math: Implementing a Practical VDI Solution Designed to Scale Beyond 50,000 Seats | Vaughn Stewart
SEC2942 | Tue | 3pm - 4pm | Building Trusted Clouds: Proof Points, Not Promises | Ravi Kumar
VSP1361 | Wed | 8am - 9am | PowerCLI 101 | Glenn Sizemore
SUP1009 | Wed | 9:30am - 10:30am | Super Session: Thomson Reuters and NetApp Build a Successful, Efficient Cloud | Val Bercovici
VSP3223 | Wed | 2pm - 3pm | Storage as a Service with vCloud Director | Nick Howell & Jack McLeod
BCO2863 | Thu | 11am - 12pm | How to Use Distance to Your Advantage to Create a Unified Data Protection Strategy | Scott Baker & Larry Touchett
EUC2692 | Thu | 11am - 12pm | Panel: Rethinking Storage for Virtual Desktops | Vaughn Stewart





 

return to top

 

What Can I See At The Booth

 

This year, NetApp has a front-row booth at VMworld, booth #401. Our booth will provide focus areas and live demonstrations related to Cloud Computing, Integrated Infrastructure Stacks with FlexPod, Virtual Desktops and Enterprise Business Applications.  The Cloud Computing booth will demonstrate our plugin to vCenter Orchestrator, tenant-level backups with our SnapCreator product, building SLA-based storage service catalogs with OnCommand, and steady-state analysis with the OnCommand Insight products.  For Virtual Desktops, we'll demonstrate storage efficiencies from NetApp, virtual storage tiering, space reclamation for thinly provisioned pools, and our partner Liquidware Labs will be there to highlight our partnership and their product portfolio.  In the Enterprise Biz Apps booth, we'll highlight disaster recovery with SRM5, app-consistent backups and multi-level restores with our new product SnapProtect, and how to isolate workload anomalies with Insight Balance (such as misaligned VMs).  Lastly, in the FlexPod booth, we'll demonstrate mixed workload deployment and management, secure multi-tenancy, steady-state analysis with our Insight products, and our partner Cloupia will be there to demonstrate an industry-leading unified infrastructure management tool and a truly amazing new release that they will unveil.

 

There is a lot more to see, and spending 1:1 time with some of NetApp's vExperts and professionals to talk about your virtualization initiatives, the considerations you are making in your designs, and how to broaden the adoption and proliferation of VMware technologies will certainly be a major benefit to you.

 

NetApp Booth.png

 

return to top

 

Thanks For All Of This, But I'm In It For The Prizes

 

This year we will have some really great opportunities to go home with more than some bag inserts, pens, hand sanitizers and the like.  Your first job - help drive NetApp brand awareness by stopping by the booth and picking up your official NetApp hat:

 

NetApp hat.png  Congrats.png

Wear the hat - literally everywhere, because our prize patrol could be anywhere; the exception will likely be your hotel room.  If you're spotted wearing it, they'll hand you a card like the one shown (I know, it was for CiscoLive -- it'll be branded appropriately) and you have a chance to win: a BlackBerry 32GB tablet, XBox Kinect, Sony Internet Blu-ray Player, Amazon Kindle, and different tiers of gift cards.  Now, I pointed out that we will be participating in the VMworld HoL and that we will be running live demonstrations at our booth.  That said, as you visit and complete each of these, you'll be given a pin to add to your hat.  The pins are holographic (except the one for the HoL, which flashes - hint: that will grab the attention of the prize patrol if you have it going) and look like the following:

 

Button_Sample.png 

 

Collect them all, and the prize patrol could tap you to be put in a raffle to win a 46" Sony Internet TV (and yes, we'll ship it to you to spare you the headache of getting it into the overhead compartment).

 

Folks -- we are going to have an absolute blast at the event and we want you to be a part of it.  Looking forward to seeing you there!

Finding Waldo...

Posted by scottsb Aug 24, 2011

Let me apologize for my long social media silence (twitter and blog).  Since my last post, there have been changes internally in both organizational structure and business focus that have consumed my time.  However, these shifts have bloggable value and I thought I would share with you where our team is headed.

 

Shift in Organizational Structure

 

At the beginning of August, I was given the opportunity to take a new role in the NetApp Virtualization BU as Sr. Manager for the Virtualization Technical Marketing Engineers.  I am very fortunate to work with a collection of intelligent and capable professionals who are all as committed to virtualization as I am, and we are very excited to align our efforts under the following core tenets:

 

  • 1. Be the Technical Team of Choice
  • 2. Out-Innovate our Competition
  • 3. Capture Virtualization Mindshare
  • 4. Communicate, Communicate, Communicate

 

It certainly sounds like a tall order, but the team that I have the pleasure to work with is as deeply committed to these tenets as I am.

 

Being the technical team of choice applies to all areas of cloud computing and virtualization.  Our activities and collateral will be focused on driving broader virtualization adoption, dragging NetApp storage along to support the infrastructure, of course.  The goal is to establish subject matter experts in core solutions for virtualization, creating a "go-to" person for our NetApp internal customers, partners and our customer base.

 

Out-innovating our competition is self-defining.  We will push the envelope, passing up opportunities to merely bring parity in our features and instead passing our competition in our activities and deliverables.  It is a bit of a friendly competition, as we all want the same thing in the end - to make your virtualization experience and programs as successful as possible.  Where possible, we will work with our competitors to participate in virtualization feature development, and where possible, we will use features specific to NetApp to continue to drive the value of running virtualization workloads on top of our portfolio.

 

Capturing virtualization mindshare builds on the previous two and weaves in #4 - we will communicate more than ever - both socially and in normal collateral development.

 

All of this culminates in the tactical business focus shift I mentioned.

 

Shift in Business Focus -- VMworld 2011

 

Not a permanent shift, mind you, just a tactical position change to bring our resources and activities to bear on VMworld 2011.  The team has been very heads-down, focused on the delivery of assets for VMworld and specifically the NetApp Corporate Booth.  This year, I am happy to say that we have a lot going on.  My next post will be my CliffsNotes to the NetApp presence at VMworld.


The Care and Feeding of Clouds

Posted by baker Jul 6, 2011

I’ve had the opportunity to spend most of June engaging with customers on topics related to VMware, NetApp, and Cloud Computing.  While the experience was enlightening, it did consume much of the time I would normally take to blog.  As it is, there always seemed to be a common set of discussion points that I thought would be great to share with a larger audience, so prepare the blog cannon and fire on my mark...

 

Taming the Cloud Monster

 

VT1.png

I'll spare you the traditional opening around defining clouds. Suffice it to say, every customer that I engaged with maintained a different definition of cloud with respect to server virtualization.  I did not expect anything different, and this continues to prove to me that one definition of cloud will not exist, ever.  I think, instead, a common understanding or reference will surface that we can point back to.

 

Those that I spoke to could, at the very least, agree that they were on a journey as it pertains to their server virtualization projects. Many went so far as to say the cloud monster was just another unbounded manifestation from the offices that begin with "C", and that with products like vCloud Director from VMware, clouds are stood up simply by installing the vCD software, making some vDCs, and connecting it to the finance department, and you have an "as-a-service" IT shop.

 

VT2.png

To which I respond with: Have you seen the new cloud-enabled Swiss Army Knife? No? What about the new cloud-in-a-box product? No? Well, it is likely that you won't, not for some time.  Now, some would have you believe that it does exist, or that it is on the immediate horizon, but the truth is, it’s not.  How can cloud exist today or be just around the corner—I mean, what is it?  And therein lies the rub: you see, “cloud” is really like porn: not everyone agrees how it’s defined, but everyone recognizes it when they see it.

 

Let me not overlook the fact that there are companies that are delivering cloud services (Amazon, Google, RackSpace, etc.). And arguably they’re successful.  You only consume and pay for the resources you ask for, derived from a management and orchestration tool, based on what appears to be a flexible infrastructure. Isn’t that a cloud?  Well, maybe, but I don’t know the specifics of each layer—and I don’t need to.  However, to truly be a cloud (in my definition), the orientation of service delivery must occur and be measured on three (3) distinct levels: the infrastructure, management/orchestration and the financials.  External cloud consumption is easy—if the price is right, you burst for the length of time you need those resources.  Internal cloud adoption is harder because success hinges on the three orientations I briefly mentioned, and requires incredibly strong people skills.  Why emphasize the latter?  Successful cloud adoption and implementation is an unending series of negotiations to affect a positive change in perception.

 

VT3.png

Naturally everyone wants a single infrastructure, available today, that provides you with everything you need to deploy a cloud—who wouldn’t want that? An infrastructure that internally contains all the components and interconnects to enable cloud/virtualization so you can host application-specific workloads isolated from legacy and siloed equipment in the datacenter.  From the moment it lands on your loading dock, if the only effort you had to put forth was rolling it to the datacenter, then you would truly have the “Cloud Swiss Army Knife”.  Your CAPEX and OPEX would likely improve in segments of time that are near-impossible to measure, not to mention the positive effects on your user community – ahhh, Utopia, or… Virtualutopia. Like that plug, didn’t ya?  So back to the question—does it exist?

 

No…well, kind of…well, uhm, maybe it does...right?


The question I just asked is the root of the problem—“is there a single infrastructure?”  Within IT, we refer to objects as tangible items that can be manipulated and referenced; that can be provisioned and managed.  The vernacular association between the “cloud infrastructure” and those tangible objects that make it up is crippled.  You see, defining “infrastructure” depends on where you are in the stack.  To the compute layer, the network and storage are the infrastructure. To virtualization, the compute, network and storage are the infrastructure. To the application, the infrastructure is the virtual machine it runs in, on the hypervisor that hosts it, based on compute, network, and storage components that support it. This is why cloud adoption becomes a series of negotiations based on the capabilities of each underlying component (physical and virtual) in the infrastructure presented from the point of view of who/what consumes it.  You cannot sell the value of cloud, nor accelerate its adoption, based solely on the infrastructure that will provide the resources for it.

 

The negotiations that I am talking about are not the traditional types that occur between a vendor and a purchaser. Instead, these must occur between you and your peers, you and the application owners, you and your business leaders, and even you and the person that looks back at you in the mirror.

 

Here’s a quick analogy:  When someone looks to purchase a car, what most look for are the characteristics and amenities they need from the car—they accept the fact that the underlying components will enable the car to travel from point A to point B.  So how does this tie into the conversation?  In this analogy, auto sales would not be successful if the salesperson started the engagement by trying to sell you a specific motor, a drivetrain, electronics, and then a series of modification parts that increase power and speed—no, they sell you a car based on your current needs and incentivize you with how the car will meet your future needs.  And ultimately you value the car based on its ability to do just that.  Now, I’m not suggesting that we sell cloud as one sells cars, but we have to overcome the problem we have in our industry: we continually focus our conversations and engagements on the individual components, and that makes it nearly impossible to make cloud a reality.

 

The hypervisor capabilities are not what are being called into question here.  Nor are those companies focused on delivering integrated infrastructures like NetApp's FlexPod, HP's Matrix, VCE's vBlock, or the like.  And not those “<NAME>-as-a-Service” providers like Amazon, RackSpace, GoGrid, etc.  I'm talking about more than that—what I am really talking about is perception.  In this series, I will do my best to arm you with the mental arsenal necessary to capture mindshare and affect the perceptions of cloud computing in those around you.  Call it what you will, the goal in this 3-part series (based on my definition in bold above) is to enable you to get to a 100% virtualized data center -- regardless of whether or not you want to turn it into a cloud.

 

Ready... Fire!

vt4.jpg

 

In a previous post, I listed the sessions that NetApp submitted as part of the abstract call for breakout sessions at VMworld 2011. In that post, I shared with you what we submitted and asked you to consider going to the public voting page to help support our presence and give us the opportunity to present some ground-breaking content that would no doubt set your hair on fire.

 

To those of you who did just that, you have my thanks. To those of you who did not, well... Regardless, I'd appreciate the chance to talk to you at our booth or at the Hands On Labs.  If you haven't heard, though, please let me congratulate Vaughn Stewart and Jack McLeod, whose sessions were accepted and will be hosted at both VMworld 2011 US and EMEA. As you start planning your learning map for VMworld 2011, please consider attending both sessions. Here’s the quick synopsis of what you’ll learn:

 

Session ID: BCO2863

 

Session Title: How to use distance to your advantage to create a unified protection strategy 

 

In this session, Vaughn Stewart will introduce you to unified business continuance and how you can come to grips (technically, not emotionally…well, maybe both) with the fact that geography no longer drives the boundaries of the datacenter. Vaughn will introduce the concept of unifying backup and DR and how it can be implemented within datacenter deployments that are local, metro and global. The focus is on the cloud, and Vaughn is set to discuss a number of business continuance strategies from partners of both VMware and NetApp where the emphasis extends beyond datacenter walls and into cloud-based infrastructures in an automated and highly scalable fashion. And what unified business continuance session would be complete without discussing how you can extend the datacenter walls by 200km and still retain that same level of accessibility, high availability and protection across a stretched volume?

 

Why should you attend? Vaughn’s intent is to demonstrate to you how you can get both backups and disaster recovery from one single process. He’ll also show you how you can span your datacenter across multiple locations to create a metro-area datacenter cluster. Finally, this is all tied together based on VMware’s vCenter SRM to create a global strategy to deliver non-stop availability and business continuance.

 

At this session, Vaughn will focus on NetApp’s SnapProtect and MetroCluster technologies as well as VMware’s vCenter Site Recovery Manager.

 

Session ID: VSP3223

 

Session Title: Storage as a Service (StaaS) with vCloud Director

 

In this session Jack McLeod will provide a brief introduction to VMware vCloud Director (vCD) with a more detailed discussion provided on the use of vCD as a consumer of StaaS. Jack says that he intends to provide deeper discussions and demonstrations on how StaaS is architected, implemented, and delivered to vCD resulting in a storage container that holds storage-specific rules regarding provisioning and protection bound to a specific service level. This storage service catalog is akin to a service or host profile for storage and can be analyzed and monitored. In addition to these discussions, Jack intends to demonstrate NetApp’s unique plug-ins in this space, to include one for vOrchestrator used to provision storage as part of a vCloud Director workflow.

 

Why should you attend? Really for three reasons – if you are unsure of what StaaS is and its relationship to cloud computing, this will give you that introduction. If you know what StaaS is, but want to learn solid design strategies based on NetApp’s features, you’ll get that here. Finally, if you need both, but in the context of how this adds value to your resource consumers and how these constructs can be easily implemented and managed, the plug-in demonstration will give you that first-hand view of these technologies and constructs in action.

 

At this session, Jack will talk at length about NetApp’s OnCommand Suite (specifically Provisioning / Protection Manager), the Virtual Storage Console, our new Orchestrator plug-ins, and the value of unified management in these deployment initiatives.

**UPDATE: Additional VMWorld sessions have been added to the list.**

 

Now, when is the last time you read a blog post and walked away with an action item?  Well, let me give you one now--VMworld 2011 session abstracts have completed the 1st round of review and have been posted for public voting, and I’d like to ask you to vote for your favorites. And, if I can be so bold as to drop a selfless plug, vote for NetApp’s submissions.

 

Making this happen requires a lot of work above and beyond that of the people submitting the abstracts.  To that end, I would like to personally thank our Alliance manager Jim Weingarten for spending tireless hours building a tiger team, editing content and coordinating submissions--without him, this process would not have been half as successful.

 

Okay, not so selfless a plug—if you are going, you very well should have a say in the content that is going to be presented. Experts and technical leaders from all over the virtualization industry will be there to present topics in their space.  Sure, Vegas can be part “boondoggle”, but it is also an opportunity to see the advancements that are occurring and the future of virtualization.  Remember my credo: “Virtualization changes everything, and by your actions, you are changing everything about virtualization” - so register, vote, attend, learn, and affect change.

 

So How Do I Participate

 

To be able to vote, and to register for VMworld for that matter, you have to have a VMworld account.  You’ll use this account later to build up your session agenda before arriving at VMworld (and do this, please - the sessions fill up fast, and standing in the queue hoping for a seat can be a real pain).  Once you have your account, you’ll be able to vote by going here.  There is a lot to choose from, but the search feature (shown below) can speed you along.

 

VT5.png

NetApp’s Submissions (individual and joint)


NetApp has submitted the following sessions, which are available for public voting, and I would appreciate it if you could log on and vote for them.  We’d like to see you there, and we have a great deal of advancements in this space that will rock the foundation of how you think about virtualization.

 

Session ID | Title
---------- | -----
1361 | PowerCLI 101
1427 | Battle of the Storage Experts
1623 | Storage Superheavy Weight Smackdown 2011
1948 | Architecting a 50,000 Seat Virtual Desktop Infrastructure with VMware View
2031 | Understanding How Storage Design Has a Big Impact on Your VDI
2209 | Understanding How Storage Design Has a Big Impact on Your VDI (EU)
2333 | VDI - Answers to the Questions You Wish You Asked
2422 | Reservation and Scheduling for Real-World Cloud Environments
2424 | Affordable, Rapid DR Using Net2vault Cloud, NetApp and VMware SRM (NOT a NetApp employee, but NetApp related)
2481 | A Multi-Vendor Panel: "What Does Storage of the Future Look Like?"
2827 | Case Study: SymDemo: How Symantec Learned to Love the Cloud and Saved $39 Million In the Process
2829 | Extending the VMware Infrastructure: The Future of NetApp Integrations
2838 | Delivering Multi-tiered “Storage as a Service” for Private, Hybrid, and Public Clouds
2842 | Becoming a Hybrid Cloud Service Provider: How PeakColo Used VMware vCloud Director and vCloud Connector to Increase Revenue and Improve Margins
2851 | vMotion Anywhere - Optimizing Application Availability with VMware vMotion
2853 | Building Disaster-Proof VDI Environments: Restoring Customer Service and Employee Productivity When It Matters Most
2855 | NetApp Introduces “Cloud-Mode” Storage
2857 | Redefining Datacenter Boundaries: Spanning a Datacenter Across a Metropolis
2859 | VM Misalignment: Why It Really Matters, What It Is Costing You Right Now, and How To Fix It
2863 | How to Use Distance to Your Advantage to Create a Unified Data Protection Strategy
2867 | VDI User Experience: The Secret Portal of Productivity
2870 | Where Did All My Replicas Go?! Managing End-to-End Replication for DR and Backup in One Framework
2877 | Successfully Configuring Site Recovery Manager 5
2880 | Optimizing Virtual Desktop Environments: How Storage Efficiency Will Improve Performance and Save Money
2888 | Optimizing Storage for Zimbra
2890 | Knowing What Is Happening: Managing Dynamic Services in the Private Cloud
2978 | Deploying Robust, Cost-Effective Storage Infrastructure in Distributed Environments Using Storage Virtual Appliances
2981 | Windows 7 Migration Strategies for Virtualized End-User Computing Infrastructures
2985 | Case Study: Delivering Phenomenal Clinical Patient Care With VMware View. How One CIO Put IT at the Center of Healthcare Innovation
2997 | Virtual Disk Alignment: The Small Detail That Makes a Huge Difference in VMware vSphere Storage Performance
3060 | One Year In: Results from the Siemens AG Private Cloud Implementation and Plans for the Future
3123 | Best Practices for Deploying Citrix XenDesktop on VMware vSphere
3142 | Capacity Planning in a Shared Infrastructure
3169 | Case Study: American Greetings Says Thanks a Half-Million
3220 | Desktop as a Service with vCloud Director
3223 | Storage as a Service with vCloud Director
3225 | Storage Best Practices for vCloud Director
3233 | “Get This Monkey Off My Back”: Automating Storage Provisioning and Protection with Storage Service Catalogs and vCloud Director
3279 | Accelerating the Enterprise Journey to the Cloud
3290 | Proven Strategies and Best Practices for Building Private Clouds to Run Tier 1 Apps

 

Thanks for taking the time to read, and thanks in advance for voting for our sessions!  I look forward to seeing you in Las Vegas and Copenhagen this year.


Cinco De VMUG: Charlotte, NC

Posted by baker May 6, 2011

May 5th, 2011: Nick Howell and I participated in the Carolina VMware Users Summit in Charlotte, NC.  The event was handled very well, with registered attendance well over 700 people and more than 580 onsite (vendor and participant).

 

The agenda included a keynote in the morning by VMware, one in the afternoon by myself, and a panel discussion on the future of cloud computing.  There were breakout sessions throughout the day that covered all aspects of virtualization: DR, Networking, Virtual Desktops, Case Studies, Management and Monitoring Virtualized/Cloud environments.  My kudos to Varrow for hosting the hands on lab, and Trainsignal for streaming and recording many of the sessions (and presently working to make them available for review).

 

If you missed the event, fear not - check Trainsignal's site for recordings of some of the sessions and the keynotes.  Here is the link: http://www.trainsignaltraining.com/2011-charlotte-vmug

 

As for me, my keynote concerned how virtualization is changing everything we do, and how everything we do in turn is changing virtualization.  The quick summary before the recordings are posted: make the highly over-used "journey to cloud computing" reference a reality by applying a practical design approach from NetApp to your cloud infrastructure before you deploy.  For example, if you set out to go on a hike in the woods, you'd likely plan the event by choosing a path, a timeframe, equipment, clothing, shoes, etc.  Cloud computing is no different.  Have you ever taken a hike in dress shoes?  No, you wouldn't - well, unless you were cast in a really cheesy 80's scary movie.  My point is, the design decisions that you make today affect the success of your virtualization initiatives and ultimately your ability to cross adoption boundaries on your schedule, versus scheduling how you cross adoption boundaries.  I also had a chance to introduce a new concept in storage--no, to introduce to the audience what NetApp is doing for storage in the same way VMware did for servers.  Cloud Mode, or clustered shared storage arrays, is the future of cloud computing and will accelerate its adoption and success (IMHO).

 

Think about that for a moment... the aggregation of compute resources and the allocation and management of those resources to subscribers is what makes virtualization so powerful.  That was a great idea - injecting the hypervisor to create that layer of abstraction enables you and me to run multiple VMs on one physical device.  Well, if Unified to NetApp means that the metal is just a means to scale capacity and performance, then we should be able to do the same thing.  And we can.  By abstracting the Data ONTAP operating system from the bare metal, we can cluster the storage arrays, we can offer virtualized controllers (think specialized storage VMs) and we can create a truly elastic shared storage infrastructure that can scale in any direction.  Now that, my friends, is a true game changer; that is how direction is set in the industry; and that is the key ingredient that you'll need to make your virtualization/cloud initiatives successful without the need to run a forklift.

 

Remember, your infrastructure design decisions have to work in good times and in bad times -- otherwise, they just don't work.  Your infrastructure must support your users and you, not the other way around.  Okay, off the soapbox--here are some photos from the event:

 

vt6.jpg vt7.jpg vt8.jpg

vt9.jpg vt10.jpg vt11.jpg

vt12.jpg vt13.jpg vt14.jpg

vt15.jpg vt16.jpg vt17.jpg

Population: Zero: When our SOI investigates the death of the entire data population of a storage volume, they are contacted by a rogue x86 server, once employed in the legacy data center, who is set on revenge for past grievances. Our SOI must stop the server when it demands $10 million or it will kill another storage volume.

NetApp: Storage Constructs for the Cloud

 

As part of OnCommand, NetApp provides three integrated tools for the development of storage service bundles and catalogs: Provisioning Manager, Protection Manager and APIs to integrate third party applications and capabilities to extend these technologies.

Storage service catalogs

 

VMware administrators typically have a detailed level of control over the CPU and memory resources within their virtual infrastructures.  This level of control is elevated to a broader scope with the use of VMware’s vCloud Director.  Both individual virtualization deployments and broader vCloud Director deployments require shared storage to host the virtual machines and vApps that are deployed.  But there is no fundamental requirement that the VMware administrator be responsible for understanding the storage at a granular level – rather, they should expect that the storage is available to them at a specific capacity, performance, and protection level to meet the needs of the virtualization objects they are deploying.  NetApp has introduced storage service bundles and a storage service catalog as a means to encapsulate the actions associated with provisioning the underlying storage, applying storage efficiencies (multiple storage tiers, thin provisioning, deduplication, compression, etc.) and applying direct data protection (mirroring, backup, snapshots, etc.) to the storage once it is created and made available for consumption.

 

Creating a storage service bundles together a protection policy, resource pools and a provisioning policy.  These bundles are made available to a storage service catalog, which acts as an inventory.  After the storage service is created, it is made available to the catalog for consumption. NetApp provides Provisioning Manager, part of DataFabric Manager, to host and manage the catalog.  The consumption of storage services is realized using data set abstraction: within each data set, storage resources with the appropriate protection and provisioning policies are created.  Figure 10 depicts the relationship between the storage service bundle and how the storage provisioned from it is consumed by VMware vCloud Director, or any other consumer entity interacting with the SOI.

 

The information conveyed in Figure 10 is an example of how a storage service looks.  Presented here as “SLA Type 2”, the service consists of the following policies and resource pools:

 

  • Protection Policy: Mirror the data, then back the data up to the assigned resource pools.

 

  • Provisioning Policy: A pre-configured provisioning policy where the primary storage node will be accessible using a defined protocol such as NFS.

 

  • Resource Pools: the storage available to provision from. For example, primary data may be located at the current site, but the mirroring and backup resource pools may reside at a secondary site.

 

Figure 10) Storage Service Bundles and Catalogs

VT19.png
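
To make the encapsulation concrete, here is a minimal Python sketch of the "SLA Type 2" bundle just described.  It is purely illustrative: the class names, fields, and the catalog's publish/request calls are hypothetical stand-ins for the concepts, not the Provisioning Manager or DataFabric Manager API.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ProtectionPolicy:
    name: str
    steps: List[str]              # e.g. ["mirror", "backup"]

@dataclass
class ProvisioningPolicy:
    name: str
    protocol: str                 # e.g. "NFS"
    thin_provisioned: bool = True
    deduplication: bool = True

@dataclass
class ResourcePool:
    name: str
    site: str                     # where the pool physically lives
    role: str                     # "primary", "mirror", or "backup"

@dataclass
class StorageServiceBundle:
    sla: str
    protection: ProtectionPolicy
    provisioning: ProvisioningPolicy
    pools: List[ResourcePool]

class StorageServiceCatalog:
    """The inventory that published bundles are made available to."""
    def __init__(self) -> None:
        self._services: Dict[str, StorageServiceBundle] = {}

    def publish(self, bundle: StorageServiceBundle) -> None:
        self._services[bundle.sla] = bundle

    def request(self, sla: str) -> StorageServiceBundle:
        # A consumer such as vCloud Director asks for storage by SLA,
        # never by aggregate, LUN, or export path.
        return self._services[sla]

catalog = StorageServiceCatalog()
catalog.publish(StorageServiceBundle(
    sla="SLA Type 2",
    protection=ProtectionPolicy("mirror-then-backup", ["mirror", "backup"]),
    provisioning=ProvisioningPolicy("nfs-default", protocol="NFS"),
    pools=[
        ResourcePool("pool-primary", site="site-a", role="primary"),
        ResourcePool("pool-mirror",  site="site-b", role="mirror"),
        ResourcePool("pool-backup",  site="site-b", role="backup"),
    ],
))
```

The point of the model is the last call: the consumer names an SLA and gets everything behind it, which is exactly the encapsulation the bundle exists to provide.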

Storage Service Catalog Protection Policies

Extending storage service catalogs to encapsulate the way in which the provisioned storage is protected is done with NetApp’s Protection Manager.  Like Provisioning Manager, Protection Manager is part of the OnCommand suite.  There are three components to defining protection policies for provisioned storage.

 

  • Data sets: groups of data that will be protected in similar ways.

 

  • Resource pools: groups of targets that accept replication traffic (such as SnapMirror and SnapVault).

 

  • Policies: defined relationships between Data Sets and Resource Pools that include when and how the data will be protected.

 

With data sets, resource pools and policies created, the policy manager automates the relationships and schedules, assigning each data set to an appropriate resource.  The protection policies can include load-balancing considerations to further account for infrastructure optimization.  When encapsulated as part of the storage service catalog, storage is now a fully orchestrated and automated service to the organizations that look to consume it.  The error-prone and time-consuming activities that once plagued IT administrators are now encapsulated execution packages that support the elastic demands of the SOI.
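
The automated assignment step can be pictured as a simple matching problem.  The toy sketch below is an assumption, not Protection Manager's actual logic: it pairs each data set with the least-utilized resource pool that accepts the replication type its policy calls for, which is the flavor of load balancing described above.

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class Pool:
    name: str
    accepts: Set[str]             # replication types the target allows
    used_gb: int
    capacity_gb: int

    def utilization(self) -> float:
        return self.used_gb / self.capacity_gb

@dataclass
class DataSet:
    name: str
    size_gb: int
    replication: str              # protection the policy calls for

def assign(datasets: List[DataSet], pools: List[Pool]) -> Dict[str, str]:
    """Pair each data set with an eligible pool, largest data sets first."""
    plan: Dict[str, str] = {}
    for ds in sorted(datasets, key=lambda d: d.size_gb, reverse=True):
        eligible = [p for p in pools
                    if ds.replication in p.accepts
                    and p.capacity_gb - p.used_gb >= ds.size_gb]
        if not eligible:
            raise RuntimeError(f"no pool can protect {ds.name}")
        target = min(eligible, key=Pool.utilization)   # least loaded wins
        target.used_gb += ds.size_gb
        plan[ds.name] = target.name
    return plan

pools = [Pool("dr-pool-1", {"snapmirror"}, 400, 1000),
         Pool("dr-pool-2", {"snapmirror", "snapvault"}, 100, 1000)]
datasets = [DataSet("exchange-logs", 300, "snapmirror"),
            DataSet("vdi-homedirs", 250, "snapvault")]
print(assign(datasets, pools))
```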

Storage Service Catalogs: Extensibility through Integration Points

 

The NetApp features supporting the storage service bundle, catalog, and the protection policy development and application are based on the DataFabric Manager APIs.  Decoupling the management interface from the underlying logic enables NetApp to provide access to the infrastructure logic for consumption by our partner community.  These points of integration enable partners to bring additional and unique extensions to the storage service catalog offering.

Blog Post Series Conclusions

 

Virtualization remains a key technology focus within the IT industry.  The advent and adoption of higher degrees of abstraction as a means to scale virtual data centers can leave the supporting infrastructure struggling to keep up.  By adopting a catalog-based approach and treating storage as pools of resources that are aligned to defined SLAs and protected in a prescribed manner, IT administrators can fast-track the deployment of virtual infrastructures in lock-step with the speed of their organization.  While VMware continues to lead the evolution of data center virtualization, NetApp’s innovative approach to Just-in-Case and On-Demand encapsulated provisioning will ensure a storage infrastructure stack capable of keeping up with the pervasive and explosive growth of both data and virtual infrastructures.

The Solid Gold Kidnapping: Our SOI becomes involved in a series of storage resource kidnapping cases culminating in a demand in a billion dollar ransom delivered from the data center's traditional storage arrays. When the ransom is hijacked by a competing legacy mainframe, Our SOI must use the power of integration to team up with the virtualization layer's vCloud Director to extract the storage resources from the traditional storage array's drives through data transfer technology.

Storage Automation

 

Storage automation is the ability to assign time-intensive and/or repetitive storage operations into a prescribed procedure that can be initiated by some event or the command of an IT administrator.  Based on dynamic provisioning, automation relies on the core intelligence applied in storage orchestration to automate complex operational tasks in a seamlessly integrated process.  By integrating existing storage management techniques, organizations can define policies for storage allocation and dynamically provision storage (resulting in an error-free, timely and automatic process for improved business efficiency and operations).  Once a policy can be articulated, it can be automated.  Once it is automated, it can be encapsulated into executable containers that have different economic values based on the characteristics of the automation and the supporting storage efficiencies that are integrated into the automation construct.
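
As a thought experiment (not a model of any shipping NetApp tool), the articulate-automate-encapsulate progression might look like this: a policy becomes an ordered list of steps, and the executable container is that list plus an illustrative price derived from its characteristics.

```python
from typing import Callable, Dict, List

def provision_volume(log: List[str]) -> None: log.append("provision volume")
def thin_provision(log: List[str]) -> None:   log.append("mark thin-provisioned")
def enable_dedupe(log: List[str]) -> None:    log.append("enable deduplication")
def hourly_snapshots(log: List[str]) -> None: log.append("schedule hourly snapshots")
def mirror_to_dr(log: List[str]) -> None:     log.append("mirror to DR resource pool")

Step = Callable[[List[str]], None]

# Encapsulated, executable containers: the same steps run every time,
# which is exactly what removes human error from repetitive provisioning.
CONTAINERS: Dict[str, List[Step]] = {
    "gold":   [provision_volume, thin_provision, enable_dedupe,
               hourly_snapshots, mirror_to_dr],
    "silver": [provision_volume, thin_provision, enable_dedupe],
}

# Illustrative $/GB/month only: more protection and efficiency steps in
# the container, higher economic value.
PRICE = {"gold": 1.00, "silver": 0.60}

def execute(tier: str) -> List[str]:
    log: List[str] = []
    for step in CONTAINERS[tier]:
        step(log)
    return log

print(execute("gold"), PRICE["gold"])
```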

 

Storage automation is a natural extension of storage orchestration, as shown in Figure 9. It comprises three core elements that are essential to designing and deploying catalogs of standard storage services that have defined service level agreements and protection policies and can be extended through integrations from third-party vendors.

 

Figure 9) Storage Automation

VT20.png

  • Data Availability:  continued access to the storage being consumed is a fundamental element of storage automation.  Encapsulating data availability (aka data protection) policies into the way the storage is automatically provisioned not only means the business units have access to an end-to-end storage protection solution for critical business storage assets, it also means that data protection becomes a repeatable element of the overall storage provisioning process.  This means IT organizations have a quantitative view of the backup windows, snapshot requirements, backup/recovery, disaster recovery, data replication and hierarchical storage management attributes of the storage that is provisioned (did I mention that it is encapsulated as part of the overall definition of how the storage is provisioned???).

 

  • Storage Access Management: encapsulating the method in which consumers connect and interact with the storage reduces the additional complexities associated with isolated storage tasks.  With isolated storage approaches, considerations must be applied to exporting, zoning, path management, protocols, etc.  Consider now the value of binding the manner in which the provisioned storage is accessed into the definition.  Incorporating the communication path into the automation of storage provisioning means that IT organizations can implement new protocols non-disruptively through the use of modified automation profiles, organize service level agreements based on data access pathways, and set consumer expectations that, regardless of protocol, access to the provisioned storage is available to solve their business need.  And that's what it is about: the business unit owner thinks in terms of what the application needs, while to the storage consumer a ubiquitous set of protocols is available to the provisioned storage, regardless of how they choose to interact with it.

 

  • Storage Resource Management: centralized management of the automation profiles is essential to the applications and/or organizations that consume them.  In this paper, we introduced VMware’s vCloud Director as a means to manage containers of resources bound by a defined SLA, which organizations consume to augment the server infrastructure, to support the development of intellectual property, or to host the applications that are key to their success.  Storage resource management provides the same features to the virtualization layer.  Through storage service catalogs, these profiles define the characteristics of the storage, the SLAs, and the protection policies that can be expected from their use.  A central repository for their creation, execution and management enables a holistic view of how the profiles are used and evolve.

Storage Analysis

 

Orchestration and automation, when successfully implemented, improve the operational costs associated with storage management.  IT administrators spend less time managing the day-to-day aspects of storage-associated activities, and organizations benefit from having provisioned storage resources that deliver a certain level of predictable performance, wrapped with a defined set of features that they can rely on.  These two side effects enable the injection of analytics into the process.  IT administrators can begin monitoring the effects of orchestration and automation and adjust or fine-tune the policies across the entire virtualized infrastructure stack.

 

Whew.... that was a long post.  Sorry, folks.  Trying to keep things trimmed can be a pain, especially when there is a lot to say on such a wide-open topic.  The line is set for next week, and that will close off this series.  Thanks for reading.

The Solid Gold Kidnapping: Our SOI becomes involved in a series of storage resource kidnapping cases culminating in a demand in a billion dollar ransom delivered from the data center's traditional storage arrays. When the ransom is hijacked by a competing legacy mainframe, Our SOI must use the power of integration to team up with the virtualization layer's vCloud Director to extract the storage resources from the traditional storage array's drives through data transfer technology.

Storage orchestration

 

The concept of orchestration is simply the effort taken to identify and implement easier ways of managing complex storage environments.  It improves the way in which IT environments are managed through the use of existing resources, with an emphasis on ensuring the protection of current and future investments, based on the following techniques:

 

  • Virtualization software, in this case VMware vCloud Director, to improve the flexibility of the entire virtualization infrastructure stack through the collection, provisioning and assignment of resources in the form of containers that can be consumed en masse, are tied to specific service level agreements, and have costing models applied based on the container’s characteristics.

 

  • Hierarchical storage management capabilities to manage storage growth through the use of the storage concepts outlined earlier related to pools of storage.  By encapsulating disparate or siloed storage constructs into easier-to-manage pools, the abstraction of the underlying physical components results in higher efficiencies in both the operational capabilities of the hardware and the staff used to manage it.

 

  • Archival storage management capabilities to handle data storage over long periods of time.  In virtualized environments, the term VM-sprawl is used to indicate large numbers of virtual machines on the organization’s network without proper IT management or control.  Through storage orchestration – specifically the ability to introduce archival storage – IT administrators can associate a defined life-cycle with virtual machines and use this storage capability to move virtual machines to a different class of storage once their usefulness has expired.

 

  • Storage protection capabilities to ensure the backup and recovery of the business data.  This concept, traditionally introduced once the storage was provisioned and in use by the organization or application requesting it, must become part and parcel of the way that storage is provisioned and orchestrated.  Defining tiers of storage and embedding storage efficiencies (thin provisioning, deduplication, compression, etc.) as part of storage orchestration can be augmented with defining how that storage will be protected (snapshots, full clones, mirrors, etc.) as part of the overall orchestration process, thus creating the encapsulated storage service catalogs discussed later in this paper.

 

IT organizations are under increasing pressure to minimize, if not eliminate, human errors, reduce costs while deploying more storage, meet the needs of the business units they support, and remain competitive.  This is a daunting challenge as complexities continue to creep into the system.  IT administrators must address the increasing complexity of storage systems and the explosive growth in data, and continue to improve their own technical skills.  Furthermore, the storage infrastructure must be designed to help maximize the availability of critical applications.  Storage itself is being treated more and more as a commodity; the management of it, however, cannot be – in fact, managing storage is typically an acquired cost many times over.  Orchestration addresses many of these issues, but when paired with automation, the benefits accelerate within the virtual infrastructure.

 

Storage orchestration is used to operate the storage infrastructure in real-time to achieve desired business goals according to defined business policies.  It is also used to identify changes in the demand for resources and automatically trigger actions to relocate resources accordingly throughout the entire hardware and software stack.  Finally, introducing levels of automation that can evolve intelligently around data security, availability, provisioning, and optimization ensures that the entire IT environment is aligned to the overall business goals and provides resources in an elastic manner through the use of “Just-in-Case” (JiC) and “On-Demand” storage provisioning.
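
At its heart, the "identify changes in demand and automatically trigger actions" behavior is a control loop.  Here is a hedged sketch in which the monitoring call, the threshold, and the relocation action are all simulated placeholders rather than calls into any real orchestration tool:

```python
import random

POLICY = {"max_utilization": 0.85}   # invented threshold, per business policy

def sample_utilization(pool: str) -> float:
    # Stand-in for a real monitoring query against the pool.
    return random.uniform(0.5, 1.0)

def relocate_hot_objects(pool: str) -> None:
    # Stand-in for the orchestrator moving data toward a faster tier
    # or a less loaded pool.
    print(f"orchestrator: rebalancing {pool}")

def control_cycle(pools: list) -> None:
    """One pass of the loop: compare observed demand against policy
    and trigger relocation wherever the policy is violated."""
    for pool in pools:
        if sample_utilization(pool) > POLICY["max_utilization"]:
            relocate_hot_objects(pool)

control_cycle(["pool-a", "pool-b", "pool-c"])
```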

Just-in-Case Provisioning

 

JiC storage provisioning was introduced earlier in this paper in association with isolated storage – a provisioning method typically used in traditional IT environments characterized by dedicated arrays for each task.  It requires enough resources to fulfill peaks in storage infrastructure demand but results in low resource utilization.  The result is that the infrastructure becomes very static and reconfiguration can take days.  As such, the capital and operational expenditures to manage JiC storage provisioning are very high.  However, there are times when this kind of provisioning is required, and that need can be addressed in the on-demand storage provisioning approach with some additional benefits that make it ideal for SOIs.

On-Demand or Just-in-Time Provisioning

 

On-demand storage provisioning techniques utilize orchestration to automate the infrastructure and to execute configuration changes in a repeatable manner.  They also help eliminate the human execution errors traditionally associated with provisioning and managing isolated pools of storage.  Through storage orchestration, organizations can achieve much higher levels of automation, leading to an efficient and rapid response to the fluctuating business demands on the IT infrastructure.
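
A back-of-the-envelope calculation shows why JiC utilization runs low: sizing for the peak strands capacity the rest of the time, while on-demand provisioning tracks actual demand.  The workload numbers below are invented purely for illustration:

```python
demand_tb = [10, 12, 15, 40, 16, 14]        # monthly demand; 40 is the peak

# Just-in-Case: deploy enough for the peak on day one.
jic_deployed = max(demand_tb)
jic_utilization = sum(demand_tb) / (jic_deployed * len(demand_tb))

# On-Demand: grow each month with a little headroom.
on_demand_deployed = [d * 1.1 for d in demand_tb]
od_utilization = sum(demand_tb) / sum(on_demand_deployed)

print(f"JiC utilization:       {jic_utilization:.0%}")   # roughly 45%
print(f"On-demand utilization: {od_utilization:.0%}")    # roughly 91%
```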

The Five Areas of Orchestration

 

Recall that in Figure 4 the idea of pools of storage was discussed.  Pools of storage are essentially the result of storage orchestration.  Figure 8 shows the interrelationships between storage orchestration and the underlying hardware infrastructure that provides resources to the virtualized infrastructure stack.

 

Figure 8) Storage Automation

VT21.png

  • Pools of Storage:  a pool of storage resources is essentially an abstracted management layer that exhibits characteristics associated with automating availability and securing tenant access, based on policies governing how the pool is allocated for use.  Because the pool is a collection of resources that have been abstracted and presented as a logical unit, the ability to balance I/O load becomes a function of orchestration: data objects are moved within the infrastructure to address steady-state demand, with capabilities at hand for when data demands burst into higher levels of I/O performance.

 

  • Storage Infrastructure Management:  to effectively create and allocate pools of storage, IT architects use a storage management tool.  This software interacts with the core elements of the storage infrastructure, allowing the architect to encapsulate storage objects into abstract layers, or pools, with assigned storage efficiencies and protection features that enhance the capabilities of the pools based on service level agreements.

 

  • Hierarchical Storage Management:  including multiple tiers of storage within the pool requires a means to manage the hierarchical resources, as well as the mobility of objects within that hierarchy, in a non-disruptive manner.  Orchestration without tiers of differing performance does not give the organization the options necessary to bind storage capabilities to defined service level agreements.  Furthermore, by using storage hierarchies, storage growth costs can be contained: IT architects can move low-activity or inactive data to less expensive storage within the hierarchy.  More importantly, storage automation can inject a degree of intelligence into the process, offloading the data movement responsibility to the orchestration process and further reducing human interaction and error (see the sketch following this list).

 

  • Archival Storage Management:  data growth and retention remain at the heart of storage planning (whether that planning occurs up front or as the infrastructure expands is irrelevant).  As part of the storage tiers, end-to-end orchestration requires the ability to identify data objects that are no longer active but must be retained, and a means to separate those objects onto a dense pool of storage for a length of time defined by the needs of the organization.

 

  • Storage Protection Management:  business continuance is a constant concern, and those concerns are exacerbated when storage resources are pooled.  Encapsulating the manner in which storage is protected as part of the storage container allows consumers to make purchase decisions based on how that class of storage is protected, the speed at which they can recover their data, and the length of time those recovery points can be relied on.  These decisions become compounded as decision criteria extend beyond the organization’s user data to requirements around application resilience.  Regardless of the type of objects being protected, successful storage orchestration includes some form of data recovery management.
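
To make the hierarchical-management point from the list above concrete, here is a minimal sketch of an automated placement decision. The tier names and the 90-day activity threshold are arbitrary assumptions for illustration only.

```python
from typing import Optional

TIER_ORDER = ["ssd", "sas", "sata-archive"]   # fastest to densest/cheapest

def next_tier_down(tier: str) -> Optional[str]:
    idx = TIER_ORDER.index(tier)
    return TIER_ORDER[idx + 1] if idx + 1 < len(TIER_ORDER) else None

def place_object(days_since_last_access: int, current_tier: str) -> str:
    """Decide where a data object belongs; the orchestrator moves it there."""
    if days_since_last_access > 90:        # inactive: demote toward archive
        return next_tier_down(current_tier) or current_tier
    return current_tier                    # active data stays where it is

print(place_object(days_since_last_access=200, current_tier="sas"))  # sata-archive
```

An operator defines the policy once; the orchestration process applies it continuously, which is the human-error reduction described above.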

 

The final part of part 4 will be up tomorrow - Storage Automation and Analysis

The Solid Gold Kidnapping: Our SOI becomes involved in a series of storage resource kidnapping cases, culminating in a demand for a billion-dollar ransom to be delivered from the data center's traditional storage arrays. When the ransom is hijacked by a competing legacy mainframe, our SOI must use the power of integration to team up with the virtualization layer's vCloud Director and extract the storage resources from the traditional array's drives through data transfer technology.

 

Blogger's Note: In the last post, I teased you with two things: lies to be told and Darwinism to be applied.  First the lie: there will be a couple more posts in this series.  My apologies, but the size of this post alone requires it to be broken up; plus, I have a professional obligation to do a wrap-up post that brings this content into alignment with NetApp's approach to this topic.  As for the Darwinism, let's evolve the last post and talk about how the storage layer must evolve in a SOI to function at a higher level.

 

Blogger's Note: In this post, I will briefly touch on vCloud Director.  Aside from all of the VMware-specific content on their product, I highly encourage you to read Nick Howell's post "What is vCloud Director, really?"

Evolving Storage Constructs for Cloud Based Initiatives

 

As stated previously, the fundamental elements of a SOI are elasticity and efficiency.  To address these elements, SOIs rely on the ability of the infrastructure to rapidly provision tiers of compute, network, and storage for on-demand consumption by multiple tenants.  This rapid provisioning of infrastructure resources can only be achieved efficiently when a holistic approach to orchestration is applied to every piece of the infrastructure stack (remember Figure 2 from the first post).  Furthermore, continual resource analytics provide a mechanism to constantly evaluate utilization rates, ensuring optimal performance and enabling planning for future tenant needs.  Considering the entire infrastructure stack, VMware’s hypervisor technology provides the server virtualization layer, but it is their vCloud Director and vCloud Orchestrator products that apply directly to this topic.

 

An overview of how the virtual datacenter can be automated and orchestrated is required before further discussions on the automation and orchestration of storage can continue.

Virtual Data Center Automation and Orchestration

 

Cloud computing leverages pooled resources, delivered and consumed as a service.  Pooling infrastructure resources allows IT organizations to deliver them to consumers in the form of catalog-based services managed by VMware vCloud Director (vCD).  vCD is a layer of abstraction above vCenter that pools all of the resources vCenter manages into containers of compute, network, and storage against which specific SLAs are assigned and measured.  As shown in Figure 5, the resources are combined into large pools that tenants consume through a self-service portal.

 

Figure 5) VMware vCloud Director.

VT22.png

vCD is based on logical virtual datacenter constructs that include compute (VMware clusters and resource pools), networking (distributed virtual switches and port groups), and storage (VMware VMFS datastores and NFS shares) to enable complete separation between the consumption of infrastructure services and the underlying resources.

 

As stated at the outset, SOIs provide an integrated infrastructure stack that can be used by multiple tenants in the form of shared resources.  It is vCD, then, that encapsulates these resources into logical constructs (Provider and Organizational vDCs) that are available to be consumed, perform under defined guidelines, and are charged at a defined rate.  This enables the SOI to deliver resources to the consumer without the consumer needing to know the complexities of the infrastructure behind them.  It is, therefore, elastic in nature and provides consumption that can be accessed over standard networking protocols, regardless of where it is hosted (privately, publicly, or as a hybrid cloud infrastructure).  Figure 6 depicts a more detailed view of the layers of abstraction of the virtualization infrastructure stack through to the vCD layer.

 

Figure 6) SOI and vCD Transparency

VT23.png

The Provider vDC, based on a VMware resource pool, is the foundation from which organizations consume guaranteed resources in the SOI.  Containing compute and storage resources, the Provider vDC is the catalog of services, available at a defined price with a defined SLA, through which organizations buy or lease capabilities via a self-service portal.  Figure 7 depicts the relationship between the Provider vDC and the Organizational (consumer) vDC.

 

Figure 7) Provider and Consumer vDC relationship

VT24.png
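
To make the Figure 7 relationship concrete, here is a conceptual sketch of a Provider vDC being carved into an Organizational allocation. This is not the vCloud Director API; every name and number here is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class ProviderVDC:
    name: str
    cpu_ghz: float
    storage_tb: float
    sla: str            # the defined, measured service level for this catalog entry

@dataclass
class OrgVDC:
    org: str
    cpu_ghz: float
    storage_tb: float

def allocate(provider: ProviderVDC, org: str,
             cpu_ghz: float, storage_tb: float) -> OrgVDC:
    """Carve a consumer allocation from the provider's guaranteed resources."""
    if cpu_ghz > provider.cpu_ghz or storage_tb > provider.storage_tb:
        raise ValueError("request exceeds the Provider vDC's remaining capacity")
    provider.cpu_ghz -= cpu_ghz          # the consumer never sees what backs this
    provider.storage_tb -= storage_tb
    return OrgVDC(org=org, cpu_ghz=cpu_ghz, storage_tb=storage_tb)

gold = ProviderVDC("gold", cpu_ghz=400.0, storage_tb=100.0, sla="99.9% availability")
finance = allocate(gold, "finance", cpu_ghz=40.0, storage_tb=10.0)
```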

Virtualization software, depicted in this white paper as VMware technologies that include vCloud Director, is used to improve the flexibility of the entire virtualization infrastructure stack, including the storage.  In this case, the virtualization layer enables the storage subsystem to evolve in the way that objects are provisioned: in essence, the way that it is orchestrated.

 

Next part tomorrow....

 

Figures 5 & 6 were developed by VMware.

Wine, Women and War: Our new SOI is assigned to stop the inefficiencies of traditional storage arrays, which claim to have capabilities good enough for virtualization but require ever more CAPEX for performance and capacity. Along the way, our SOI must deal with business units that are flooding the traditional arrays with common requests for resources that are managed independently.

 

Apologies up front for the lengthy post folks, just a fair bit to cover...

Storage Design Considerations for Cloud Based Initiatives

 

Consider the fact that the acquisition of storage only accounts for roughly 20-30% of its total cost of ownership, and that IT organizations will dedicate three to five times that amount to managing and maintaining it; a $200K array purchase, for example, implies $600K to $1M in management costs over its life.  Introducing virtualization, with its ability to rapidly deploy servers and infrastructure, exacerbates this issue.  This is the genesis of an organic datacenter that must be corralled before cloud-based initiatives can be considered, and it begins with designing the way in which storage is presented to the infrastructure stack.

Virtualization Does Not Stop with Servers

 

Understanding the fundamental shift in storage provisioning requires a brief overview of why isolated storage allocation exists and, more importantly, an understanding that virtualization goes beyond the server layer.  As server virtualization is deployed in the datacenter, the management of storage within the IT infrastructure becomes both challenging and complex.  The value that server virtualization brings to the organization should not be understated, but the challenges this technology creates in the storage environment are often overlooked.

Virtualization: Isolated Storage

 

Server virtualization provides benefits to organizations that include improved utilization, improved resource allocation, higher levels of resource availability, faster deployment times, and reduced datacenter costs related to heating/cooling, power consumption, and overall carbon footprint.  Figure 3 depicts how isolated storage introduces complexities within the infrastructure.  These complexities include:

 

  • Multi-Resource Management:  When managing large storage environments with many arrays in a virtualized environment, management becomes complex because a large portion of the IT staff's time is dedicated to the day-to-day management of the physical storage components in the datacenter (a huge operational cost, especially in a mixed-array environment).

 

  • Limited Data Mobility:  Siloed storage assets limit, if not prevent, the mobility of data within the storage infrastructure.  The server virtualization stack introduces high availability through resource balancing across the virtualization infrastructure, but a siloed storage stack does not provide the same mobility for data objects, leaving load-balancing opportunities bottlenecked.

 

  • Siloed Protection Strategies:  Each siloed storage stack requires its own unique protection strategy, including snapshots, full backups, mirroring, etc.  Following the scary math I introduced initially, you are doubling the staff requirements if you separate your storage admins from your backup admins.  Now we are talking about doubling costs from a resource perspective, and organizational procedures may even add time to the deployment of new storage assets.

 

  • Performance and Scalability Limits:  Managing datacenter environments with many arrays is time consuming, which creates difficulties in provisioning new storage capacity to meet data growth needs.  Members of the IT staff must deal with over-provisioning storage for virtual machines, which can soon offset the cost savings associated with server virtualization.

 

  • Underutilized Performance:  In a large storage environment with many different arrays, it is likely that IT staff members have been dedicated to different business units or different arrays.  Staffers assigned to business units must deal with each unit's storage performance requirements (increased IOPS, disk-level protection, capacity thresholds, data protection, application requirements, etc.).  Staffers assigned to vendor-specific arrays must weigh competing resource requests against priorities and the capabilities and capacity of the array.  In a siloed environment, storage can be underutilized because of how each silo is architected, and IT staff members can be underutilized if they are specialized (to a specific business unit and/or array).  Additionally, when silos grow organically without forethought as to how the storage will be consumed in the future, quick decisions can result in underutilized capacity and performance and ultimately limit the scaling potential of the datacenter.

 

Figure 3) Isolated Storage.

VT25.png

The use of isolated storage, although still common in many organizations today, does not utilize the underlying storage, nor those managing it, in the best possible way.  A layer of abstraction can instead be applied to the physical disks within the arrays to create collections, or pools, of storage.  The pools are still presented to their consumers in constructs specific to them (file systems, raw addressable space, etc.), but the way in which the storage is managed, its ability to expand and contract dynamically, and the introduction of rudimentary analytics are some of the core benefits that result when the transition from isolated storage to pools of storage is made.

Virtualization: Pools of Storage

 

When storage is pooled, organizations benefit from efficient capacity utilization and optimal performance of the file systems in both clustered and non-clustered environments.  Each file system pulls from a common logical pool of storage, ensuring disk capacity is never wasted.  As a result, IT administrators have complete flexibility in selecting the specific storage characteristics (capacity, storage tier, thin vs. thick, etc.) during provisioning (see the sketch after the benefits list below).  Additionally, as the pools of storage are consumed, additional physical storage can be added to extend the pool for continued seamless growth.  Figure 4 provides a graphical representation of the abstracted pool and its relationship to the underlying physical storage.

 

Pools of storage are key enablers in defining a SOI, as multiple pools of storage on different storage tiers, each with its own SLA, are introduced.

 

Figure 4) Pools of Storage.

VT26.png

As shown above, when storage is pooled, organizations can receive the following benefits:

 

  • Lower Operational Costs:  When storage is managed as a pool, day-to-day operational costs can be reduced because the management of multi-vendor arrays is simplified.  I'd argue that there are tangible CAPEX and OPEX savings here: improved storage utilization means buying less, and the need for specialized array provisioning and repetitive tasks by the IT staff is minimized.

 

  • Improved Performance and Scalability:  Creating pools of storage bound to physical drive capabilities introduces tiering.  Applications can be assigned to the right pool based on the storage tier that provides optimal performance.  Pools also enable scale (up and out), as new storage assets can be added on demand, creating an elastic storage infrastructure.

 

  • Unified Management:  Managing the storage pool is much easier than managing each storage array in your environment.  I understand that this does not remove the need for specially trained staff members in a multi-vendor datacenter, but pools do enable the entire IT staff to provision storage using a common management approach, regardless of which array is providing access to the drives.

 

  • Higher Reliability:  When you aggregate your physical storage resources into storage pools, more storage systems (even heterogeneous ones) can be added as and when needed, and the virtual storage space scales up by the same amount.  This is transparent to the applications using the storage infrastructure and reduces situations where applications are brought to their knees for lack of disk space.  Additionally, data can be moved within the pool to balance I/O loads or deal with spikes in I/O demand, making the entire infrastructure more reliable.
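
Before moving on to use cases, here is a minimal sketch of the common-pool behavior described above: consumers carve from one logical pool (thin or thick), and extending the pool with physical storage is invisible to them. The class and method names are assumptions, not any array's actual interface.

```python
class StoragePool:
    """A single logical pool fronting whatever physical storage backs it."""

    def __init__(self, tier: str, physical_tb: float):
        self.tier = tier
        self.free_tb = physical_tb

    def extend(self, physical_tb: float) -> None:
        """Adding shelves or arrays grows the pool; consumers just see more space."""
        self.free_tb += physical_tb

    def carve(self, size_tb: float, thin: bool = True) -> float:
        """Allocate from the pool; thin volumes reserve nothing up front."""
        reserved = 0.0 if thin else size_tb
        if reserved > self.free_tb:
            raise ValueError("pool exhausted: extend it with more physical storage")
        self.free_tb -= reserved
        return size_tb

pool = StoragePool(tier="sas", physical_tb=50.0)
pool.carve(10.0, thin=True)   # no capacity stranded until data is actually written
pool.extend(20.0)             # growth is transparent to existing consumers
```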

Virtualization: Pools of Storage Use Cases

 

Beyond the benefits above, pooled storage also lends itself to several common use cases:

 

  • Optimizing Price and Performance:  IT organizations that need to protect mission-critical data, house bulk storage, and/or prepare for scaling demands based on utilization trends can use multiple pools of storage to meet these needs.  The pool for mission-critical data would utilize the highest-performing arrays available, configured with the RAID-level protection that best meets the application’s I/O patterns and requirements.  The pool addressing bulk scaling requirements could contain higher-capacity arrays configured for dense/deep storage that is inexpensive to operate and best suited for archival responsibilities.  Finally, if the shared storage infrastructure provides tools that facilitate trending and analysis (a requirement in my opinion), pools can be created for unplanned or near-planned growth, such as spikes in data requirements from existing business units or merger/acquisition activities.

 

  • Organizational Data Separation:  IT organizations that support multiple business units, each responsible for its own storage requirements and budget, need to be able to guarantee exclusive access to the storage being consumed.  Storage pooling provides an ideal construct for this requirement.  IT organizations can place each unit’s data into its own storage pool and use role-based access controls to ensure that only that unit has access to that pool (see the sketch following this list).  Only then can SLAs, access guarantees, and security be mapped between the business unit and the location where its data resides.

 

  • Application Data Separation:  Pools of storage can be used to isolate business-critical applications.  By creating a separate storage pool for the application data and setting access controls to ensure that only the clients associated with that application have access to that pool, business units can further isolate and protect the manner in which sensitive data is accessed.  More importantly, application I/O patterns and characteristics can be mapped to specific storage pools, based on physical drive characteristics, data pathways, and protection policies, to ensure the application is deployed on and utilizes the right tier of storage: again, driving home the ability to apply SLAs to the manner in which storage is provisioned and operated in its steady state.
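
As promised in the separation use cases above, here is a minimal sketch of role-based pool access. The pool and role names are purely illustrative; they are not features of any specific array or management suite.

```python
# Map each pool to the roles allowed to touch it; everything else is denied.
POOL_ACCESS = {
    "pool-finance":   {"finance-admins"},
    "pool-marketing": {"marketing-admins"},
    "pool-crm-app":   {"crm-service-accounts"},   # application data separation
}

def can_access(pool: str, roles: set) -> bool:
    """Grant access only when the requester holds a role bound to that pool."""
    return bool(POOL_ACCESS.get(pool, set()) & roles)

assert can_access("pool-finance", {"finance-admins"})
assert not can_access("pool-finance", {"marketing-admins"})   # exclusive access
```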

 

Pools of storage provide the next logical step in how organizations introduce storage efficiencies into their IT infrastructure.  As outlined above, the added value of an abstracted storage access layer is that all of the physical storage is utilized to its fullest capacity and that IT administrators are no longer managing siloed storage constructs.  However, making the leap to a SOI requires that the abstracted layer be transformed into a storage container imbued with services that make the storage resilient and elastic, and that apply additional storage efficiencies and services to the container.

 

My apologies for the long post, but there is just too much to discuss here, and I even skimmed over a good deal of the deeper content.  Join me in part 4, where there are lies to be told and the idea of Darwinism to be applied to these storage constructs.