
 

The OnCommand team is announcing DevCon 2011 Chicago. If you work with NetApp systems and you want to "up your game," you won't want to miss this opportunity. Come converge, develop, and share with the NetApp experts.

 

The conference agenda features sessions ranging from the introductory "Getting Factory Default FAS into Production using SM 2.0 in less than an hour?" to more advanced topics like "Create your own SnapManager" and "Monitoring and Showback / Chargeback".

 

If you attend, you can expect to rapidly increase your expertise with the OnCommand SDK and APIs for developing cloud and storage-as-a-service solutions that take advantage of NetApp's storage and service efficiencies.
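
To give a flavor of SDK-level development, here is a minimal sketch using the Python bindings that ship with the NetApp Manageability SDK; the hostname, credentials, and API version numbers are placeholder assumptions.

```python
# Minimal sketch of an API call via the NetApp Manageability SDK's Python
# bindings. Hostname, credentials, and API version are placeholders.
from NaServer import NaServer

server = NaServer("filer1.example.com", 1, 1)  # host, API major, API minor
server.set_style("LOGIN")                      # username/password auth
server.set_admin_user("admin", "secret")
server.set_transport_type("HTTPS")

# Ask the controller for its ONTAP version and check for errors.
output = server.invoke("system-get-version")
if output.results_errno() != 0:
    print("API error:", output.results_reason())
else:
    print("ONTAP version:", output.child_get_string("version"))
```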

 

This 3-day hands-on learning event will build your knowledge through talks, instruction and lab exercises.  You'll learn to build storage automation, data protection, monitoring and analytics applications. You'll have direct access to the engineers and product managers responsible for the OnCommand products and technologies.

 

So if you want to expand your deep technical knowledge and help your company be more efficient and competitive, then this conference is for you. Take your NetApp knowledge to the next level.

 

The event takes place at the Hyatt Regency, Chicago O'Hare Airport, October 5-7, 2011. And best of all, it's FREE!

 

Register today at http://developer.netapp.com



Yesterday we announced how NetApp enabled Thomson Reuters to transform how the legal industry gets information. Mark Bluhm, Senior VP and CTO, Shared Services, Thomson Reuters Professional Division, described the case study in TechONTAP.

 

Mark notes that:

 

"To manage our environment, we use the full complement of NetApp OnCommand™ management products including Operations Manager, Provisioning Manager, Performance Manager, and OnCommand Insight. This gives us a single set of tools that work across all our NetApp storage to simplify management, speed up provisioning, and identify performance issues. OnCommand Insight (formerly known as NetApp SANscreen®) gives us a consolidated view of our entire heterogeneous storage environment in terms of capacity, connectivity, configurations, and performance. It also provides alerts on component failures so that we can resolve issues before redundant components experience a second failure."

 

A shout out to our own Mike Arndt, who has been working with Thomson Reuters for the past 6+ years and who further points out:

 

"Many of the NetApp storage systems in this environment are used in a shared storage service offering, and being able to quickly identify which applications are driving the most I/O is an important capability that is provided by OnCommand Performance Advisor.  OnCommand Insight (formerly known as NetApp SANscreen®) gives Thomson Reuters a consolidated view of their entire heterogeneous SAN storage environment in terms of capacity, connectivity, configurations, and performance. It also provides alerts on component failures so that they can resolve issues before redundant components experience a second failure."

 

It's a great story. Congratulations to everyone who made it happen.

 


Preventing Service Outages

Posted by richard9 Jul 21, 2011


The demand for more and more public cloud services is upping the ante on management tools that can assure service delivery. There's no better proof of this need than the recent Amazon outage. The Amazon story highlights how recovery from one outage can fail and lead to an extended failure.

 

The keys to preventing outages and this cascading effect are tools that:

  • make sure your infrastructure is in conformance and can immediately alert you when non-conformant changes have increased the risk of failure;
  • allow you to perform “what if” scenarios on anticipated changes and assess the potential impact;
  • let you plan ahead to make sure you have sufficient capacity to adequately recover from likely failures.

 

OnCommand Insight's capabilities can help address these outage scenarios. Let's examine further.

 

Auditing Configuration Changes and Change Plans

OnCommand Insight Assure identifies and correlates service paths that describe the relationship between a particular application and all its mapped storage. The service path includes all virtual machines, physical servers, network devices, and storage systems required to support the application. With Insight you set policies according to your best practices, then monitor and alert on violations that fall outside them. Even in thin-provisioned environments you are alerted when you're reaching storage pool thresholds. This provides the information to proactively manage your storage services, assure service quality, and prevent application failures.
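
To make the service-path idea concrete, here is an illustrative sketch; the classes and the redundancy rule are invented for the example and are not Insight's actual data model.

```python
# Illustrative sketch only -- not OnCommand Insight's data model.
# A "service path" links an application to every component it depends on;
# a policy check flags paths that violate a best practice (here: redundancy).
from dataclasses import dataclass, field

@dataclass
class ServicePath:
    application: str
    components: list = field(default_factory=list)  # VMs, servers, switches, arrays
    fabric_paths: int = 1                           # independent paths to storage

def check_redundancy(paths, min_paths=2):
    """Return every path that falls below the required number of fabric paths."""
    return [p for p in paths if p.fabric_paths < min_paths]

paths = [
    ServicePath("billing", ["vm-12", "esx-03", "switch-a", "fas3270-1"], fabric_paths=2),
    ServicePath("payroll", ["vm-07", "esx-01", "switch-b", "fas3270-2"], fabric_paths=1),
]
for violation in check_redundancy(paths):
    print(f"ALERT: {violation.application} is missing redundancy "
          f"({violation.fabric_paths} fabric path).")
```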

Furthermore, this information, coupled with planning tools, helps you model and validate proposed changes to your environment. A simulation lets you check for configuration and policy violations that could lead to service quality issues or outages. The net result is that you can proactively identify problems before they occur so they can be corrected without affecting your operations.

Insight Assure is designed to provide ongoing compliance for required IT audits. It provides comprehensive audit reports including a detailed audit trail of all events and changes and their impact. This enables you to correlate a change to a service so you can understand the capacity needed for large recovery events.

 

Figure 1) OnCommand Insight Assure shows policy violations and their future impact. This is an example of “missing redundancy” and its potential impact.

 

Capacity Planning

Many recoveries fail because of insufficient capacity, which causes a second outage that compounds the failure. OnCommand Insight Plan lets you roll up data from multi-site environments to get detailed trending, consumption, and forecasting analysis across distributed infrastructures. You can forecast storage needs based on actual usage over time and run "what if" scenarios to forecast when storage will be depleted by tier, business unit, application, and data center.
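
As a back-of-the-envelope illustration of this kind of forecasting (a simple linear trend over invented sample data, not how Insight Plan is implemented):

```python
# Fit a least-squares line to recent daily consumption samples and estimate
# how many days remain until a pool crosses its capacity threshold.
def days_until_threshold(samples_tb, capacity_tb, threshold=0.9):
    """samples_tb: daily used-capacity samples in TB, oldest first."""
    n = len(samples_tb)
    mean_x = (n - 1) / 2
    mean_y = sum(samples_tb) / n
    slope = (sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples_tb))
             / sum((i - mean_x) ** 2 for i in range(n)))  # TB per day
    if slope <= 0:
        return None  # flat or shrinking usage: no projected depletion date
    headroom_tb = capacity_tb * threshold - samples_tb[-1]
    return headroom_tb / slope

usage = [40.0, 40.6, 41.1, 41.9, 42.4, 43.0, 43.7]   # last 7 days, TB (made up)
print(days_until_threshold(usage, capacity_tb=50.0))  # ~2 days to 90% full
```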

 


Figure 2) OnCommand Insight Plan shows you how long you have until capacity reaches set thresholds and your configured and allocated capacity trends.

 

The Bottom Line

Preventing outages requires tools that enable you to set policies according to your best practices, dynamically monitor configurations against those policies, and proactively alert you when potential problems could occur. And just as important, you need to make sure you have the capacity to recover if you do have an outage.

Forrester has published its "Storage Choices For Virtual Server Environments, Q1 2011" report, and NetApp has moved from 3rd place, with 24% of respondents using NetApp storage in 2008, to 2nd with 38% in 2010. That's a 14-percentage-point gain. EMC still leads with 44%, but NetApp is gaining fast.


 

The research, written by Andrew Reichman, shows that 91% of respondents report they are using virtual server technology for production workloads, compared with just 78% last year. This growth in virtualization is consistent with what we are experiencing at NetApp. Our recent Cloud launch highlighted how the move to private cloud starts with a virtual unified storage foundation. We define five capabilities critical to virtual storage for the cloud. Our growth as the storage of choice for virtual environments is in large part due to these key capabilities.


 

 

  • Storage efficiency through a single unified storage architecture and management interface that supports multiple protocols and multi-vendor arrays. The NetApp Data ONTAP operating system provides storage efficiency capabilities such as thin provisioning, deduplication, cloning, and Snapshot copies.
  • Scale up and scale out means having the elastic scalability to scale up, out, or down to meet the dynamic demand of a shared IT infrastructure that is delivering IT as a service.
  • Non-stop operations so you can move data to balance workloads or meet other requirements, because planned downtime is impossible in a shared infrastructure. Data Motion allows you to easily and quickly migrate data across multiple storage systems without disrupting the users or applications that are accessing it.
  • Secure multi-tenancy to cost-effectively and securely partition a single NetApp system to support multiple tenants with MultiStore, combined in an SMT architecture with Cisco and VMware to segment, isolate, and deliver shared server, storage, and network resources to different users, groups, departments, or applications.
  • Service automation and analytics for automated storage provisioning, comprehensive visibility, monitoring, and proactive alerts on availability, performance, and policy compliance.

 

Read more about how NetApp can help you build your Cloud

Guest post by Kristina Brand, NetApp OnCommand team


Last week NetApp announced the new OnCommand management software portfolio as a vital component in its private cloud offering. As Richard noted in his blog, The New OnCommand Portfolio Supports Clouds, we've taken a prescriptive approach to guide you through your move to the private cloud and have defined it by four fundamental building blocks.

 

 

As part of last week's announcement we also unveiled the new OnCommand Insight product line, which combines SANscreen® and Akorri® BalancePoint® to form the service analytics piece of the NetApp management solution set. OnCommand Insight is the cornerstone for managing and delivering optimized services in heterogeneous cloud environments. OnCommand Insight is also a testament to our continued commitment to serve our customers' needs regardless of the vendor(s) they choose—we are committed to delivering flexible and efficient solutions to our customers so that they can build their business, and private cloud environments, on NetApp.

 

OnCommand Insight comprises four products (Assure, Perform, Plan, and Balance), all of which are agentless and help you manage your heterogeneous environments. When forming your initial plan to build your private cloud, you'll want to fully understand your complete environment as it is today. Start by identifying all of your resources so you can understand what's available, what is not optimized, and what gaps you have; then you can begin to define your plan and requirements.

 

Let's focus on the starting point: visibility. Insight Assure discovers your entire storage service path to provide end-to-end visibility of your physical and virtual environments. This is vital, since you'll want to understand what resources you have in place and how they're being used in order to make confident decisions about changes to your resource allocation. Having this visibility also enables you to identify and reclaim orphaned resources, including orphaned capacity from unused VMs, so you can make better use of your existing resources. This is critical when planning your private cloud environment, so you plan and purchase only what you need. You also know the exact service costs to your internal customers, enabling you to do cost "showback" and instill greater cost accountability going forward.
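
As a toy illustration of the reclamation idea (the data structures are invented; Insight discovers and correlates this automatically), cross-referencing allocated capacity against what is actually in use flags the orphans:

```python
# Invented example data: allocated LUNs and the subset actually mapped
# to a live VM or host. Anything unmapped is reclaimable capacity.
allocated_luns = {"lun01": 500, "lun02": 750, "lun03": 250}  # sizes in GB
luns_in_use = {"lun01", "lun03"}

orphaned = {lun: gb for lun, gb in allocated_luns.items() if lun not in luns_in_use}
print(f"reclaimable: {sum(orphaned.values())} GB across {len(orphaned)} LUN(s)")
```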

 

Managing change is difficult, but Insight Assure can alleviate this with ongoing monitoring. You can validate that all of your changes are in line with set policies. If changes violate your policies, you can see exactly what adjustments are needed to correct them and to avoid unplanned outages. This makes planning for migrations, consolidations, and deployment of your private cloud less complicated, reduces the risk of changes, and minimizes the impact on service delivery. Once you've completed your change initiative(s), you are ready to take the next step and start using service analytics to optimize your private cloud services.

 

Stay tuned for the reviews of Insight Perform, Insight Plan and Insight Balance in upcoming blogs. There’s much more to explore on how NetApp can help you plan, deploy and manage your private, hybrid or public cloud. To learn more, visit OnCommand Insight, OnCommand management software or the OnCommand community.

Today we announced our new OnCommand management software portfolio. It's a very exciting day for all of us who have spent the last several months preparing for this launch. The announcement positions the OnCommand portfolio in the context of our Cloud solution.

Our Cloud story has evolved significantly, and we are now in a position to be very prescriptive in helping customers move from virtualized infrastructure to private and public clouds. Since beginning the Cloud program last year, we have gained over 30 service providers delivering more than 50 cloud services around the world. NetApp's cloud solutions have helped provide cloud services that now reach over one billion end users!


 

Our prescriptive approach defines four fundamental building blocks for moving to a private cloud.

 

OnCommand Service Catalog

 

Moving to a services model is more than the introduction of technology. It requires that you understand your customer requirements and define a set of services that can meet them. You should think of defining your services in terms of the quality of service each provides. The OnCommand service catalog allows you to specify service policy attributes such as efficiency (through the application of thin provisioning and deduplication), performance tier attributes, availability by type of RAID, and protection policies for frequency of backups and DR.
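
To make this concrete, here is a hypothetical catalog expressed as plain data; the tier names and attribute keys are illustrative assumptions, not OnCommand's actual schema.

```python
# Hypothetical service catalog: each tier bundles efficiency, performance,
# availability, and protection policies into a single choice for users.
SERVICE_CATALOG = {
    "gold": {
        "thin_provisioning": True,
        "deduplication": True,
        "performance_tier": "15k-rpm disk with flash cache",
        "raid": "RAID-DP",
        "backup_frequency": "hourly snapshots, nightly vault",
        "dr_mirroring": True,
    },
    "bronze": {
        "thin_provisioning": True,
        "deduplication": True,
        "performance_tier": "SATA",
        "raid": "RAID-DP",
        "backup_frequency": "nightly snapshots",
        "dr_mirroring": False,
    },
}

print(SERVICE_CATALOG["gold"]["backup_frequency"])
```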

 

 

OnCommand Service Catalog


OnCommand’s Service Analytics


OnCommand Insight (the combination of SANscreen and Akorri) offers actionable knowledge through the use of sophisticated analytics to enable you to optimize your shared IT infrastructure. Service analytics are essential to implementing a services model and a private cloud. You need visibility into your end-to-end infrastructure in order to understand how you are delivering on your services and to diagnose performance and capacity problems. Additionally, analytics provide insight into how your infrastructure is being used so you can inform service users of the costs they are incurring. Even if you don't implement a full chargeback system, giving your users insight into the services they use and the associated cost (showback) can dramatically change behaviors, such as choosing lower-level, less costly services.
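
A minimal showback calculation can be as simple as the following sketch; the rates and usage figures are invented for illustration.

```python
# Multiply each business unit's consumption by the rate for its chosen tier.
RATE_PER_TB_MONTH = {"gold": 900.0, "silver": 500.0, "bronze": 250.0}  # dollars

usage = [  # (business unit, tier, TB consumed this month) -- made-up figures
    ("legal-research", "gold", 12.5),
    ("web-archive", "bronze", 80.0),
]

for unit, tier, tb in usage:
    cost = tb * RATE_PER_TB_MONTH[tier]
    print(f"{unit}: {tb} TB on {tier} tier -> ${cost:,.2f}/month")
```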

 

OnCommand Insight for service analytics

 

OnCommand’s Automation


NetApp's new OnCommand unified manager gives the user a single view into defining and executing policy-based automation workflows for provisioning and protecting data. The new product integrates Provisioning Manager, Protection Manager, Operations Manager, SMVI, and SMHV. OnCommand supports integrations with popular management platforms, so customers can manage and automate the provisioning and protection of their data from VMware's vCenter or Microsoft's System Center. Automation can dramatically reduce the time it takes to provision and protect your data, often cutting provisioning times from weeks to minutes.


OnCommand Automation

 

Self-Service

 

OnCommand's APIs support integration with applications that can easily provide self-service functionality to end users. This allows customers to integrate the request for services directly into user workflows. For example, a request to create new mailboxes can automatically provision storage and enable protection based on the level of service defined. This frees up IT resources for more critical strategic activities and means that admins can manage more storage without additional resources.
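
Here is a hypothetical sketch of such a flow; the function names are stand-ins, not actual OnCommand API calls.

```python
# Hypothetical self-service flow: a mailbox request drives storage
# provisioning and protection at the requested service level. The helper
# functions are stubs standing in for calls into OnCommand's APIs.
def provision_volume(aggregate, size_gb, tier):
    print(f"provisioning {size_gb} GB on {aggregate} ({tier} tier)")
    return f"vol_{aggregate}_{size_gb}g"

def apply_protection(volume, backup_frequency):
    print(f"protecting {volume}: {backup_frequency}")

def handle_mailbox_request(user, tier, quota_gb):
    """Provision and protect mailbox storage with no admin in the loop."""
    volume = provision_volume("aggr_exchange", quota_gb, tier)
    frequency = "hourly snapshots" if tier == "gold" else "nightly snapshots"
    apply_protection(volume, frequency)
    print(f"mailbox storage ready for {user}: {volume}")

# A portal or ticketing system would invoke this directly:
handle_mailbox_request("jdoe", "gold", quota_gb=5)
```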


Self Service through OnCommand’s APIs

 

 

For more information on OnCommand, visit our product page or join the conversation at the OnCommand community.

By Richard Treadway

 

 

 

This post is the first in a series discussing the importance of management software in the successful adoption of private and public clouds. In this post I'll discuss the business forces transforming data centers and how this transformation is driving the need for storage and service efficiency.

 

There has been a lot written about the pressures driving IT to reduce costs and be more responsive to business imperatives. Explosive data growth, inflexible dedicated architectures and inefficient use of resources are forcing data centers to consider new approaches.

 

Explosive Data Growth - General wisdom says that no matter how much space you have, you'll accumulate more things to fill it. When you move into your new home you have a lot of empty closets; 20 years later, not so much. So it is with data, but the usual response of users when they run out of disk space is to just get more. The steadily falling cost of storage has thus far let enterprises continue to solve the problem of data growth by simply adding more storage. The danger with this approach is that users assume space is infinite. This thinking has led to an inexorable explosion in data, meaning the cost of storage capacity now dominates many IT budgets. A recent survey by ESG found that the majority of those surveyed (58%) had data growth rates of 11% - 30% annually (figure 1).
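
A quick compounding calculation, using roughly the midpoint of that most common answer (about 20% annual growth), shows why those rates are alarming:

```python
# Compound 20% annual growth from an arbitrary 100 TB starting point.
capacity_tb = 100.0
for year in range(1, 6):
    capacity_tb *= 1.20
    print(f"year {year}: {capacity_tb:.0f} TB")
# Year 5 ends near 249 TB: at 20% a year, capacity requirements roughly
# double every 3.8 years (ln 2 / ln 1.2).
```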

 


Figure 1 - Annual Data Growth Rates, by Number of Production Servers

 

 

Dedicated Architectures - I may have a lot of storage space in my house while my next-door neighbor is bursting at the seams. Even while he needs more space and I have extra, it's not so easy to move his possessions into my house. There are all sorts of questions of liability and ownership to consider. And so it is with dedicated architectures that create application silos. If a system is dedicated to a specific application, it needs to have enough capacity to account for fluctuating demands. It requires a lot of agreements and infrastructure to create a system that can dynamically move applications, servers, and data to match needs as they change.

 

Inefficiencies – Urban sprawl and single-family housing have created inefficiencies in land use and commuting costs. Planners are now realizing that centralized communities create efficiencies through shared space, reduced traffic, and economies of scale. So it is with IT infrastructure. If I share resources and dynamically allocate them to meet my changing demands, I can get a lot more efficiency and flexibility from my existing capacity.

 

Virtualization to the rescue – These dynamics point to the need for a new approach that calls for dynamically shared resources. Virtualization delivers on the promise of dynamic sharing of servers to increase utilization and flexibility. This promise has put virtualization in what Geoffrey Moore calls "the tornado": it's no longer a question of "should I virtualize," but "when, how, and what."

 

It's a journey – The transition to virtualization isn't a single event. Moving from application silos to a shared infrastructure involves transformations at all levels of the IT organization. As such, it's a journey that usually involves major initiatives to centralize, standardize, consolidate, virtualize, optimize, and finally outsource. These initiatives are driving two major transformations in the data center: infrastructure efficiency and service efficiency. These transformations must drive equivalent changes at all layers of the stack, including the storage layer (Figure 2). Leaving the storage layer allocated to specific virtualized servers doesn't deliver on the efficiencies and flexibility that virtualization promises. Just as the server layer is virtualized, so must the storage layer be.

 


 

Figure 2 – Data Center Transformation – Virtualization and Automation

 

 

Storage Efficiency initiatives are about deploying new technologies to bring about cost savings. At the storage layer this means technologies that allow for the management of space as a single optimized pool. The application of techniques such as thin provisioning, deduplication, clones, and snapshots delivers substantial cost savings.
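
A rough illustration of how these techniques stack; the ratios below are assumptions chosen for the arithmetic, not measured results.

```python
# Stack two assumed efficiency ratios to see the combined physical footprint.
provisioned_tb = 100.0
thin_ratio = 0.6   # assume only 60% of provisioned space is actually written
dedup_ratio = 0.7  # assume dedup stores written data in 70% of its raw size

physical_tb = provisioned_tb * thin_ratio * dedup_ratio
print(f"{provisioned_tb:.0f} TB provisioned -> {physical_tb:.0f} TB on disk "
      f"({1 - physical_tb / provisioned_tb:.0%} saved)")
```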

 

Service Efficiency involves automating processes and workflows as well as using analytics to drive efficiencies in delivering services.  Service analytics help you gain a holistic view of your storage infrastructure as a unified set of services using discovery, correlation, service paths, simulation and root cause analysis.  By aligning the relationship of storage resources with application service levels, storage analytics provide critical decision support metrics about how to improve availability, performance, and utilization efficiency across applications, business units, data centers, and storage tiers.

 

While storage and service efficiency promise considerable cost savings, there remain serious challenges to realizing them. NetApp Management Software is directly aimed at helping customers gain the efficiencies of virtualized storage and manage the complexity of these transitions to attain higher levels of storage and service efficiency. Take the test and see how efficient your IT is.

 

 

In my next post I’ll discuss the challenges data centers face in realizing these efficiencies and solution approaches that have been successful with our customers.
