
NetApp 360 Blog


By Michael Phelan, Senior Technical Marketing Engineer, NetApp

 

Hey, Mister Oracle administrator: do you have flash in your environment? If you don’t… why not?

 

There is no doubt that flash – specifically the EF540 – will make your Oracle environment run better. The questions you’re probably asking yourself, and that are holding you back, are:

 

1. Do I need to convert my existing environment? 

    • Because there is no way I’m doing that!

2. How much additional work do I have to do? 

    • I’m too busy right now, I’ll look into this in a few months.

3. How much faster?

    • If we do this I need to prove the business value to management.

 

First up: the environment. Let’s say you’re running on a NetApp FAS system today – it doesn’t have to be, but we’ll go with that for this blog. Things are going “OK,” but you’re looking for that performance jump; you’re monitoring the systems and the load is starting to impact the business. NetApp has assembled a solution in which the EF540 and the FAS system work together using Oracle ASM disk mirroring. The ASM disk groups mirror data across the two arrays, with each array defined as its own failure group. This design provides fully synchronous mirroring and automatic mirror rebuilds in the event of a serious hardware failure. It also keeps all the data on the spinning disks you’ve already invested in – all with no changes to your existing environment.
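 

To make the mirroring layout concrete, here is a minimal sketch – not an official NetApp or Oracle procedure – of how a DBA might confirm from Python that an ASM disk group built this way spans one failure group per array. It assumes the cx_Oracle driver, SYSASM access to the ASM instance, and hypothetical disk group, failure group, and connection names; yours will differ.

```python
# Sketch: confirm that a NORMAL-redundancy ASM disk group spans two failure
# groups (one per array). Assumes cx_Oracle and SYSASM credentials; the disk
# group name and connection string are hypothetical.
import cx_Oracle

DISK_GROUP = "DATA"  # hypothetical disk group mirrored across EF540 and FAS LUNs

conn = cx_Oracle.connect("sys", "password", "asm-host/+ASM", mode=cx_Oracle.SYSASM)
cur = conn.cursor()

# V$ASM_DISK lists every disk with its owning disk group and failure group.
cur.execute("""
    SELECT d.failgroup, COUNT(*)
    FROM   v$asm_disk d
    JOIN   v$asm_diskgroup g ON g.group_number = d.group_number
    WHERE  g.name = :dg
    GROUP  BY d.failgroup
""", dg=DISK_GROUP)

failgroups = dict(cur.fetchall())
print(f"Failure groups in {DISK_GROUP}: {failgroups}")

# With two failure groups, ASM normal redundancy keeps one copy of every
# extent on each array, so either array can be lost without losing data.
if len(failgroups) == 2:
    print("Disk group is mirrored across both arrays.")
else:
    print("Expected exactly two failure groups, one per array.")

cur.close()
conn.close()
```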

 

ef540-fas-oracle.png

Next up: how much additional work is involved. Well, let’s be honest – if you don’t happen to have Oracle ASM in production there will be some effort, but it will not impact the current production environment. Remember, we’re not suggesting you convert the existing setup, just add the EF540 to get better performance. The solution is simple and uses standard Oracle ASM mirroring that most DBAs can manage easily.

 

And finally, how much faster? Well, we have one customer who saw a 17-godzillion-X improvement… But really, no one wants to spend the time and money to make this happen if, in the end, you’re not getting any faster or your data isn’t protected. There are a couple of points here I want to make clear. First, performance: it has two aspects, throughput and latency, and it’s important to deliver both. This solution is designed for customers looking for extreme application performance with low latency. The second is data protection – at speed – and in this solution the NetApp FAS3240 storage array lets customers define data protection service levels through Snapshot, SnapMirror, and SnapVault technologies while the EF540 meets the performance requirements.

 

The graph below shows the NetApp EF540 as a standalone system and the EF540 in combination with the NetApp FAS3240 delivering on this goal – the combined solution runs just as fast as flash alone! The bottom line is that customers are demanding better performance. In a couple of years every environment will need flash just to keep customers coming back. Be the admin that’s leading the charge.

oracle-on-ef540-flash-array.png 

To find more information about our E-Series Storage products, please visit our website: http://www.netapp.com/us/products/storage-systems/

Last week, NetApp outlined to the market its strategy for delivering seamless cloud data management across public, private, and hybrid clouds. Looking back, we wanted to share a few select pieces that best illustrate NetApp’s growing capabilities in enabling our customers to go further, faster. Stay tuned for more on NetApp’s cloud strategy and portfolio over the coming months.


Cloud_News_1_HiRes.jpg

NetApp pitches Data ONTAP, ‘universal data platform’ for clouds - InfoWorld

 

NetApp's hybrid cloud plan targets storage for any virtual systems - Network World

 

NetApp Previews Hybrid Cloud Storage Initiative - ITBusinessEdge

 

Sponsored Post: NetApp uses clustered data ONTAP to offer open approach to the cloud - GigaOM

Cloud_Partnership_1a_HiRes_RGB.jpg

Oracle OpenWorld is all about the Cloud this year, and NetApp is no exception. With new features for customers who run our storage with Oracle Database, Enterprise Manager, and Oracle VM, plus new FlexPod designs, NetApp is making it easier for companies to manage, move, and deploy their data where it makes the most sense. Customers can now choose to deploy a private cloud infrastructure quickly, run their data in a public cloud to leverage economies of scale, or even take a hybrid approach – all without having to sacrifice data management capabilities. As we announced last week, NetApp is committed to providing seamless cloud data management across any blend of private and public cloud resources, and our ongoing work with Oracle will support many of these efforts.

 

Much of what we are announcing at Oracle OpenWorld relates to enabling our customers who want to take advantage of all the features of NetApp Clustered Data ONTAP, the #1 branded storage operating system*. Now customers running NetApp storage with Oracle software can more easily take advantage of a universal data platform that enables non-disruptive operations and near-infinite scalability for software-defined data centers.

 

Everyone is so busy during Oracle OpenWorld it is easy to lose track of all the details. So here is a look at some of the announcements you may not have been aware of:

 

NetApp Integration with Oracle Database 12c

 

NetApp and Oracle have collaborated to provide the ability to quickly clone a pluggable database (PDB) from the Oracle Database SQL command line in Oracle Database 12c Multitenant environments. This integration leverages NetApp FlexClone technology, which allows you to develop and test applications faster by creating instant, space-efficient clones of PDBs that shorten design cycles and improve service levels. To use this integration, download the NetApp Cloning Plug-in for Oracle Database; the plug-in is free and is now available to customers from the NetApp Communities site.
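 

To show what SQL-driven PDB cloning looks like in general (the plug-in’s own command syntax is not reproduced here), the sketch below uses Oracle’s native CREATE PLUGGABLE DATABASE … SNAPSHOT COPY clause, which defers the datafile copy to storage-level cloning when the underlying storage supports it. The connection details, PDB names, and paths are hypothetical.

```python
# Sketch: thin-clone a PDB from Python using standard Oracle 12c SQL.
# Credentials, PDB names, and paths are hypothetical; the NetApp Cloning
# Plug-in provides its own storage-aware mechanism on top of FlexClone.
import cx_Oracle

conn = cx_Oracle.connect("sys", "password", "db-host/cdb1", mode=cx_Oracle.SYSDBA)
cur = conn.cursor()

# In Oracle 12.1 the source PDB must be open read-only before it can be cloned.
# SNAPSHOT COPY asks the database to build the clone from a storage-level
# snapshot rather than copying every datafile block by block.
cur.execute("""
    CREATE PLUGGABLE DATABASE dev_pdb FROM prod_pdb
      FILE_NAME_CONVERT = ('/oradata/prod_pdb/', '/oradata/dev_pdb/')
      SNAPSHOT COPY
""")

# A newly created PDB starts in MOUNTED state; open it before handing it over.
cur.execute("ALTER PLUGGABLE DATABASE dev_pdb OPEN")
print("dev_pdb cloned and opened.")

cur.close()
conn.close()
```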

 

NetApp Updated Plug-In for Enterprise Manager 12c

 

The NetApp Storage System Plug-in for Oracle Enterprise Manager 12c delivers comprehensive availability and performance information for NetApp storage systems. By combining NetApp storage system monitoring with comprehensive management of Oracle systems, Cloud Control significantly reduces the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies. A new version announced today (12.1.0.2.0) adds support for NetApp storage systems running clustered Data ONTAP, new consolidated views of database and storage performance over NFS, and enhancements to communication options. This plug-in is part of the Oracle Validated Integration program. It is free and is now available to customers for download from the NetApp Communities site.

 

NetApp Enables DBaaS in Oracle Environments

 

Customers also have the ability to deploy Database as a Service (DBaaS) using either Oracle Database 12c with the NetApp Cloning Plug-in for Oracle Database or the Enterprise Manager 12c Database as a Service solution. Both options can be powered by NetApp FlexClone technology, thanks to the ongoing relationship between NetApp and Oracle.

 

NetApp Updated Plug-in for Oracle VM Storage Connect and RMAN Media Management Library

 

We are also updating the NetApp Plug-in for Oracle VM Storage Connect and the NetApp Media Management Library for Oracle RMAN to provide support for customers running clustered Data ONTAP.

 

Availability of New FlexPod Datacenter with Oracle Designs

 

Back in April we told you we were working on some new FlexPod designs for Oracle Linux and Oracle VM. We are pleased to announce that these designs are now available. They enable our customers to address their infrastructure, database, and application challenges with a unified, pretested, and validated data center solution, allowing them to rapidly virtualize their business-critical Oracle Databases and applications confidently, securely, and at reduced cost. You can read all about it here.

 

For more information visit us at our booth in Moscone South #901 or visit our community page. You can also read about all of the NetApp solutions for Oracle on our web site.

 

[*]Source: IDC Worldwide Quarterly Disk Storage Systems Tracker Q4 2012, March 2013 (Open Networked Disk Storage Systems revenue).

By Michael Harding, Product Marketing Manager, FlexPod

 

FlexPod.png

Oracle OpenWorld (OOW) is larger than ever, with 60,000 attendees expected this year. One of the largest technology shows, this one will cover a wealth of topics, including Cloud and Big Data, but it is primarily centered on Business Applications and what’s new for them. This emphasis on Business Applications is what makes the show so important for FlexPod.


FlexPod is the converged infrastructure solution with validated designs that speeds IT infrastructure and application deployment while reducing cost, complexity, and project risk. In the spring at IOUG we talked about specific Oracle validated designs that we’ve developed on FlexPod, and that roadmap of validations and new programs continues. Not long after OOW closes, FlexPod will be announcing new designs for clouds, as well as FlexPod management enhancements and the Cooperative Support program. Watch the FlexPod Community for this news as it’s released. With FlexPod, companies can deploy business-critical solutions faster, with less risk, and reduce the total cost of IT infrastructure at the same time.

 

How FlexPod Supports "Business-critical"


The term “business-critical” gets batted around by vendors, as do “Cloud” and “software-defined” whatever. In the case of NetApp, it’s important to point out that with the latest enhancements to our #1 storage OS, Data ONTAP, we are able to deliver the most non-disruptive storage layer for leading business databases and applications such as those from Oracle. FlexPod with clustered Data ONTAP minimizes business disruption through the ability to virtualize and non-disruptively move data volumes across physical systems. That is, storage systems and system data can be worked on while business databases and applications are still running, eliminating storage maintenance windows and planned downtime.


Clustered Data ONTAP offers a unified scale-out storage solution for an adaptable, always-on storage infrastructure that accommodates today’s highly virtualized infrastructures. Organizations can virtualize storage across multiple HA pairs, managed as a single logical pool – and scale easily to tens of petabytes. Support for clustered Data ONTAP further differentiates the FlexPod platform, enabling customers to:


  • Deploy an adaptable IT infrastructure with unmatched flexibility
  • Reduce risk with an always-on environment, optimized for zero planned downtime
  • Achieve unparalleled efficiency and lower TCO with optimized resources

 

The other thing that makes FlexPod a highly available platform for Oracle is the HA inherent in the solution itself, with HA features at every layer of the stack, designed for tier-1 business applications:


  • At the server layer, FlexPod uses Cisco UCS blade servers, where blades can be quickly added and discovered and leverage a highly available boot process.  Installation doesn’t affect other system components, and the chassis itself has redundancy features and configurability.
  • Cisco UCS Manager runs as two fully redundant instances replicated across a pair of Cisco UCS Fabric Interconnects, and the Fabric Interconnects themselves contain extensive logic for managing heartbeat, problem detection, and remediation.
  • The Cisco unified network components are installed in pairs with redundant cabling, providing redundant access to premise Ethernet, SAN, and management networks. 
  • NetApp FAS storage with Clustered Data ONTAP eliminates planned downtime and prevents business disruptions at the storage layer.

 

Shared Oracle-NetApp Customer Proof Points


flexpod-building-blocks.png

There are a large number of shared customers between Oracle and NetApp.  Specific to FlexPod as a highly available and efficient IT platform, we have a few relevant, recent wins:


  • Groupe Mutuel Insurance transformed their data center application availability and performance while saving on power, space, and management with Oracle and FlexPod.  Pascal Sarech, the Infrastructure Manager there, explained why they bought the solution: “We operate around the clock so can’t afford systems failures. We must have high availability, disaster recovery capabilities, and a flexible architecture.”
  • Suttons Group, one of Britain’s most successful privately owned logistics companies, runs Oracle on FlexPod.  According to Robert Sutton, the IT Director, “The most profound impact FlexPod has had on the business is that employees have started commenting about how their IT systems have not failed in a long time. We have uptime certainty and absolute stability.”
  • Cyta Hellas, the leading integrated telecommunications company in Cyprus, started commercial operations in Greece in 2008.  They are realizing the value of FlexPod to support growth in an efficient way. According to the IT Manager at Cyta Hellas, “In terms of efficiency, connectivity, and design, FlexPod is a cut above the rest. Expansion and optimization come much cheaper, whether you look at hardware or staff resources.”

 

Oracle on FlexPod


We’ve recently published a number of Oracle-based reference architectures for FlexPod.  These new validated designs bring FlexPod converged infrastructure benefits to an expanding set of Oracle workloads.  The three most recent are:


 

Everyone knows that Oracle is the market share leader in RDBMS, and more than that, they are an ISV associated with Tier 1 applications.  The FlexPod integration with Oracle demonstrates the growing FlexPod investment in the Oracle database and application space, and maps to NetApp’s increasing movement into Tier 1 storage.

For our resellers and customers, the value of these designs is that they are pretested and validated Oracle database and application building blocks.  Resellers really like the velocity that FlexPod brings them – being able to more quickly get solution infrastructure and packaged applications deployed, allowing more energy to be focused on the customization and development end of these engagements.


There are specific Oracle related uses and benefit areas that we’re seeing a lot of on FlexPod:


  • Dev/Test – NetApp cloning allows admins to instantly create virtual copies of databases that use very little incremental storage and can be quickly deleted and re-created to speed test cycles.
  • Predictable performance – With FlexPod validated equipment, DBAs and application teams can expect consistent, predictable system performance and uptime. 
  • Data protection – With native NetApp storage features like Snapshots and SnapRestore – which work with RMAN, the Direct NFS client, RAC, and ASM – databases can be backed up or restored in minutes. 
  • Reduced licensing – NetApp storage systems have significant processing capability of their own; offloading RMAN backups, compression, and encryption to the storage hardware frees up server cycles, not just improving performance but also reducing the need for licensed server processor cores.

 

flexpod-community-link.png

 

 

Watch for upcoming FlexPod news in the FlexPod Community


I hope it’s a productive and educational OOW for everyone.  Monitor FlexPodCommunity.com for fall news on FlexPod, and while you’re there, share your thoughts and questions about FlexPod, Engineered Systems, and converged infrastructure in general.

 

 

UPDATE 11/13/13: Link added for "FlexPod Datacenter with Oracle RAC on Oracle VM": http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/flexpod_orcracovm_fcoe.html


The Duality of Storage

Posted by NETAPP_360 Sep 19, 2013

By Peter Corbett, Vice President & Chief Architect of NetApp


I’ve spent most of my career working on storage, file system and database technology of one form or another.  This has given me some insight that others might find valuable.  I have a good vantage point on storage technology, and how it is evolving to meet the changing needs of its consumers, many of which are large organizations and enterprises running or using a variety of applications.  In any industry, it is important to understand trends, and the impacts of those trends, and this is no less true in the storage industry than in any other technology area.   I’d like to share some of my thoughts in a series of blog postings.

 

Clustered_Data_ONTAP_Trains_HiRes.jpg

It is an interesting time in the storage industry.  There are several trends at work that individually would each have a large impact on how storage is designed and deployed, but that combined are leading to even larger shifts.  In this first posting, I will discuss the fundamental nature of storage systems, and how the underlying requirements of storage are in many ways unique compared to other technology areas.

Storage has some unique properties that make it different from computing and networking.  While some stored data is ephemeral, and can thus be created and destroyed quickly without much need to consider it outside of the application context in which it is needed, most data has tangible and potential value long after it is first created.  Much data can sit idle for long periods of time, even decades, without ever being accessed by an end user or application, yet it cannot be deleted.  Data is difficult to move; it must be catalogued, managed, owned, and kept secure.

 

The title of this posting is the Duality of Storage, and that is what I want to talk about now.  There are two timescales that are important with stored data.  The first is how quickly a user might need to access any byte of it.  The second is over what timescale a user will need to keep it stored.  These timescales differ by many orders of magnitude with respect to each other, and each also has many orders of magnitude of dynamic range across different applications and depending on the age of the data being accessed.  The first timescale is the operational requirement for use of the data.  We can think of this as the primary performance dimension.  It includes read and write latency and throughput or bandwidth.  All else being equal, faster is better.  But all else is not equal.  Faster is more expensive.  Faster requires different technology.  Faster requires locality to reduce latency.  Slower allows both the use of cheaper technologies and increased distance.  Distance allows consolidation.  Consolidation allows the eventual crushing of data to its most efficient state of density and highest degree of sharing.

 

There are other factors at work here as well.  The durability of data – the degree of certainty that any particular byte of data can eventually be retrieved – also affects the cost of storing it.

 

Thus we find a duality in storage systems.  They function as an extension to the networking and compute infrastructure to store and retrieve data quickly (low latency) and at a high rate (high bandwidth). Just how quickly and at how high a rate determines much about the design and ultimately the cost of the storage system. But most of the data that is collected or generated by applications will have a much longer life than its brief bursts of operational glory.  It will be stored perhaps for years or decades, and it may be replicated, retrieved, scanned, moved, cataloged, migrated, and analyzed many times during its lifetime. This long-timescale dimension of data retention is a very interesting aspect of storage systems.  Much of the value of a storage system is in the handling of point-in-time replicas of data: backups, snapshots, or versions.  The gap between the length of time old data must be stored and the speed with which it must be made available on demand creates a set of constraints on the operation of the storage system.  The optimization of the system to store retained data as efficiently as possible while keeping it acceptably accessible at all times is one of the great challenges of storage system design.  Put simply, time (in the form of increased acceptable latency) buys efficiency.  On the short timescales of operational latency, whether average access latency, maximum latency, first access latency, write latency, or post-write de-stage time, each additional time increment, whether microseconds, milliseconds, seconds, or hours, buys additional opportunity for efficiency and opportunities to reduce the costs of storing the data.  Each increment of storage lifetime, whether seconds, hours, months, or decades, motivates pushing to lower cost, more efficiently utilized storage.  There is a similar dynamic to the operational latency dimension in compute and networking, but the second dimension is unique to storage.

 

The application of solid-state flash storage technology to large-scale storage systems has greatly intensified the exposure and exploitation of this duality.  Flash provides a fundamentally different technology point in the multidimensional latency/capacity/bandwidth/cost/durability space, and thus motivates a rethinking of how storage systems are designed to leverage flash efficiently for different workloads.  At the same time, hard disk drive technology is continuing to evolve, and it is changing the way systems are designed to handle the long tails of retention of different point-in-time images of data efficiently. Flash fundamentally changes the opportunities to achieve high performance for the primary operations tier. Ever more capacious HDDs change the dynamics of efficiently and durably storing large numbers of versions and snapshots of large data sets for long periods of time.

 

Optimizing both the operations layer, which will mostly be contained in solid-state storage, and the retention layer, which will mostly be contained in low-$/GB HDDs, is the task we undertake as builders of large-scale storage systems.  The available hardware technologies allow us – indeed, compel us – to optimize in both spaces at once.

Part 5 of a 5-part series on choosing media for NetApp FAS storage


By Tushar Routh, Sr. Manager, Storage Products, NetApp


encryption.png

I want to wrap up this series with a discussion of NetApp Storage Encryption (NSE), which uses self-encrypting drives (SEDs) to enhance data security. With all the recent news about corporate and government espionage, there’s been a definite uptick of interest in security and encryption solutions.

 

Why have I saved NSE for last? Two reasons:

 

  • It builds on everything I’ve covered so far. Our self-encrypting drive portfolio includes both performance and high-capacity models, and we’ll be adding SSDs in the future as well.
  • We’re in the process of releasing several new self-encrypting drives.

 

Understanding NetApp Storage Encryption

NSE is the NetApp implementation of full-disk encryption (FDE) using self-encrypting drives (SEDs) from leading drive vendors. Because encryption and decryption take place on the drive itself after data is written by Data ONTAP or before it is read, NSE operates seamlessly with Data ONTAP features such as deduplication and compression. All data on a drive is automatically encrypted, so using NSE is an easy way to make sure that data at rest is protected while maximizing the ROI of your NetApp storage.

 

As you might expect, this technology is front and center for government, healthcare and financial organizations. The physical drives themselves are tamper proof, and NSE prevents unauthorized access to encrypted data at rest. It prevents someone from removing a drive or shelf of drives and mounting and accessing them elsewhere. In addition, it prevents unauthorized access when drives are returned after a drive failure and simplifies the disposal of drives.

 

Key management for NSE is provided by an external solution. NetApp has partnered with SafeNet to offer the SafeNet KeySecure Key Manager, available direct from NetApp and from our partners.

 

link-image.png

The NetApp NSE Portfolio

All FDE drives that NetApp sells adhere to the Trusted Computing Group (TCG) AES-256 encryption standard. We also require FIPS 140-2 certification, which is a standard requirement for the public sector.

 

NSE is supported across all current FAS/V platforms; the main consideration is that you can’t mix encrypting and non-encrypting drives in the same storage system.

 

Our drive portfolio includes a 600GB performance drive and a just-released 900GB performance drive as well as a 3TB LFF high-capacity option. A 4TB LFF option is set to release this month. We plan to offer a self-encrypting SSD in coming months, so ultimately – although a release date has not been set – we plan to support Flash Pools that are fully encrypted once we have a self-encrypting SSD in our portfolio.

 

Wrap Up

NSE is part of a broader portfolio of encryption products that NetApp offers to enhance data security. As mentioned above, we also offer SafeNet KeySecure for simplified key management. Other encryption options include:

 

  • SafeNet StorageSecure inline encryption appliances that support granular encryption at CIFS/NFS share, export or volume level and compartmentalize shared storage into cryptographic silos.
  • Brocade Encryption Switches and blades (BES) that encrypt data at rest on disk and tape in Fibre Channel environments.

 

I hope this blog series has given you a better understanding of all the available NetApp storage options and how to combine them to create a storage subsystem that’s tailored for your needs. If there’s something you’d like to hear more about, please let us know in the comments.

 

Here are links to all the previous posts in case you missed any of them:

 

 

Stay tuned in early October for a guest blog post from SafeNet that will expand upon NSE with KeySecure.

This post was originally published to JR's IT Pad.

 

By John Rollason

 

The IT market is changing faster than ever. Industry trends including Cloud, Big Data, Mobile, Flash, and the Software Defined Data Centre mean this will only accelerate. The landscape is changing for CIOs as workloads become more distributed and hybrid cloud becomes reality. So, given this complexity, what should CIOs and the rest of us in IT be discussing and thinking about? Where should you be looking to save money, and where should you invest your time? The following ‘3-stage framework’ is based on a talk given by Matt Watts, our EMEA Director of Technology & Strategy, at a few events recently. I thought it was worth sharing.

 

Evolution+of+IT.jpg

 

1)      The cost of Commodity IT

These are the things IT has to do to support the business: necessary and time consuming, but typically adding little value. Ask yourself the question ‘If I started today from scratch, what would IT do and not do?’ Once you’ve identified these Commodity areas you can make decisions about how to deal with them. Do you have them delivered to you as a Cloud service? Or do you build shared infrastructures that are highly efficient and automated to reduce the costs of running them? This is typically a discussion focused on operational excellence and legislative requirements, rather than technology.

2)      The opportunity of Business Value IT

These are the things that create real value for your business: the applications that power it, or the products that it creates. With more focus here it’s amazing how much additional value IT can add through aggressive application of technology innovation. An example: accelerating test and development by enabling developers to instantly create database copies. The focus here should be on providing high levels of automation and self-service.

3)      Be ready to invest in New Opportunities

Companies are looking to exploit data and information – whether through social media feeds like the Twitter Firehose (500,000,000 tweets streamed directly to you every day!), new analytics tools such as Hadoop to mine vast quantities of information for trends and patterns, or BYOD (Bring Your Own Device) strategies to better enable your mobile workforce. For example, within NetApp IT we recently deployed a Hadoop solution, which has reduced queries on 24 billion records from 4 weeks to less than 10.5 hours, accelerating our team’s ability to respond to customer needs.  It also enabled a previously impossible query on 240 billion records in less than 18 hours, further enhancing our proactive service capabilities. A recent survey by Vanson Bourne showed that 69% of C-level executives cite technology as one of the main reasons why business decisions are not being made quickly enough.

 

The CIO challenge

The opportunity for CIOs to add value and competitive differentiation to the business increases exponentially as investment shifts from Commodity IT to Business Value IT and New Opportunities. According to Gartner, 63% of the IT budget is spent running current IT infrastructure, 21% on ‘Grow’ to meet the natural growth in application performance requirements and data, and only 16% on New Opportunities, where there is the potential to create the most value. This is the challenge for a modern CIO: how do you maintain operational excellence whilst increasing investment in ways to add business value and exploit your unique data? How do you move from being a builder of IT to a broker of services?

By Jay Kidd, CTO, NetApp

 

If you work in IT in 2013, you work for an IT service provider. It may not feel that way to the majority of IT professionals yet, but it does now to CIOs. The past few years of “which cloud” debates – public, private, Amazon, Azure, SaaS – have settled into the Hybrid Cloud model. This aligns with a CIO’s view of an application portfolio to serve their business and the industry view of building a range of technologies delivering IT in compelling new ways. 

 

Cloud_Partnership_3_HiRes_RGB.jpg

CIOs care about delivering applications and managing information that their organizations depend on to operate. They will pragmatically choose the delivery model for each application that balances service level, cost, and control. Some applications will be purchased in the Software-as-a-Service model. Some will be run by cloud service providers that offer virtual private clouds or public clouds on familiar platforms. Some will be done in the hyperscalar cloud services at Amazon, Microsoft, or Google. And some will continue to be run in an enterprise’s own datacenter and by the CIO’s own staff.

 

The unifying model CIOs now seek is to view all of these options through a service lens. What is the monthly cost? Am I getting the operational flexibility and responsiveness that I need? Am I paying for more than I need? Can I change quickly? This comes naturally to the Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and IT-as-a-Service (ITaaS) service providers. However, it is a new way of life for traditional internal IT staff as they race to become internal service providers and deliver private clouds with the same service levels as public clouds. The CIO’s view is that this portfolio of service options comprises their Hybrid Cloud. Not every organization will need or use all of these service options, but the vast majority will use at least two.

 

So the challenge then moves from building clouds to managing the interaction between the clouds that comprise a CIO’s application landscape. How does information move between private, public, and hyperscale clouds? Can I get any operational consistency across these models to homogenize the services I am offering to my internal customers? Can I do disaster recovery (DR) between cloud options? What mix optimizes cost?  

 

This is where common technologies across clouds deliver the most value. Managing virtual machines (VMs) in the same way in a private cloud and a virtual private cloud closes the seams for operations. And having a common set of storage services enables more graceful deployment of data and provisioning of applications. Data ONTAP provides this common platform. 

 

Data ONTAP is the most deployed enterprise storage operating system in the world. More data is managed under ONTAP than in any other single storage architecture. Hundreds of thousands of customers use ONTAP in their enterprises. Over 175 NetApp Service Provider Partners have also deployed Data ONTAP as the storage foundation for over 300 service offerings. Data ONTAP can be deployed in a manner that allows direct use by Amazon EC2 compute instances, and it can run as a Virtual Storage Appliance in a VM on an application host. Data ONTAP gets around.

 

These Data ONTAP systems can be interconnected to build a data platform for the hybrid cloud. Data can easily be replicated between an enterprise data center and a service provider to enable DR services. And in the DR scenario, the same storage efficiency services (do you really want your VMs to expand 300% when you move them to the cloud?), database cloning, snapshots, and application integration are available, which makes the operational model the same on premises or in the cloud. Backup services built on Data ONTAP are already available from several service providers. And a common data platform can enable services like automatic translation between virtual machine formats, invoked when data moves. The possibilities for innovation are tremendous.

 

Hybrid clouds are new, and a lot of experience is yet to be gained before the CIO’s goal of managing an integrated portfolio of IT services can be fully realized. But those customers who are building their private clouds on Data ONTAP, and those service providers delivering ITaaS on Data ONTAP, already have a head start on a key technology that provides the data management foundation for a hybrid cloud relationship.

 

Hear more about NetApp and cloud in the videos below:

oow_logo_200x200.gif

It is that time of year again! That’s right, next week NetApp will be at Oracle OpenWorld 2013 at the Moscone Center in San Francisco! The event kicks off on Sunday, September 22nd, and we have tons of activities lined up for you. Find everything you need to know, from our booth location to featured speaking sessions, below.

 

Visit Us at Booth #901

Come see why thousands of Oracle customers rely on NetApp as their storage of choice. Stop by our booth (#901) to discover how you can accelerate business success by deploying sooner, running faster, and reacting quickly to changing business needs with NetApp Solutions for Oracle technologies.

 

Social Media at Oracle OpenWorld

Whether you will be joining us in person or keeping tabs from afar, stay up to date by following @NetApp on Twitter. Also, share your thoughts and read what others are saying with our event hashtag #NetAppOOW. You can follow us on Facebook and LinkedIn too.

 

footballntap.png

Want Even More? Tweet to win!

Each day we will be randomly selecting three lucky Tweeters to win great prizes. Share your experience by using the #NetAppOOW hashtag throughout the week and you could win!

 

Presentations in Our Booth

Did you know the National Ignition Facility at the Lawrence Livermore National Laboratory uses NetApp to power a large Oracle VM deployment for its research needs? This is just one of over 12 exciting topics that will be presented in our booth during the show, including solutions for Oracle Database and applications, cloud deployment, virtualization, flash technologies, and the benefits of FlexPod®. Be sure to stop by and participate in one of our presentations for a chance to win an exclusive official NFL football.

 

 

 

Featured Sessions

Expand your knowledge and enhance your Oracle OpenWorld experience by attending one of these sponsored sessions:

Best Practices for Running Oracle Database Instances on Amazon Web Services EC2 [CON4728]

Monday, September 23, 3:15-4:15pm

 

Tim Donovan, Service Provider Architect, NetApp

Abdul Sait, Principal Solutions Architect, Amazon

Tom Laszewski, Amazon Web Services


Harness the Power of Oracle Database 12c with Oracle Enterprise Manager Database as a Service [CON9583]

Tuesday, September 24th at 3:45-4:45pm

 

William Heffelfinger, Director, Enterprise Ecosystem Organization, NetApp

Matthew Mckerley, Vice President, Oracle

Adeesh Fulay, Consulting Product Manager, Oracle


 

Featured Demos and Solutions

NetApp is a recognized innovation leader in storage solutions, and we invite you to find out more about our solutions for your Oracle environment. NetApp can help you:

  • Deliver an innovative cloud infrastructure for Oracle apps
  • Accelerate database application performance and analytics with flash technology
  • Analyze critical business data quickly
  • Integrate Oracle Database 12c and DBaaS
  • Perform rapid Oracle backups and restores
  • Improve SLAs with clustered Data ONTAP® nondisruptive operations
  • Manage your data your way
  • Deploy infrastructure quickly with the FlexPod data center platform

 

We will have a FlexPod configuration and an EF540 on display in our booth. You can find all the details on our in-booth demos here.


 

Other NetApp Booth Activities

Leveraging NetApp Flash Technologies for your Oracle Database

 

Monday, September 23rd – 11:30am

Wednesday, September 25th – 12:30pm

 

Jeff Steiner, Enterprise Architect

 

Hear from NetApp’s leading database solution architect on the practical value of flash technologies for databases. This overview will include real examples from real customers with real AWR data. The right solution is more than just throwing flash at a performance problem; it involves understanding the real performance barriers and issues beyond raw performance numbers.

 

View the NetApp Mini-Theater Schedule at Oracle OpenWorld 2013, here.

 

Experts on Demand

Stop by the experts bar in our booth and have a one-on-one technical session with one of our experts, who can evaluate your unique situation and suggest solutions that will improve the way your business runs. Running Oracle applications in the cloud? Need to improve application uptime? Waiting too long to deploy critical infrastructure components? We can help. Whiteboard and markers provided at no additional charge.

 

Come see why thousands of Oracle customers rely on NetApp as their storage of choice for Oracle Database and applications. Whether you are deploying applications in the cloud, running high-performance databases, or consolidating costly infrastructure redundancies, NetApp can help you reach your goals. Stop by booth #901 to find out how.

 

Hope to see you at Oracle OpenWorld! For more information about our planned activities please visit our event community page.

 

By Brad Nisbet, Cloud Solutions Marketing Manager at NetApp

 

brad nisbet.jpg

The advent of cloud brings great promise for IT organizations to meet increasingly demanding business and operational objectives.  A cloud-based IT delivery model can speed up application development and provide flexible environments to accommodate the dynamic and unpredictable needs of the organization and its customers.  It’s easy to see why there is so much interest among companies to learn more about how they can benefit from cloud.

 

Despite the allure, however, many businesses are still reluctant to fully embrace a cloud model due to the perceived risks. As cloud services continue to evolve and become increasingly vital for success, it’s critical to properly understand the inhibitors that are leaving many unsure about how to jump into the cloud.

 

When thinking of cloud, the issue of security is often top of mind for many organizations, and rightfully so – any organization needs to carefully consider the security of its environment, whether on- or off-premises.  However, there are other common inhibitors to enterprise cloud adoption that are just as significant and increasingly top of mind, including managing complexity, creating or preserving IT agility, and maintaining control of valuable business data. These risks aren’t necessarily new to cloud computing, but as the cloud continues to change and take shape, so do the risks.

 

Complexity

Dealing with complexity is often a significant inhibitor for businesses that are considering incorporating public cloud into their IT environment.  Companies often struggle with the idea of ‘How do I get started?’  Choosing from the sheer variety of services offering different service levels, different virtualization and compute platforms, and different data management frameworks can be a daunting task for any CIO’s team.  In addition, many organizations are perplexed by how to manage elements of IT across a blend of private and public cloud resources, particularly the intricacies of managing data across disparate locations and platforms in a hybrid environment.

 

Developing a clear strategy, identifying which workloads can be moved off-premises and setting concrete performance requirements is a great way to begin identifying services and providers that can assist in this transition.   Additionally, taking a piecemeal approach at the start will allow organizations to experiment with different solutions to find the best combination of services that meets their needs.

 

To further minimize the fear associated with such a daunting and seemingly complex shift to public cloud, it’s also important for organizations to feel comfortable that once they take the leap, they have options to fine-tune and adjust over time (if not right away!).

 

IT Agility

Delivering IT is about meeting the needs of the business.  As these needs change, IT needs to adapt, and fast.  Immediate responsiveness is paramount.  For years organizations have been working toward delivering agility within the datacenter, and now as public cloud is folded into the strategy, the ability to move applications, workloads and data among cloud resources will be critical to extend this agility to the cloud.  However, this is easier said than done.

 

In short, IT agility means having the capability to fine-tune architecture and solutions over time in a dynamic environment.  Although choosing a cloud service provider to complement a set of IT services is indeed a means to deliver a flexible and dynamic environment, it doesn’t necessarily mean there is ongoing flexibility among the cloud providers.  Many organizations perceive cloud provider lock-in as a significant hurdle to adopting a public cloud model – and without the right set of tools, it is.  The ability to choose best-of-breed solutions has been a cornerstone of agility in the datacenter, and so it will be for the cloud.

 

What CIOs and their teams really want is the ability to choose among cloud services knowing that, for whatever reason – a change in business needs, policies, governance, location, and so on – they can make adjustments with minimal pain and impact to the business.

 

Data Control

For years, large organizations have built their own virtualized data centers and private clouds as a means to ensure control of not only their IT environment, but perhaps more importantly, control of their data.  Talk to just about any IT organization and it’s clear the pains they’ve gone through to support the business with the right levels of data performance, cost, security, access, protection, and governance.

 

As the popularity of the clouds grows, more and more IT organizations are drawn to exploring public cloud options.  More than ever before, mandates from the C-level are being issued to “go figure out how to use the cloud,” whether it be from a regional cloud service provider or a hyperscale provider such as Amazon Web Services.  The IT organization is now faced with bringing public cloud into the mix, often managing across both private and public cloud services, without dismantling the data control they’ve worked so hard to achieve.

 

However – there is hope!  Thankfully, businesses can leverage the seamless data management and efficiency of Data ONTAP to build cloud infrastructures that balance private and public cloud resources while retaining full control and portability of their data.  As CIOs and their teams continue to evolve toward being brokers of services that span cloud resources, and as more diverse and demanding workloads move to the cloud, the role of IT will become increasingly vital to maintaining the level of control and efficiency across a hybrid cloud environment. 

 

IDC recently reported that spending on cloud computing is expected to triple in the next five years. While some enterprises remain reluctant to fully embrace the cloud, ultimately it is going to be part of the future of computing and IT infrastructure.  Understanding the risks and developing a clear strategy for dealing with complexity, agility, and data control will ensure a successful transition to the cloud.

Part 4 of a 5-part series on choosing media for NetApp FAS storage


By Tushar Routh, Sr. Manager, Storage Products, NetApp

 

storage-subsystem-design.png

In this post I’ll talk about how to take the components I discussed in the previous posts in this series—HDD, SSD, and shelves and interconnects—and put them together into a storage subsystem that will meet your needs now and in the future.

 

FAS Storage Guidelines

With all the recent attention to flash storage, it would be easy to conclude that HDDs are going to be eclipsed in short order. There are certainly use cases for dedicated all flash arrays, which is why NetApp released the EF540 Flash Array last year, and why we’re busily working on FlashRay, our next generation, scale-out all-flash offering.

 

However, HDDs—especially in combination with flash—aren’t going away any time soon. The economics of HDD and HDD/flash solutions relative to all-flash still make them the sweet spot for a lot of storage workloads.

 

Here’s what NetApp suggests (a small decision sketch follows the list):

 

  • For applications that need the most consistent performance with the lowest latency, choose the EF540 Flash Array or a FAS system with SSDs.
  • For other high-end workloads, deploy FAS with performance disks in combination with one (or more) of our flash-based caching technologies: Flash Cache (storage controller level), Flash Pool (disk subsystem) or Flash Accel (server caching). NetApp put a lot of time and effort into providing options to let you put flash where you need it. (You can find more guidance on choosing among flash options in our recently released Flash Storage for Dummies guide, this Tech OnTap article, and this white paper.)

    The DS2246 disk shelf is the best choice for this type of deployment because it can accommodate performance HDDs, SSDs, or a combination (for Flash Pool deployments) and because it offers superior performance density.
  • For nearline or capacity-oriented workloads, deploy capacity disks in combination with flash. The DS4246 disk shelf is a good choice here because it supports both high-capacity HDDs and combinations of HDDs and SSDs.
  • For backup or archival workloads, deploy capacity disks. The DS4486 disk shelf delivers maximum capacity per rack unit.
  • For purely sequential workloads, capacity or performance disks deliver good performance at a lower price than SSDs.
  • If you’re not sure what the I/O characteristics of your workload are, or you need to support a variety of workloads that may include both transactional and sequential I/O patterns, HDD plus flash options are once again a good bet.
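 

To make these guidelines easier to apply, here is a small decision sketch that simply restates the suggestions above as Python. The workload labels are illustrative, and real sizing still calls for workload profiling and a NetApp expert or partner.

```python
# Sketch: restate this post's media guidelines as a lookup. Purely
# illustrative; real sizing needs workload profiling and a NetApp expert.

def suggest_storage(workload: str) -> str:
    suggestions = {
        "lowest-latency":  "EF540 Flash Array, or FAS with SSDs",
        "high-end":        "FAS with performance HDDs plus Flash Cache, "
                           "Flash Pool, or Flash Accel, on DS2246 shelves",
        "nearline":        "Capacity HDDs plus flash (Flash Pool) on DS4246 shelves",
        "backup-archive":  "Capacity HDDs on DS4486 shelves",
        "sequential":      "Capacity or performance HDDs; SSDs add little here",
        "unknown-mixed":   "HDD plus flash as a safe default",
    }
    return suggestions.get(workload, "Profile the workload first, then choose")

if __name__ == "__main__":
    for w in ("lowest-latency", "backup-archive", "unknown-mixed"):
        print(f"{w:15} -> {suggest_storage(w)}")
```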

 

disk-shelves-link.png

Best Practices

Here are a few best practices to keep in mind. 

 

  • Except for Flash Pool, don’t mix different media types in the same SAS stack.
  • A single FAS system can support multiple aggregates with different media types (performance HDD, high-capacity HDD, SSD) to address the needs of different workloads, or you can deploy clustered Data ONTAP and dedicate specific cluster nodes for specific media and workload types.
  • Use RAID-DP (with the possible exception of SSDs deployed for Flash Pool).
  • Make sure RAID scrubs are turned on for RAID groups containing HDDs (this is the default setting) to keep your drives healthy.
  • When deploying storage where length limitations are likely to be an issue, or when deploying a “stretch” MetroCluster, choose optical SAS for easier cabling. (Optical SAS will be available later this month.)
  • Follow the guidelines in the recently updated Storage Subsystem Resiliency Guide [9] to achieve maximum resiliency.

 

Wrap Up

 

A little up front planning—and consideration of the guidelines I’ve outlined here and in previous posts—will go a long way in helping you choose storage that is best suited for your particular workloads and needs. As you’ve no doubt noticed, there are a lot of options to consider: HDD versus SSD, SSD deployed as cache or persistent storage, different capacity points, and so on. Of course, NetApp experts as well as our worldwide network of partners are available to assist you in your decision-making.

 

You may have also noticed that I haven’t said anything about security up to this point. In the final blog post, I’ll wrap up this series with a discussion of NetApp Storage Encryption.

Every Monday we bring you top stories featuring NetApp that you may have missed from the previous week. Let us know what interests you by commenting below.


Cloud_News_1_HiRes.jpg

Evolving Legal Ecosystem: NetApp - InsideCounsel.com

NetApp's legal department shares its solutions to big problems, including how they’re out-innovating the competition with the Legal Ecosystem.


 

NetApp Global Alliances Head on the Value of Software Partnerships #VMworld - SiliconANGLE

NetApp's Maria Olson joins SiliconANGLE's the Cube to discuss the value of software partnerships at NetApp's MVP Event during VMworld.

 

Avnet’s Brian Mitchell Talks FlexPod: The Future of Converged Infrastructure - SiliconANGLE

Brian Mitchell of Avnet also stopped by theCube during the NetApp MVP Event to discuss his company’s role in the storage vendor’s ecosystem.

Part 3 of a 5-part series on choosing media for NetApp FAS storage


By Tushar Routh, Sr. Manager, Storage Products, NetApp

 

In my previous posts in this series I looked at HDD and SSD technology and discussed where they are today and where they’re going. This time I want to focus on the underlying infrastructure: shelves and interconnects. Admittedly, these may not seem like the most exciting components, but they play a critical role in delivering availability, in how you deploy storage, and in how your storage is serviced – all primary considerations when planning your storage subsystem.

 

Disk Shelves

Gears_1_HiRes.jpeg

NetApp currently offers a line of four shelves for FAS systems. All of these shelves can be serviced from the front, regardless of location in the rack, and all are designed to be highly reliable with no single points of failure. On all shelves, shelf firmware upgrades are non-disruptive, and alternate control path (ACP) provides out-of-band management.

 

All our shelves are SAS based:

 

DS2246. The DS2246 is our performance- and power-optimized shelf that packs 24 drives into only 2U of rack space using small form factor (SFF) drives. Compared to the DS4243 disk shelf, the DS2246 doubles the storage density, increases performance density (IOPS per rack unit) by 60%, and reduces power consumption by 30% to 50%.

 

DS4246. The DS4246 provides a balance between performance and capacity. It is 4U high and supports 6Gb/sec SAS connections. It can be configured with either 24 large form factor (LFF) high-capacity disk drives or a combination of SSDs and high-capacity disk drives to support Flash Pool configurations.

 

DS4243. The NetApp DS4243 is 4U high and supports up to 24 hard disk drives (high capacity or high performance) with 3Gb/sec SAS connection.

 

DS4486. The capacity-optimized DS4486 holds 48 high-capacity disk drives. This disk shelf uses a tandem disk carrier to enclose twice as many LFF disk drives in 4U of rack space, and the tandem carrier cuts installation time in half. Because of the weight, the drives for all dense shelves ship separately; a rack of these shelves can still be supported by a raised floor in a traditional data center.
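 

To pull the shelf specifications above together, the short sketch below computes drives per rack unit from the figures quoted in this post; this is where the "doubles the storage density" comparison between the DS2246 and the DS4243 comes from.

```python
# Sketch: drives per rack unit for the shelves described above, using only
# the drive counts and rack heights quoted in this post.
shelves = {
    "DS2246": {"drives": 24, "rack_units": 2},  # SFF, performance/power optimized
    "DS4246": {"drives": 24, "rack_units": 4},  # LFF high-capacity, or HDD+SSD Flash Pool mixes
    "DS4243": {"drives": 24, "rack_units": 4},  # high-capacity or high-performance HDDs, 3Gb SAS
    "DS4486": {"drives": 48, "rack_units": 4},  # high-capacity drives in tandem carriers
}

for name, spec in shelves.items():
    density = spec["drives"] / spec["rack_units"]
    print(f"{name}: {spec['drives']} drives in {spec['rack_units']}U "
          f"-> {density:.0f} drives per rack unit")
```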

 

netapp-disk-shelves.png

 

Interconnects

disk-shelves-link.png

NetApp believes that SAS offers significant benefits for the foreseeable future. Today’s 6Gb/sec SAS connections deliver more than adequate bandwidth, and the SAS-3 standard offering 12Gb/sec is coming.

 

The only limitation of SAS is the relatively short run length of the standard copper cables. NetApp is developing a family of optical SAS products to address this problem – the first in the storage industry to do so. Optical cabling will address two main issues:

 

  • Cabling Limitations. In a crowded data center it can be tough to stay within the 20m SAS cable limit. With much longer cable runs, optical SAS fixes this problem; you can add new disk shelves wherever you have rack space available.
  • Stretch MetroCluster. A stretch MetroCluster lets you mirror data between data centers in separate buildings on the same campus. With optical SAS, you can span distances of up to 500m without using a Fibre Channel-to-SAS bridge, making optical SAS cheaper and more convenient.

 

Shelves and Interconnects: Wrap Up

When it comes to deploying storage, shelves and interconnects are the building blocks. Choosing wisely will allow you to deploy a variety of media options while keeping your storage footprint and energy bill low and availability and flexibility high.

 

The DS4243 shelf will be phased out over time, and we may consider a non-redundant shelf design for archive disks, but otherwise NetApp is pretty happy with its lineup of disk shelves. On the interconnect side, we’ll be continuing our commitment to SAS and expanding optical SAS offerings.

In the next post, I’ll look at a few guidelines for storage deployment.



By Vaughn Stewart, Director of Technical Marketing, NetApp

 

2701.jpg

I am often lucky enough to be invited to speak at industry conferences and events, participating in and occasionally leading panel discussions on the storage-related issues that matter most to the tech community. I consider it an honor, and a wonderful opportunity, to participate in these types of substantive conversations. I am, personally and professionally, passionate about the topic of storage. I am always impressed with the breadth of knowledge and depth of commitment we devote as an industry to the relentless improvement of the solutions we offer.

 

While of course there are differences of opinion about the best way to address specific or intensifying problems—often shaped by belief in a particular company’s products or platforms—I am often struck by how aligned we are as a community when we identify both the overarching challenges our customers face and the most effective solutions we can offer to mitigate them.

 

What comes through, loud and clear, time and again, is that the role of storage is increasing dramatically, and that its place in the support and delivery of business services is becoming more apparent to larger numbers of administrators and architects. When we pull back to look at the current demands of the market, a beautiful simplicity emerges: cloud administrators want more control, greater agility, and better performance from their storage architectures.

 

At NetApp, we consider it our mission, our responsibility and simply good business to address these needs holistically. Which, realistically, is the only way to provide effective solutions to real-world problems. Focusing on one aspect to the exclusion of the others may drive an interesting hypothetical conversation, but it is not going to drive significant improvements or business benefits for customers. In some ways, it may even exacerbate problems or weaknesses in a storage infrastructure.

 

The platform NetApp offers to meet these three primary, critical needs is clustered Data ONTAP.

 

For control, clustered Data ONTAP allows the storage admin to provision a virtualized storage controller that we call a storage virtual machine. This technology separates the delivery of storage services from the underlying hardware, allowing us to deliver protocol, performance, capacity, and data protection services backed by rich role-based access control and a granular quality-of-service mechanism.

 

From an agility standpoint, through our open and programmable APIs, both infrastructure management teams and application owners can consume and manage storage resources within their native applications, including Oracle, SAP, Microsoft, Citrix, VMware, Cisco, Symantec, OpenStack and more. For emerging or homegrown apps, developers can write data management directly to our open, programmable API framework.

 

At the end of the day, we all know that performance is a cornerstone of any storage array. Flash technology has significantly advanced the scale of performance for all storage platforms. In fact, our first generation of flash enablement in Data ONTAP, Flash Cache, delivered over 1.5 million IOPS back in 2011. Since then, NetApp has advanced the flash portfolio to include Flash Pool with clustered Data ONTAP, and has extended flash out to hosts through Flash Accel and our partner flash portfolio. All of these uses of flash enhance the scalability of clustered Data ONTAP.

 

We are all abundantly aware of the fact that we live and work in a hyper-fast world that is only getting faster. Storage teams are going to be challenged with scaling the volumes of data and the number of services they have to offer. That is why clustered Data ONTAP provides an enterprise scale-out model that is storage efficient, and provides the control mechanisms that allow application and infrastructure teams to move with the speed and agility they need to meet increasing demands and intensifying business realities.

 

While disagreements among vendors about how to satisfy the needs of storage and application teams may garner inordinate attention, we can all take pride in the fact that there is a significantly greater amount of agreement and alignment within our industry.

 

At NetApp, we passionately believe in the solutions we offer. We are always excited and genuinely happy to engage and participate in heated, even contentious debate—so long as it is grounded in facts, proven by benchmarks and focused on real solutions. Anything less merely serves—perhaps by design—to confuse customers. More than ever before, though, the storage challenges customers face are serious. As leaders and members of the storage community, as well as trusted strategic partners to our customers, we owe it to them to ensure our conversations are as well.

Part 2 of a 5-part series on choosing media for NetApp FAS storage


By Tushar Routh, Sr. Manager, Storage Products, NetApp

 

ssd-drive.png

In the previous post in this series, I looked at some of the available HDD options. This time I want to focus on what’s happening with solid-state drives (SSDs) and how you can deploy them either for persistent storage or as part of a Flash Pool. Future posts will dig into deployment guidelines and take a look at NetApp Storage Encryption for the security conscious.

 

SSDs Continue to Evolve

Capacities of SSDs have been growing fast in recent years. While this growth will continue, the flash memory used in SSDs is running up against the same lithography limits as other types of semiconductors. The path to increased capacity for flash has been to shrink the feature size on each chip to deliver more capacity per chip. Current NAND flash devices are using a 2Xnm-class process (20-29nm feature size) and rapidly moving to a 1Xnm process (10-19nm feature size). At the same time, Enterprise SSDs have transitioned from single-level cell (SLC) flash components to multi-level cell (MLC) devices.

 

Using SSDs for Persistent Storage and Flash Pools

As with HDDs, NetApp is introducing new SSDs on an aggressive schedule. We currently offer 200GB and 800GB capacities and will be adding more options in coming months to provide a broader range of choices for greater storage optimization.

 

Today, you can use SSDs either as persistent storage—like any other type of drive—or as part of a Flash Pool that combines HDDs with SSDs to accelerate random reads and writes. Here are some things to keep in mind for each type of deployment (a capacity sketch follows the list):

 

  • For persistent storage use RAID-DP. When creating SSD aggregates for persistent storage, the best practice is to use RAID-DP for SSD RAID groups.
  • For Flash Pools use RAID 4.  As of Data ONTAP 8.2 you can mix RAID types in a Flash Pool. This allows you to use RAID-DP for the HDDs within a Flash Pool while using RAID 4 for the SSDs, reducing the cost for a given amount of usable flash.
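 

As a rough illustration of why RAID 4 lowers the cost per usable gigabyte of flash in a Flash Pool: RAID-DP reserves two parity drives per RAID group, while RAID 4 reserves one. The sketch below just does that arithmetic for a hypothetical group of SSDs; it ignores spares, right-sizing, and WAFL overhead, so it is not a sizing calculator.

```python
# Sketch: usable capacity of an SSD RAID group under RAID-DP (two parity
# drives) versus RAID 4 (one parity drive). Drive count and size are
# hypothetical; spares, right-sizing, and WAFL overhead are ignored.

def usable_gb(ssd_count: int, ssd_gb: int, parity_drives: int) -> int:
    return (ssd_count - parity_drives) * ssd_gb

ssd_count, ssd_gb = 6, 200   # e.g. six 200GB SSDs in one Flash Pool RAID group

raid_dp = usable_gb(ssd_count, ssd_gb, parity_drives=2)
raid_4  = usable_gb(ssd_count, ssd_gb, parity_drives=1)

print(f"RAID-DP: {raid_dp}GB usable from {ssd_count} x {ssd_gb}GB SSDs")
print(f"RAID 4 : {raid_4}GB usable from the same drives")
print(f"RAID 4 frees up {raid_4 - raid_dp}GB more usable flash per group")
```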

 

netapp-shelf-media-options.png

NetApp likes to say that Flash Pool combines the capacity of HDD with the performance of flash, but it’s really more than that. SSDs provide the most benefit for random, transactional workloads. HDDs are actually quite good at sequential workloads, especially on a cost basis. Combining the two types of media in a single aggregate lets you benefit from SSDs for their transactional performance and HDDs for sequential performance without having to know the exact I/O behavior of every workload when you are architecting storage.

 

To learn more about Flash Pool and all of NetApp’s flash options including the EF540 flash array, check out the recently released book, Flash for Dummies (PDF).

 

SSDs: Wrap Up

While there’s no question that SSDs are changing the game when it comes to storage subsystem design, flash memory has some limitations. Flash cells can only endure a limited number of write cycles before they wear out. Newer technologies such as phase-change memory and resistive RAM (RRAM) are being discussed as ways to overcome these limitations, but it’s too early to tell which, if any, of these will emerge as a clear leader. In any case, it’s clear that solid-state storage in some form will continue to have an expanding role in the overall storage market, especially as the economics improve.

 

The next post will provide a few guidelines for deploying an effective storage subsystem including shelf and interconnect details.
