
SANbytes: Where SAN and NAS are ONTAP

19 Posts authored by: maryling

Guest post by John Rozwadowski, Business Development Executive, Brocade Communications

 

I’m a car guy.  Yeah, I dream about screaming down the road with a Lamborghini – the V12 turning heads as I stare back with my mirrored sunglasses!  Alas, reality always gets in the way.  Lamborghinis are too expensive and totally impractical, not to mention the fact that my thinning hair (or growing bald spot – whichever terminology you want to use) means I’d just get a sunburn on my head. What I do want to discuss, however, is minivans – and how cool a minivan can really be.

 

Wow, that feels like a cold shower, huh?? 

 

Minivans are arguably the most versatile vehicle on the road today. They don’t have the ‘Italian sports car look’ or the ‘sophisticated executive’ look, but they are significantly more practical and efficient than any other car on the market.  They have tons of space and seating, get very good gas mileage, and are easy to get in and out of.  On top of that, the newer minivans have some killer features: blind spot information systems, multi-angle rearview cameras, a built-in cooled “storage” box, heated/cooled seats, and ridiculous stereo and DVD systems with wireless headphones (so you don’t have to listen to Barney singing for the umpteenth time…).

 

All told, the minivan is not only an effective and efficient car – it also has lots of interesting features that can almost make it - dare I say it - cool.

 

In my opinion, the market perception of Fibre Channel is a lot like the market perception of minivans. On the surface, it seems like a lot of people want to replace Fibre Channel with some other “newer” or “cooler” technology.  And yet, when we take a little time to discuss their goals and requirements, we often find that they are describing the attributes Fibre Channel has delivered for years.

 

Brocade recently released our seventh generation of Fibre Channel ASICs, and like the newest minivans, it’s not only practical and efficient but also loaded with cool features that architects and network admins find extremely useful. Our latest products have not only doubled the speed (16Gbps) but have also been optimized for the high-density, bursty I/O environments being driven by virtualization and cloud trends.  We’ve also added cool new features, like virtualization optimization and automatic diagnostics, which allow for greater visibility into cable or optics problems before they become network problems.  And our partners and customers continue to tell us that their Fibre Channel gear is extremely reliable (with some reporting six 9s of reliability) and continues to be easy to use.

 

If Fibre Channel is a minivan, then it’s a “sleeper,” one with a supercharged V12 under the hood, heated seats, and lots of other cool features.  Brocade continues to innovate and develop Fibre Channel because, like the minivan, it’s the ‘go-to’ product for our most important and mission-critical items.  For network administrators, it’s their data.  For us moms and dads buying minivans, it’s our kids.

 

And with Fibre Channel, you get that reliability and those cool features all in one.  It’s like having a minivan that drives like a Lamborghini – without the need to put sunblock on your head…

By Bill Henderson, Principal Solution Architect, QLogic

 

In many of his previous SANbytes blog posts, Jason Blosil has provided key findings and industry trends which have clearly demonstrated that a rising tide has lifted FCoE (Fibre Channel over Ethernet) as an emerging new standard. Invited to be a guest blogger for SANbytes, I am happy to concur with Jason’s findings! FCoE is now a leading technology in the data center trend of consolidating the network. FCoE provides a direct mapping of Fibre Channel onto Ethernet and enables Fibre Channel traffic to be natively transported over Ethernet networks.

 

When deciding on implementing an FCoE solution, you have two methods of CPU processing available to you: a software initiator or a hardware offload engine.  The real question you have to ask yourself when evaluating each of these options is, “Which device do I want to do the work?” Or more specifically, where do you want the processing of the data to occur - on the server’s CPU or on the hardware offload engine residing in the FCoE converged network adapter (CNA)? Software initiators allow you to leverage the benefits of FCoE SANs using cost-effective 10GbE NICs, with the protocol processing consuming the server’s CPU cycles, while the hardware offload engines on FCoE CNAs handle that processing themselves, freeing up the server’s cycles.  In this discussion, I will focus on considerations for hardware offload technology and why it might be your new best friend if you have data center virtualization objectives and are considering future upgrade paths.

 

What’s virtualization got to do, got to do with it?

You might ask why I’m bringing virtualization into this discussion. Virtualization has tremendous value and its benefits are unquestionable – increased efficiency, data center consolidation, and simplification - just to name a few.  With virtualization, though, one piece of hardware in particular has its work cut out for it: the physical server(s). Specifically, there are two important performance considerations when it comes to virtualization:

 

  1. Increased density of applications per physical server
  2. The addition of a virtualization layer

 

Both will require increased I/O performance from the physical servers and the CPU horsepower to efficiently scale VMs and provide the I/O bandwidth needed for enterprise applications. Add to that the additional burdens on the server, such as additional storage requests, and it can take a real toll.  Instead, why not offload these performance demands to a Converged Network Adapter with a full hardware offload engine for the FCoE, iSCSI, and TCP/IP network stacks? To put this into perspective, consider how a video card is used in a personal computer for gaming: it offloads the video processing from the CPU. Gamers (I know some of you out there must be fellow World of Warcraft or Call of Duty fans!) are well aware of the advantages of offloading the video processing and using graphics adapters to conserve all available CPU resources for the video game itself. Advantages of the video card include vibrant high-definition images and, most importantly, the CPU headroom to support the most performance-hungry gaming applications (and I am not just talking about Tetris or Ms. Pacman here). A hardware offload engine applies the same approach to storage and network protocol processing.
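
To put rough numbers on the CPU question, here is a small back-of-the-envelope sketch in Python. It leans on the old networking rule of thumb that software protocol processing consumes very roughly 1 GHz of CPU per 1 Gbps of sustained throughput; the exact figures vary widely by workload, driver, and offload implementation, so treat every number below as an illustrative assumption rather than a benchmark.

```python
# Rough illustration of why hardware offload preserves server CPU headroom.
# Assumptions (illustrative only): software protocol processing costs roughly
# 1 GHz of CPU per 1 Gbps of sustained throughput; a full offload CNA leaves
# only a small residual host cost.

CPU_GHZ_PER_GBPS_SOFTWARE = 1.0   # assumed cost of a software initiator
CPU_GHZ_PER_GBPS_OFFLOAD = 0.05   # assumed residual host cost with a full offload CNA

def host_cpu_cost_ghz(throughput_gbps: float, ghz_per_gbps: float) -> float:
    """Estimate the host CPU (in GHz) consumed to drive a given storage throughput."""
    return throughput_gbps * ghz_per_gbps

server_total_ghz = 2 * 8 * 2.6            # e.g., two 8-core 2.6 GHz sockets
for throughput in (2, 5, 10):             # sustained storage throughput in Gbps
    sw = host_cpu_cost_ghz(throughput, CPU_GHZ_PER_GBPS_SOFTWARE)
    hw = host_cpu_cost_ghz(throughput, CPU_GHZ_PER_GBPS_OFFLOAD)
    print(f"{throughput:>2} Gbps: software initiator ~{sw:.1f} GHz "
          f"({100 * sw / server_total_ghz:.0f}% of host), "
          f"offload CNA ~{hw:.2f} GHz ({100 * hw / server_total_ghz:.1f}% of host)")
```

The point of the exercise is the shape of the result, not the exact percentages: every cycle the CNA absorbs is a cycle the hypervisor can hand to another VM.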

 

Virtualized environments also often need to share a converged network among multiple types of workloads, all with different business-related priorities. Support for industry standards such as N_Port ID Virtualization (NPIV) allows multiple Fibre Channel initiators to occupy a single physical port, which eases hardware requirements in Storage Area Network design, simplifies management, and delivers overall cost savings.

 

Investing in your todays and tomorrows.

One other topic to consider when researching FCoE implementation options is a subject we are all aware of: investment protection. How can investment protection be achieved with FCoE CNA technology?

 

As mentioned in previous SANBytes blog posts, FCoE, whether with the software initiator or the FCoE CNA, brings the benefits of I/O consolidation and efficiencies while preserving current IT investments by working seamlessly with existing Fibre Channel and iSCSI knowledge and management tools which can be applied directly to FCoE.  FC concepts such as WWNs, FC-IDs, LUN masking, and zoning still apply.  This technology is also compatible with existing FC and iSCSI drivers and managed applications that are currently deployed in millions of systems.  Therefore, a full forklift upgrade is not necessary.
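
For readers newer to those FC concepts, the toy Python sketch below models zoning and LUN masking as simple set lookups. The WWPNs and LUN numbers are invented, and the code is only a conceptual illustration of how the two checks combine; it is not any switch or array CLI.

```python
# Conceptual model of FC/FCoE zoning plus LUN masking (WWPNs are made up).

zones = {
    "zone_esx01_fas01": {"10:00:00:00:c9:aa:bb:01",   # hypothetical host initiator WWPN
                         "50:0a:09:81:00:00:00:01"},  # hypothetical storage target WWPN
}

# LUN masking: which initiators may see which LUNs behind a given target port.
lun_masks = {
    "50:0a:09:81:00:00:00:01": {"10:00:00:00:c9:aa:bb:01": {0, 1, 2}},
}

def can_access(initiator: str, target: str, lun: int) -> bool:
    """An initiator reaches a LUN only if zoning AND masking both allow it."""
    zoned = any({initiator, target} <= members for members in zones.values())
    masked_in = lun in lun_masks.get(target, {}).get(initiator, set())
    return zoned and masked_in

print(can_access("10:00:00:00:c9:aa:bb:01", "50:0a:09:81:00:00:00:01", 1))  # True
print(can_access("10:00:00:00:c9:aa:bb:01", "50:0a:09:81:00:00:00:01", 7))  # False, LUN not masked in
```

Whether the transport underneath is native FC or FCoE, these are the same objects and the same checks, which is why existing SAN runbooks carry over largely unchanged.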

 

An additional benefit of FCoE CNA technology is that you may decide not to use FCoE immediately and instead run 10GbE networking in your data center. FCoE CNAs support this, operating in an iSCSI-only network configuration.  When the time comes to move up to hardware-offload FCoE, you simply reconfigure the same CNA for FCoE networking.  With FCoE CNAs you can start with iSCSI technology, confident in the knowledge that you can transition seamlessly to FCoE when the time is right, without ripping out and replacing your network adapter investment.

 

Hardware offload FCoE CNAs - Giving your servers and pocketbook a much needed break. 

Virtualization can easily overextend your servers’ CPU cycles, pushing them beyond the processing they can comfortably handle and potentially burning them out long before their expected life cycle. And what about your hardware investment? Will it work with your current hardware while having the capability to support your future requirements?  If you are considering FCoE CNAs with a hardware offload engine, you are headed in the right direction.  The benefits of converged networking are not just a promise to be fulfilled in the far-off future - they are available and realizable today.

==================================================================================================

A great resource: QLogic's NetApp partner page describes more fully how our two companies work together.

 


By Lawrence Bunka, Sr. Product Marketing Manager, Business Continuity

 

For data availability, as for golf, the lower the score the better.  In the case of business continuity, the ultimate number is zero, as in 0 RPO (Recovery Point Objective) and 0 RTO (Recovery Time Objective).  Getting to these numbers isn't easy.  It requires an investment in people, processes and technology.

 

Well, here's a new number to consider: 143.

 

More specifically, 143%. As in Return on Investment (or “ROI”).  That's the three-year ROI that Forrester Consulting recently found for a composite organization in a commissioned Total Economic Impact™ study based on interviews with seven customers using NetApp MetroCluster.  This was backed up by another number: 11. More specifically, 11 months. As in, the payback period for the same composite organization.[1]

 

Forrester’s methodology examined four areas:

 

  • Cost
  • Benefits to the organization
  • Flexibility
  • Risk

 

This approach is the foundation of Forrester’s Total Economic Impact (TEI) methodology, which looks beyond a cost-only focus to consider the enabling value of technology in its capacity to positively affect a business.
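
As a concrete illustration of how an ROI figure and a payback period fall out of costs and benefits, here is a small Python sketch. The cash flows are hypothetical numbers chosen only to show the arithmetic; they are not the figures from the Forrester study, which also applies risk adjustment and discounting.

```python
# Illustrative ROI and payback arithmetic (hypothetical cash flows, not Forrester's data).

initial_cost = 400_000      # up-front hardware, software, and deployment
monthly_cost = 5_000        # ongoing administration and support
monthly_benefit = 45_000    # benefit per month (downtime avoided, admin time saved)
months = 36                 # three-year analysis period

total_cost = initial_cost + monthly_cost * months
total_benefit = monthly_benefit * months

roi = (total_benefit - total_cost) / total_cost                    # return relative to total cost
payback_months = initial_cost / (monthly_benefit - monthly_cost)   # months to recoup the up-front spend

print(f"Three-year ROI: {roi:.0%}")
print(f"Payback period: {payback_months:.1f} months")
```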

 

Why does this matter?  IT investments need to justify themselves in two ways. First, they need to provide the features and performance expected of them; the best solutions address more than IT problems and also provide value to the business.  Second, they need to deliver tangible returns in the form of revenue, cost savings, or both.

 

The Forrester study provides one perspective, but we know there are more.  What has your experience been with making the case for business continuity? Are there benefits beyond those that Forrester examined? Do some business continuity solutions deliver more “bang for the buck” than others? Let us know your thoughts in the comments section.
 


[1] The Total Economic Impact Of NetApp MetroCluster™, a commissioned study conducted by Forrester Consulting on behalf of NetApp, April 2012

By Karthik Ramamurthy, Sr. Product Manager, NetApp

 

With the release of NetApp’s Data ONTAP Clustering, customers now have the scale-out NAS solutions they’ve been looking for, and their excitement is palpable. To keep performance consistent while enabling scale-out, NetApp has added support for NFSv4.1/pNFS in Data ONTAP Clustering, which lets clients take advantage of that scale-out architecture directly.


So what exactly does pNFS do for customers? To better understand this, let me illustrate with an analogy (although it may date me): remember “back in the day” when you’d arrive at an airport gate only to find that your flight had been moved to another gate, and you’d have to race over to the new gate, only to potentially find that it had been changed again? To improve customer experience, airlines took advantage of cloud-capable smartphones and are now sending timely gate information directly to your cellphone, so you can go directly to the gate, instead of wasting valuable time hopping from gate to gate.


Clients in a scale-out NAS environment have a similar problem to those of us old enough to have flown in those days: as you scale your storage environment, many challenges can be introduced, such as management, performance, data mobility, and data access path inefficiencies. In Data ONTAP Clustering, depending on the location of the volume in the cluster namespace, I/O requests are served locally or remotely, using a cluster hop (the equivalent of having to go to a gate only to find out that you’re supposed to be at a different gate).  pNFS effectively solves this problem by moving the knowledge of where the data is located from the storage system to the client, which improves I/O response even while data is moving from one system to another (no more cluster hop!). Let’s dig a little deeper by looking at a pNFS block diagram:

 

[Image: pNFS block diagram]

 

pNFS separates the metadata from the data associated with any I/O request. In addition to this separation, it provides a direct path to the data for any client. pNFS leverages Data ONTAP Clustering technology and provides an optimized path to the storage that essentially eliminates the cluster hop to serve the data – it is this feature that delivers scalability without diminishing performance. Clients supporting pNFS always get their data served by the network addresses that are hosted on the same physical controller as the storage.
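
To make the metadata/data split more tangible, the toy Python sketch below models the pNFS flow at a very high level: the client asks the metadata server for a layout, then sends its read straight to the node that actually holds the data. Every class and method name here is invented for illustration; this is not a real NFS client API.

```python
# Toy model of the pNFS idea: the metadata server hands out a "layout" that tells
# the client which cluster node owns the file, so the read skips the cluster hop.
# All names are invented for illustration; this is not a real NFS client API.

class ClusterNode:
    def __init__(self, name):
        self.name = name
        self.files = {}                       # path -> file contents

    def read(self, path):
        print(f"  I/O served directly by {self.name}")
        return self.files[path]

class MetadataServer:
    """Knows which node in the cluster namespace owns each file (the 'layout')."""
    def __init__(self):
        self.layouts = {}                     # path -> owning ClusterNode

    def get_layout(self, path):
        return self.layouts[path]

class PNFSClient:
    def __init__(self, mds):
        self.mds = mds

    def read(self, path):
        node = self.mds.get_layout(path)      # metadata request: where does the data live?
        return node.read(path)                # data request: go straight to that node

# Wire up a two-node cluster and read a file with no intermediate hop.
node_a, node_b = ClusterNode("node_a"), ClusterNode("node_b")
node_b.files["/vol/home/report.txt"] = b"quarterly numbers"
mds = MetadataServer()
mds.layouts["/vol/home/report.txt"] = node_b

client = PNFSClient(mds)
print(client.read("/vol/home/report.txt"))
```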

 

So where does pNFS fit in for customers? Pretty much everywhere that customers want scalable NAS with direct paths for I/O. The “top-of-mind” use cases include home directories, scratch spaces, enterprise applications, and ITaaS environments. Bear in mind, however, that although pNFS is out of its infancy with Data ONTAP Clustering, it is still some way from being the answer for customers who simply want better performance than the traditional NFSv3 they are used to. Eventually, its higher performance and a rich partner support ecosystem will make pNFS an ideal candidate for most workloads, enterprise or otherwise.

 

While pNFS as a specification supports blocks, objects, and files, NetApp’s focus for pNFS is on files and on enabling customers to deploy their file services workloads on pNFS. Our partner Red Hat has been the first to market with a pNFS client, as part of RHEL 6.2, and a rich roadmap of client support from other vendors is on the way. Good things to come!

 

If you want to beef up your knowledge of pNFS, here are some other good resources available to you:


  1. A new technical report: Parallel Network File System Configuration and Best Practices for Data ONTAP 8.1 Clustering
  2. pNFS: A New Standard for Shared, High-Performance Parallel I/O white paper
  3. This excellent technical presentation by Red Hat

We came across this great guest blog post by McAfee on the Integrated Data Protection space, and thought we'd repost it here. Many people don't know that NetApp Data ONTAP has supported array-based antivirus (AV) scanning for several years. With ONTAP 8.1 clustering, we’ve kicked this up a notch by running the AV engine directly on the array rather than on a connected appliance.

 

To bring our customers up to speed on the advantages of running the AV engine in the array, we asked Jim Waggoner, CISSP, Director of Product Management at McAfee to write this guest blog.

=====================================================================================================================================================================================


By Jim Waggoner, Director of Product Management, McAfee - Core Anti-Malware Solutions

 

I was at a customer site yesterday where I was asked to offer personal insight into why they were getting repeating occurrences of the same infections on systems. Whenever I get this question, I have a standard set of questions that I ask in turn to cover the best practices that customers have adopted to help reduce outbreaks. The questions started like this.

 

Question 1: Do you have endpoint protection installed on every system?

Customer Response: Yes

 

Question 2: Are you scanning all files rather than a subset of files?

Customer Response: Yes

 

Question 3: Are you updating your antivirus signature files (DATs) on a daily basis?

Customer Response: Yes

 

Question 4: Are you running weekly scheduled scans on every system?

Customer Response: Yes

 

Question 5: Are you scanning your network file shares?

Customer Response: Uh…no.

 

That is where I stopped the line of questioning.

 

I did probe so that I could understand the reasons why they were not scanning network file shares, especially since one of the primary vectors of propagation in the enterprise is these central data repositories. For them it came down to concerns about performance and the impact that running network scans has on file copies and client-server operations.  At one point, they had a negative experience with a competitor’s antivirus product after enabling the policy option to scan all files copied to and from the network. The helpdesk was soon flooded with calls complaining about performance degradation when files were being copied to or from the file share; they remembered download times increasing from, say, 30 seconds to 5 minutes. Once they changed the policy back to disable network scanning, the calls stopped, so they stayed with the less secure policy after they migrated to our solution.

 

That was the point where I responded to them about still needing to scan the file shares to stop malware propagation, but not doing it from the endpoint.  Instead, install the security on the storage controller so that 1) all files copied to the file shares will be scanned and 2) you don’t have to worry as much about systems in your environment where endpoint protection is not or cannot be installed. There was a bit of resistance at first, but by the end of the meeting I had convinced them to try it. Once they do this, I know we will be able to put a stop to the outbreaks. And if we don’t, I just continue with the questions. The next question is, “Do you let your users have administrator access?”

I should be studying for Finals, but I’m not. I’m huddled around a table with seven others: two pimply-faced boys, an accountant-looking type guy with wire-rimmed glasses, a burly red-headed man that I’ve mentally nicknamed “Paul Bunyan,” a girl with a nose ring, a man who looks like a leprechaun, our sadistic Dungeon Master, and a girl with greasy black hair. Oh wait, that’s me. 


In some other plane built by our imaginations, we are being overrun by wyverns and worgs. The barbarian and the warrior, played by the two pimply-faced boys, are valiantly swinging away at the enemies while bleeding profusely from open wounds. Nose Ring’s thief is hiding behind some boulders, Leprechaun’s bard is singing a song meant to confuse the enemy, and Paul Bunyan’s cleric – the group’s sole healer - has got his hands full trying to keep people from dying. Meanwhile, the accountant’s mage is about to die, while my mage is already dead. She is face-planted into the dirt, waiting for a rez.

 

If only I had not broken my cardinal D&D rule: never go adventuring unless there are at least three clerics (or healers) in the party. We magic-users are the most fragile (as I explained in a previous post, we’re brainy, but not so brawny) so we need healers to keep us alive.

 

Three healers in the party would mean that they could all actively participate in healing and fighting. If one of them died, another healer could resurrect them. This way, it doesn’t fall to a single person to make sure hit points are replenished, especially if everyone is taking heavy damage. It is my application of an operations management concept, or the “pooling” method, to Dungeons & Dragons. Except I didn’t apply it on the miserable night described above.


[Image: Cleric of Sune miniature]

This is the character I SHOULD have been playing that night...my cleric (of course, I didn't have her back then). Can't believe I am carrying her around in my purse. It's a good thing I'm married, or else I'd probably never get a date.

 

 

In case you’re wondering how I’m going to relate this back to NetApp, I’ll get right to it: it turns out the pooling method applies in the data storage world too, as storage admins can create pools of storage to avoid creating stranded islands of data. NetApp made some recent enhancements to its E-Series platform that improve upon storage pools even further with the concept of “dynamic disk pools.” Essentially, dynamic disk pools turn the traditional method of storage pooling on its head: instead of “pool on RAID,” it’s “RAID on pool,” to borrow liberally from my colleagues, Rip Wilson and Mark Henderson, product marketing managers for the E-Series.


Pool on RAID: Traditional RAID groups get assigned to specific disks. In traditional storage pools, the pools are then layered on top of this RAID foundation as an abstraction layer to be divvied up or built up across multiple groups of RAID disks.  If there is a failure, however, all of the slow, non-scaling aspects of rebuilding to a single - and previously idle - spare can make the process of RAID rebuild utterly miserable. And if you’ve got a big data environment running high-performance applications, you can’t afford misery.


RAID on pool (Dynamic Disk Pools): All the drives in a pool participate (like all the healers in an adventuring party), and data stripes are randomized across some ten drives in the pool.  Other stripes use a different set of ten drives.  If there is a failure, all the other drives continue to participate to reconstruct any missing segment parts (just like if one healer dies, the other healers can fight on as well as heal the downed healer), which are rebalanced across the remaining drives.  This is faster because of the parallelism, or the fact that data is moving all over the pool, versus ganging up on just one poor spare drive. How much faster? Up to 8 times faster.
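
Here is a rough back-of-the-envelope illustration of why spreading the reconstruction across the pool is so much faster. The capacity and throughput figures are assumptions picked only to show the shape of the math; real rebuilds also involve reads, parity calculation, and competing foreground I/O.

```python
# Why "RAID on pool" rebuilds faster: many drives share the reconstruction work
# instead of funneling every rebuilt block onto one previously idle spare.
# Capacity and throughput figures are illustrative assumptions only.

drive_capacity_tb = 3.0          # data to reconstruct after a drive failure
per_drive_write_mbps = 100.0     # sustainable rebuild write rate per participating drive

def rebuild_hours(participating_drives: int) -> float:
    data_mb = drive_capacity_tb * 1_000_000
    aggregate_mbps = per_drive_write_mbps * participating_drives
    return data_mb / aggregate_mbps / 3600

print(f"Traditional rebuild to one spare:   {rebuild_hours(1):.1f} hours")
print(f"Pool-wide rebuild across 24 drives: {rebuild_hours(24):.1f} hours")
```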


Of course, in a single post I can’t do this new technology very much justice, so take a gander at the IDC report, “The IT Data Explosion is Game Changing for Storage Requirements,” for a more in-depth description of the benefits of NetApp’s dynamic disk pools and how they’re upping the game for traditional storage pools and our OEM partners.


As always, we would love to hear from you. Whether you’ve got a D&D horror story to tell, a picture of your miniature to post, or your thoughts on storage pools to share, we would welcome your comments. 

By Sharyathi Nagesh, Technical Marketing Engineer

 

With the compound annual growth rate of unstructured data projected at more than 60% over the next few years, network attached storage (NAS) solutions are trying to keep pace with that growth. Future NAS solutions should not only provide huge scale-out and performance, but also tools that allow storage administrators to manage and monitor these huge data sets.

 

This exponential growth of unstructured data poses technical challenges for storage administrators trying to meet business and legal requirements. To tackle these challenges, storage administrators need data governance tools. Data governance tools help them understand data risks, improve storage efficiency, manage effective rights, profile data set content, and ensure data compliance. These tools also help business owners understand the value of their data and build effective business cases.

 

Using the robust File Policy (FPolicy) solution built into NetApp Data ONTAP®, NetApp has partnered with Symantec to offer an industry-leading data governance solution called Data Insight (hereafter, “DI”). DI integrates with the NetApp storage system via the FPolicy-based file notification framework.

 

FPolicy provides:

  • Support for NFS and CIFS file access to volumes
  • Real-time, in-band notification
  • A reliable notification framework

 

The DI scale-out architecture supports both very simple and very large deployments, minimizing hardware requirements in either case.

 

Symantec Data Insight connects to NetApp storage arrays using standard Data ONTAP interfaces and:

  • Non-intrusively collects activity events using FPolicy
  • Collects permissions and metadata information using incremental scans

 


With this integration, DI can provide the following benefits:

 

Content profiling and effective rights management: DI uses proprietary pattern identification and heuristic algorithms to discover and manage the custodians of unstructured data, which improves data center efficiency, mitigates risk, and supports compliance.


Compliance and data loss prevention: DI integrates with the popular Symantec Data Loss Prevention suite to provide a single window for managing compliance with requirements like HIPAA and PCI (to protect “PII,” or Personally Identifiable Information) and for configuring effective data protection rules for your data set.


Data management: DI helps classify data according to multiple parameters, enabling effective data lifecycle management. The solution provides visualization of storage utilization and supports effective data center planning in areas such as expansion strategy, data tiering, and migration strategy.

 

Resources

For more information on the Data Insight solution, please refer to our joint solution brief or to the solution page on Symantec.

By Mathew Devanny, NetApp Professional Services Consultant

In the “debate” between GUI (graphical user interface) and CLI (command line interface), let us harken back to a scene from Star Wars Episode IV for instruction (yes, Star Wars has proven to be a ripe garden of IT lessons thus far, hasn’t it?). Obi Wan introduces Luke to “the Force” and talks about what life was like as a Jedi “before the dark times, before the Empire.” He tells Luke that he and Luke’s father were close friends, and then gives Luke his father’s lightsaber. (Alas, it was at that moment that my eternal fascination with lightsabers was born – it’s too bad swords aren’t made of light after all.)  As Obi Wan reminisces, he goes on to say that a lightsaber is “not as clumsy or random as a blaster,” and that it was “an elegant weapon for a more civilized age.” Of course, if you’re a Jedi, you can use the lightsaber with great skill, precision, and accuracy. If you’re not a Jedi, then you might prefer the blaster, with which you can just point and shoot and hopefully carve out a wide swath of damage.

So both have their uses, and both depend upon the skills of their users.

We could make the same argument for the GUI and the CLI. Most SAN administrators prefer to use GUI-based SAN management tools over the CLI because – and let’s face it – the CLI is old (much like Obi Wan is old), and it’s hard to remember all those commands.  But there are some reasons why the lightsaber – I mean, the CLI – may still be better for your organization.

For example, if you’re working in an environment where the people planning SAN changes must be different than the people executing SAN changes, a CLI-based change plan is the only practical way to communicate the plan. Executing a CLI-based change plan is as simple as selecting one command at a time from the plan and pasting it into the window containing the SSH session to your switch. A GUI-based change plan still requires the operator to interpret the plan for himself/herself before executing. 
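
If you eventually want to take even the cut-and-paste step out of that workflow, a minimal sketch along the lines below could walk through a CLI change plan one command at a time over SSH. It uses the paramiko library; the switch address, credentials, and plan file name are hypothetical, and some switch CLIs require an interactive shell (paramiko's invoke_shell) rather than one exec_command per line, so treat this as a starting point rather than a finished tool.

```python
# Minimal sketch: execute a CLI change plan one command at a time over SSH.
# Host, credentials, and plan file are hypothetical; review each result before
# continuing, exactly as you would when pasting commands by hand.
import paramiko

SWITCH = "san-switch-01.example.com"      # hypothetical switch address
PLAN_FILE = "change_plan.txt"             # one CLI command per line; '#' lines are comments

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(SWITCH, username="admin", password="secret")   # prefer SSH keys in practice

with open(PLAN_FILE) as plan:
    for command in (line.strip() for line in plan):
        if not command or command.startswith("#"):
            continue                                           # skip blanks and comments
        stdin, stdout, stderr = client.exec_command(command)
        print(f"> {command}")
        print(stdout.read().decode(), stderr.read().decode())
        input("Press Enter to run the next command (Ctrl+C to abort)...")

client.close()
```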

 

That’s not to say that GUI-based management tools are never useful. The GUI-based management tools provide great dashboard reports, are much better at representing diagnostic information, and can manipulate many switches from one central portal. Common tasks like zoning can also be much more intuitive when performed via the GUI and may be preferred by less experienced SAN administrators.

 

Some vendors’ GUI-based management tools permit you to send CLI-based commands from the GUI-based application. This may be ideal if you have a third-party operations team executing SAN changes in your environment, as it provides a central location from which operators can make changes, but also removes the variables from your change procedure by reducing the operator’s task to a simple cut-and-paste of CLI commands into a window.

 

For other fabulously insightful pointers like this one, check out the recent Fibre Channel SAN Best Practices Guide that I co-authored.


****************************************************************************************************************************************************************************************************

For Past Star Wars IT Lessons

  1. A Princess Can Love Two Rebels
  2. Build Two Death Stars
  3. Pick the “Where,” Not the “How.”
  4. Leave out the Midi-chlorians: Keep Things Simple.

By Ron LaPedis, Business Continuity, Security, HA Computing, and Storage guru

 

There are very few companies that exist to run compute farms. Organizations usually exist to provide a product or a service, and they use computers as tools to support their businesses. For example, Google uses their computers to serve advertising (the search engine thing is just a sideline). Banks and stock exchanges exist to move money around and use their compute farms to make it easier. Bob Cratchit never had it so good.

 

So normally your organization’s computers are humming away doing their work, and all is well with the world. But neither computers nor infrastructure last forever, and it is possible to suffer an outage of some kind. The outage can be from a software or hardware error, a backhoe cutting your transmission or power lines, a cyber-attack, or a regional event such as an earthquake, hurricane, or tsunami.  It can even be from something as simple as someone pushing the wrong button. And this, of course, is why your organization does business continuity planning. You do have a business continuity plan, right? If you don’t have a business continuity plan, you might want to get started with that. I’d suggest you stop reading this blog and go here – now.


For those of you whose organizations have already done business continuity planning, you should have IT recovery as a component. For those of you who think that IT recovery is the same as business continuity, you also need to stop reading this blog right now and get a better understanding of what business continuity is all about.


Okay, are you still with me? That means that you have a business continuity plan in place with IT recovery as a component of it. As part of the business impact analysis (BIA) portion of the business continuity planning process, you should have determined your RTO and RPO – recovery time objective and recovery point objective. The first number defines the amount of time that the business process can be unavailable, while the second defines the “freshness window,” or how much work in progress can be lost. From an IT point of view, the RPO determines how many in-flight or not-yet-backed-up transactions can be lost. Is it really that bad to lose a transaction when your computing infrastructure takes a hit? It depends.
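
As a quick worked illustration of what an RPO means in practice, consider a simple scheduled-replication model; the interval and timings below are made up for the example.

```python
# Worked illustration: with scheduled replication, worst-case data loss (the RPO
# you can actually deliver) approaches the replication interval. Timings are hypothetical.

replication_interval_min = 30     # copies ship to the recovery site every 30 minutes
last_successful_copy_min = 0      # minutes past the hour when the last copy completed
failure_time_min = 29             # disaster strikes just before the next copy

work_lost_min = failure_time_min - last_successful_copy_min
print(f"Work lost: up to {work_lost_min} minutes "
      f"(worst case approaches the {replication_interval_min}-minute interval)")
print("A 0 RPO requirement therefore implies synchronous mirroring, not scheduled copies.")
```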


If Google loses your transaction, you hit the search button again. If Amazon loses your transaction, you scratch your head wondering if you really placed the order that you thought you already placed and you enter it again. But if your brokerage, the New York Stock Exchange, or an international commercial bank loses a transaction, it could be much worse. Stock transactions build on each other. When you sell a stock, someone else is buying it, and if one transaction disappears it could be catastrophic to try to go back and figure out who owes whom for what. In the case of an international commercial bank, what happens if they receive a multi-billion Euro transfer only to lose it due to a power failure? Can they ask for a replay? Maybe, but even if they could, think of the interest they may lose while the lost transaction is being researched.


So even if you don’t need continuous availability of your data, you might not be able to survive the loss of a single transaction – and this is what NetApp MetroCluster – a feature of NetApp’s Data ONTAP operating system - is all about. MetroCluster takes a high-availability NetApp storage system and separates the two controllers and disk hardware up to 160km. But you don’t need to geographically disperse a MetroCluster. Both storage systems can be in different areas of the same building to protect your data from a computer room fire, or in different buildings on the same campus to protect you from a whole building outage. But the best thing about MetroCluster is that because it’s one storage array that has been split into two pieces, it costs a lot less than competing solutions that require you to buy two entire storage systems. In fact, MetroCluster costs about half as much and has half the complexity of competitive solutions. And when your business is at stake, simplicity is what it’s all about – as Jason Blosil showed us in his most recent post on IT Lessons from Star Wars. Find out more about what MetroCluster can do for your organization on the NetApp web site.

Guest Post by Betty Liang, Director Enterprise Product Marketing, SafeNet

 

Since partnering in 2010, NetApp and SafeNet have been working together to deliver leading storage security solutions to the market.  Last November, we announced KeySecure, the industry’s first high-assurance enterprise key management platform based on the OASIS Key Management Interoperability Protocol (KMIP 1.0). NetApp is reselling KeySecure as a replacement for the DataFort Lifetime Key Management appliance (LKM). Now you might be wondering, who is SafeNet, what’s a KeySecure, and why should you care?  Glad you asked. 



First, SafeNet, headquartered in Maryland, is the largest pure-play security company in the world. Companies like Citigroup, Bank of America, Gap, Netflix, Starbucks, Cisco, and Dell are a few of our 25,000 customers who rely on SafeNet to secure their most critical information assets. We provide encryption platforms for applications, databases, file and storage systems, and virtual machines. We also offer hardware security modules, authentication, and a range of other security solutions.

 

Second, SafeNet KeySecure is an enterprise key management appliance that enables security teams to centrally and uniformly manage the lifecycle of cryptographic keys for all their organization’s encryption platforms. Featuring support for the KMIP standard, KeySecure can manage keys for storage encryption platforms (NAS, SAN, self-encrypting drives, tape libraries), data encryption appliances, HSMs, and cloud encryption deployments. In addition, KeySecure offers the scalability to support millions of keys across data centers and cloud environments.

 

Third, why should you care? Fundamentally, key management represents a big challenge for customers today. After years of risk mitigation, compliance mandates, and evolving threats, organizations are suffering from encryption creep—disparate, isolated pods of encryption deployments scattered across workgroups, infrastructure elements, and locations.  The problem?  Each encryption pod has its own keys, its own policy enforcement, and its own mechanisms for managing keys across their lifecycle.

 

Given the vital nature of keys, this proliferation presents significant issues for enterprises today. If keys are compromised, so is sensitive data. If keys are lost or inadvertently deleted, so is data. Further, the isolated nature of key management breeds inefficiency that costs companies dearly—whether in terms of capital costs, staffing costs, failed audits, or errors and oversights.

 

KeySecure represents a solution. With KeySecure, organizations can centrally, efficiently, and securely manage cryptographic keys and policies—across the key management lifecycle and throughout the enterprise. With KeySecure, security teams can uniformly view, control, and administer cryptographic policies and keys for all their sensitive data—whether it resides in the cloud, in storage, in databases, or virtually anywhere else. And I hope this blog answers some of your questions about SafeNet and the KeySecure appliance. Please join the conversation on the NetApp community, or visit the SafeNet blog site.

By Ray Mar, Product Marketing Manager

 

In my last post on SnapProtect, I compared backup to plumbing, and talked about how NetApp is helping to improve your plumbing – I mean, your backup – with a new release of SnapProtect. Today I want to examine a few of these new features and show you what their benefits are.

 

Single File SnapRestore (SFSR) is a NetApp-unique feature that allows a portion of a NetApp Snapshot copy to be quickly reverted into the production volume without the need to copy data from the source Snapshot and write it to the target volume.  As of this latest release (SP5), the SnapProtect solution can leverage this NetApp technology to restore individual VMs through the Virtual Server Agent (VSA), as well as CIFS and NFS volumes through the NAS iDA. By using SFSR to facilitate restores of virtual machines and NAS files, restore operations that might involve tens or hundreds of GBs can complete in seconds rather than hours, without affecting other data on the volume.  Pretty cool stuff if you’re looking to reduce restore and recovery windows. (And who isn’t?)


Job Based Retention is also a new feature in SP5 for SnapProtect that decreases backup windows even further while maintaining the index and restore capabilities SnapProtect offers.  With traditional retention, Snapshots are grouped into Cycles (a full catalog and the following incremental updates until the next full catalog is initiated) and Days (the minimum time to keep the Snapshot before it is aged off).  While this is very powerful for maintaining a backup architecture, some customers prefer to simply keep a certain number of Snapshot copies instead of managing backups using the Days/Cycles method.  With Job Based Retention, a copy within the storage policy can be defined to keep a set number of jobs (Snapshots) as its only retention requirement.  Along with job-based retention, a new way of managing the catalog created during backups allows a backup cycle to age off the full base index while keeping the resulting incremental backups. All restore capability is still supported for the data catalogued by the full, but the Snapshot associated with the full can be aged off. This can greatly reduce the number of full indices created for environments with millions of files in a single volume.
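
As a toy illustration of the difference, the Python sketch below prunes a list of snapshot backup jobs two ways: by a simple keep-everything-newer-than-N-days rule and by a keep-the-last-N-jobs rule. The job dates and retention values are invented, and this only shows the two retention shapes, not SnapProtect’s actual policy engine.

```python
# Toy comparison of retention styles: keep by age versus keep the most recent N jobs.
# Job dates and retention values are invented for illustration.
from datetime import date, timedelta

jobs = [date(2012, 7, 1) + timedelta(days=3 * i) for i in range(10)]   # one job every 3 days

def retain_by_days(jobs, days, today):
    """Age-based retention: keep anything newer than the cutoff."""
    return [j for j in jobs if (today - j).days <= days]

def retain_by_job_count(jobs, count):
    """Job-based retention: keep the newest N jobs, however old they are."""
    return sorted(jobs)[-count:]

today = date(2012, 7, 30)
print("Days-based (keep 14 days):", retain_by_days(jobs, 14, today))
print("Job-based  (keep 4 jobs): ", retain_by_job_count(jobs, 4))
```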


VMware Wild Card Find is a feature that has been extended to include VMs spanning multiple datastores.  The Find function within SnapProtect gives you the ability to search the backup for a specific file or group of files.  With a wild card search, you can now find VMs that meet specific naming requirements and locate them quickly for restore.  This enhancement also decreases the time required to restore data in your greatest hour of need.

 

Finally, SnapProtect is offering a VMware Restore Plug-In for users to do self-service restores of individual files within a VM.  The feature offers VMware admins a way to access the backup data directly from vCenter. It also has a web console that users can log into and see only the VMs they are allowed to see within set security rules.  They can browse their backups and find specific files they want to restore without the need to contact the backup admin.  This is a big benefit for VM owners who may have lost a file or need to revert to a version before a VM change.  The file can be quickly copied to their desktop for placement wherever they need it.  And it makes it that much easier for an admin to quickly access data for restore operations.


All in all, there are some pretty cool things with SnapProtect that can immediately benefit your backup environment. Check out our latest technical whitepaper that discusses how you can optimize your storage and data management for virtual environments (be forewarned: you will have to enter some lead capture information to download the paper...but it's well worth it, I promise).

 

Now, if they could just make home plumbing as easy…

By Ray Mar, Product Marketing Manager

 

Have you heard the analogy that backup is like plumbing? When I thought about the analogy the other day, I immediately had a horrible flashback to some expensive plumbing I had to replace in my home because some dreaded roots found their way into the pipes. This created some “performance issues,” so to speak.  Of course, it works like a charm now…thousands of dollars later.

 

When you think about an IT shop that invests a great deal of money into its backup infrastructure, it also needs to make sure its “plumbing” isn’t creating issues. But time and time again, we hear of many customers facing the same challenges – “I can’t back up within my allotted window,” or “I’m unable to recover data when I need it.” Sound familiar? This can largely be attributed to traditional backup methods that can’t meet the needs of a modern infrastructure - particularly one that’s virtualized.

 

Enter NetApp Integrated Data Protection! If you know NetApp and its technologies, like Snapshot® copies, replication, and deduplication, to name a few, you already know the efficiencies these bring to your storage infrastructure. With its newest data protection management solution, NetApp SnapProtect®, NetApp is delivering simpler, more cost-effective, and faster backup and recovery for customers’ day-to-day backup problems by combining its modern approach with tape in a single end-to-end solution. It’s like what my contractor did: he integrated about 20 feet of new piping, made from a new material that won’t “wear out,” into my existing piping while keeping those tree roots out.

 

[Image: trees]

Yeah, trees are great on the surface. Until their roots get into your plumbing.

 

Why am I telling you this? Well, if you’re currently a NetApp shop or looking at NetApp as a provider for storage, you might also want to think about including SnapProtect in your infrastructure. What is exciting are the new capabilities NetApp released today with SnapProtect Service Pack 5 (SP5).  This release has many enhancements to the solution that will benefit customers looking for a data protection product integrated with their NetApp FAS storage systems.  The release follows CommVault’s Simpana R2 release made available earlier in the year.  [Note: NetApp SnapProtect is an OEM of a portion of CommVault Simpana software].  This release features some really nice enhancements to make SnapProtect a great solution, including enhanced VMware protection and accelerated backup and recovery capabilities. In my next post, we’ll examine a few of these features.

Guest Post By Maryling Yu, Product Marketing

 

In Dungeons and Dragons, my highest-level character was a Level 28 Mage. (Yes, I play a magic-user, and yes, it’s because in real life I am more brainy than brawny. And yes, I enjoy roaming through fantasy realms blasting anyone who looks at me funny with a Finger of Death spell.) When I heard the news that NetApp had taken the win – for the 4th time in 4 surveys – in SearchStorage’s Quality Awards for Enterprise Arrays, beating out an alphabet soup of stiff competition (EMC, IBM, HDS, and HP, among others), I started to think: does this make NetApp the equivalent of a Level 30 Mage in D&D? Yes, I think so.


Even my character – a blond wisp of an elven lass named “Selonae” – as powerful as she was, got “killed” off every once in a while. (Oh, and this is her, by the way):


[Image: Selonae]

Copyright George Mayer

 

You don’t see the parallel? Okay, let me explain. In D&D (2nd edition rules, anyway), you roll dice to determine how many points you can apportion over ability scores in 6 areas: Strength, Dexterity, Constitution, Intelligence, Wisdom, and Charisma. Obviously, this means that you can’t have maximum scores in all 6 attributes, or else you would be a minor deity. The idea is that to have high scores in certain attributes, you’ve got to give up scores in other attributes, making you vulnerable. My mage, for example, is long on Intelligence and Wisdom and short on Strength and Constitution.


In the SearchStorage Quality Awards for Enterprise Arrays, they ask customers to rate storage vendors on a scale of 1.00 to 8.00 in 5 main categories: sales-force competence, initial product quality, product features, product reliability, and technical support. 213 people responded to this survey, providing 320 system evaluations, so this was no statistically insignificant joke of an exercise. NetApp garnered top scores in 4 out of 5 of these categories (sales-force competence, initial product quality, product features, and technical support), and that was how we managed the overall win. And in the category we didn’t win (product reliability), we still posted a strong third-place showing (an average score of 6.58, vs. IBM’s winning score of 6.69). So that means that with our products, you don’t have to sacrifice Strength for Intelligence…er, I mean to say, quality for features, or a knowledgeable sales force for good ongoing support. NetApp can truly be considered an “all-around” performer – the D&D equivalent of a brawny AND brainy mage.

 

But not only is this super-mage called “NetApp” well-stacked in attributes, it’s also collected its share of conquests and experience points. In Quality Awards IV (4th edition), we shared our glory with EMC. In the Quality V (5th edition) survey, we won outright. Similarly, in the Quality VI (6th edition) survey, we won outright. And this time (Quality VII), we’re again standing alone at the top. As Rich Castagna so aptly put it:

By now, everyone should be convinced: NetApp Inc. is an enterprise data storage powerhouse and not just a major network-attached storage (NAS) player.

 

 

And I say this makes NetApp the equivalent of a Level 30 Magic-User…yep, that’s my story, and I’m sticking to it!

By Ron LaPedis, Security Solutions Marketing Manager, NetApp

 

Net2Vault, an enterprise-class cloud backup and disaster recovery firm, has been running a new service in stealth mode for a few months and finally went public with it in this press release, followed up with a customer disaster recovery success story here. In a nutshell, they are running NetApp storage arrays in SunGard data centers and selling data backup and disaster recovery services solely to NetApp customers.

 


 

Now you may think that it is crazy to limit your company to serving customers that only use one brand of storage array, but it’s crazy like a fox. You see, only NetApp offers a combination of the write anywhere file layout (WAFL), primary storage deduplication and compression, thin provisioning, and storage-efficient snapshots that are made up of mostly air.

 

Unlike the competition, when a NetApp customer takes a volume snapshot, only a block of pointers is created that point to the actual data. If the user changes a block of data on the “live” copy of the volume, WAFL writes a new block and updates the live pointers while leaving the snapshot pointers alone. Take 100 snapshots and all you have are 100 sets of pointers.
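
The pointer behavior is easy to see in a toy model. The sketch below is a deliberately simplified illustration of pointer-based, write-anywhere snapshots; it is not how WAFL is actually implemented.

```python
# Simplified illustration of pointer-based snapshots: a snapshot is just a copy of
# the block map (pointers), and an overwrite goes to a freshly written block, so the
# snapshot keeps pointing at the original data. Not how WAFL is actually implemented.

blocks = {}          # physical block store: block id -> data
next_id = 0

def write_block(data):
    """Always write data to a brand-new block and return its id."""
    global next_id
    blocks[next_id] = data
    next_id += 1
    return next_id - 1

live_map = {"fileA": write_block(b"version 1")}   # live volume: file name -> block id
snapshot_map = dict(live_map)                     # snapshot = a copy of the pointers only

live_map["fileA"] = write_block(b"version 2")     # "overwrite" lands in a new block

print("live:    ", blocks[live_map["fileA"]])     # b'version 2'
print("snapshot:", blocks[snapshot_map["fileA"]]) # b'version 1', still intact
```

Because each additional snapshot is just another copy of the pointers, the incremental space cost stays tiny until blocks actually change, which is the property Net2Vault's backup service is built on.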

 

To back up a NetApp storage array, you move your snapshots to another storage array. In this case, the backup storage array belongs to Net2Vault. The first time that a snapshot is moved to Net2Vault, all of the data in the snapshot copy needs to be copied, but since the data in a snapshot is unchanging, you don’t need to worry that data changed “behind” you. In some systems, while you are moving block 60, changes might be made to block 50 that relate to changes made in block 70. You will have the new block 70 but the old block 50 and that gives you a corrupt disk copy. Since a NetApp snapshot copy takes almost no time to make and the blocks are frozen in time, this is not an issue.

 

But let’s get back to our story. Once the initial snapshot is copied to Net2Vault, subsequent copies are pointers and changed data – and on most storage arrays, data is read much more often than it is updated. Because up to 255 snapshot copies can be taken per volume and take very little space, Net2Vault can store a year’s worth of backup in very little space, passing the savings on to their customers. You can learn more about the NetApp technologies being used by Net2Vault and how they are saving money for their customers here.

In case you don’t know, NetApp Data ONTAP has supported array-based antivirus (AV) scanning for several years. In ONTAP 8.1 running in cluster-mode, we’ve kicked this up a notch by running the AV engine directly on the array rather than on a connected appliance.

 

To bring our customers up to speed on the advantages of running the AV engine in the array, we asked Jim Waggoner, CISSP, Director of Product Management at McAfee to write this guest blog. Happy 2012, everybody!

=====================================================================================================================================================================================


By Jim Waggoner, McAfee

 

I was at a customer site yesterday where I was asked to offer personal insight into why they were getting repeating occurrences of the same infections on systems. Whenever I get this question, I have a standard set of questions that I ask in turn to cover the best practices that customers have adopted to help reduce outbreaks. The questions started like this.

 

Question 1: Do you have endpoint protection installed on every system?

Customer Response: Yes

 

Question 2: Are you scanning all files rather than a subset of files?

Customer Response: Yes

 

Question 3: Are you updating your antivirus signature files (DATs) on a daily basis?

Customer Response: Yes

 

Question 4: Are you running weekly scheduled scans on every system?

Customer Response: Yes

 

Question 5: Are you scanning your network file shares?

Customer Response: Uh…no.

 

That is where I stopped the line of questioning.

 

I did probe so that I could understand the reasons why they were not scanning network file shares, especially since one of the primary vectors of propagation in the enterprise is these central data repositories. For them it came down to concerns about performance and the impact that running network scans has on file copies and client-server operations.  At one point, they had a negative experience with a competitor’s antivirus product after enabling the policy option to scan all files copied to and from the network. The helpdesk was soon flooded with calls complaining about performance degradation when files were being copied to or from the file share; they remembered download times increasing from, say, 30 seconds to 5 minutes. Once they changed the policy back to disable network scanning, the calls stopped, so they stayed with the less secure policy after they migrated to our solution.

 

That was the point where I responded to them about still needing to scan the file shares to stop malware propagation, but not doing it from the endpoint.  Instead, install the security on the storage controller so that 1) all files copied to the file shares will be scanned and 2) you don’t have to worry as much about systems in your environment where endpoint protection is not or cannot be installed. There was a bit of resistance at first, but by the end of the meeting I had convinced them to try it. Once they do this, I know we will be able to put a stop to the outbreaks. And if we don’t, I just continue with the questions. The next question is, “Do you let your users have administrator access?”

 

Regards,

Jim Waggoner

Director, Product Management

McAfee - Core Anti-Malware Solutions
