In "Data Management and Automated Teller Machines," I described a vision of data management. The gist was that application administrators ought to be able to provision and manage data themselves, without bothering a storage admin, just as I can get cash from an ATM myself, without waiting for a bank teller.
ATMs are only safe because banks have policies that detect problems and determine how much cash I can withdraw at a given point in time. Likewise, our ATM vision of data management requires tools to let storage admins easily define data management policies.
Our new Protection Manager focuses on policies for data protection. A policy is a rule that describes how data should be protected. The idea is to let storage admins express corporate rules, guidelines, or SLAs (service level agreements) independently of any specific NetApp technology. A policy can say "make copies every week and keep them for at least a year" or "retain undeletable copies for seven years." Our automation engine evaluates which technologies are available (has the customer licensed SnapVault? SnapMirror? SnapLock?) and connects the plumbing in a way that satisfies the policy's goals. Over time, the engine monitors whether the data continues to conform to those goals. The key point is that you can tell Protection Manager your goals and let it figure out the details.
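To make the "goals, not plumbing" idea concrete, here is a minimal sketch of how an engine might map a policy goal onto whichever licensed technology can satisfy it. This is purely illustrative: the function, the goal names, and the licensing set are my own invention, not Protection Manager's actual logic or API.

```python
# Hypothetical sketch: choose a licensed technology for each protection goal.
# In this example the customer has licensed SnapVault and SnapMirror,
# but not SnapLock.
LICENSED = {"SnapVault", "SnapMirror"}

# Which technologies could, in principle, satisfy each kind of goal
# (preferred option first). These mappings are illustrative assumptions.
CANDIDATES = {
    "mirror": ["SnapMirror"],
    "backup": ["SnapVault", "SnapMirror"],
    "immutable": ["SnapLock"],
}

def choose_technology(goal):
    """Return a licensed technology that can meet the goal, or raise."""
    for tech in CANDIDATES[goal]:
        if tech in LICENSED:
            return tech
    raise ValueError("no licensed technology satisfies goal %r" % goal)

print(choose_technology("backup"))   # SnapVault is licensed and preferred
```

The point of the indirection is that the admin states the goal ("mirror," "backup," "immutable") and the engine, not the admin, decides which plumbing to connect.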
Protection Manager lets you define policies in a graphical, intuitive way. A simple picture represents the policy. An icon on the left side represents the primary storage, and one or more icons on the right represent copies of the data. Arrows between primary and copy show the type of copy. Click the diagram to edit how and when the transfers should happen. Should a mirror update once an hour, or just at midnight? Is the backup window open all day, or only at night? How many copies should be retained on primary storage, and how many as backups? The tool isn't just about backups and snapshots. Our plan is to also support the undeletable and unalterable copies required to comply with government regulations.
After you have defined exactly how the policy works, you can give it a name. Maybe "Gold" means an offsite mirrored copy updated throughout the day plus a year's worth of backup copies, "Bronze" means one backup a day at midnight kept for just one week, and "SEC-17A" means unalterable and undeletable copies kept for seven years.
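Those named policies could be thought of as plain data: a statement of goals with no NetApp technology in it. The sketch below shows the three example policies from above in that spirit; the field names and schedule strings are hypothetical, not Protection Manager's real policy format.

```python
# Hypothetical sketch: named policies as technology-independent data.
# Field names and values are illustrative only.
POLICIES = {
    "Gold": {
        "mirror": "hourly",              # offsite mirror updated through the day
        "backup": "daily@midnight",
        "backup_retention_days": 365,    # a year's worth of backup copies
    },
    "Bronze": {
        "mirror": None,                  # no mirror
        "backup": "daily@midnight",
        "backup_retention_days": 7,      # kept for just one week
    },
    "SEC-17A": {
        "immutable": True,               # unalterable and undeletable copies
        "retention_years": 7,
    },
}
```

Nothing in the table says SnapMirror, SnapVault, or SnapLock; picking the right technology is the engine's job, which is exactly what keeps the policy portable across the product line.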
You can apply a policy to a single volume or LUN, but you can also apply one to a user-defined group called a dataset. If you have a large number of LUNs that all support the same application, you can group them together in a dataset and apply the policy to the dataset as a whole.
The idea is that instead of worrying about hundreds or thousands of mirroring relationships for hundreds or thousands of LUNs and volumes, you can define a handful of policies, group your data into a much smaller number of datasets, and give each dataset the appropriate policy. Another benefit is that defining standard policies makes it easier to deliver storage broadly as a service within a company. Formalized policies lay the foundation for predictable, repeatable execution.
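The arithmetic behind datasets is easy to sketch. In this toy example (the class and names are hypothetical, not a Protection Manager API), two hundred LUNs that back one application collapse into a single dataset governed by a single named policy:

```python
# Hypothetical sketch: a dataset groups many LUNs under one policy.
class Dataset:
    def __init__(self, name, members):
        self.name = name
        self.members = list(members)   # volumes or LUNs in the group
        self.policy = None             # name of the applied policy, if any

    def apply_policy(self, policy_name):
        """Apply one policy to every member of the dataset at once."""
        self.policy = policy_name

# 200 LUNs supporting one (imaginary) ERP application...
erp_luns = ["erp-lun-%03d" % i for i in range(200)]
erp = Dataset("erp-app", erp_luns)

# ...become one object to manage instead of 200 relationships.
erp.apply_policy("Gold")
```

Instead of 200 individually configured mirroring relationships, the admin manages one dataset-to-policy binding.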
We don't yet allow application admins to set protection policies on their own, but that is the next step. Our plan is to add these features to our own application integration tools, like SnapManager for Oracle, but we understand that not everyone uses those tools, so we are also offering APIs so that these capabilities can be incorporated into frameworks like Oracle Fusion, Microsoft .NET, or SAP NetWeaver.
We haven't yet achieved the full vision—to be honest not even close—but I think we are ahead of most vendors. Others have talked about this kind of model for data management, but we have a big advantage because we have a unified architecture that spans our whole product line: primary to secondary, high-end to low-end, and SAN to NAS to iSCSI. Our storage management team can focus on cool new features instead of on how to make incompatible architectures—like DMX, Clariion, Centera and Celerra—look more or less the same.