By Juan Orlandini, Principal Architect, Datalink

 

I recently had a conversation with a developer friend. He's been in the industry for a long time, but has never held an IT position. To him, IT was always "just a provider of resources" and sometimes an "impediment." We got to talking about how things are changing in the IT world and how this might change the way he leverages the resources available to him. Despite being a very savvy technical person, he surprised me by not really getting the "cloud transformation" we are in the midst of. We chatted a while longer before an analogy came to mind that helped him get it.

 

My friend started development in the late '70s and was taught by teachers and professors who grew up in the '60s. The landscape of computing resources and programmer expectations was much different then. World-class developers were expected to understand not just the language they were developing in, but also the intricate details of the hardware they ran on. That was the only way programs could be developed efficiently. Then things changed. Computing, memory, and storage resources became cheaper and cheaper. For storage and memory, kilobytes turned into megabytes, then gigabytes, and so on. In turn, compute resources began to be measured in kiloflops, gigaflops, and teraflops. All but the most demanding applications went from running in constrained environments to rarely exhausting the resources available to them. The development world saw this and started focusing more on programmer efficiency than on an intense focus on resource efficiency. Highly sophisticated development languages, environments, and frameworks were created that give today's programmers the ability to build world-class applications in a fraction of the time it would have taken a couple of decades ago. That road was (and, to be fair, still is) rocky. Battles were fought over languages, developer methodologies, platforms, and many other aspects. Regardless, few can argue that today's development isn't significantly more programmer-friendly and productive than it was "back then."

 

IT is now in the midst of the same change. Until very recently, we managed our IT systems as if we were programming them in assembly language. IT administrators were expected to know the intricacies of all of their components in mind-numbing detail so that everything could be used at the highest efficiency. Well-run shops knew all of the details of their servers, networks, storage, and the applications running on them. You could ask the storage guys and they would know exactly which track or tracks of which disks held which data. The network folks would know all of the traffic, its resource usage, and its effect on the rest of the environment. The server guys understood all of the arcana of their operating systems and hardware and could tune each to amazing efficiencies. But all of that is really hard. It takes a very skilled person years to master these things. And it all changes constantly.

 

However, things are getting better. Each of the components of IT's environment (server, storage, and network) is going through a transformation. Rather than being siloed, individual stacks of resources, each is becoming a malleable virtualized resource that is pooled into "virtual data centers." Even more interestingly, these resources are being integrated by vendors into cohesive, pre-defined resource pools that can be programmatically allocated and de-allocated. The path to the virtualized cloud is being shaped by management frameworks that take the drudgery, the errors, and to a large extent the human factor out of the equation. In essence, we are creating a programming language of IT. As in programming today, there is still a call for specialists who know all of the details, but by and large much of that is being abstracted and made highly efficient. We aren't quite to the point where we can define all of the required elements in a single language (what we are calling the "orchestration layer"), but we are getting closer every day.
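
To make the "programming language of IT" idea concrete, here is a minimal, purely illustrative sketch (in Python) of what declaring and provisioning a virtual data center programmatically might look like. The resource names, sizes, and provision() routine are invented for this example and do not correspond to any particular orchestration product.

# Hypothetical sketch: declaring a "virtual data center" the way a program
# declares a data structure, instead of hand-configuring each silo.
from dataclasses import dataclass

@dataclass
class VirtualDataCenter:
    name: str
    vcpus: int          # drawn from the pooled compute resource
    memory_gb: int      # pooled memory
    storage_tb: float   # pooled, tiered storage
    vlans: int          # virtualized network segments

def provision(vdc: VirtualDataCenter) -> None:
    """Stand-in for an orchestration layer that would translate this
    declaration into calls against the server, storage, and network pools."""
    print(f"Provisioning {vdc.name}: {vdc.vcpus} vCPUs, "
          f"{vdc.memory_gb} GB RAM, {vdc.storage_tb} TB storage, "
          f"{vdc.vlans} VLANs")

provision(VirtualDataCenter("dev-test", vcpus=64, memory_gb=256, storage_tb=10.0, vlans=4))

The point is not the specific API but the abstraction: the administrator states what is needed, and the orchestration layer worries about the "assembly language" underneath.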

 

With that analogy, my friend finally got it. The value is really in the efficiencies that we can extract from higher-order semantics. We don't have to sweat all of the details because the systems take care of them. This is all being enabled by some key technologies: server virtualization, storage virtualization, network virtualization, common APIs, and orchestration tool sets that integrate all of them. Organizations are being given the choice of building their own clouds (their own apps) or leveraging public clouds (off-the-shelf apps). The choice of which to use is similar to the software world. Few would develop their own word processor today, but many still build or customize their own CRM systems. These choices will mature and crystallize over the next few years.

 

At Datalink we are focusing on helping customers through this transformation. Come visit us at booth 2229 at VMworld or read more about this on our blog at http://blog.datalink.com.

 

 

Juan Orlandini is a Principal Architect at Datalink. He’s been in the IT industry for 25+ years and is responsible for working with customers on new data center architectures. He blogs about a number of topics at blog.datalink.com.

This post was originally published on TomTALKS.


What do Shimon Peres, Warren Buffett and Sidney Poitier have in common? Join NetApp Vice Chairman, Tom Mendoza, as he talks about embracing change in your life – the fourth element that makes NetApp’s  culture so unique.

 

NetApp extended its Virtual Storage Tiering (VST) strategy by introducing NetApp® Flash Accel™, enabling business applications to run up to 90% faster with new server cache solutions from NetApp and its partners.

 

NetApp Bulks Up Virtualized Storage Strategy - Channelnomics

Meanwhile, among the slew of touted benefits, NetApp’s VST paves the way for partners to work with the company, developing technologies that integrate with Data ONTAP, the company’s storage management operating system.

 

NetApp Intros Software To Manage Multivendor PCIe Flash-Based Storage - CRN

"This will be an extension of the success of NetApp's ideology with FlexPod. NetApp adopts best-of-breed technology. In this case, it's flash storage acceleration, which leverages technology customers already have on their sites."

 

NetApp Extends Storage Reach to the Server - ITBusinessEdge

With more storage starting to show up in the form of Flash memory that is directly attached to the server, traditional storage vendors are starting to have an issue with relevancy…NetApp this week moved to address that issue head on with its first foray into server-side technologies.

 

NetApp touts Fusion-io partnership, server flash management - Computerworld

“Flash Accel is a piece of software that turns any server PCI-e or SSD drive into a cache for the backend ONTAP system,” said Paul Feresten, a senior product marketing manager at NetApp.

 

NetApp offers 2TB flash cache on SANs after signing up Fusion IO - The Inquirer

The firm also revealed that it has signed a reseller deal with enterprise SSD vendor Fusion-io, which hints strongly that NetApp's SAN boxes using Flash Accel will incorporate Fusion-io's PCI-Express SSDs.

 


This post was originally published on Breaking Through The Cloud

 

With the summer quickly drawing to a close, we at NetApp are gearing up for a bevy of Fall events.  The first one is VMworld US starting Sunday August 26th.  Yes, that's in about a week, but Don't Panic.  Here's my second annual quick guide to what NetApp is up to at VMworld.  Start your planning now.


Visit us at booth 1402

 

We will focus on how an agile data infrastructure joined with VMware empowers enterprises to go further, faster.  Specifically, we'll focus on four areas:

  • Virtualizing business critical applications with confidence
  • Building an innovative cloud infrastructure
  • Modernizing end-user computing
  • Accelerating business with FlexPod

 

Here are a few ways you can interact with us in our booth:


 

VMworld 2012 CEO Roundtable

 

VMware just announced this roundtable session, which will be moderated by Chris Anderson, editor-in-chief of Wired.  Along with CEO and President of NetApp Tom Georgens, the other two panelists will be Dell Chairman and CEO Michael Dell and EMC Chairman and CEO Joe Tucci.  I'm looking forward to this session, entitled "The Innovator's Paradox".  It will take place at 4 p.m. PT on Monday, Aug. 27 at Moscone Center South, Gateway Ballroom 103/104, San Francisco.

 

NetApp Speaking Sessions

 

With so many great speakers to choose from, I don't know where to start!  You can check out the complete list here.  If nothing else, you should be sure to catch our spotlight session as this promises to be a treat for sports fans.  Here are the details:

 

You and 9 of your best friends can receive game tickets plus a private tailgate party in your hometown stadium.  This is an exclusive opportunity for attendees of our Spotlight Session featuring NetApp Founder Dave Hitz; Luke Norris, CEO of PeakColo; and Raghu Raghuram, Executive Vice President of Cloud Infrastructure and Management, VMware.  Entry forms will be available at the session doors.  Be one of the first 400 in line and also receive a copy of Dave Hitz's book, "How to Castrate a Bull."

 

Come to the live drawing immediately following this session at 12:30 p.m. in NetApp Booth 1402.  The winner will be selected by JT Snow, former San Francisco Giants player.  Must be present to win.

 

SS1011 - NetApp Founder Dave Hitz: How to Transform Your Business with an Agile Data Infrastructure, featuring customer PeakColo and Raghu Raghuram, VMware 

 

Tuesday, 8/28, 10:30 AM - 11:30 AM

 

IT departments are struggling to manage the scope, scale, and complexity of their infrastructures in today’s increasingly data-driven enterprises. A new design approach is needed: a converged, virtual storage infrastructure that combines powerful new clustering technologies for unlimited scalability and non-disruptive operations with the industry’s most comprehensive, yet elegantly simple data management software. This new paradigm will empower businesses to achieve efficiency and growth as well as address IT economics and rollout at scale. Join Dave Hitz, EVP and co-founder of NetApp, as he introduces NetApp’s next generation storage platform. This agile data infrastructure – intelligent, immortal, and infinite – lets you harness, not fear, the power of data for effective decision-making, innovation and competitive advantage. NetApp’s 20 year track record of innovation continues with multiple new technologies that push the boundaries of what’s possible to help businesses go further, faster.

 

Dave Hitz, Founder and Executive Vice President, NetApp
Luke Norris, CEO, PeakColo
Raghu Raghuram, Executive Vice President of Cloud Infrastructure and Management, VMware

 

Social Media at VMworld 2012

 

Here's a list of NetApp folks who will be tweeting and/or blogging.  Join in the conversation!

 

I would be remiss if I didn't give a great big shout-out to our partners at VMworld.  Check out the list here and be sure to swing by their booths and say hi.

 

How about you?  I would love to hear what you are looking forward to seeing and doing at VMworld US 2012.  Feel free to leave a comment.  See you in San Francisco!

By Blair Semple, Director of Business Development, SafeNet

 

Data breaches are increasing in number and complexity year over year, bringing security to the forefront of the storage market. To date, the biggest problem with storage encryption hasn’t been encryption itself – DataFort, now SafeNet StorageSecure, has been providing storage encryption for years. Now that NetApp and SafeNet have collaborated to develop StorageSecure, it is possible to encrypt NAS storage at the CIFS or NFS level.

In reality, the biggest problem with encrypting storage is managing the encryption keys, especially with the recent explosion of data. New technologies make enterprise key management a simple, cost-effective solution you can layer on top of your existing storage architecture.

 

For all you storage admins who are new to security, here’s a high level overview of key management, why it’s important for storage, and how you can use it to increase visibility, maintain control, and achieve compliance.

 

What is Key Management?

Key management encompasses the secure generation of encryption keys, prevention of unauthorized access to them, assurance that keys remain available, creation and enforcement of policies throughout the life of each key, and secure destruction of the key (and, with it, the associated data) at the end of that data’s life.

 

Steps in the Key Management Lifecycle

  1. Key Generation. It’s important to use specialized hardware, not software, to generate your keys.
  2. Access Management. Keys are used to authenticate and authorize access to specific data in accordance with policy or regulatory requirements.
  3. Availability Assurance. As data is backed up and replicated, keys need to be available from division to division and site to site so that they are accessible wherever the data is located.
  4. Policy Management. During the life of the data, keys will need to be rotated periodically to keep the data secure. Some of these policies are standard practice, and others are required for regulatory compliance.
  5. Secure Destruction. At the end of the data’s life, the key needs to be securely destroyed, effectively destroying the data as well.
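
The sketch below (Python, using the open-source cryptography package) walks through these steps in miniature. It is only an illustration of the concepts above, not how StorageSecure or KeySecure implement them; real enterprise systems generate keys in dedicated hardware and manage them over protocols such as KMIP.

from cryptography.fernet import Fernet

# 1. Key generation (software-based here; enterprise systems use dedicated hardware).
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"patient record #42")

# 2./3. Access and availability: only holders of the key (wherever it has been
#       replicated) can read the data back.
assert Fernet(key).decrypt(ciphertext) == b"patient record #42"

# 4. Policy-driven rotation: re-encrypt under a new key and retire the old one.
new_key = Fernet.generate_key()
ciphertext = Fernet(new_key).encrypt(Fernet(key).decrypt(ciphertext))
key = None  # old key retired

# 5. Secure destruction ("crypto-shredding"): once the last copy of the key is
#    destroyed, the ciphertext is effectively unrecoverable.
new_key = None
# With no key left, the encrypted data is gone for all practical purposes.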

 

Key Management for Storage

For storage applications, maintaining availability of keys can be very challenging when you take into consideration all the points at which data is stored and accessed. As all storage admins know, sensitive data can be stored in multiple locations, usually at the primary datacenter and also at a disaster recovery site. Key material needs to be available at these locations as well, so that even if a datacenter is destroyed or a key manager is lost, you don’t lose your keys and, with them, the ability to access the data.

 

Access to data in storage environments needs to be strictly controlled. Encryption keys are used to maintain and augment the encryption mechanisms at the storage location itself, restricting access to data even more tightly than standard system controls do.

And, finally, the destruction of data at the end of its life is very important, especially for financial, HR, healthcare, or other sensitive information that is subject to industry or government regulations. A good key management system allows you to prove compliance throughout the life of the key, and also at the end, by securely deleting the key and all data associated with it. At the natural end of life of the data, you can simply delete the key and effectively destroy the data. The ability to destroy keys and their data is also very important in case of a security breach, or for the military and intelligence communities. In a tactical emergency, data stored on an airplane or submarine can be instantly destroyed simply by deleting the encryption key, preventing that information from falling into the wrong hands.

 

The biggest differentiator between key management for the storage world and other use cases is that we are often required to keep data for a very long time. Healthcare information, for example, may need to be stored securely for the entire life of the patient. In this case, we are looking at decades of securing that data and managing those keys. As a result, we will always have a great number of encrypted data objects, so an effective key management solution must be able to scale to support a correspondingly large number of keys.

 

The Future of Storage and Key Management: KMIP

Until recently, one of the biggest issues with encryption in storage has been that each encryption vendor had its own proprietary key management solution, so managing encryption keys for an organization’s array of storage security products was an expensive and time-consuming, if not impossible, job.

 

In 2010, an international consortium of industry leaders, including SafeNet, NetApp, VMware and others, released a cross-platform standard called the OASIS Key Management Interoperability Protocol (KMIP). Now, any product adhering to the KMIP standard for key management can be managed by a single enterprise key management solution, providing better data security and reducing expenditures for managing multiple products.

 

SafeNet KeySecure is the industry’s first high-assurance, KMIP standards-based enterprise key management solution, and was developed from NetApp’s Lifetime Key Manager (LKM). Using KeySecure, you can centrally protect, manage, and control data, keys, and policies across a wide range of heterogeneous storage systems, including Quantum tape, NetApp Storage Encryption (NSE), legacy NetApp/Decru DataFort, and the new SafeNet StorageSecure. As technology grows and changes, KeySecure will continue to support any storage system built on the KMIP protocol.

 

Regardless of which key management solution you choose, using KMIP standards-based key management will allow you to easily manage keys across multiple storage platforms throughout the entire life of your data, giving you control of your encrypted storage.

 

If you’re looking for more information on StorageSecure, KeySecure or any other SafeNet encryption technologies, stop by the SafeNet booth 1901 at VMworld next week for a demo. Mike Wong and I will be presenting on Monday and Wednesday in the SafeNet booth on the importance of encrypted storage. You can also download our free whitepapers:

 

Understanding Enterprise Key Management

Encryption in NAS Environments: Requirements & Keys to Success

 

And you can always read more on the SafeNet Data Protection Blog: www.data-protection.safenet-inc.com.

Now that the Summer Games are behind us, we have a unique view of just how much conversation and data was created during the Games, especially on social media. We already knew how much data was projected to be created around the Games, but now that we've seen how it all played out, NetApp has created an infographic illustrating the record-breaking impact of the "Social Games." Once again the data is stunning - for example, there were more tweets during the Opening Ceremony alone than during the entire 2008 Beijing Games. See all of the details below:

 

Olympics-Infographic-social-media.jpg

This post was originally published on Big Data Bingo

 

I presented last week at Soft Grid 2012 on Big Data, analytics, and the Smart Grid. My presentation, entitled "Big Data, Smart Data and the Grid - It's Ok, It's All Data," can be seen starting at about the 36-minute mark at the link below. I also chime in quite a bit on the moderated Q&A panel that follows, starting at the 1:08 mark.

 

My talk focuses on the importance of hot and cold data (as opposed to structured and unstructured data), the parallels between the evolution of the Smart Grid and the evolution of analytics, and the notion that, at the end of the day, it's all just data - big, small, or indifferent.

 

http://www.ustream.tv/recorded/24726881

 

The description of the panel and the other speakers are here:

 

Thought-Leaders Forum

Hear four short talks from experts throughout the industry supply chain on a number of soft grid, analytics and cloud-based themes, including consumer analytics, grid infrastructure analysis, the use of cloud-based infrastructure for smart grid services, network-based analytics, big data storage and processing, MDM and other topics. Each speaker will present for 15 minutes with a 30 minute session at the end for moderated discussion and audience Q&A.

Daniel Hokanson, Director of Product Management, Ecologic Analytics, a Landis+Gyr Company
Jim Walker, Director of Product Marketing, Hortonworks
Patricia Florissi, Vice President, Americas & EMEA CTO, EMC Corporation
William Peterson, Director, Marketing, NetApp

We just announced an extension to our Virtual Storage Tiering with NetApp Flash Accel. Watch this short video to learn what Flash Accel is, what it does, and how NetApp customers can dramatically increase their application performance.

 


 

And watch this other short video to understand how Flash Accel differs from other server cache products.

http://youtu.be/2-zBBjds7xA

NETAPP_360

NetApp in the News, 8/20/12

Posted by NETAPP_360 Aug 20, 2012

Every Monday we bring you top stories featuring NetApp that you may have missed from the previous week. Let us know what interests you by commenting below.

 

Strategy: 18 Mobile Productivity Apps - InformationWeek

NetApp Mobile Support App was recognized by InformationWeek as one of the top 18 mobile productivity apps.

 

Social Media and the C-Suite: Increasing Engagement in the Wake of CIO 100 Awards - Forbes

Forbes discusses why C-level executives, including those at NetApp, are expected to become more active with social media in the wake of the CIO 100 Awards.

 

Center for Enterprise Architecture receives $300K donation from NetApp - Penn State Live

Penn State Live reports on NetApp’s recent donation of storage systems and software to the Center for Enterprise Architecture (EA) at Penn State’s College of Information Sciences and Technology (IST).

 

Big Data Needs Big Storage: Where to Keep Gigabytes, Terabytes and Petabytes of Data - Forbes

While big data centers today, including NetApp’s, rely on hard drives for storage of gigabytes and terabytes of information, this Forbes piece shares some ideas on how to handle petabytes of data.

 


By Amy W., NetApp employee

 

Some people might wonder how it is possible to be with a company as long as I have. The simple truth is that I told myself that the minute I stop feeling challenged and valued, I would look for a new gig. That day has never come.

 

Every four or five years I have either found, developed, designed, or been asked to take on a new challenge at NetApp. I’ve worked in three departments and am about to move to a fourth. This has given me exposure to many sides of the business and to people I would never otherwise have known, and it has provided me with a very diverse background on which to draw when faced with everyday tasks and issues.  I am eternally grateful to work for a company that gives you chances to spread your wings.

 

 

When I am asked to name one thing that really illustrates our culture and makes NetApp a “great place to work” I am taken back to March of 2000…

 

 

I was 8 months pregnant with my first child and we were having an All Hands meeting. There I was standing at the back of the room getting restless, as people generally do during these types of events, but enjoying the opportunity to hear our executives share their thoughts and strategies.  As they transitioned to the stage to start an executive panel the CEO at the time, Dan Warmenhoven, pointed to an empty chair and said, “As we get ready to get this panel started, I wanted to point out that we have an empty chair up here on the stage. This is Chris’ chair…”  Chris was the VP of HR at this time. He went on to say “I asked to have this chair remain empty because I wanted to make a point.  Chris’ fourth grade son is giving a presentation today at school. He was really nervous about delivering a speech and he asked his mom if she would come to support him at school.” 

 

 

Dan continued, “Chris knew this meeting was today and it was very important to me. But we talked about it and I want to emphasize that Chris made the right decision today. Her son needed her.  She has a family and a life and it is important that we stay focused on what is really important.  Don’t get me wrong. When you are here I want you to work hard. But I also want you all to have a life outside of NetApp. If you don’t have a life – go out and get one! Now, let’s get started….”

 

 

I had tears streaming down both cheeks and I looked around to see many people who were also touched. It is not every day you have a CEO point out that he wants his employees to have a life.  Working around the clock has never been a big part of the culture here. Of course there are those who don’t know how to work any other way. But most leaders at NetApp value a well-rounded team. Being good people, supporting our peers, communities, schools and families is a big part of what it means to work at NetApp. 

 

Twelve years have passed since that meeting at Great America and I now have three children ages 12, 10 and 6.  I can honestly say that this company follows through with the promise of work-life balance.

NETAPP_360

NetApp in the News, 8/13/12

Posted by NETAPP_360 Aug 13, 2012

Every Monday we bring you top stories featuring NetApp that you may have missed from the previous week. Let us know what interests you by commenting below.

 

[Video] Silicon Valley Companies Power Olympics - NBC Bay Area

NetApp’s Amy Love on how the Summer Games prove that consumers are embracing technology to drive business.

 

When Life Gives You Lemons, Make Lemonade: How Walz Group Increased Its Customer Base By 800% - Forbes

Forbes highlights the Walz Group, which used best-of-breed converged infrastructure for the cloud, provided by NetApp, Cisco, and VMware, to capitalize on its 800% growth.

 

Accelerate Application Modernization with Converged Infrastructure - SearchStorage

Ray Austin takes readers through some examples of where a converged IT infrastructure was used to modernize an enterprise application environment and increase business agility.

 


What fuels innovation? How do organizations' innovations reduce costs and achieve greater results? Watch as Theresa Wahlert, Director of Iowa Workforce Development and winner of NetApp's 2012 Innovator of the Year Award for the US Public Sector, sits down with NetApp 360 to tell the organization's story.

 

Do you use NFS or CIFS to access data stored in large repositories? Better watch out – there is a new kid in town!

 

Traditionally, large amounts of unstructured data (or Big Data) have been stored as files in file systems. Retrieving data meant that you needed to know the file share, the directory (and sub-directories) and have at least a rough idea what the file name and extension would be. Increasingly, this just doesn’t work anymore – today IT departments already manage content repositories that store hundreds of millions or billions of files, often across many locations. As the amount and complexity of data stored in enterprises grows, it becomes increasingly important to find a better way to store, manage and retrieve this data.

 

A key path to solving this issue is to leverage technology and standards that have been developed specifically to provide a single namespace spanning billions of data sets, multiple locations, and even managed services that reside off premises.

 

On the technology side, NetApp just released StorageGRID 9.0 (http://www.netapp.com/us/company/news/news-rel-20120809-203086.html). StorageGRID was developed from the ground up to support large, distributed content repositories – managing billions of data sets and petabytes of capacity across hundreds of sites in a single namespace. With this technology, you know what data you have in your repository and you can control where this data is stored (locations, tiers, etc.).

 

On the standards side there is CDMI (http://www.snia.org/cdmi), the Cloud Data Management Interface. CDMI is a standard developed by SNIA (http://www.snia.org), the Storage Networking Industry Association, with heavy involvement from a number of leading storage vendors, including NetApp. CDMI not only defines a standard way to ingest data into and retrieve it from a large-scale repository, it also enables applications to easily manage the repository and control where the data sits.
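
For a feel of what CDMI looks like on the wire, here is a rough Python sketch of storing and reading back one object over CDMI's RESTful interface, following the SNIA CDMI 1.0 specification. The grid endpoint, credentials, container name, and metadata below are hypothetical, and an actual StorageGRID deployment's URL, authentication, and header details may differ.

import requests

BASE = "https://grid.example.com/cdmi"   # hypothetical endpoint
AUTH = ("demo-user", "demo-password")    # hypothetical credentials
HEADERS = {
    "X-CDMI-Specification-Version": "1.0.1",
    "Content-Type": "application/cdmi-object",
    "Accept": "application/cdmi-object",
}

# Create (ingest) an object in a container, attaching searchable metadata.
requests.put(
    f"{BASE}/medical-images/scan-0001",
    json={"mimetype": "text/plain",
          "value": "example payload",
          "metadata": {"department": "radiology", "retention": "forever"}},
    headers=HEADERS, auth=AUTH,
)

# Retrieve the object (and its metadata) from the same single namespace,
# regardless of which site or tier the repository placed it on.
obj = requests.get(f"{BASE}/medical-images/scan-0001", headers=HEADERS, auth=AUTH).json()
print(obj["value"], obj["metadata"])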

 

CDMI has arrived in the real world

 

NetApp StorageGRID already supported NFS and CIFS, as well as an API on top of RESTful HTTP (http://en.wikipedia.org/wiki/Representational_state_transfer). So why is NetApp adding support for CDMI? It’s very simple – we believe that standards are important and that ultimately our customers will benefit from an ecosystem of solutions built on standards. Already a number of companies are working on supporting CDMI or have announced support for CDMI, so while still a bit early from an adoption perspective, the momentum is clearly there.

 

CDMI is the new NFS

 

When it comes to creating and managing large, distributed content repositories, it quickly becomes clear that NFS and CIFS are not ideally suited for this use case. This is where CDMI shines, especially when backed by an object-based storage architecture that was built to support multi-petabyte environments with billions of data sets across hundreds of sites, and that accommodates retention policies reaching to “forever”. NetApp’s Distributed Content Repository solution, based on StorageGRID and E-Series storage systems, fits precisely into this space.

 

 

Find out more about our Distributed Content Repository solution in the solution brief here: http://media.netapp.com/documents/ds-3339.pdf

 

Read the Big Content white paper here: http://media.netapp.com/documents/wp-7161-0512.pdf

 

Watch me talk about Big Content: http://www.youtube.com/watch?v=96g98Gb_rWE

 

What are your thoughts? Have you implemented object-based storage and want to share your experience? Go ahead and leave your comments below.

This post was originally published on TomTALKS.

 

You manage things and you lead people – this is at the heart of Tom’s view on leadership vs. management. Join us to hear more.

 

Every Monday we bring you top stories featuring NetApp that you may have missed from the previous week. Let us know what interests you by commenting below.

 

London Olympics Sets Big Data Records - SiliconAngle

SiliconAngle looks closer at recent content produced by NetApp to highlight the sheer volume of big data that is expected to be shared and discussed during the London Summer Games.

 

New Benchmark Results for Unified Scale-Out Storage - SearchStorage

Mike McNamara and Dimitris Krekoukias on how NetApp’s unified scale-out storage delivers both maximum operations and low latency.

 

Striking Into New Territory: Google and Others Have Deep Pockets and Big Plans - SiliconAngle

Kristen Nicole mentions how the NetApp and Fusion-io partnership puts both companies in a better position to make their mark on the storage market.

 


NETAPP_360

Big Data Takes the Gold

Posted by NETAPP_360 Aug 3, 2012

With the Summer Games in full swing, NetApp has created an infographic illustrating the volume of big data that is projected to be shared and discussed across billions of devices around the Games. The figures are staggering – for instance, 60 gigabytes of information per second is expected to flow across British Telecom’s networks (that’s the equivalent of all of Wikipedia every 5 seconds).  See all the details below:

 

bigdata_olympics_1h.jpg

What if you choose to take on adversity as a challenge to grow? You take risks, you struggle, and, sooner or later, you come out a winner. The Walz Group stands testament to that scenario.

 

The Walz Group, a critical communications company based in Southern California, builds software that enables management of documentation between lenders and borrowers. Although the recession created an unprecedented opportunity for Walz, the company also found itself staggering under the pressure of its outdated IT infrastructure. With difficulties managing distributed storage, experiencing hard drive failures, and spending 60% of IT time mending what was broken, Walz met its SLAs but business growth outstripped infrastructure capabilities by far.

 

Walz decided to take the bull by the horns and set out to reimagine IT. It focused on the value provided by a cloud computing model and chose an agile infrastructure with the required levels of security, reliability, and scalability. With a single infrastructure solution for all application tiers, Walz can now provide much quicker service while maintaining high information-security standards. Even with fewer resources, Walz is able to meet 99.998% uptime SLAs.

 


 

With a firm handshake between IT and the business, Walz created an inseparable partnership that has taken it to glorious heights. The company has seen 800% growth in its customer base and an incredible 185% revenue increase in three years. Clearly, with technology, it’s not about big beating small any more. It’s about fast beating slow.

 

Bart Falzarano, chief information security officer of the Walz Group, will discuss the company’s achievement in an exclusive one-on-one meeting with NetApp’s own Vaughn Stewart at VMworld (late August 2012). Get Vaughn’s perspective here after the event, and also check out the Walz business brief.

hillsboro-data-center.png

By Bob Lofton, Senior Director, IT Foundational Technologies at NetApp

 

Have you ever wondered how NetApp decides where the company’s next data center will be deployed? It’s actually a long selection process with a laundry list of criteria that a hosting city needs to meet before we even consider a site. Some criteria at the top of the list include both geographical and technological requirements, such as the city’s track record of supporting high-tech firms; standardized permit and inspection processes; good relations with local power companies; and stability of the region from a climate and natural disaster perspective.

 

As NetApp’s core products and solutions portfolios expand to support the rapidly changing IT and business requirements posed by trends like virtualization, the cloud, and big data, developing an agile, efficient data center infrastructure to support our corporate programs nondisruptively is imperative.  That’s why we take so many factors into consideration and why several vendors are considered for the job. In Fall 2010, we realized that we needed to find a site for a “lights-dim” production data center to increase operational efficiencies and collapse two existing California-based data centers in Sacramento and Sunnyvale into one central facility. As part of NetApp’s larger data center strategy, we looked for a new location to support our customer-facing and internal back-office corporate systems. 

 

In working with Digital Realty Trust, Inc., a provider of data center solutions to more than 10% of Fortune’s Global 200, we sat down with the local governments at each site under consideration and quickly went from 12 candidates to 3 finalists. The City of Hillsboro, Oregon, really rolled out the red carpet for us and proved to be the best municipality that I have ever worked with. They are savvy about data centers and we didn’t need to educate them about the facility requirements or the impact, which was a major bonus.

 

Other key advantages that the City of Hillsboro offered include enterprise and e-commerce zone benefits, no sales tax, and proximity to an airport with regular flights to both Sunnyvale, California, and Raleigh, North Carolina. The quality of technical talent in Hillsboro was also a significant draw and a major incentive to open the data center in this region, since it supports NetApp’s culture of a collaborative and open environment in which employees are treated as our greatest asset and encouraged to drive innovation.

 

As part of this new data center rollout, NetApp is leasing the 5.1-acre property in Hillsboro for 10 years and expects to invest another $40 million in equipment and employ roughly 10 people. All future NetApp business applications will be built in this data center starting in October 2012 and all existing business applications that are internally hosted will be migrated to this facility over the next three years. We are excited to be part of the City of Hillsboro’s economic development by driving jobs and IT business in the area, and we are thrilled to start realizing the operational benefits of our new, agile corporate data center.  
