Today it’s easy to forget what that world was like:
A single server handled a single workload. Outside mainframes and certain proprietary systems, organizations ran servers in a 1:1 relationship with applications, and the average x86 server ran at about 15% CPU utilization.
Nearly every new project required standing up a new server. Servers often had to be purchased, Ops had to perform a physical installation, the network team had to provide connectivity, the server team had to load and patch the OS, DBAs had to set up the databases, and the application owners had to load and patch the applications.
If the application outgrew the server, or migration to a new platform was desired, the above process had to be repeated…again and again.
Ten years later, most IT organizations have virtualized 70–90% of their server environments. Consider the functionality we now take for granted:
A single physical server can handle any number of virtual workloads, and CPU utilization can reach 80% or more.
Provisioning of a new server can be automated and/or self-service can be provided to end-users. A new server can be brought online almost instantaneously.
Servers can be moved easily from platform to platform, location to location, or location to cloud.
Progress followed a similar arc on the network side, where nimble and efficient virtual networking has replaced the prior generation of (literally) hard-wired and brittle infrastructure.
Having transformed the way we consume compute and network bandwidth, we remain limited by enterprise data bound to physical infrastructure. As we look to develop applications faster, create new SLAs, and utilize cloud services, this gap between applications and data grows larger.
Consider the parallels:
A copy of data is typically required for each use case. It is common for organizations to keep separate copies for backup, snapshot, remote replication, dev, test, QA, analytics, and more. Twenty or more copies of the same data is not unusual in large organizations.
Nearly every new project requires provisioning another copy of data. Storage often has to be purchased, Ops has to perform a physical installation, the storage team has to provision capacity and make the copies, the network team has to provide connectivity, the server team has to build the file system, and DBAs have to scrub and mask the data.
The tools for making copies today are specific to each storage vendor. Data cannot be copied easily from platform to platform, and moving data between disparate platforms or to the cloud remains a manual, labor-intensive process.
Copy Data Virtualization starts from the recognition that these problems can’t be solved at the storage layer. They must be solved at the application layer, with a platform that can communicate with any storage system. Actifio Copy Data Virtualization provides such a platform… an application-centric, infrastructure-agnostic solution that changes everything, in ways parallel to its peers in compute and networking:
A single physical copy of source data can be used to create an unlimited number of virtual, read/write-capable copies (subject only to I/O capacity).
Provisioning new copies of data can be fully automated via customer-defined workflows, and end users can make instantaneous self-service copies.
Data can be moved from platform to platform, location to location, or location to cloud. Data can be accessed anywhere.
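To make the idea of one physical copy backing many virtual read/write copies concrete, here is a minimal sketch of copy-on-write, a common technique behind this kind of capability. This is purely illustrative and does not describe Actifio’s internals; the class and block names are hypothetical.

```python
class VirtualCopy:
    """A read/write view over a shared, immutable base copy.

    Reads fall through to the shared base; writes land in a private
    delta map, so each virtual copy consumes space only for its changes.
    """

    def __init__(self, base_blocks):
        self._base = base_blocks   # the single physical copy, never mutated
        self._delta = {}           # block index -> privately modified block

    def read(self, i):
        # Prefer this copy's own changes; otherwise read the shared base.
        return self._delta.get(i, self._base[i])

    def write(self, i, block):
        # Copy-on-write: only the changed block is stored.
        self._delta[i] = block


# One physical "golden" copy...
golden = ["block-a", "block-b", "block-c"]

# ...backing multiple independent read/write copies.
dev = VirtualCopy(golden)
qa = VirtualCopy(golden)

dev.write(1, "dev-change")

assert dev.read(1) == "dev-change"   # dev sees its own write
assert qa.read(1) == "block-b"       # qa still sees the base
assert golden[1] == "block-b"        # the physical copy is untouched
```

Each virtual copy here costs almost nothing until it diverges from the base, which is why a single physical copy can serve dev, test, QA, and analytics simultaneously.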
Much has been said about the revolutionary nature of Actifio, and it’s true that our technology is highly disruptive. Where the established storage vendors birthed a generation of data management that was infrastructure-centric and application-agnostic, Actifio is just the opposite… an approach that drives everything down from the application SLA and treats the storage hardware as a commodity.
In another sense, though, we’re just the logical next step in a shift toward virtualized technology that’s been underway for a while now. Some people seem to understand us more clearly in that light, and that’s just fine by us.