A few months ago, news of the CodeSpaces debacle made the rounds on the internet. In case you’ve forgotten, the cloud-native source code hosting company was forced to shut its doors permanently after hackers gained access to its AWS credentials and deleted just about everything, backups included. There are many lessons to draw from this event, but the most important one is simple: if you have services in the public cloud, you need to protect them. And if they are critical services, simply snapshotting the data and replicating it within the same cloud environment is not enough (it certainly wasn’t enough for CodeSpaces). More and more, I’m hearing companies talk about the concept of cloud-to-cloud, which ideally means protecting data across multiple public clouds, or at least in a separate and highly protected environment within the same cloud provider.
Experts agree that if you have critical data, you need to protect it, usually with secondary and tertiary copies that are independent of the primary. Typically this comes in the form of disaster recovery technologies (like replication) ALONGSIDE point-in-time technologies (like backup and snapshots), with the data then transported offsite. Most of you are probably thinking, DUH. DUH, I agree. But why is it that when it comes to cloud, organizations forget these simple rules? I’ve said it before, and I’ll say it again: if you protected data onsite, you must protect it in the cloud. Whether it’s SaaS, PaaS, IaaS, or any other -aaS, if you would protect it on-premises, you need to protect it in the cloud.
This is where cloud-to-cloud resiliency comes in.
If you’re looking to protect a critical workload in the cloud, it makes the most sense to protect it within the cloud as well, rather than bring it back on-premises. This could mean protecting it within the same cloud provider but separating copies by region, account, and login credentials. Or it could mean protecting data across multiple clouds. Today, 15% of companies use, or plan to use, multiple public clouds, and I expect that number to grow significantly in the coming years.
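To make the region-separation idea concrete: within AWS alone, you can configure S3 to replicate every object written to a primary bucket into a second bucket in a different region (and, for credential separation, one owned by a different account). The sketch below is a minimal replication configuration of the kind you would apply with `aws s3api put-bucket-replication`; the bucket names and IAM role ARN are hypothetical placeholders, not anything from the SteelStore product.

```json
{
  "Role": "arn:aws:iam::123456789012:role/s3-dr-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {
        "Bucket": "arn:aws:s3:::example-dr-backup-us-west-2"
      }
    }
  ]
}
```

The point isn’t this particular feature; it’s that the secondary copy should not share a blast radius (region, account, credentials) with the primary, so that a single compromised login can’t destroy both.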
And this brings up the emerging challenge of HOW to protect this growing number of cloud-native workloads. This is a question I’m pleased to have an answer to. This week we announced SteelStore cloud-based solutions for AWS: an enterprise backup solution that runs natively in AWS, providing efficient protection of cloud workloads. Three new models of SteelStore AMIs are available today in the AWS Marketplace, enabling customers to:
- Recover on-premises workloads in the cloud. For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, SteelStore AMIs are the key to enabling cloud disaster recovery. Using on-premises SteelStore physical or virtual appliances, data is seamlessly and securely protected in the cloud. If the primary site is unavailable, customers can quickly spin up a SteelStore AMI and recover data to Amazon EC2.
- Efficiently protect cloud-based workloads. If you already have production workloads running in Amazon EC2, you know that protecting those workloads in the cloud is just as critical as if they were running on-premises. NetApp’s new SteelStore AMIs offer an efficient and secure approach to backing up cloud-based workloads. Using your existing backup software, the SteelStore AMI deduplicates, encrypts, and rapidly migrates data to Amazon S3 or Glacier.
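SteelStore’s deduplication internals aren’t public, but the core idea behind any dedup-before-upload pipeline is simple: split the backup stream into blocks, hash each block, and store (and later transmit) each unique block only once, keeping an ordered “recipe” of hashes to rebuild the stream on restore. Here’s a minimal Python sketch of that idea; the fixed 4 KB block size and SHA-256 choice are illustrative assumptions, not SteelStore’s actual parameters.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; real appliances use smarter, variable-size chunking


def deduplicate(data: bytes):
    """Split data into fixed-size blocks, keeping each unique block once.

    Returns (store, recipe): store maps block hash -> block bytes,
    recipe is the ordered list of hashes needed to rebuild the stream.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only the first copy of a block is kept
        recipe.append(digest)
    return store, recipe


def rehydrate(store, recipe):
    """Rebuild the original byte stream from the block store and recipe."""
    return b"".join(store[d] for d in recipe)


if __name__ == "__main__":
    # Three identical blocks plus one unique block:
    payload = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
    store, recipe = deduplicate(payload)
    print(len(store), len(recipe))  # 2 unique blocks backing 4 logical blocks
    assert rehydrate(store, recipe) == payload
```

In a real appliance, only the unique blocks in `store` would be encrypted and shipped to S3 or Glacier, which is why repetitive backup data (daily fulls, VM images) moves so much faster and cheaper than its raw size suggests.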
Do you run primary workloads in the cloud? If so, how do you protect them? Check out the new SteelStore cloud-based appliances, available now in the AWS Marketplace.