Liberty Mutual Insurance Co. is in the midst of a massive cloud migration that will affect more than 40,000 systems and applications across the globe running on everything from Windows servers to mainframes.
The company has already moved 68% of its workloads to the public cloud and aims to slim down from three data centers to just one by 2024. “Our ultimate goal is to get 100% to cloud,” said Eric Drobisewski, senior architect for global digital services at the insurer, which employs 45,000 people in 29 countries.
But Liberty Mutual isn’t tying itself to a single cloud. Part of its migration strategy is to build or re-platform applications on top of an abstraction layer that makes the underlying cloud service invisible.
“In the end, these resources are all utilities and we need to treat them the way we’d treat home electricity,” Drobisewski said.
In pursuit of that goal, the company created a set of implementation standards that provide for consistent speed, reusability and security. “It’s a curated set of reusable patterns that let developers and engineers get off the ground quickly and automate features they might have otherwise built themselves for back-end security and governance,” he said. “Hundreds of people across the organization came together to put the technology, security and processes in place.”
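In code terms, the abstraction layer Drobisewski describes amounts to writing applications against a provider-neutral interface rather than a specific cloud SDK. Here is a minimal sketch of that idea; all class and function names are hypothetical, not Liberty Mutual's actual tooling:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Hypothetical provider-agnostic interface: application code depends
    only on this class, never on a specific cloud vendor's SDK."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend for illustration; a real deployment would wrap an
    AWS, Azure or Google Cloud client behind the same interface."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def save_policy_document(store: ObjectStore, policy_id: str, body: bytes) -> None:
    # Business logic is written once against the interface, so swapping
    # providers means swapping the constructor, not the call sites.
    store.put(f"policies/{policy_id}", body)

store = InMemoryStore()
save_policy_document(store, "LM-1001", b"coverage terms")
```

The point of the pattern is that the underlying cloud becomes, as Drobisewski put it, a utility: the application never learns which one it is talking to.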
Not long ago, such an ambitious strategy would have been unthinkable. As recently as 2019, the concept of a comprehensive multicloud environment was considered a pipe dream. But technology providers and their customers have been hacking away at the problem and are now beginning to build applications – both for internal use and commercial sale – that combine resources from multiple public and private cloud platforms in a way that is nearly invisible to the user.
This extension of multicloud computing goes by various names. SiliconANGLE’s research affiliate Wikibon adopted the term “supercloud,” a term coined by Cornell University researchers in 2017. Others have referred to the concept as “metaclouds,” “cross-clouds” and even “cloud of clouds.” The nomenclature matters less than the expected payoffs.
“We’re able to bring new insurance products and integrate them back into consumers’ hands much more quickly,” Drobisewski said. “We’ve invested heavily in allowing software developers to move quickly with modern toolsets to be more effective and faster.”
The concept of a supercloud is less revolutionary than evolutionary. “It’s industry clouds and multiclouds munged together,” said Gartner Inc. analyst Craig Lowery. “This is a continuation of edge, hybrid and multicloud technology stacks that brings more immediate value.”
A recent survey of 1,800 IT decision-makers by VMware Inc. found that 73% said their enterprises use two or more public clouds today and 81% plan to do so by 2024.
“It’s been growing as a thing for the last five years and somebody just gave it a name,” said David Linthicum, chief cloud strategist at Deloitte LLP. “The idea is to stop building security and operations systems three times and instead use a layer of technology above the clouds that provides all that functionality.”
There are sound business reasons behind that goal. The VMware study found that organizations that leverage multiple clouds with automated operations and secure access to applications and data from any device and location release new applications 42% faster and spend 41% less time fiddling with infrastructure. Liberty Mutual expects its supercloud to reduce annual IT expenses by 28% through 2024 and eventually eliminate as much as 40% of fixed-run costs, Drobisewski said.
But the mechanics of building superclouds are a lot trickier than the concept. Basically, each public cloud provider does things a little bit differently, ranging from the way they store data to how they manage networks. Abstracting each provider’s infrastructure into a common service layer runs the risk of also abstracting away the unique value each provides.
Third-party vendors have come up with some solutions. They say that in most cases, they can not only preserve each cloud service provider’s unique value but can even improve service quality by building on top of the common layer.
However, at this point, there is no governing standards body or set of generally accepted tools for building superclouds. Most solutions are handcrafted and unique, a fact that’s likely to hold back supercloud adoption until standards become clearer.
The drive to create those standards “won’t come from the public cloud vendors because they have an incentive to keep you in their clouds,” said Danny Allan, chief technology officer at Veeam Software Corp., a maker of backup and data protection software. “It will come from outside vendors or an industry working group and there’s no such effort now that I know of.”
Still, a lack of consensus isn’t likely to slow the trend. “At the end of the day it is, in essence, an abstraction that gives enterprises what they call their ‘four-plus-one’ strategy: one cloud that uses all the major cloud service platforms plus whatever is on-premises,” said Steve Mullaney, chief executive of Aviatrix Systems Inc., which sells a cross-cloud networking platform.
“Long-term, the infrastructure should be completely transparent,” Allan said. “Customers should choose the consumption rate and be able to move seamlessly across infrastructures.”
Commercial firms lead
In the commercial software arena, superclouds are becoming commonplace and even emerging from companies outside of the traditional technology sphere.
For example, the Goldman Sachs Financial Cloud, which was launched last November by Goldman Sachs Group Inc., delivers analytics tools developed internally by the financial services firm on top of the Amazon Web Services Inc. cloud. Goldman Sachs expects the package both to generate revenue and differentiate itself from other financial firms.
Deloitte LLP’s ConvergeHealth is one of a series of vertical market commercial services the company is assembling from multiple clouds. Capital One Financial Corp. recently entered the software business with a suite of data management tools it developed on top of Snowflake Inc.’s cross-cloud data warehouse. It sees its Slingshot cloud manager as the first of a line of cloud data management products that will create a new revenue stream.
“It’s challenging to bring new software to the world but our teams have perfected ways to build [software-as-a-service] with security, resiliency, performance and scale,” said Salim Syed, Capital One Software’s vice president of engineering. “We feel we have a very good product.”
Snowflake is one of the most advanced commercial supercloud providers, according to Wikibon, with a multicloud platform that spans all three major infrastructure-as-a-service platforms — AWS, Microsoft Corp.’s Azure and Google Cloud — while making the location of data transparent to users, according to Christian Kleinerman, Snowflake’s senior vice president of product.
“We didn’t want to become another version of silos in the data center,” he said. “It was important to have a single, central system that interconnects them all.” The technology the company developed, called Snowgrid, enables people to collaborate on a single copy of data with a common set of controls and governance policies regardless of where the data physically resides.
As recently as three years ago, few prospective customers asked for such features, but “over the last year they’ve realized the value of having a single stack,” Kleinerman said. Multicloud portability “has gone in importance from a one or two to a nine or 10. Cloud independence is now a major reason customers come to Snowflake.”
Other data management vendors such as MongoDB Inc., Couchbase Inc. and Databricks Inc. also tout cross-cloud compatibility as a selling point. MongoDB is “a developer-friendly platform that is moving to a supercloud model running document databases very efficiently… and creating a common developer experience across clouds,” Wikibon Chief Analyst David Vellante recently wrote.
Dremio Corp., a high-profile distributed data startup, addressed the problem by building an architecture that processes queries in a distributed fashion on the infrastructure where the data lives. “We connect to all these different things and push down the query processing to that system,” said CEO Tomer Shiran. “We will actually spin up Azure or [AWS] EC2 instances with our code running on them.”
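The pushdown idea Shiran describes can be illustrated in a few lines. This is a toy federated join, not Dremio's actual engine: each source evaluates its own filter "where the data lives," and only the reduced rows travel to the coordinator:

```python
# Toy illustration of query pushdown: filters run at each source so
# only matching rows cross the network for the final join.

class Source:
    def __init__(self, name, rows):
        self.name = name
        self.rows = rows
    def run_filter(self, predicate):
        # Executed where the data resides; only matches leave the source.
        return [r for r in self.rows if predicate(r)]

def federated_join(left, right, key, left_pred, right_pred):
    l = left.run_filter(left_pred)    # pushed down to source A
    r = right.run_filter(right_pred)  # pushed down to source B
    index = {row[key]: row for row in r}
    return [{**a, **index[a[key]]} for a in l if a[key] in index]

orders = Source("aws", [{"id": 1, "amt": 50}, {"id": 2, "amt": 900}])
users = Source("azure", [{"id": 1, "tier": "gold"}, {"id": 2, "tier": "basic"}])
big = federated_join(orders, users, "id",
                     lambda r: r["amt"] > 100, lambda r: True)
print(big)  # [{'id': 2, 'amt': 900, 'tier': 'basic'}]
```

In a real engine the predicates would be translated into each backend's native query language, but the data-movement economics are the same.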
Such technical wizardry is typical of the solutions that developers are inventing to deal with the supercloud’s inherent complexity. “It’s a lot of do-it-yourself stuff right now as far as what the stacks should look like,” said Deloitte’s Linthicum. “There haven’t been a lot of people thinking about it until recently because, until the last year, there wasn’t a lot of interest in it.”
The need to bridge cross-cloud incompatibilities has been driven by several factors. One is the rise of edge computing, an architecture that distributes processing across a wide network of devices and compute nodes. Each of the big cloud providers has its own edge strategy, but enterprises with far-flung networks don’t want to be tied to a single provider.
A big reason for that is latency. Edge devices, particularly those that collect data in real time, need to be close enough to a cloud data center, or region, to enable the high-speed communication that is needed for rapid decision-making. For latency-sensitive applications, that distance may be as little as 100 miles.
“Where you place elements of your workloads matters,” said Matt Baker, senior vice president of corporate strategy at Dell Technologies Inc. “Latency of more than 10 milliseconds can kill some applications. Locality becomes critically important.” Superclouds give organizations more latitude in which cloud regions to use.
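Baker's 10-millisecond rule of thumb translates into a simple placement check: probe each candidate region and pick the fastest one that fits the budget. A minimal sketch, with illustrative region names and latencies:

```python
# Hypothetical placement helper: choose the lowest-latency cloud region
# that meets a workload's latency budget (default 10 ms, per the rule
# of thumb quoted above).

def pick_region(measured_latency_ms: dict, budget_ms: float = 10.0) -> str:
    eligible = {r: ms for r, ms in measured_latency_ms.items() if ms <= budget_ms}
    if not eligible:
        raise RuntimeError("no region meets the latency budget; "
                           "consider an edge location closer to the device")
    return min(eligible, key=eligible.get)

probes = {"us-east-1": 4.2, "eu-west-1": 88.0, "us-west-2": 9.1}
print(pick_region(probes))  # us-east-1
```

A supercloud makes this kind of choice meaningful because the candidate regions can span providers, not just one vendor's map.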
Snowflake touts its multiregion reach as a strength. “Once a customer makes a query, all the code that runs in a specific region is native to that region,” Kleinerman said. “Instead of a software translation layer, the user model is the point of abstraction.”
A second factor is simplicity. Businesses don’t want to have to wrestle with the fine points of each cloud provider’s operating and management stacks, particularly at a time when IT skills are in desperately short supply.
“If you’re having trouble hiring for your AWS cloud, how are you going to add Azure into that?” asked Amanda Blevins, chief technology officer for the Americas at VMware.
Economics and labor scarcity mean that “the dumb thing would be to solve every security and FinOps problem for each cloud and keep around the skill sets to run them,” said Deloitte’s Linthicum. “We’re going to reach a complexity state where the number of tools and talents we need exceeds the operations budget.” FinOps is the practice of creating visibility and accountability to manage cloud spending throughout an organization.
The VMware study cited these low-level compatibility issues as a major disadvantage of the current multicloud landscape. “For developers, each cloud provider has unique infrastructure, interfaces and APIs that add work and slow the pace of their releases,” it said. “Each additional cloud increases the complexity of their architecture, fragmenting security, performance optimization and cost management.”
Snowflake’s Kleinerman likened the current situation to the need for smartphone developers to build functionally identical applications for both Apple Inc.’s iOS and Android platforms. “Developers are 10 times more excited than CIOs about this,” he said. “Instead of building three versions of one app, you can write it once and run it in multiple locations.”
A third motivator is to gain access to the offerings from the different cloud service providers that best meet their needs. Google, for example, is widely recognized as having the best analytics tools, while Microsoft’s business applications are its strength. “We don’t want to limit the ability of innovators to use best-of-breed services,” Linthicum said.
But there is a multitude of impediments to be overcome. One of the biggest is data gravity, or the difficulty of moving large amounts of data between clouds. Organizations building sophisticated data analytics and artificial intelligence training models don’t want to wait hours for a terabyte of data to move from one cloud to another.
“A lot of the solutions for shifting workloads don’t address the data challenge,” said Liberty Mutual’s Drobisewski. “Data mobility in many ways is the most challenging problem right now.”
Distributed data management vendors have come up with some clever ways to address the gravity problem, usually involving distributing queries to the infrastructure where the data resides. “You can’t be transferring terabytes of data to do a join,” said Dremio’s Shiran. His company uses local caching and technologies such as the nonvolatile memory express storage access and transport protocol, “so we don’t have to keep going back to the [original] resource for every single input and output.”
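The caching technique Shiran alludes to is, at its core, a read-through cache: repeat reads are served locally instead of crossing clouds again. A purely illustrative sketch (Dremio's actual NVMe-backed columnar caching is far more sophisticated):

```python
# Read-through cache: the first read of a block fetches it from the
# remote cloud store; later reads are served from local storage.

class CachingReader:
    def __init__(self, fetch_fn):
        self._fetch = fetch_fn      # expensive cross-cloud read
        self._cache = {}
        self.remote_reads = 0       # counts trips to the original source
    def read(self, key):
        if key not in self._cache:
            self.remote_reads += 1
            self._cache[key] = self._fetch(key)
        return self._cache[key]

reader = CachingReader(lambda k: f"bytes-of-{k}")
reader.read("block-7")
reader.read("block-7")
reader.read("block-7")
print(reader.remote_reads)  # 1
```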
Each cloud service provider also has its own approach to networking, security and backup “and those are burdens to developers,” Drobisewski said. “We’re looking at how we can have a common protocol that allows you to interact with all of those equally,” with a common API layer, “so you’re not that worried about which cloud provider you’re working with.”
Aviatrix built a supercloud that optimizes network performance and automates security across multiple CSPs. “We actually improve the functionality,” Mullaney said. “The CSPs provide primitive networking and are limited to a shared service designed for millions of small customers. We not only connect across all of them but also add in advanced services.” The approach appears to be resonating with customers: Mullaney said Aviatrix is on track to book $100 million in annual recurring revenue this year.
Then there’s the problem of data portability. Each CSP favors a different storage protocol, which doesn’t necessarily work with another’s. Each also offers different kinds of block, file and object storage. “Making a storage system look the same across every provider takes some doing,” Linthicum said.
Here, again, third parties are inventing solutions. Snowflake uses external tables that interact with each provider’s preferred storage format and loads data into a neutral format.
Veeam addressed the problem with a self-describing file system similar to the formats used by compression tools such as ZIP and RAR. The compressed object includes not only files but also the software needed to decompress them. ”It’s a file system within a file,” Allan said. “It enables the supercloud because now you have a portable, self-describing thing that can be moved anywhere, powered on and it knows the format of the host.”
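The "self-describing" idea can be sketched simply: the payload carries its own format metadata in a header, so any host can decode it without prior knowledge. This is an illustration of the principle, not Veeam's format; JSON and zlib stand in for whatever encoding is actually used:

```python
import json
import zlib

def pack(payload: bytes, fmt: str = "zlib") -> bytes:
    # The object records how it was encoded, making it portable across hosts.
    body = zlib.compress(payload) if fmt == "zlib" else payload
    header = json.dumps({"format": fmt, "length": len(body)}).encode()
    return len(header).to_bytes(4, "big") + header + body

def unpack(blob: bytes) -> bytes:
    hlen = int.from_bytes(blob[:4], "big")
    meta = json.loads(blob[4:4 + hlen])
    body = blob[4 + hlen:4 + hlen + meta["length"]]
    # The reader learns how to decode from the embedded metadata alone.
    return zlib.decompress(body) if meta["format"] == "zlib" else body

blob = pack(b"backup image contents")
print(unpack(blob))  # b'backup image contents'
```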
Security is also a multicloud hairball. “Each cloud provider has its own security tools and approaches,” the VMware study concluded. “In addition to implementing security controls in individual clouds, enterprises must also secure communication between clouds and their respective workloads, applications and end users.”
“We’ve got some real problems to solve around authentication, identity, data lineage and data security,” Maribel Lopez, founder and principal analyst at Lopez Research, said in a SiliconANGLE Supercloud22 interview. “Those are going to be sort of the tactical things we’re working on for the next couple of years.”
At Liberty Mutual, supercloud security has been “a huge focus,” Drobisewski said. In the wake of COVID-19 lockdowns, the company adopted a zero trust model for perimeter security and has since applied access controls down to the individual cloud API. “As we get more cloud native architectures in place, we’re looking to move our focus on zero trust beyond redefining the perimeter and taking a more workload and application-centric approach,” he said.
Finally, the observability challenges of monitoring even a single hybrid cloud are daunting. Sophisticated tooling will be needed to manage supercloud environments that may encompass thousands of services. “If any service suffers an outage it could cause you to have an even bigger outage,” Lowery said. “It’s a question of not knowing when you’re going off a cliff.”
Roll your own
All of those solutions have one thing in common: They are bespoke projects that are unique to individual vendors and user organizations. Are broad industry standards likely to emerge? Some efforts are underway.
Crossplane, for example, is an open-source project being incubated by the Cloud Native Computing Foundation that’s intended to let organizations build cross-cloud control planes. However, it requires users to run software containers and the Kubernetes container orchestrator, which are cloud-native constructs that don’t apply to most legacy applications.
“The CNCF can make things happen from a Kubernetes perspective, but they’re fairly limited to containers,” said Veeam’s Allan. “Kubernetes workloads are certainly rapidly expanding but the vast majority of workloads are images” running on bare-metal or virtualization layers that can’t easily be moved across platforms.
VMware is one of the most prominent providers bidding to become the arms dealer for superclouds. “There’s been a big shift to cross-cloud services at VMware to let customers run workloads where they choose,” said VMware’s Blevins. “We have those higher-level services to be able to manage and observe.”
For example, the company’s vRealize cloud management suite, CloudHealth FinOps application, Secure Access Service Edge and Tanzu Observability platform have all been adapted to support multiple clouds. The company’s virtual desktop infrastructure can play a part in unifying clouds at the user level. VMware also has a strong portfolio of edge services and relationships with all the major CSPs.
“The hyperscalers’ partnerships with us are recognition that this is something customers want and need,” Blevins said.
All this raises the question of whether the big public cloud providers will ever give in and agree to cooperate in the name of making superclouds possible. The expert consensus is that won’t happen soon, if ever. “Letting users run on [other clouds] or in their data centers isn’t part of their business model,” Blevins said.
There are signs, however, that even the biggest of the big now acknowledge that customers favor more interoperability and that a rising tide will ultimately lift all boats. “The reality is that they’re going to make more money if the supercloud is successful,” said Linthicum. “Adoption of cloud computing will go up. Everybody’s going to win.”
Lowery said the big cloud providers may have concerns about third parties taking over the relationship with their customers, but they don’t have much of a choice. “It won’t be possible for the hyperscalers to build superclouds for what everyone wants. Ultimately, they will see this as a way to sell more,” he said.
Dell’s Baker believes that all cloud services will ultimately be hybrid. “In the early days of architectural shifts, the best thing to do is to use as open an ecosystem as possible as opposed to carving out a stack,” as each hyperscaler has done so far, he said.
That doesn’t mean underlying infrastructure is ever likely to be completely abstracted. For example, private networking services typically establish a direct link between the customer and a particular cloud vendor. Some applications will be best built to take advantage of a particular database or analytics suite. And the supercloud may actually give platform providers more incentive to develop services that don’t lend themselves to cross-cloud abstraction.
Nevertheless, the overall trend is clear, and that’s good news for organizations that have struggled with years of complexity. “Clouds are really very sophisticated operating systems with services that meet business needs, not just programmer needs,” Lowery said. “We’re moving away from operating systems and focusing on the business value. That will continue.”
Indeed, said Linthicum, it’s the “single most exciting thing” in cloud computing. “It’s a tectonic shift,” he said. “It’s absolutely the right thing to do, but there’s a tremendous amount of work still to be done.”