Accelerating digital transformation with a modern cloud services approach
Multi-cloud, hybrid computing
The multi-cloud phenomenon supported by hybrid computing is at the forefront of the cloud’s modernization. It’s exciting because “it allows you to run different parts of your computing in different places,” Martin reflected. “This allows you as a customer to maybe run some stuff behind your firewall and then when you run out of capacity with your fixed resources, your data center, and you need more than that, you can burst out into one of the cloud providers and use their resources.” There are numerous advantages of this approach, including these:
♦ Best-of-breed functionality: Multi-cloud hybridization enables users to select the best resources, whether in clouds or on-premise. For example, if “you have AWS where you use, let’s say, SageMaker, and then you have GCP where you use BigQuery, you should be able to use your data from all of these places or clouds and nonclouds and get intelligence out of it,” Rao said.
♦ Security: Although it’s difficult for firms to match public cloud providers’ spending on cybersecurity, organizations can always keep highly sensitive data on-premise with this paradigm.
♦ Regulations: Compliance and data sovereignty issues determine where companies can actually store their data. Hybrid and multi-cloud deployments enable them to meet these requirements while still accessing resources in other locations.
♦ Latency: Latency, bandwidth strain, and networking costs also factor into which cloud is best for a specific application or use case. Hybrid computing lets users pair their data with their compute to address this concern.
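Martin’s bursting scenario can be sketched in a few lines. Everything here is illustrative: the capacity figure, the job names, and the `place_jobs` helper are hypothetical, and a real deployment would take these signals from its scheduler or autoscaler rather than hard-coded numbers.

```python
from dataclasses import dataclass

# Hypothetical fixed capacity of the on-premise pool, in vCPUs.
ON_PREM_CAPACITY = 100

@dataclass
class Job:
    name: str
    vcpus: int

def place_jobs(jobs):
    """Assign each job on-premise until the fixed capacity is exhausted,
    then 'burst' the remainder out to a public cloud provider."""
    placements = {}
    used = 0
    for job in jobs:
        if used + job.vcpus <= ON_PREM_CAPACITY:
            used += job.vcpus
            placements[job.name] = "on-premise"
        else:
            placements[job.name] = "cloud-burst"
    return placements

jobs = [Job("etl", 60), Job("reporting", 30), Job("ml-training", 40)]
print(place_jobs(jobs))
# The first two jobs fit behind the firewall; the third bursts to the cloud.
```

The point of the sketch is the decision boundary itself: fixed resources are used first, and only the overflow incurs cloud costs.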
Although cutting-edge multi-cloud deployments rely on cloud-native technologies, a more pressing concern for most organizations is avoiding the dreaded vendor lock-in syndrome. As Martin noted, most public clouds don’t charge to replicate data inside them, but do charge (in some cases, exorbitant) egress fees for moving data out for multi-cloud use cases. On the one hand, large public cloud vendors are increasingly creating what Melcher termed “an ecosystem for one-stop shopping.” These commonly include a variety of tools for content services, analytics, integration, and other horizontal needs. On the other hand, vendor lock-in contradicts aspects of the API economy. “You don’t necessarily have to be beholden now to a single vendor to deliver a solution when your content is in the cloud. A lot of vendors are opening access so you can integrate services from a variety of different vendors to create a transformative solution that wasn’t possible before,” Richman said.
Vendor lock-in typically takes the form of proprietary storage formats, high fees to move data out, and the appropriation of companies’ metadata. Shrewd firms can counteract these effects via client-side end-to-end encryption and by avoiding tying their applications to provider-specific services, relying instead on public clouds’ basic compute and storage. “In the long run, you’ve got to have an exit strategy with any vendor,” Martin forewarned. “That could be negotiating power for your pricing. Egress can include having to recode your applications, too.”
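A minimal sketch of the client-side encryption idea: data is encrypted before it ever reaches a provider, so the provider only stores ciphertext and no single cloud becomes a gatekeeper for the plaintext. To stay self-contained, this sketch derives a toy keystream from SHA-256 as a stand-in for a real cipher; a production system would instead use a vetted AEAD mode such as AES-GCM from a maintained cryptography library.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream built from SHA-256; illustrative only,
    # NOT a substitute for a vetted cipher such as AES-GCM.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt locally before upload; the cloud only ever sees this blob."""
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Decrypt after download, on any cloud or on-premise, with the same key."""
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

key = secrets.token_bytes(32)  # the key stays with the customer, not the provider
blob = encrypt(key, b"customer records")
assert decrypt(key, blob) == b"customer records"
```

Because the key never leaves the customer, the encrypted blob can be replicated to or migrated between providers without extending trust to any of them, which is exactly the exit-strategy posture Martin describes.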
So long as they stay wary of vendor lock-in, organizations can combine multi-cloud deployments with on-premise ones, primarily through SaaS and PaaS options, which are especially relevant for content services. The more meaningful cloud use cases, however, are predicated on dynamically running applications wherever deployment is best, using the cloud’s elasticity. Spinning up a knowledge graph on demand to aid text analytics for real-time customer service assistance is one example.
That portability is almost exclusively provided by cloud-native approaches involving containers, Kubernetes, and serverless computing. These resources are well-suited to digital transformation: they tie cost to demand, provide resiliency, and, as Rao explained, support mobile and web as well as IoT and edge workloads on low-cost infrastructure. Here are some specific benefits:
♦ Hybrid and multi-cloud support: These cloud-native expedients underpin hybrid computing (especially with orchestration platforms such as Kubernetes) by enabling organizations to fluidly position resources wherever they need them “in a matter of minutes,” Martin said. Moreover, they do so by decoupling applications from the cloud resources they’re running on. This supports the most modern of cloud characteristics. “Gartner has this term ‘bring your own cloud,’” Melcher noted. Thus, with containers, “you bring your own cloud and the product can run on it,” he said.
♦ Reduced infrastructure: As Rao observed, cloud-native options are also renowned for their capacity to drastically decrease the infrastructure needed to run applications. Infrastructure (or overhead) here means not just hardware but also the personnel expertise required to implement hardware and networking practicalities. Serverless computing goes even further than the ephemeral infrastructure of the Kubernetes and containers approach, with a model that’s “as easy as connecting some Step Functions and Lambda functions and then moving things back and forth,” Rao said.
♦ Elasticity and scalability: Cloud-native capabilities are aligned with the elasticity and scalability that are some of the cloud’s core benefits. Melcher described these advantages with the following use case: “You might need a tremendous amount of compute. If you’re going to go buy that compute and stick it in a data center, that’s going to take you a long time to complete. But in the cloud, you can spin up a farm of 1,000 servers for an hour and be done.”
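Rao’s serverless point is visible at the code level. The sketch below follows AWS Lambda’s Python handler convention of a function taking an event and a context; the event shape and the greeting logic are hypothetical, chosen only to show that the function carries no server, capacity, or scaling logic at all. The platform handles those concerns.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style entry point. The code only expresses
    business logic; provisioning and scaling belong to the platform."""
    name = event.get("name", "world")  # hypothetical event field for illustration
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }

# Locally, the handler is just a function call; in a serverless platform,
# it is invoked per request and instances scale with demand.
print(handler({"name": "cloud"}, None))
```

This is the sense in which infrastructure becomes “reduced”: the unit of deployment shrinks from a server, or even a container, to a single function.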
Companies and Suppliers Mentioned