ISG Software Research Analyst Perspectives

Cloud Computing Realities Part 2: Hybrid and Multi-Cloud Architectures

Written by David Menninger | Sep 27, 2022 10:00:00 AM

In my first perspective on cloud computing realities, I covered some of the cost considerations associated with cloud computing and how the cloud costing model may be different enough from on-premises models that some organizations are taken by surprise. In this perspective, I’d like to focus on the realities of hybrid and multi-cloud deployments.

Let’s start with why hybrid and multi-cloud deployments are important. The cloud offers the promise of instantly available and infinitely scalable resources. It also offers the promise of elasticity to increase or decrease computing resources as dictated by demand. For these reasons and others, our research shows that three-quarters of organizations are using cloud-based resources for analytics and data efforts. The research shows that among those with cloud-based deployments, 52% are using more than one cloud provider. It also shows that three-quarters of organizations are still using on-premises resources. The combination of these data points suggests there are significant requirements for hybrid and multi-cloud architectures.

Unfortunately, hybrid and multi-cloud today still largely means running systems in parallel on different platforms: for example, one database instance running in Amazon Web Services and another instance on-premises. The work to link those two systems is left largely to the organization running them. Products available in the market today offer little integration between on-premises deployments and cloud-based deployments. On the positive side, we are beginning to see consolidated consoles for managing multiple systems, regardless of where they are deployed. But each instance is still largely independent of the others.

To achieve the full benefit of the cloud – especially elasticity – we need to get to the point where systems are distributed across on-premises and multiple cloud providers without limitation. The boundaries between the deployments should be fluid. If additional resources are needed to meet heavy loads, the system should be able to burst from on-premises resources to cloud-based resources automatically. And it should be able to contract automatically once the additional resources are no longer needed. Further, workload management capabilities should be able to optimize deployments across instances, taking cost and performance differentials into consideration: expanding onto the least expensive, most performant resources and contracting off the most expensive, least performant ones as demand fluctuates. Few systems can operate that way today.
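To make that idea a bit more concrete, here is a minimal sketch of the kind of cost- and performance-aware expand-and-contract logic I have in mind. The pool names, prices, performance scores and capacities are purely illustrative assumptions; this is not any particular provider’s product or API.

```python
# A minimal sketch of cost- and performance-aware bursting across hypothetical
# resource pools. Names, prices, scores and capacities are illustrative only.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    cost_per_hour: float   # relative cost of adding a node here (0.0 = already-paid on-premises capacity)
    perf_score: float      # relative performance per node; higher is better
    active_nodes: int
    max_nodes: int         # hard capacity limit (on-premises racks are finite)

def scale(pools: list[Pool], demand: int, capacity_per_node: int) -> None:
    """Expand onto the cheapest, most performant pools first; contract off the priciest, least performant first."""
    current = sum(p.active_nodes for p in pools) * capacity_per_node
    if demand > current:
        shortfall = -(-(demand - current) // capacity_per_node)  # nodes needed, rounded up
        for p in sorted(pools, key=lambda p: (p.cost_per_hour, -p.perf_score)):
            add = min(shortfall, p.max_nodes - p.active_nodes)
            p.active_nodes += add
            shortfall -= add
            if shortfall == 0:
                break
    else:
        surplus = (current - demand) // capacity_per_node  # nodes that can be released
        for p in sorted(pools, key=lambda p: (-p.cost_per_hour, p.perf_score)):
            drop = min(surplus, p.active_nodes)
            p.active_nodes -= drop
            surplus -= drop
            if surplus == 0:
                break

pools = [
    Pool("on_premises", cost_per_hour=0.0, perf_score=1.0, active_nodes=8, max_nodes=8),
    Pool("cloud_a",     cost_per_hour=0.35, perf_score=1.2, active_nodes=0, max_nodes=100),
    Pool("cloud_b",     cost_per_hour=0.55, perf_score=1.5, active_nodes=0, max_nodes=100),
]
scale(pools, demand=12_000, capacity_per_node=1_000)  # on-premises is full, so 4 nodes burst into cloud_a
scale(pools, demand=6_000, capacity_per_node=1_000)   # cloud_a nodes are released first, then on-premises shrinks
```

Even this toy version shows that the decision is more than a simple autoscaling rule: it has to weigh cost and performance across pools that behave very differently.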

These considerations are complicated when comparing the sunk costs of on-premises resources with the various costing models of cloud-based resources. In addition, the boundaries between resources can’t be entirely fluid. As my colleague has pointed out, data sovereignty requirements dictate that some data cannot be moved across geographic boundaries. So, workload management capabilities will need to be even more sophisticated.
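Continuing the sketch above, a placement check along these lines could encode such a sovereignty constraint before any burst or migration is allowed; the dataset labels and region names are, again, purely illustrative.

```python
# A minimal sketch of a data-sovereignty check, assuming hypothetical dataset
# labels and regions. A rule like this would run before any burst or migration,
# so regulated data never leaves its permitted geography.
ALLOWED_REGIONS = {
    "eu_customer_data": {"eu-on-premises", "eu-west-1"},               # must stay within the EU
    "us_clickstream":   {"us-on-premises", "us-east-1", "eu-west-1"},  # freer to move
}

def can_place(dataset: str, target_region: str) -> bool:
    """Return True only if the dataset may reside in the target region."""
    return target_region in ALLOWED_REGIONS.get(dataset, set())

assert can_place("eu_customer_data", "eu-west-1")
assert not can_place("eu_customer_data", "us-east-1")  # blocked by data sovereignty
```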

Again, the point of this series of Analyst Perspectives is not to discourage the use of cloud resources. Hopefully, it can help raise awareness of some of today’s realities of using the cloud. Perhaps it might even prompt some software providers to consider product improvements to address some of these issues. In the next installment of this series, I’ll be addressing the intersection of business continuity and the cloud.

Regards,

David Menninger