When I brief executive and IT teams, my opening slide isn’t about models; it’s about data. Ninety-two percent of organizations report having an AI strategy, according to IDC’s Future Enterprise Resiliency & Spending Survey (June 2025; n=885). Nearly all are embedding generative AI across their operations, and only 1.7% are not yet adopting AI agents (IDC’s Future Enterprise Resiliency & Spending Survey, April 2025; n=893). The question is no longer whether AI will reshape applications, but how data architecture must evolve so that pilots become revenue-driving, risk-reducing, customer-enhancing systems.
AI workloads are elastic, data-hungry, and latency-sensitive; they perform best when compute and data are aligned for performance, compliance, and cost. Most core infrastructure is already in the cloud, with public cloud capturing 62.2% of DBMS revenue in 2024, according to IDC’s 2024 Worldwide Database Management Systems Software Market Shares. Multicloud extends this momentum: services run across providers and regions, close to customers and regulated data sets, without concentrating risk in a single tech stack.
Executives say, “Our data is distributed; consolidating into one platform is unrealistic.” I agree. Enterprises span regions, business lines, and decades of systems. A multicloud strategy doesn’t force relocation; it connects applications to data wherever it resides. Some 60.1% of organizations prefer cloud providers for hybrid or multicloud solutions (IDC’s Enterprise Infrastructure Survey, 2025; n=675). For leaders, it comes down to partnerships, principles, and the power to move workloads without friction.
That said, adoption isn’t friction-free. I see several recurring pressure points.
Consistent, low-latency data connectivity and integration. Real-time AI workloads require data and compute to be colocated. Without integrated vector databases, which 74.5% of organizations already plan to adopt (IDC IT Quick Poll on Agentic AI and Data, Q2 2025; n=102), teams risk performance slowdowns, stale context, or inconsistent results (a brief query sketch follows these pressure points).
Data sovereignty and compliance at scale. Residency and control requirements vary by country and industry. Already, 36.8% of organizations use sovereign public cloud, and 43.0% plan to adopt it. Among adopters, 44.5% rely on global platforms with sovereign controls (IDC’s Digital Sovereignty Survey, 2025; n=955). Balancing global scale with local rules requires location-aware data routing, auditable lineage, and regional controls.
Balancing dynamic scaling with cost efficiency. As AI pipelines expand, compute, storage, and networking costs fluctuate. Organizations must place workloads dynamically across clouds to minimize duplication, avoid vendor lock-in, and control spending.
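To ground the colocation point, here is a minimal sketch of querying vector and operational data together. It assumes a MongoDB Atlas cluster with an Atlas Vector Search index; the connection string, database, collection, field, and index names are all illustrative.

```python
from pymongo import MongoClient

# Hypothetical URI; in practice, connect to the region closest to the app.
client = MongoClient("mongodb+srv://app-user:<password>@eu-cluster.example.mongodb.net")
products = client["shop"]["products"]

# Placeholder vector; real values come from an embedding model.
query_embedding = [0.12, -0.08, 0.33]

pipeline = [
    # Atlas Vector Search runs against an index that lives alongside the
    # operational documents, so there is no hop to a separate vector store.
    {
        "$vectorSearch": {
            "index": "product_embeddings_idx",  # hypothetical index name
            "path": "embedding",                # vector field in the same document
            "queryVector": query_embedding,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    # Project operational fields from the very documents the app writes to.
    {"$project": {"name": 1, "price": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in products.aggregate(pipeline):
    print(doc)
```

Because the embeddings and the transactional fields sit in the same documents, the similarity search and the operational read are a single round trip rather than a cross-service hop.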
Multicloud matters for the following reasons.
Organizations can colocate AI and operational data by placing vector indexes and transactional stores close to each application surface (and user). This minimizes network hops and improves responsiveness.
They can enable hybrid transactional/analytical processing (HTAP) patterns by running real-time analytics on live operational data across providers and regions. This reduces batch delays and keeps models, rules, and agents current without duplicate pipelines (see the aggregation sketch after these points).
Multicloud architectures improve scalability by spanning multiple cloud providers, enabling global reach with lower latency and greater resilience against disruptions. They also deliver cost efficiency, letting enterprises balance price and performance, minimize egress and replication overhead, and size infrastructure dynamically.
Companies can align with sovereignty requirements by combining global platforms with sovereign controls or regions. Residency policies can be applied per data set or tenant under a unified control plane.
Finally, multicloud strategies reduce dependence on any single cloud provider, preserving flexibility and choice.
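As one illustration of the HTAP point above, the sketch below runs an analytical aggregation on live operational data while steering the scan away from transaction-serving nodes. It assumes an Atlas cluster with dedicated analytics nodes, which Atlas tags with nodeType: ANALYTICS; the cluster URI and collection names are hypothetical.

```python
from pymongo import MongoClient
from pymongo.read_preferences import Secondary

client = MongoClient("mongodb+srv://app-user:<password>@global-cluster.example.mongodb.net")
orders = client["shop"]["orders"]

# Route the analytical scan to analytics-tagged secondaries so OLTP
# traffic on the primary is untouched, yet the data read is live.
analytics_orders = orders.with_options(
    read_preference=Secondary(tag_sets=[{"nodeType": "ANALYTICS"}])
)

pipeline = [
    {"$match": {"status": "completed"}},
    {"$group": {"_id": "$region", "revenue": {"$sum": "$total"}, "orders": {"$sum": 1}}},
    {"$sort": {"revenue": -1}},
]

for row in analytics_orders.aggregate(pipeline):
    print(row)
```

No batch export or duplicate pipeline is involved; the aggregation reads the same documents the application is writing.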
When considering a multicloud data platform, organizations should look for:
Integrated vector and operational data so that embeddings live alongside JSON documents and tables, avoiding unnecessary ETL steps and speeding up responses.
HTAP and in-app analytics that allow OLAP workloads to run on the same live data that applications write to, reserving data warehouses for heavy or historical jobs.
Cross-provider, multi-region support with first-class capabilities across cloud providers. Broad regional options enable organizations to optimize performance, cost, and compliance.
Global reach with failover so that data and services can be deployed close to users and recover rapidly when incidents occur.
Residency controls, such as per-data-set or per-tenant placement, regional partitioning, and enforcement mechanisms to meet sovereignty mandates (see the zone-sharding sketch after this list).
Portability and reduced lock-in through open standards, automation, and tooling that allow workloads to move across clouds without extensive rewrites.
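For the residency-controls item above, one concrete mechanism is MongoDB zone sharding, which pins ranges of a sharded collection to shards running in specific regions. The sketch below is illustrative rather than a production recipe: it assumes a sharded cluster whose shards already run in EU and US regions, and every name is hypothetical.

```python
from bson.max_key import MaxKey
from bson.min_key import MinKey
from pymongo import MongoClient

client = MongoClient("mongodb+srv://admin-user:<password>@global-cluster.example.mongodb.net")
admin = client.admin

# Shard the collection on a compound key that leads with the tenant's region.
admin.command("enableSharding", "shop")
admin.command("shardCollection", "shop.orders", key={"tenantRegion": 1, "tenantId": 1})

# Pin shards (each deployed in a specific cloud region) to named zones.
admin.command("addShardToZone", "shard-eu-0", zone="EU")
admin.command("addShardToZone", "shard-us-0", zone="US")

# Route every EU-tenant chunk to EU-resident shards so data at rest
# never leaves the zone: residency enforced per tenant.
admin.command(
    "updateZoneKeyRange",
    "shop.orders",
    min={"tenantRegion": "EU", "tenantId": MinKey()},
    max={"tenantRegion": "EU", "tenantId": MaxKey()},
    zone="EU",
)
```

The same pattern applies per data set rather than per tenant by choosing a different leading shard-key field.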
The Road Ahead
The future is scale. IDC projects 1.3 billion AI agents by 2028, expanding workflows and demanding governed, observable access to operational data across clouds. That growth will challenge assumptions about locality, monitoring, governance, and unit economics. In that world, multicloud is not a buzzword but the model that aligns performance, compliance, and resilience while preserving flexibility.
Organizations with a unified data strategy and a multicloud-ready platform will scale next-generation apps confidently as the landscape shifts. For more context, read the IDC Spotlight, Enabling Next Generation Applications with Multicloud Platforms, sponsored by MongoDB.
Next Steps
Ready to modernize your applications? Visit our solutions page to learn about the MongoDB Application Modernization Platform (AMP).