Europe’s next DMA fight is about cloud lock-in and AI control

The EU’s Digital Markets Act was first understood through app stores, search engines, social networks, advertising systems, browsers and mobile operating systems. That was never the whole story. The next fight is moving into the infrastructure beneath digital markets: cloud computing, AI assistants, search data, operating-system access, model deployment, switching costs, data portability and the commercial terms that decide whether a business can move at all.

The European Commission’s first DMA review has made that shift explicit. It says the law remains fit for purpose, but cloud services and artificial intelligence are now priority areas for fairer and more contestable digital markets. The Commission has already opened three cloud market investigations, including two that examine whether Amazon Web Services and Microsoft Azure should be designated as gatekeeper services. It is also examining whether certain AI services could fall under the DMA’s “virtual assistant” core platform service category. Reuters reported the move as a new phase in the EU’s effort to turn the DMA toward cloud and AI after early enforcement against existing platform bottlenecks.

This is not only a Big Tech compliance story. It is a business infrastructure story. Cloud providers host the software, databases, analytics pipelines, AI training workloads and mission-critical systems used by banks, manufacturers, retailers, hospitals, public bodies, media firms and start-ups. AI services are becoming the interface through which workers search, write, code, buy, schedule, analyse, summarise and act. If a few companies control the inputs, distribution points and defaults around both layers, Europe’s concern is no longer limited to consumer choice on a phone screen. It becomes a question of whether companies can compete, switch suppliers, build AI products, protect sensitive data and avoid being trapped inside a vertically integrated stack.

The DMA is moving from platform screens to digital infrastructure

The DMA’s original political promise was easy to explain: very large digital platforms should not be allowed to use their gatekeeper position to tilt markets in their own favour. The law identifies “core platform services” and imposes obligations on companies designated as gatekeepers. The official DMA page describes the law as a tool to make digital-sector markets fairer and more contestable, with objective criteria for identifying large platforms that act as gateways between business users and end users.

Early enforcement naturally focused on visible choke points. App developers cared about app-store rules. Advertisers cared about data and ad intermediation. Publishers and rivals cared about search ranking and access. Consumers saw browser choice screens, consent flows, app installation changes and data portability tools. Those issues mattered because they sat directly between a business and its audience.

Cloud and AI move the DMA into a deeper layer. Cloud computing is not merely another online service. It is the rented infrastructure of the digital economy: computing power, storage, networking, databases, developer tools, machine-learning platforms, identity services, observability systems and deployment environments. AI adds a new control layer on top of that infrastructure. The models run somewhere. The assistants need access to operating systems, apps, data, permissions and user attention. The best-funded AI systems are often distributed through platforms already designated under the DMA.

That is why Europe’s next regulatory conflict is broader than whether AWS or Azure must file another compliance report. Cloud and AI decide the shape of future dependency. A retailer that builds its customer analytics, recommendation engine, data lake and internal AI assistant around one cloud provider may face high switching costs for years. A start-up that trains and deploys models through one hyperscaler may become dependent on that provider’s chips, credits, managed services and marketplace rules. A software vendor whose product needs deep access to Android, iOS, Windows, cloud identity or search data may find that the strongest AI services get privileged routes into the user’s workflow.

The Commission’s review states that the DMA has already produced changes such as consent mechanisms, data portability tools, choice screens and interoperability measures, but it also records stakeholder concerns about sidestepping, technical difficulty and incomplete market impact. The Commission decided not to rewrite the DMA immediately; it chose enforcement, regulatory dialogue, awareness and targeted procedural improvements instead. That matters because it shows a regulator trying to use an existing framework before asking for a new one.

Cloud and AI test whether that framework can stretch. The DMA already lists cloud computing services and virtual assistants among core platform service categories under the regulation. The issue is not whether the words exist in the law; it is whether the Commission can prove that a specific service acts as an “important gateway”, and whether the current obligations fit the technical reality of cloud infrastructure and AI distribution. A browser choice screen is visible. A cloud licensing term, a proprietary API dependency or an AI assistant’s privileged operating-system permission is harder for a buyer, developer or regulator to see.

That opacity is precisely why the regulatory battlefield is shifting. Platform power has moved from the shop window into the wiring.

The first DMA review gave Brussels a mandate to look below the surface

The Commission’s first review was legally required before 3 May 2026 under the DMA’s review cycle. The review did not conclude that the law had failed. It concluded the opposite: the framework remains fit for purpose, but full impact will take more time and enforcement must stay active. That conclusion lets the Commission avoid two traps. It does not have to admit that the DMA was too narrow. It also does not have to launch a broad legislative reopening that could take years and invite heavy lobbying.

The review’s language on cloud and AI is more than a side note. It says cloud services and AI are “critical priorities” for fair and contestable digital markets. It points to cloud market investigations opened in November 2025 and AI-related specification proceedings involving Alphabet. It identifies AI themes around interoperability, self-preferencing, access to data, cloud dependencies and cooperation across regulatory tools. Those themes map directly onto the structure of AI competition.

The timing is revealing. The Commission has now moved through an early enforcement cycle. It has designated gatekeepers, received compliance reports, opened investigations and imposed its first DMA fines. Apple and Meta were fined €500 million and €200 million respectively in April 2025 for DMA breaches, according to the Commission’s announcement. That enforcement record gave the DMA institutional weight, even though many legal fights remain open and gatekeepers continue to challenge the Commission’s interpretation.

The cloud and AI turn now rests on a simple regulatory judgement: markets can become closed long before consumers see a monopoly price or a broken product. A company may still have choices on paper. It may technically be able to export data. It may have a contract that allows termination. It may even use multiple providers. The competitive harm sits in the friction: retraining staff, rewriting code, rebuilding databases, duplicating compliance evidence, reworking identity architecture, renegotiating software licences, absorbing downtime risk and losing access to native AI services tied to the original provider.

The DMA was built as an ex ante law, meaning it tries to prevent unfair gatekeeper conduct before damage becomes entrenched. That is why cloud and AI belong in the conversation. Traditional competition law can punish abuse after a lengthy case. The DMA can impose conduct rules on designated gatekeepers ahead of time. In cloud and AI, waiting for years may leave the market already locked around proprietary infrastructure.

The review also states that the Commission does not plan to change the DMA’s designation criteria for now. That is a strategic choice. The Commission is saying it can work with the current combination of quantitative thresholds and qualitative market investigations. For cloud, that matters because cloud may not fit consumer-facing user thresholds as neatly as social networks or app stores. A cloud service can be an important gateway without hundreds of millions of direct end users logging in. Its power may run through business customers whose services reach those end users downstream.

That distinction explains the political sensitivity. If the Commission can designate a cloud platform despite an imperfect fit with classic user metrics, the DMA becomes more flexible. If it cannot, the law risks missing the infrastructure layer behind the next decade of digital competition.

Gatekeeper power now sits in the stack, not only at the interface

The public often sees platform power as a matter of screens: which app store appears on a phone, which search engine opens by default, which social feed captures attention. Business users experience platform power differently. They feel it as a stack.

A stack is the chain of technologies that makes a digital service work. At the bottom are chips, data centres, networks and energy. Above that sit cloud infrastructure services. Then come databases, analytics tools, identity systems, software-development pipelines, security layers, model-training environments and application platforms. At the top are apps, AI assistants, search interfaces, marketplaces and consumer-facing experiences.

A gatekeeper at one layer can influence adjacent layers. A mobile operating system can steer users toward its own assistant. A search engine can shape discovery for AI answers and rival services. A cloud provider can bundle storage, compute, databases, AI model access and productivity software. A productivity suite can feed identity, documents and collaboration data into an AI assistant. An advertising platform can connect audience data, measurement, bidding and commerce.

The DMA’s original gatekeeper list already shows that many designated firms are not single-product companies. The Commission’s gatekeeper portal lists Alphabet, Amazon, Apple, ByteDance, Meta, Microsoft and Booking among current designated gatekeepers, with designated core platform services including Google Search, Google Play, YouTube, Android, Chrome, Amazon Marketplace, Amazon Advertising, Apple’s App Store, iOS, Safari, iPadOS and other services.

That matters for AI because AI services rarely compete in isolation. A virtual assistant is stronger when it can read the calendar, send email, access files, interact with apps, use device sensors, call APIs, remember preferences and complete transactions. A model is stronger when it has reliable compute, distribution, developer tooling, enterprise trust and access to proprietary data. A cloud AI platform is stronger when it is bundled into a broader software ecosystem.

The Commission’s Android AI interoperability case captures the point. The proposed measures would let competing AI services interact more effectively with Android applications and perform tasks such as sending email through a user’s preferred app, ordering food or sharing a photo. The Commission says Google largely reserves those capabilities for its own AI offerings on Android phones and tablets, including Gemini.

The case is about Android, but the principle travels. If the operating-system provider controls which AI assistant gets deep task access, then AI competition is not decided only by model quality. It is decided by permission architecture. If the cloud provider controls the easiest route to GPUs, managed model hosting, vector databases, proprietary developer tools and enterprise identity, then AI competition is not decided only by creativity. It is decided by infrastructure access.

The stack turns competition into a dependency chain. A company that appears free at the application layer may be constrained by choices made two or three layers below. A European AI start-up can build a strong model, but if training costs, cloud credits, deployment tools, marketplace distribution and enterprise procurement routes are controlled by a small group of vertically integrated hyperscalers, the market may still tilt toward incumbents.

This is the real reason cloud and AI have become Brussels’ next battlefield. The EU is not regulating cloud as a tidy sectoral add-on. It is trying to understand whether the stack itself is becoming the new platform.

Cloud computing turns switching into a competition test

Cloud competition is not measured only by headline market shares. It is measured by the cost and difficulty of changing course after a business has committed. In consumer markets, switching might mean downloading a different app. In cloud markets, switching may mean redesigning a company’s technical architecture.

A workload is rarely portable in the abstract. It depends on identity settings, networking rules, database formats, storage classes, monitoring systems, security policies, automation scripts, API behaviour, latency assumptions and staff knowledge. Even open-source technologies behave differently when wrapped in managed cloud services. A Kubernetes deployment may be more portable than a proprietary serverless application, but it still depends on surrounding services. A database migration may be possible, yet risky, slow and expensive. A data lake may be exported, but the analytics jobs, permissions and governance rules around it may not move cleanly.

The Commission’s cloud investigation announcement names the relevant practices directly: interoperability obstacles, limited or conditioned access to business-user data, tying and bundling, and potentially imbalanced contractual terms. The same announcement says cloud computing is the backbone of many digital services and crucial for AI development.

That combination is why switching is a competition issue. A cloud customer may begin with cheap credits, fast deployment and attractive managed services. After several years, the same customer may discover that its cost base, security architecture, developer workflow and AI roadmap depend on one provider’s proprietary services. A rival cloud provider may offer lower prices or better performance for some workloads, but the migration cost may erase the saving. The incumbent does not need to block switching outright. Friction can do the work.
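The arithmetic of that friction is simple to sketch. All figures below are hypothetical, chosen only to illustrate how a one-off migration cost can erase an apparent saving:

```python
def switching_breakeven_months(monthly_saving: float,
                               migration_cost: float) -> float:
    """Months of lower bills needed to recover a one-off migration cost."""
    if monthly_saving <= 0:
        raise ValueError("no monthly saving, no payback")
    return migration_cost / monthly_saving


# Hypothetical figures: a rival quotes 15% off a 100,000-euro monthly
# cloud bill, but re-engineering, staff retraining and parallel running
# cost 900,000 euros one-off.
saving = 100_000 * 0.15   # 15,000 euros per month
months = switching_breakeven_months(saving, 900_000)
# A 60-month payback can make switching commercially irrational even
# though it is contractually and technically possible.
```

The incumbent never has to refuse the migration; it only has to keep the numerator large.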

The UK’s Competition and Markets Authority raised similar concerns in its cloud investigation. Its final decision summary said competition was not working well in cloud services markets. It described public cloud infrastructure services as shared computing resources on demand, including processing, storage, networking, IaaS and PaaS. It also found that UK cloud customers spent £10.5 billion in 2024, with spending growing nearly 30% per year since 2020, and said cloud services underpin AI model development and deployment.

The CMA also found the IaaS market highly concentrated, with Microsoft and AWS each holding a 30–40% share in 2024, while Google had a much lower 5–10% share in IaaS and PaaS. Its summary pointed to the potential durability of those positions and the role of AI-related cloud services.

Europe’s DMA cloud review is therefore not occurring in a vacuum. National and international competition authorities have been studying similar problems: egress fees, committed spend discounts, software licensing, technical barriers, interoperability limits and the commercial power of integrated hyperscalers. The EU’s distinctive move is to ask whether a digital-markets law designed for gatekeepers can address those problems before they harden further.

For businesses, the immediate question is not whether the Commission wins every case. The immediate question is whether cloud procurement must now treat portability as a strategic asset rather than a technical afterthought. Switching rights are worth little if systems are designed in a way that makes switching commercially irrational.

AWS and Azure investigations signal a qualitative test for cloud gatekeepers

The Commission’s November 2025 cloud investigations are structurally important because they examine AWS and Azure even though the announcement says the services may not meet the DMA gatekeeper thresholds for size, user number and market position. The question is whether they act as important gateways between businesses and consumers despite that threshold issue.

That is a qualitative test. It asks whether a service has gateway power because of its role in the digital economy, not only because of a simple count of users. Cloud infrastructure is a strong candidate for that approach because its end-user impact is indirect. A citizen may never log in to Azure, but the public service they use may run on Azure. A shopper may never know that a retail site uses AWS. A patient may not see the cloud provider behind a health platform. The business user is the direct customer, but the cloud service can shape downstream markets.

The Commission will assess factors such as size, number of users, network effects, scale and scope effects, lock-in, switching costs, conglomerate structure and vertical integration. Those factors are well suited to cloud. Hyperscalers benefit from scale in data centres, procurement, chips, networking, global regions, security compliance and developer ecosystems. They benefit from scope across compute, storage, databases, analytics, AI, productivity, advertising, marketplace distribution and enterprise sales. They benefit from vertical integration when cloud services connect to software licences, AI models, operating systems or marketplaces.

AWS and Microsoft will argue that cloud remains competitive, fast-moving and full of customer choice. There is evidence for dynamism: customers can and do use multiple providers, new AI infrastructure firms are rising, Google, Oracle and specialist GPU clouds are investing heavily, and large buyers have bargaining power. Yet competition law does not require a market to be static before regulators intervene. It asks whether specific conduct or structural features limit contestability.

The cloud investigations also open a second question: do the DMA’s existing obligations fit cloud, or does the Commission need to update obligations through a delegated act? The Commission’s press release says the third investigation will assess whether current DMA obligations can tackle cloud practices and may lead to a report within 18 months proposing updates to cloud-related obligations by delegated act under Articles 12 and 49.

That is the line businesses should watch. Designating AWS or Azure would be one kind of intervention. Updating obligations for cloud would be broader. It could shape interoperability, data access, contract terms, bundling and portability across designated cloud services. It would also invite intense debate about security, service reliability, proprietary innovation and the boundary between legitimate product integration and exclusionary lock-in.

A crude cloud rule could create operational risk. A weak rule could leave customers trapped. The Commission’s task is harder than in a consumer-facing app-store case because enterprise cloud systems carry uptime, cyber, data protection and sector-specific compliance requirements. A bank cannot switch a regulated workload as casually as a user changes a browser. A hospital cannot test interoperability rules by experimenting on live patient systems. A manufacturer cannot treat production downtime as a minor inconvenience.

That is why the qualitative test matters. If the EU treats cloud gatekeeping as a technical infrastructure problem, it can focus on commercial and architectural bottlenecks. If it treats cloud merely as another platform brand, it will miss where the real constraints sit.

AI turns cloud power into a control point for future markets

AI has made cloud infrastructure more politically charged because advanced AI is compute-hungry, data-hungry and distribution-hungry. Cloud providers supply the compute. Platform companies often hold the data and distribution. The same firms increasingly offer models, AI assistants, developer platforms and enterprise software integrations.

The Commission’s DMA review identifies “cloud dependencies” as one of five AI-related themes, especially access to cloud computing services for training and deploying AI models and services. It also identifies interoperability, self-preferencing, access to data and cross-regulatory cooperation.

Those themes describe the AI value chain. A smaller AI firm needs training infrastructure, fine-tuning tools, deployment capacity, customer data access, APIs, app distribution, operating-system integration and trust. Any bottleneck can matter. A model that cannot reach user workflows may lose to a weaker model with better distribution. A service that cannot access device-level capabilities may lose to the default assistant. A developer that cannot obtain compute on predictable terms may slow down while a vertically integrated competitor moves faster.

Global cloud spending shows why the infrastructure layer is becoming more powerful. Synergy Research Group reported that enterprise spending on cloud infrastructure services reached $107 billion in Q3 2025, with Amazon, Microsoft and Google together accounting for 63% of spending. It later reported Q4 2025 spending of $119 billion and a 2025 full-year market of $419 billion, with generative AI helping drive growth.

AI demand does not only increase the size of the cloud market. It changes the competitive meaning of cloud scale. Providers with deep capital reserves can secure chips, build data centres, design networking fabrics, negotiate energy supply, create proprietary silicon, subsidise start-ups through credits and package AI services into enterprise contracts. Providers without that scale may compete in specialised niches, but they face a much harder path to broad platform relevance.

Europe’s concern is not that large cloud providers are successful. The concern is that AI could make existing cloud concentration self-reinforcing. If the strongest AI developers receive the best compute terms from the largest cloud providers, and those cloud providers use AI services to attract more enterprise workloads, the loop tightens. If enterprise customers adopt a provider’s AI assistant because it is bundled into the productivity suite or cloud platform they already use, alternative AI vendors may struggle to reach the same customers even with strong products.

The CMA’s cloud summary shows the same logic. It says AI is changing how competition works in cloud services, particularly through accelerated compute and AI-based cloud services. It also notes that Microsoft, AWS and Google are vertically integrated providers of accelerated compute and AI-related cloud services, including through partnerships with AI model developers.

That finding matters beyond the UK. It captures a structural reality for Europe: cloud is no longer a neutral utility sitting under AI. Cloud is becoming part of the AI product. The provider’s compute, model catalogue, developer tools, data services, security controls and enterprise software integrations increasingly form a single commercial offer.

The DMA will struggle if it treats AI as a standalone application category. AI competition is entangled with cloud terms, operating-system access, search data, default placement and enterprise contracts. Brussels appears to understand that. The question is whether its tools can keep pace without turning every technical integration into a legal battle.

Virtual assistants are becoming the new default layer

The Commission’s review says it will examine whether certain AI services should potentially be designated as “virtual assistant” core platform services. That phrase may sound narrow, but it could become one of the most contested definitions in the DMA.

A traditional virtual assistant was a voice-triggered tool that answered basic questions, set timers, sent messages and controlled smart-home devices. An AI assistant is broader. It can search, summarise, draft, reason over documents, generate code, call external tools, schedule meetings, interact with apps, manage files and execute multi-step tasks. It may work through voice, text, screen context, browser extensions, productivity software, mobile operating systems or enterprise applications.

That expansion creates a default-layer problem. The assistant closest to the operating system or productivity environment can become the first point of action. If a user asks an assistant to book travel, compare suppliers, summarise legal documents, find a restaurant, buy software or analyse a spreadsheet, the assistant can shape which services are considered and which are ignored. The assistant does not need to look like a search engine to influence discovery. It can become search, recommendation, transaction and workflow automation combined.

The Android case shows the early version of this fight. The Commission’s proposed measures would allow competing AI services to be easily activated, including through a custom wake word, and to interact with applications on Android devices. Reuters reported that EU regulators said Google currently keeps key Android capabilities for Gemini, while Google argued the intervention would undermine device-maker autonomy and privacy protections.

The legal issue is framed as interoperability, but the commercial issue is default power. An AI assistant with deeper device access can perform more useful tasks. Users will prefer the assistant that works. Developers will support the assistant with users. Service providers will integrate with the assistant that drives transactions. The default becomes stronger because it is more capable, and it is more capable because it is the default.

The same logic applies in enterprise software. If a productivity suite embeds an AI assistant across email, documents, calendars, meetings, chat and files, the assistant becomes a work interface. Competing assistants may exist, but they may lack the same permissions, context, deployment ease or procurement path. If the enterprise suite is tied to a dominant identity system and cloud platform, the assistant’s advantage extends further.

A virtual assistant designation under the DMA could therefore have large consequences. It might trigger obligations around interoperability, self-preferencing, data access or user choice. It could force a gatekeeper to treat rival assistants more equally, or to expose certain interfaces under fair conditions. It could also raise difficult security questions. Giving a rival assistant access to email, files, device capabilities and business apps is not trivial. Permission design, auditability, data minimisation, cyber risk and liability all matter.

Europe’s challenge is to preserve contestability without mandating reckless access. A fair assistant market does not require every service to access everything. It requires gatekeepers not to reserve essential capabilities for themselves while presenting the result as a natural product advantage. The line between integration and exclusion will define the next phase of AI platform regulation.

Search data and AI answers bring discovery back into the DMA debate

Cloud and AI are not separate from search. Search is becoming one of AI’s most sensitive inputs and one of its most contested distribution channels. AI systems need fresh information, query understanding, ranking signals, web indexes and user interaction data. Search engines already sit in the DMA’s core platform service list, and Alphabet’s Google Search is designated under the DMA.

The Commission has opened specification proceedings involving Alphabet that address AI-related concerns, including interoperability and online search data sharing. The DMA review says these proceedings are intended to help Alphabet meet obligations around interoperability and access to search data in the context of AI services.

The commercial stakes are clear. If AI answer engines, chatbots or assistants need search data to compete, then access to that data becomes a contestability issue. A rival may build a strong user interface and model, but without high-quality search inputs it may deliver weaker answers, slower updates or poorer relevance. If the incumbent search provider can use its own data to improve its AI services while limiting rivals’ access, AI competition may reproduce search concentration.

This is not simply a matter of handing over a database. Search data includes query logs, ranking signals, click patterns, freshness indicators, spam detection, entity understanding and other sensitive information. Sharing can raise privacy, security, trade-secret and free-riding concerns. Regulators must distinguish between access needed to prevent gatekeeper self-reinforcement and access that would unfairly appropriate investment.

The DMA’s design tries to handle such tensions through specific obligations rather than broad antitrust balancing in every case. Yet AI strains the model because the relevant input may change quickly. A static dataset may be less useful than ongoing access. A narrow API may be too limited. A broad API may be intrusive. A rule that works for traditional search engines may not fit AI agents that combine search, summarisation, tool use and transaction execution.

Search also connects to publishers and content owners. AI answers can reduce traffic to websites even while using web content to generate responses. The DMA is not a copyright law, and it cannot solve every dispute over AI training or content display. Still, search data access and AI answer distribution will shape whether smaller search providers, vertical search services, news publishers and specialised AI tools can reach users.

For businesses, the practical lesson is that discovery channels are changing. Search engine optimisation alone will not be enough. Companies will need to understand how AI assistants retrieve, cite, rank and act on information. They will need structured data, credible sources, machine-readable product information, strong brand signals and direct customer relationships. Regulation may affect access to search data, but it will not eliminate the commercial shift from search pages to AI-mediated answers.
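One concrete form of machine-readable product information already exists: schema.org structured data embedded in web pages. The sketch below builds a minimal JSON-LD record; the schema.org “Product” vocabulary is real, but the product names and values are invented for illustration.

```python
import json

# Hypothetical product record using the schema.org "Product" vocabulary.
# AI retrieval systems and search engines can parse this kind of
# structured data more reliably than free-form page text.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "sku": "EW-100",
    "description": "Illustrative product entry with machine-readable fields.",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "EUR",
        "price": "49.00",
        "availability": "https://schema.org/InStock",
    },
}

# Typically embedded in a page inside a
# <script type="application/ld+json"> ... </script> element.
print(json.dumps(product_jsonld, indent=2))
```

Structured markup does not guarantee that an AI assistant will surface a product, but it removes one avoidable reason to be ignored.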

The DMA’s cloud and AI turn therefore connects the bottom of the stack to the top of the funnel. Cloud controls where AI runs. Search controls what AI knows and recommends. Operating systems control how AI acts. Businesses sit in the middle, exposed to each layer.

The Data Act reduces one kind of lock-in but cannot solve cloud dependency alone

The EU Data Act is highly relevant to the cloud debate because it creates rules for switching between data-processing services. The Commission’s Data Act page says the rules set a framework for customers to switch effectively between providers and unlock the EU cloud market.

That matters. Legal rights to switch and transfer data reduce one form of lock-in. The Data Act can make it harder for providers to use contractual or fee barriers to keep customers from moving. It also sits alongside the DMA because both laws address contestability, though from different angles. The Data Act applies broadly to data access and switching. The DMA targets designated gatekeepers and unfair gatekeeper practices.

Yet data portability is not the same as workload portability. A customer may be able to move data while still being unable to move the application economically. A business might export tables from a database, but the stored procedures, analytics jobs, permissions, integrations, machine-learning pipelines and compliance evidence may require major redesign. A company may transfer files but lose the managed AI services, identity policies, monitoring dashboards or data-governance workflows tied to the original cloud.

That is why the DMA review’s cloud focus remains necessary even after the Data Act. The Commission’s cloud investigation announcement specifically includes interoperability obstacles, data access limits, tying, bundling and imbalanced terms. Those practices go beyond a narrow right to extract data.

Provider responses to switching pressure show how regulation can change market behaviour before full litigation. AWS announced in March 2024 that it would waive data transfer-out charges when customers move outside AWS, saying the waiver follows the direction set by the European Data Act. Google Cloud later launched a no-cost “Data Transfer Essentials” offer for EU and UK customers using multicloud data transfers between Google Cloud and other providers, saying it responded to the principles of cloud interoperability and choice in the Data Act. Microsoft’s Azure documentation says Azure offers at-cost data transfer for European customers and cloud solution provider partners transferring data between Azure and another data-processing service provider in interoperable, parallel-use scenarios.

These changes are meaningful, but they also show the limits of fee-focused remedies. Egress fees are visible and unpopular, so providers can change them. Technical dependency is less visible. Software licensing can be buried in enterprise agreements. Managed-service dependencies appear one architecture choice at a time. AI tooling can be wrapped into attractive developer workflows.

For companies, the Data Act should be treated as a minimum, not a migration plan. Contract teams need to understand switching rights. Engineering teams need to design for portability before dependency sets in. Finance teams need to calculate long-term switching cost, not only first-year discounts. Security and compliance teams need to decide which workloads require cloud exit plans.

A legal right to leave is useful only when the system has been built so leaving is possible. That is the gap the DMA may try to fill for gatekeeper cloud services.

Egress fees became the visible symbol of a much deeper cloud problem

Few cloud billing concepts have become as politically symbolic as egress fees. These are charges for moving data out of a cloud provider’s network. Providers have long argued that network traffic has real infrastructure cost. Customers and rivals argue that high or unpredictable egress fees discourage switching and multicloud use.

The reason egress fees attracted regulatory attention is simple: they turn movement into a bill. A company may want to transfer data to another provider, replicate data for resilience, run analytics elsewhere or maintain a multicloud architecture. If outbound transfer fees are high, the provider that already holds the data gains a structural advantage. The fee does not need to ban switching. It can make switching unattractive.

Egress fees are not the whole lock-in story, but they are easy to understand. They show how a cloud provider’s pricing structure can shape architecture. A customer may avoid multicloud resilience because cross-cloud data transfer is expensive. A data-heavy business may keep workloads where the data already sits. A start-up may accept a cloud credit package and later discover that moving large datasets away carries cost and operational complexity.
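The arithmetic behind that structural advantage is easy to sketch. Assuming a simple tiered per-gigabyte egress price (the tier sizes and rates below are hypothetical, not any provider's actual pricing), the one-off cost of moving a large dataset out scales linearly with data volume and quickly becomes a line item a finance team notices:

```python
def egress_cost(gb, tiers):
    """Compute a one-off data transfer-out cost from tiered per-GB rates.

    `tiers` is a list of (tier_size_gb, rate_per_gb) pairs applied in
    order; the last tier absorbs any remainder. All rates hypothetical.
    """
    cost, remaining = 0.0, gb
    for size, rate in tiers:
        charged = min(remaining, size)
        cost += charged * rate
        remaining -= charged
        if remaining <= 0:
            break
    return cost

# Hypothetical tier structure: first 10 TB at 0.09/GB, the rest at 0.05/GB
tiers = [(10_000, 0.09), (float("inf"), 0.05)]

print(egress_cost(500, tiers))      # a small 500 GB migration
print(egress_cost(200_000, tiers))  # a 200 TB data estate
```

The point of the sketch is not the rates but the shape: the fee need not prohibit switching, it only needs to make the first line of the migration business case negative.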

The Data Act and market pressure have already pushed providers to adjust. AWS, Google Cloud and Microsoft have all published changes or programmes addressing transfer costs in certain switching or multicloud scenarios.

The deeper problem is that cloud providers can recover lock-in through other mechanisms. They may reduce egress fees but increase reliance on proprietary managed services. They may offer portability for basic storage while bundling advanced AI tools, security features or analytics services more tightly. They may support multicloud in principle while making the best discounts depend on long-term committed spend. They may allow exports but keep operational knowledge, certifications and support practices tied to native environments.

A regulation that treats egress fees as the entire issue will be too narrow. A regulation that ignores pricing will be too abstract. The Commission’s cloud investigation is better framed because it includes technical, contractual and commercial barriers together.

Businesses should draw the same lesson. Egress fees should be part of procurement analysis, but they should not dominate it. A serious cloud exit assessment asks: which services are proprietary, which APIs are portable, which data formats are open, which monitoring and security tools can move, which licences change cost outside the incumbent cloud, which AI models are tied to one provider, and what operational steps would be required to migrate under stress.

The danger is not only that a cloud bill becomes expensive. The danger is that a company loses strategic freedom. A board may approve an AI roadmap without realising that the roadmap assumes one provider’s model hosting, vector database, identity system, document store and security suite. Three years later, the organisation may find that its AI capability is technically advanced but commercially captive.

Egress fees made cloud lock-in visible. The DMA debate is about what remains hidden after those fees are reduced.

Software licensing may be the quietest form of cloud gatekeeping

Cloud lock-in is often discussed as a matter of APIs and data transfer. Software licensing can be just as powerful. If enterprise software is cheaper, easier or legally cleaner to run on one cloud than on a rival cloud, the cloud market can tilt even where the infrastructure itself is technically comparable.

Microsoft sits at the centre of this debate because Windows Server, SQL Server, Office, Teams, Entra identity services and other Microsoft products are deeply embedded in European enterprises. Azure competes as a cloud platform while Microsoft also supplies software that many businesses need to run. Rivals and European cloud providers have argued that licensing terms can make it more expensive or difficult to run Microsoft software on non-Microsoft clouds.

CISPE, the association of Cloud Infrastructure Services Providers in Europe, announced in July 2024 that it had reached a settlement with Microsoft related to a competition complaint filed with the European Commission in 2022. CISPE said Microsoft committed to changes addressing claims made by European CISPE members, and CISPE would withdraw the complaint. Reuters reported the agreement as a €20 million settlement to resolve an antitrust complaint about Microsoft’s cloud licensing practices.

The settlement did not end the broader debate. The UK Competition and Markets Authority also examined software licensing as part of its cloud market investigation, which closed with findings that competition in the market was not working as well as it should.

Licensing matters because it can turn a cloud decision into a software decision. A company may choose Azure not because Azure is always the best infrastructure for every workload, but because the total cost and licensing complexity of running Microsoft-heavy systems elsewhere is unattractive. That may be rational for the buyer. It may also reduce competitive pressure on infrastructure providers.

From a DMA perspective, the issue is subtle. The law does not punish a company for offering integrated services. Integration can reduce friction, improve security and lower cost. The problem arises when integration becomes a way to use strength in one market to restrict contestability in another. If a cloud provider’s software licensing terms disadvantage rival clouds, regulators may see that as a gatekeeper-style practice even if the conduct is expressed through contracts rather than technical blocking.

For businesses, licensing deserves board-level attention because it is often negotiated outside architecture planning. Procurement teams may secure discounts. IT teams may build around the discounted stack. Legal teams may approve terms. Later, the business discovers that a multicloud strategy is constrained by licensing rights, audit risk or support conditions.

Cloud portability cannot be assessed without software portability. A workload running on virtual machines, databases and operating systems is not portable merely because the data can be moved. The licences must travel, the support model must hold, and the economics must still work.

In the AI era, this issue may intensify. Productivity suites and developer platforms are becoming AI distribution channels. If AI copilots, model services and cloud credits are bundled into enterprise software agreements, businesses may adopt AI infrastructure through licensing momentum rather than deliberate market comparison. That is exactly the kind of quiet dependency Europe’s regulators are starting to examine.

AI self-preferencing will be harder to detect than search self-preferencing

Self-preferencing under the DMA is usually discussed through familiar examples: a platform gives its own services better ranking, better placement or more favourable access than rivals. In AI, self-preferencing may be less visible and more technically complex.

An AI assistant can favour a provider's own services in many ways. It can route a task to the provider's app by default. It can use a built-in model when a rival model would better fit the user's needs. It can access private APIs unavailable to competitors. It can receive deeper operating-system permissions. It can appear in the most convenient interface. It can draw on proprietary data from the platform's ecosystem. It can be bundled into a subscription where the marginal price appears close to zero.

The Commission’s DMA review identifies self-preferencing as one of five AI themes, specifically “more favourable treatment of gatekeepers’ own AI services.”

The challenge is proof. In search, regulators can examine ranking, display, traffic and click behaviour. In app stores, they can analyse rules, commissions, steering restrictions and technical access. In AI, the output is generated dynamically. A recommendation may depend on prompts, context, user history, available tools, model behaviour, safety filters and system instructions. The assistant’s preference may be embedded in design rather than a simple list.

Imagine a business user asking an enterprise assistant to “find the best CRM integration for our sales workflow.” If the assistant is embedded in a productivity suite and has privileged access to that provider’s marketplace, identity system and partner catalogue, the result may favour native or affiliated tools. Is that self-preferencing, quality ranking, security design or user convenience? The answer may depend on audit logs, alternative-service access, disclosure and whether rivals can integrate on equivalent terms.

The same problem arises when AI agents take actions. A shopping assistant may choose a marketplace. A travel assistant may choose a booking service. A coding assistant may choose a cloud deployment target. A business analytics assistant may choose a data warehouse. The user may not see the menu of rejected options. AI compresses search, comparison and transaction into a single action path.

That compression increases the value of default control. The provider that owns the assistant can shape the action path without obvious advertising or ranking manipulation. For regulators, the task shifts from policing visible placement to auditing access, defaults, tool routing and integration terms.

For businesses, the risk is strategic. Companies may become dependent on AI intermediaries that decide how suppliers are found, compared and purchased. A SaaS provider may need to ensure that its APIs, documentation, pricing, compliance evidence and partner relationships are legible to AI assistants. A retailer may need to negotiate with AI distribution channels, not only search engines and marketplaces. A B2B vendor may need structured product data and trusted machine-readable content so AI systems can evaluate it accurately.

The DMA may influence this market by requiring gatekeepers to avoid preferential treatment and support interoperability. But legal rules will not remove the need for businesses to adapt their distribution strategy. AI self-preferencing may be subtle, but its commercial effects could be blunt.

Cloud and AI turn compliance into a procurement and architecture issue

Many companies still treat platform regulation as something that affects the regulated platforms, not their own operations. That view is becoming outdated. The DMA’s cloud and AI phase will influence enterprise procurement, IT architecture, vendor management, risk assessment, data governance and AI adoption.

A business buying cloud services is no longer choosing only a technical supplier. It is choosing a dependency profile. The provider may control where data sits, how applications scale, which AI services are easiest to use, which developers need training, which security tools become standard, which licences apply and which migration paths remain realistic. Procurement that focuses only on price, uptime and feature lists misses the long-term competition issue.

The same applies to AI. A company adopting an AI assistant embedded in its productivity suite may get fast deployment and strong integration. It may also hand one vendor more influence over internal workflows, document access, user behaviour, knowledge management and future automation. A company adopting a cloud provider’s AI platform may accelerate development while becoming dependent on that provider’s model catalogue, vector database, orchestration tools and monitoring systems.

The DMA does not tell every business to avoid hyperscalers. That would be simplistic and often commercially irrational. Large cloud providers offer real benefits: reliability, security depth, global infrastructure, compliance tooling, developer ecosystems, AI services and economies of scale. The point is not to reject them. The point is to buy with open eyes.

Procurement teams should now ask questions that once belonged only to engineers. Can we export data in usable formats? Which services are proprietary? What would it cost to run the workload elsewhere? Are discounts tied to committed spend that discourages multicloud use? Does the vendor’s AI roadmap require deeper integration into its stack? Are software licences neutral across clouds? Are logs, identity policies and security configurations portable? Does the contract include clear switching support under the Data Act? Are there audit rights for AI routing and data use?

Architecture teams should ask commercial questions too. Which design choices increase bargaining power? Which managed services save time but create hard dependency? Which workloads need portability, and which can reasonably be provider-specific? Which data needs sovereign or sector-specific hosting? Which AI use cases require model choice? Where should the organisation avoid lock-in because future regulation, pricing or geopolitics could change the risk profile?

Legal and compliance teams should map the overlap between the DMA, Data Act, AI Act, GDPR, cybersecurity rules and sector-specific regulation. A cloud or AI supplier may be compliant with one regime but still create risk under another. A public body may have procurement obligations. A bank may face outsourcing supervision. A healthcare provider may need strict controls over sensitive data. A manufacturer may need IP protection for industrial data used in AI systems.

The next phase of DMA enforcement will be felt through contracts and system design long before court judgments are final. Businesses that wait for definitive legal outcomes may find their technical choices already made.

Europe’s cloud sovereignty agenda is now tied to competition policy

European cloud regulation is not only about competition. It is also about sovereignty, resilience, security and industrial strategy. These goals overlap, but they are not identical.

The Commission’s cloud policy page says the EU’s Digital Decade includes targets for 75% of European businesses to use cloud-edge technologies by 2030 and for 10,000 climate-neutral and highly secure edge nodes to be deployed across Europe. It also says the Commission will propose a Cloud and AI Development Act in 2026, with the aim of at least tripling EU data-centre capacity within five to seven years and meeting the needs of EU businesses and public administrations by 2035.

That policy agenda changes the cloud debate. Europe wants more cloud adoption, more AI capacity, more secure infrastructure, more edge computing and stronger domestic providers. Yet the market is dominated globally by US hyperscalers. European providers argue that they face structural disadvantages in scale, software ecosystems and licensing. Large customers often want the reach and capabilities of hyperscalers. Public bodies worry about sensitive data, foreign legal exposure and strategic dependency.

Competition policy cannot solve all of that. The DMA can address gatekeeper conduct. It cannot magically create a European hyperscaler with comparable global scale. The Data Act can reduce switching barriers. It cannot create cheap energy, abundant chips or instant data-centre capacity. AI factories can support start-ups and researchers. They cannot replace enterprise cloud ecosystems overnight.

Still, competition rules can shape whether European alternatives get a fair chance. If customers can switch more easily, smaller providers can compete for workloads. If software licensing is fairer, European cloud firms can host enterprise applications on better terms. If AI services can access operating-system capabilities, European AI developers can reach users. If cloud obligations prevent unfair bundling, buyers may compare services on merit.

Sovereignty also needs careful definition. A European buyer may need sovereign hosting for sensitive public or health data, but not for every workload. A European cloud provider may use non-European chips, software or investors. A hyperscaler may offer EU data residency, local operations or sovereign-cloud partnerships. The word “sovereign” can be meaningful or vague depending on legal control, operational control, technical isolation, encryption, support access and supply-chain resilience.

The Commission’s AI Continent agenda shows how competition and sovereignty are now linked. It refers to €200 billion to boost AI development in Europe, €20 billion for up to five AI gigafactories and 19 AI factories supporting start-ups, industry and research. It also connects AI capacity to a proposed Cloud and AI Development Act.

The EU is therefore pursuing two tracks at once. It wants to discipline gatekeeper power in existing markets. It also wants to build capacity so European firms are less dependent on those gatekeepers. The first track is legal. The second is industrial. Neither works well without the other.

If Europe regulates cloud and AI without building competitive capacity, businesses may face constraints without better alternatives. If Europe builds capacity without addressing lock-in, new entrants may struggle to win customers. The DMA’s cloud and AI push sits at the intersection.

The AI Act and the DMA regulate different risks but meet inside products

The EU AI Act and the DMA are often discussed together because both affect major technology companies. They are different laws aimed at different problems. Confusing them leads to poor strategy.

The AI Act is primarily a risk regulation. It classifies AI systems, prohibits certain practices, creates obligations for high-risk AI systems and sets rules for general-purpose AI models. Its purpose is tied to safety, fundamental rights, transparency and trustworthy AI. The DMA is a market contestability law. It targets gatekeepers and unfair practices that limit competition and fairness in digital markets.

A company building an AI product may need to comply with both. A high-risk AI system in healthcare, hiring, education or critical infrastructure may trigger AI Act obligations. If that AI system is distributed through a gatekeeper platform or depends on a gatekeeper’s cloud, app store, operating system, search data or virtual assistant layer, DMA issues may arise too.

The Commission’s DMA review explicitly says the DMA cannot be seen in isolation and that laws such as the AI Act and competition law also address issues across the AI value chain.

That sentence matters for business planning. A start-up may focus on AI Act compliance and ignore platform dependency. A large enterprise may focus on cloud procurement and ignore AI risk classification. A gatekeeper may treat DMA obligations as separate from AI product design. In real products, these layers merge.

Consider an AI assistant embedded in a mobile operating system. The AI Act may ask what the assistant does, whether it creates high-risk outputs, how transparency and safety are handled, and whether general-purpose model obligations apply. The DMA may ask whether rival assistants can access equivalent operating-system capabilities, whether the gatekeeper self-preferences its own assistant, and whether users can change defaults easily.

Consider an AI tool for hospitals deployed through a cloud platform. The AI Act may focus on clinical risk and conformity obligations. Data protection law may govern patient data. Cybersecurity rules may govern resilience. The DMA may become relevant if a designated cloud service uses unfair terms or blocks interoperability in a way that affects business users and alternative providers.

For compliance teams, the regulatory stack is now operational. It is no longer enough to ask, “Is this AI system lawful?” The better question is, “Which legal regimes shape the data, model, infrastructure, distribution channel, user interface and supplier relationship?”

The regulatory stack around cloud and AI in Europe

| Layer | Main regulatory concern | Practical business question |
| --- | --- | --- |
| DMA | Gatekeeper conduct, contestability, self-preferencing, interoperability | Can we reach users, data, operating-system functions or cloud services on fair terms? |
| Data Act | Data access, cloud switching, interoperability between data-processing services | Can we move data and switch providers without contractual or technical traps? |
| AI Act | AI safety, risk controls, transparency, general-purpose AI obligations | Does the AI system meet its risk-category obligations before deployment? |
| GDPR and cybersecurity rules | Personal data, security, resilience and lawful processing | Can we prove lawful data use, access control, security and incident readiness? |

This stack is useful because it separates legal purposes without pretending that products are neatly separated. A single AI workflow may touch every row in the table.

AI factories and cloud regulation are two sides of the same capacity problem

Europe’s AI strategy is not only a rulebook. It is also an attempt to build compute capacity. The Commission’s AI Factories policy describes AI Factories as ecosystems bringing together computing power, data and talent, linking supercomputing centres, universities, SMEs, industry and financial actors. It says 19 AI Factories and 13 antennas are operational, with access prioritised for AI start-ups and SMEs. It also says at least nine new AI-optimised supercomputers will be procured and deployed across the EU, more than tripling current EuroHPC AI computing capacity.

This matters because cloud and AI regulation can only do so much if compute remains scarce. AI developers need GPUs or other advanced accelerators, high-bandwidth networking, storage, orchestration tools and engineering support. Frontier models need massive investment. Smaller models and sector-specific AI systems still need reliable infrastructure. If European firms cannot access compute on workable terms, they may depend on the same hyperscalers Europe is trying to regulate.

The Commission’s AI Continent plan includes up to five AI gigafactories and an InvestAI facility intended to mobilise €20 billion for gigafactories. It also links the Cloud and AI Development Act to the goal of tripling EU data-centre capacity within five to seven years.

That creates an industrial-policy bridge to the DMA. The DMA can try to keep gatekeepers from closing markets. AI factories can give smaller firms and researchers access to compute outside hyperscaler channels. Cloud-capacity policy can support broader infrastructure. Data strategy can create usable datasets. Procurement rules can steer public-sector demand.

The risk is fragmentation. A start-up may receive access to a public AI factory for training or experimentation, then still need commercial cloud deployment, enterprise distribution and integration with customer systems. If the deployment path leads back into the dominant clouds, the public compute intervention helps but does not remove dependency. If the AI factory ecosystem lacks strong tooling, support and commercial routes, developers may still prefer hyperscaler platforms.

A second risk is scale mismatch. Hyperscalers invest at global scale and move fast. Public programmes can be slower, politically distributed and administratively complex. Access processes, eligibility rules, support quality and integration with commercial tools will decide whether AI factories become serious infrastructure or symbolic projects.

The best outcome is not a fully separate European AI universe. It is a more contestable one. European AI developers should be able to use public compute, European cloud providers, open-source tools, specialised GPU clouds and hyperscalers without being locked into one route. Enterprises should be able to deploy models across environments according to sensitivity, cost, performance and compliance needs.

Regulation cannot create compute. Compute investment cannot guarantee competition. Europe needs both.

Energy, data centres and permitting are now part of digital competition

Cloud and AI depend on physical infrastructure: land, electricity, cooling, water, fibre, chips, substations and permits. As AI workloads grow, the regulatory debate can no longer stay inside software law.

The Commission’s cloud policy page says the proposed Cloud and AI Development Act will streamline data-centre deployment by identifying suitable sites and simplifying permitting for projects meeting sustainability and innovation criteria. It also says the Act will address energy demand through energy efficiency, cooling and power-management technologies, and integration with the broader energy system.

This is a competition issue because infrastructure constraints favour incumbents. Companies with large balance sheets can secure power, pre-order chips, build data centres, negotiate long-term energy contracts and absorb delays. Smaller providers may struggle to obtain grid connections, financing or hardware supply. If Europe wants more cloud and AI competition, it must deal with the bottlenecks that determine who can build capacity.

Data-centre policy also intersects with sovereignty. A country may want sensitive public data hosted domestically or within the EU, but local capacity may be limited. A region may welcome AI investment but resist energy or water use. A public authority may want sustainable data centres but face urgent demand for AI services. These trade-offs are no longer hypothetical. AI infrastructure is becoming an energy-planning issue.

For businesses, capacity constraints will show up as pricing, availability and contract terms. AI compute may be reserved for large customers. Smaller firms may face waiting lists or high prices. Cloud providers may use committed spend to manage scarce capacity. Buyers may accept deeper lock-in in exchange for guaranteed access to GPUs or AI services. That is a commercial decision with strategic consequences.

The DMA cannot order the construction of data centres. Yet its enforcement can influence how scarce capacity is allocated by gatekeepers. If a designated cloud provider bundles AI compute with other services in ways that exclude rivals, or reserves critical functionality for its own downstream AI products, competition concerns may follow. If access is constrained by objective capacity limits, the analysis is different. Regulators will need technical evidence.

Energy also affects Europe’s industrial credibility. A regulatory strategy that demands AI sovereignty but cannot deliver power, permits or grid capacity will frustrate businesses. A competition strategy that disciplines cloud gatekeepers but leaves smaller providers unable to scale will produce limited market change. A sustainability strategy that ignores AI demand may become unrealistic.

Cloud and AI have pulled digital regulation into the physical world. That may be uncomfortable for policymakers used to platform rules, but it is unavoidable. The future of AI competition will be built in data centres as much as in legal texts.

European cloud providers have an opening, not a guaranteed victory

The DMA’s cloud and AI push will be welcomed by many European cloud providers, but regulation alone will not hand them the market. They still need competitive products, credible security, strong developer experience, enterprise-grade support, transparent pricing, AI capabilities, partner ecosystems and enough capacity.

European providers often compete on sovereignty, data protection, local support, open-source alignment, sector expertise or compliance with European standards. Those strengths matter, especially for public-sector, health, defence, education and regulated-industry workloads. Yet many enterprises also want global regions, advanced managed services, extensive documentation, large talent pools, marketplace integrations and AI tooling. Hyperscalers dominate because they deliver many of those things at scale.

A fairer market gives alternatives more room. It does not erase buyer preferences. If switching becomes easier, European providers can win specific workloads. If software licensing becomes more neutral, they can host enterprise systems more competitively. If public procurement values sovereignty and interoperability, they can gain reference customers. If AI factories and cloud policy connect with European cloud ecosystems, they can support start-ups and SMEs.

The danger is that European cloud policy becomes too symbolic. Sovereign-cloud branding without strong technical substance will not satisfy serious buyers. A provider cannot rely on nationality if its service lacks uptime, security certification, developer tooling or cost predictability. Buyers will not move critical workloads to weak alternatives merely because regulators dislike gatekeepers.

There is also a risk of fragmentation among European providers. Many smaller providers can serve niches well, but enterprise buyers often want integrated platforms. Federation, open standards and interoperability can help, but they require disciplined execution. Common European data spaces, middleware such as Simpl and cloud-edge initiatives may support this, but buyers will judge results, not architecture diagrams. The Commission’s cloud policy refers to interoperable cloud and edge services, common data spaces, IPCEI investment and a cloud rulebook, showing that the EU understands the need for an ecosystem rather than isolated providers.

For European cloud firms, the DMA moment should be treated as a commercial test. They need to turn regulatory attention into buyer confidence. That means offering migration support, clear pricing, strong compliance evidence, AI deployment options, managed open-source services, multicloud architectures and proof that customers can leave without pain. The best answer to hyperscaler lock-in is not only a legal complaint. It is a credible alternative that makes exit plausible.

For businesses, the best procurement posture is not ideological. Some workloads may belong on a hyperscaler. Some may belong on a European sovereign provider. Some may remain on-premises or at the edge. Some may use specialised AI infrastructure. The goal is strategic optionality. Regulation may make optionality easier to preserve, but companies still need to design for it.

Public-sector buyers sit at the centre of the cloud and AI fight

Public administrations are not passive observers. They are major buyers of cloud services, holders of sensitive data and adopters of AI. Their procurement choices can shape market demand, legitimise certain providers and set standards for portability, sovereignty and security.

The Commission’s cloud policy says the proposed Cloud and AI Development Act will operate alongside a proposed single EU-wide cloud policy for public administrations and public procurement, aiming to foster European cloud providers and prioritise highly secure cloud capacity for highly critical use cases.

This is strategically important. Public-sector demand can support alternatives that private buyers consider too risky or too small. It can require open standards, exit plans, data portability, auditability and transparent subcontracting. It can also avoid repeating procurement patterns that deepen dependency on a small group of vendors.

Public bodies face hard constraints. They need reliable services, cyber resilience, compliance with procurement law, budget discipline and skilled staff. They may lack cloud architects or AI experts. They may inherit legacy contracts. They may be under pressure to deploy digital services quickly. A hyperscaler can offer a mature platform, certifications, implementation partners and familiar tooling. A smaller provider may offer sovereignty but less breadth.

Good public procurement must avoid two mistakes. The first is buying the easiest integrated stack without understanding long-term dependency. The second is choosing a politically attractive provider that cannot meet operational needs. Both damage public trust.

The smarter path is workload classification. Highly sensitive or critical workloads may need sovereign or specially certified infrastructure. Less sensitive workloads may use mainstream cloud services with strong exit rights and security controls. AI experimentation may use AI factories, open-source models or controlled cloud environments. Public procurement can require portability evidence, data-location clarity, encryption controls, audit logs, incident response, supplier diversity and contract terms aligned with the Data Act.
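The workload-classification idea above can be kept as a simple, explicit register rather than tribal knowledge. The sketch below is a minimal illustration of that practice; the tiers and placements are assumptions for the example, not an official EU taxonomy.

```python
from enum import Enum

# Illustrative classification tiers, mirroring the text above.
class Tier(Enum):
    CRITICAL = "sovereign or specially certified infrastructure"
    SENSITIVE = "mainstream cloud, strong exit rights and security controls"
    EXPERIMENTAL = "AI factories, open-source models or controlled sandboxes"

def placement(workload: str, tier: Tier) -> str:
    """Record where a workload may run, so the choice is explicit and auditable."""
    return f"{workload} -> {tier.value}"

print(placement("health triage records", Tier.CRITICAL))
print(placement("AI pilot for document summaries", Tier.EXPERIMENTAL))
```

Even a register this small forces the question that procurement teams often skip: which tier is this workload in, and what does that imply for exit rights?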

AI adds new procurement risks. A public body using an AI assistant for casework, citizen services, health triage or internal document analysis must understand where data goes, which model is used, whether prompts are stored, how outputs are audited, what human oversight exists and whether the system can be replaced. If the assistant is bundled into a productivity suite, procurement teams may underestimate its policy impact.

The DMA can support public buyers by improving market conditions, but public buyers still need internal capability. They need people who can read cloud contracts, understand AI systems, evaluate security claims and manage exit plans. Without that expertise, even strong regulation becomes a paper shield.

Europe’s cloud and AI regulation will be judged partly by whether public institutions become smarter buyers. A continent cannot claim digital sovereignty if its own public bodies cannot explain their cloud dependencies.

SMEs are exposed to cloud and AI lock-in in a different way

Large enterprises can hire cloud architects, negotiate enterprise agreements and run multicloud strategies. SMEs often cannot. For smaller firms, cloud and AI lock-in is less about strategic architecture documents and more about convenience, credits, packaged software and limited staff capacity.

A start-up may choose one cloud because it receives free credits from an accelerator. A small retailer may adopt cloud services through its e-commerce platform. A mid-sized manufacturer may rely on an IT provider that standardises on one stack. A professional-services firm may turn on an AI assistant because it is included in an existing subscription. These decisions are rational. They save time. They reduce complexity. They let small teams move fast.

The risk appears later. The start-up’s database, deployment scripts, analytics stack and AI workflows may become tightly bound to one provider. The retailer’s customer data may sit inside a platform that is hard to extract in useful form. The manufacturer may discover that predictive-maintenance data flows through a vendor-controlled environment. The professional-services firm may store knowledge in an AI-enabled suite with limited model choice.

SMEs are exactly the type of business the Commission says should benefit from DMA opportunities, yet they are least able to exploit complex rights without support. The DMA review says the Commission will focus on awareness, especially among start-ups, SMEs and consumers, so they can benefit from the DMA’s opportunities.

Awareness is necessary but insufficient. SMEs need practical tools: model contract clauses, cloud exit checklists, standardised portability documentation, trusted migration partners, clear complaint channels, affordable legal guidance and procurement templates. They need cloud providers to explain switching terms in plain language. They need AI vendors to disclose data use and integration dependencies. They need public programmes that connect AI factory resources to commercial deployment.

The Apply AI Strategy is relevant because it aims to boost AI adoption and innovation across Europe, particularly among SMEs, and it links support measures to European Digital Innovation Hubs, AI factories, AI gigafactories, testing facilities and sandboxes.

For SMEs, the practical rule is not “avoid lock-in at all costs.” Avoiding every dependency can paralyse a small firm. The better rule is to identify the dependencies that could threaten the business later. Customer data, product data, core application code, AI model outputs, analytics pipelines and identity systems deserve special attention. A company can accept some proprietary services if it knows the exit cost and has a plan for critical assets.

Regulators often speak in market-wide terms. SMEs experience the market through invoices, integrations and staff limits. If Europe wants the DMA’s cloud and AI turn to matter beyond Brussels, it must translate contestability into tools that smaller businesses can actually use.

Platform companies will fight on security, innovation and legal scope

The major platform companies will not accept the cloud and AI expansion of DMA enforcement without resistance. Their arguments are predictable, but not always wrong.

They will argue that forced interoperability can create security and privacy risks. The Android AI case already shows this. Google said the Commission’s proposed measures would mandate access to sensitive hardware and device permissions, drive up costs and undermine privacy and security protections for European users, according to Reuters.

Security is a serious issue. AI assistants with deep access to devices, email, files, enterprise apps and sensors can create real attack surfaces. Cloud interoperability can increase misconfiguration risk. Data portability can expose sensitive data if implemented poorly. Regulators should not dismiss security arguments as mere lobbying.

The harder task is separating genuine security constraints from strategic withholding. A gatekeeper may claim that only its own AI service can safely access certain functions. That may be true in some cases and self-serving in others. Regulators will need technical expertise, independent audits and clear standards for permissioning, sandboxing, logging, user consent and liability.

Platforms will also argue that intervention weakens innovation. They will say integrated services work better because the provider can control the full experience. Again, there is truth here. Integration often improves usability and reliability. Many customers choose integrated platforms because they reduce complexity.

The DMA does not ban integration. It targets unfair gatekeeper conduct. The legal fight will turn on whether the same capability can be offered to rivals without unacceptable risk, whether users can make real choices, whether business users can reach customers on fair terms and whether the gatekeeper is using control of one layer to distort competition in another.

Platforms will also fight on definition. Is an AI chatbot a virtual assistant? Is a model API a cloud service, a software service or something else? Does a cloud AI platform act as an important gateway between businesses and consumers? Is an AI feature embedded in a designated service already covered, or is it a separate service? The Commission’s review explicitly recognises this distinction between AI embedded in designated core platform services and distinct services potentially warranting designation.

Definition fights matter because obligations follow categories. If a service falls outside the DMA’s scope, competition law may still apply, but enforcement is slower and more case-specific. If a service falls inside the DMA, conduct obligations can apply earlier and more predictably.

The platforms’ strongest argument may be that Europe risks creating fragmented product experiences for EU users and businesses. Apple made a similar criticism of the DMA review, saying it failed to account for impacts on privacy, security and innovation, according to Reuters.

Europe’s answer must be evidence, not slogans. If interoperability improves competition without harming security, the Commission should show it. If a proposed access remedy creates real risk, it should adjust. A strong DMA does not require maximal intervention. It requires disciplined intervention where gatekeeper control blocks fair competition.

Overregulation could weaken Europe if the rules miss the real bottlenecks

Europe’s regulatory ambition carries risk. A poorly designed cloud or AI intervention could slow deployment, increase compliance cost, create legal uncertainty or push companies to delay product launches in Europe. Some criticism from platform companies is self-interested, but the risk of regulatory overreach is real.

Cloud systems are complex. Obligations that sound clean in law may be hard to implement safely. Interoperability across cloud services can involve identity, encryption, logs, networking, workload orchestration, monitoring, incident response and compliance evidence. Mandating access without clear boundaries can create security gaps. Forcing compatibility with proprietary services can reduce incentives to develop those services. Requiring too many bespoke EU-only product changes can increase fragmentation.

AI raises even harder questions. A rule requiring equal access for AI assistants sounds pro-competitive, but equal access to what? Device APIs? App intents? User data? Hardware permissions? Search indexes? Enterprise documents? Each category has different risk. An assistant that can send email or order food needs strong user consent and fraud controls. An assistant that can access business files needs auditability and data-loss prevention. An assistant that can act across apps needs accountability when something goes wrong.

The Commission’s decision, in its review, not to rewrite the DMA immediately is therefore sensible. It allows targeted enforcement, market investigations and specification proceedings before large legal changes.

Yet underregulation has its own cost. If Europe waits until AI defaults and cloud dependencies are fully entrenched, contestability may become theoretical. Markets can tip quietly through design choices, credits, bundling, permissions and procurement inertia. The regulator’s problem is timing: intervene too early and risk guessing wrong; intervene too late and preserve a market that has already closed.

A better approach is to focus on bottlenecks that businesses can identify and document. Switching costs. Data access limits. Licensing penalties. API discrimination. Default restrictions. Bundled discounts that punish multicloud use. Preferential operating-system permissions. Lack of transparent AI routing. Contractual terms that block benchmarking or migration. These are concrete.

The EU should avoid regulating AI through vague fears and cloud through abstract suspicion of size. Size alone is not the offence. Many large platforms produce real value. The regulatory target should be conduct and structural control that prevents others from competing on merit.

Businesses should make the same distinction. Not every dependency is bad. Some dependencies are efficient. A company should worry when a dependency removes bargaining power, blocks future choices or gives a supplier control over strategic data and workflows. That is the practical definition of harmful lock-in.

The business impact will show up before formal designations are final

Cloud and AI buyers should not wait for the Commission to complete every investigation. Regulatory pressure already changes supplier behaviour. Providers adjust contracts, publish data-transfer programmes, revise interoperability commitments, change compliance documentation and prepare legal arguments. Buyers can use that moment.

A company negotiating a cloud contract in 2026 should ask for stronger portability terms than it might have accepted in 2023. It should reference the Data Act’s switching framework and the Commission’s active cloud investigations. It should ask vendors to explain how they support multicloud architectures, what egress or transfer programmes apply, how proprietary services can be replaced, which software licences become more expensive outside the provider’s cloud and how AI services handle data.

AI procurement should include similar questions. Does the AI assistant use customer data for training? Can the customer choose models? Can logs be exported? Can prompts and outputs be audited? Can the service interact with third-party applications on fair terms? Is the assistant bundled in a way that makes rival tools commercially unrealistic? Can the customer disable features without losing unrelated functionality? How are permissions managed?
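Questions like these are more useful when they are kept as structured data, so answers can be compared across vendors rather than buried in email threads. The sketch below is a hypothetical checklist format; the question wording mirrors the list above, and the scoring rule is illustrative.

```python
# Hypothetical AI procurement checklist, kept as data for comparison.
AI_PROCUREMENT_QUESTIONS = [
    "Does the assistant use customer data for training?",
    "Can the customer choose or replace models?",
    "Can logs, prompts and outputs be exported and audited?",
    "Can it interact with third-party applications on fair terms?",
    "Is it bundled so rival tools become commercially unrealistic?",
    "Can features be disabled without losing unrelated functionality?",
    "How are permissions managed?",
]

def coverage(answers: dict) -> float:
    """Fraction of checklist questions for which the vendor gave a documented answer."""
    answered = sum(1 for q in AI_PROCUREMENT_QUESTIONS if answers.get(q))
    return answered / len(AI_PROCUREMENT_QUESTIONS)

# A vendor that documents every answer scores 1.0; gaps show up immediately.
vendor_a = {q: "documented in annex" for q in AI_PROCUREMENT_QUESTIONS}
print(coverage(vendor_a))
```

The point is not the scoring mechanics but the discipline: every undocumented answer is a visible gap in the procurement record.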

The business impact will also show up in competitive strategy. SaaS vendors should prepare for AI-mediated distribution. Cloud providers should document openness and portability. System integrators should build migration and multicloud skills. European cloud firms should position themselves around exit-friendly architecture, sovereign workloads and AI deployment options. Legal teams should monitor DMA and Data Act guidance. Boards should include cloud dependency in risk registers.

The Commission’s review says the DMA has already driven consent mechanisms, data portability tools, choice screens and interoperability measures, while the gatekeeper compliance-report process continues.

Those early measures have not transformed the market overnight, and cloud and AI will not either. The DMA works through accumulated pressure: compliance reports, stakeholder complaints, specification proceedings, investigations, fines, court review, public scrutiny and market adaptation. Businesses that understand this rhythm can act before legal certainty becomes absolute.

For example, a company planning a new AI-powered customer service platform can design a dual-provider architecture for critical data, negotiate exit support, avoid proprietary orchestration where unnecessary and document why certain provider-specific services are worth the dependency. A manufacturer building industrial AI can keep raw machine data in portable formats and separate model development from one cloud’s proprietary analytics layer. A public body can require suppliers to provide portability evidence as part of bids.

These steps are not anti-cloud. They are pro-resilience. The DMA’s cloud and AI shift is a reminder that supplier convenience and strategic freedom often pull in different directions.

The next 18 months will decide whether the DMA can handle infrastructure markets

The Commission’s cloud investigations create a rough timetable. It aims to conclude the AWS and Azure gatekeeper investigations within 12 months of the November 2025 opening. The broader cloud market investigation may produce a final report within 18 months and could propose updates to DMA obligations for cloud.

That period is crucial. Several outcomes are possible.

The first scenario is targeted designation. AWS and/or Azure could be designated as gatekeepers for cloud computing services. That would bring DMA obligations to their cloud services after a compliance period. The practical effect would depend on how obligations are interpreted for cloud: interoperability, data access, bundling, self-preferencing and fair terms would become central.

The second scenario is no designation but stronger guidance. The Commission could decide that current evidence does not support designation while still using the broader investigation to shape expectations, encourage commitments or prepare future updates. That would be less dramatic but still influential.

The third scenario is delegated-act work on cloud obligations. If the Commission concludes existing obligations do not fit cloud markets well enough, it may propose updates. This would trigger a major debate over the DMA’s adaptability and the technical feasibility of cloud-specific obligations.

The fourth scenario is litigation drag. Any major designation or obligation interpretation may be challenged. Gatekeepers have strong incentives to litigate scope, proportionality, security and process. Courts may narrow or confirm the Commission’s approach. During litigation, businesses may face uncertainty but also changing supplier behaviour.

AI will run on a parallel track. The Commission will continue assessing whether AI services are embedded in existing designated core platform services or distinct services that could warrant designation, especially as virtual assistants. Android interoperability and search data proceedings involving Alphabet will provide early tests.

The market will not wait. AI assistants will become more capable. Cloud AI platforms will expand. Enterprise software bundles will deepen. Data-centre investment will accelerate. Provider pricing and credits will evolve. European AI factories will mature. Public procurement policies will shift.

That is why businesses need scenario planning rather than a passive compliance watch. A company should ask what happens if its primary cloud provider becomes subject to DMA obligations. Would contract terms change? Would migration support improve? Would multicloud options become cheaper? Would the provider alter AI bundling? Would European alternatives become more credible? Would rivals get better access to operating-system or search data?

The answer will vary by sector. A media company, bank, SaaS provider, public authority and AI start-up will feel different effects. The common thread is that cloud and AI dependency is becoming a regulated strategic risk.

The strongest business strategy is portability with purpose

Portability is often discussed as if more is always better. That is too simple. Full portability can be expensive. Avoiding every proprietary service may slow product development and deny businesses useful capabilities. The right goal is portability with purpose.

A business should classify workloads by strategic importance and lock-in tolerance. Commodity workloads may be designed for easier migration. Sensitive data stores may require stronger exit and sovereignty controls. AI experimentation may use provider-native tools for speed, while production systems keep data and model interfaces more portable. High-performance workloads may accept provider-specific infrastructure because the benefit is worth it. The decision should be explicit.

Portability with purpose has several components. Data should be stored in formats that can be exported and understood. Application code should avoid unnecessary dependence on proprietary APIs. Infrastructure should be described through reproducible configuration where feasible. Identity and access policies should be documented. Observability and security logs should be exportable. Vendor contracts should include practical exit assistance. AI systems should support model evaluation, logging and replacement where business risk requires it.
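The first two components, exportable data formats and documented structure, can be made routine rather than heroic. The sketch below shows an "export-first" habit under illustrative assumptions: records are dumped to an open format (CSV) with a small JSON schema note alongside, so a later migration does not depend on one vendor's export tooling. The field names are purely hypothetical.

```python
import csv
import io
import json

# Hypothetical flat records, standing in for data held in any store.
records = [
    {"customer_id": "c-001", "region": "EU", "spend_eur": 1200},
    {"customer_id": "c-002", "region": "DE", "spend_eur": 340},
]

def export_portable(rows):
    """Return (csv_text, schema_json) so both data and structure travel together."""
    fields = list(rows[0].keys())
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    schema = json.dumps({"format": "csv", "fields": fields})
    return buf.getvalue(), schema

csv_text, schema_text = export_portable(records)
print(csv_text)
print(schema_text)
```

Open formats and a written schema are the cheapest form of exit assistance a company can give its future self.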

For AI specifically, companies should separate three layers: data, model and workflow. The data layer contains the organisation’s valuable information. The model layer may include proprietary models, open-source models, fine-tuned models or hosted APIs. The workflow layer connects AI outputs to business actions. Lock-in risk becomes severe when one vendor controls all three without usable exit paths.

A company can accept a hosted model from a major provider while keeping data governance independent and workflow orchestration portable. It can use a provider’s vector database for a narrow project while keeping source documents in a neutral repository. It can deploy an embedded assistant for internal productivity while preventing it from becoming the only interface to core business knowledge. These are architectural choices, not ideological positions.
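The layer separation described above can be enforced in code by keeping the model layer behind a neutral interface, so the workflow layer never imports a provider SDK directly. The sketch below is a minimal illustration; the backend classes are hypothetical stand-ins, not real provider APIs.

```python
from abc import ABC, abstractmethod

class ModelBackend(ABC):
    """Neutral model-layer interface the workflow layer codes against."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a model completion for the prompt."""

class HostedAPIBackend(ModelBackend):
    """Hypothetical stand-in for a hyperscaler's hosted model API."""
    def complete(self, prompt: str) -> str:
        # In practice, only this class would import the provider SDK.
        return f"[hosted] {prompt}"

class OpenSourceBackend(ModelBackend):
    """Hypothetical stand-in for a self-hosted open-source model."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

def summarise(backend: ModelBackend, document: str) -> str:
    # Workflow layer: identical business logic whichever backend is plugged in.
    return backend.complete(f"Summarise: {document}")

# Swapping providers becomes a constructor change, not a rewrite.
print(summarise(HostedAPIBackend(), "supplier contract"))
print(summarise(OpenSourceBackend(), "supplier contract"))
```

The abstraction costs little while a system is small and becomes very expensive to retrofit once prompts, logging and permissions are woven through one vendor's SDK.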

The DMA and Data Act may make such choices easier by pushing providers toward fairer switching, interoperability and access. But regulation cannot design a company’s systems. Boards and technology leaders need to own that work.

Portability also strengthens negotiation. A supplier behaves differently when the customer can leave. Even if a business never migrates, credible exit options improve pricing, service quality and contractual balance. That is why portability is not wasted effort. It is bargaining power.

The EU’s cloud and AI regulatory push gives businesses a reason to revisit old assumptions. Convenience is still valuable. Integration is still useful. Hyperscalers are still powerful for good reasons. Yet the next decade will punish companies that confuse convenience with freedom.

The regulatory battlefield is really about who controls business agency

The phrase “business agency” may sound abstract, but it captures the central issue. Agency is the ability of a company to choose suppliers, move data, deploy AI, reach customers, change strategy and negotiate terms without being trapped by hidden dependencies.

Cloud lock-in reduces agency by making infrastructure change costly. AI default control reduces agency by shaping which services users see and use. Search data control reduces agency by limiting discovery and rival AI quality. Software licensing reduces agency by making some clouds economically unattractive. Bundling reduces agency by making alternatives appear more expensive than they are. Weak portability reduces agency by turning a legal right into an operational burden.

The DMA is concerned with fairness and contestability, but those concepts are not only legal. For a business, contestability means a supplier can be challenged. Fairness means access terms are not distorted by gatekeeper control. A market is contestable when rivals can win customers on merit and customers can act on that choice.

Cloud and AI matter because they sit inside the decision-making machinery of modern companies. A business that cannot change cloud provider easily may hesitate to adopt a better analytics platform. A company whose AI assistant routes tasks through one ecosystem may buy fewer rival services. A start-up dependent on one provider’s compute credits may shape its product roadmap around that provider’s commercial priorities. A public authority tied to one cloud may struggle to adjust policy after security or sovereignty concerns change.

This is why the EU’s move is not merely anti-Big Tech. It is an attempt to preserve choice inside the infrastructure layer. Big Tech firms are central because they own many of the relevant bottlenecks, but the beneficiaries are not only rival platforms. They include ordinary businesses that need bargaining power, public institutions that need resilient services and European developers that need routes to market.

The risk is that the debate becomes trapped in culture-war language: Europe versus America, regulation versus innovation, sovereignty versus openness. The real issue is more practical. Can a business build on powerful digital infrastructure without surrendering the ability to leave, compete or choose?

A healthy market does not require every provider to be small. It requires large providers to remain contestable. It requires customers to have credible alternatives. It requires technical integration not to become commercial captivity. It requires AI systems to compete through usefulness, not privileged access alone.

That is the standard by which the DMA’s cloud and AI phase should be judged.

Europe is testing whether open digital markets can survive the AI stack

The DMA was born from a specific diagnosis: some digital platforms had become so central that normal competition could not discipline them quickly enough. Cloud and AI now test whether that diagnosis still holds when the bottleneck is not only a marketplace, app store or search page, but the infrastructure and intelligence layer beneath business activity.

The Commission’s first DMA review gives Brussels a legal and political basis to move. The cloud investigations into AWS and Azure ask whether infrastructure services can act as gatekeepers even when their power is mediated through business users. The Android AI interoperability case asks whether rival AI services can compete when the operating-system owner reserves deep capabilities for its own assistant. The Data Act, AI Act, AI factories, Cloud and AI Development Act and Digital Decade targets all sit around the same problem: Europe wants a digital economy that uses advanced infrastructure without becoming dependent on a handful of vertically integrated foreign-controlled stacks.

There is no easy endpoint. Hyperscalers will remain central because they have scale, talent, infrastructure and mature services. European cloud providers will not win by regulation alone. AI start-ups will still need compute and distribution. Businesses will still choose convenience when it solves urgent problems. Regulators will make mistakes and courts will shape the boundaries.

Yet the direction is now clear. Cloud and AI are becoming the next regulatory battlefield because they decide who controls the conditions of digital competition before products ever reach the market. The DMA’s early fights were about gateways users could see. The next fights are about gateways businesses build on.

For platforms, the message is that compliance cannot be treated as a surface redesign. Regulators will look at architecture, access, defaults, data, contracts and bundling. For businesses, the message is that cloud and AI choices are strategic commitments, not routine IT purchases. For European policymakers, the message is that rules and capacity must move together. Contestability requires enforcement, but it also requires real alternatives.

The AI economy will not be open by default. It will be open only if customers can switch, rivals can integrate, data can move, assistants can compete, and infrastructure providers cannot turn scale into permanent control. That is the real test now facing the DMA.

Practical answers for businesses watching the DMA cloud and AI shift

Why is the EU now focusing on cloud and AI under the DMA?

The EU is focusing on cloud and AI because these services are becoming control points for digital competition. Cloud platforms host applications, data, analytics and AI workloads, while AI assistants and model platforms increasingly shape how users search, decide and act. The Commission’s DMA review identifies cloud services and AI as priority areas for fairer and more contestable digital markets.

Does the DMA already cover cloud computing services?

Yes. Cloud computing services are listed in the DMA’s core platform service categories. The difficult question is whether a specific provider’s cloud service should be designated as a gatekeeper service and which obligations apply in technical cloud markets.

Are AWS and Microsoft Azure already DMA gatekeepers for cloud?

No final designation has been announced for AWS or Azure as cloud gatekeeper services. The Commission opened market investigations in November 2025 to assess whether Amazon and Microsoft should be designated for Amazon Web Services and Microsoft Azure.

Why would cloud providers be treated differently from app stores or search engines?

Cloud providers often influence markets indirectly through business customers. A cloud service may not have the same consumer-facing user metrics as a social network, but it can still act as an important gateway if many digital services depend on it and switching is difficult.

What cloud practices is the Commission examining?

The Commission has pointed to interoperability obstacles, limited or conditioned access to business-user data, tying and bundling, and potentially imbalanced contractual terms. These practices can make it harder for customers to switch, use multiple clouds or choose rival services.

What does AI have to do with cloud regulation?

AI depends heavily on cloud infrastructure for training, fine-tuning and deployment. Large cloud providers also sell AI models, AI developer tools and enterprise AI services, which can connect cloud power with downstream AI competition.

Could AI assistants be regulated under the DMA?

Potentially. The Commission has said it will assess whether certain AI services could be designated as virtual assistant core platform services. That would depend on the service’s role, integration, user reach and gateway power.

Why does Android matter in the AI and DMA debate?

Android matters because mobile operating systems control permissions, defaults and deep app interactions. The Commission’s proposed measures for Google’s Android focus on whether competing AI services can interact with apps and execute tasks on users’ devices on fair terms.

Is this only about Google, Apple, Amazon and Microsoft?

No. The immediate investigations involve major technology firms, but the consequences reach businesses, public bodies, SMEs, cloud rivals, AI developers, software vendors and consumers. The issue is the structure of dependency across digital markets.

How does the Data Act relate to cloud switching?

The Data Act creates rules to make switching between data-processing service providers easier and to support cloud interoperability. It helps address data and switching barriers, but it does not by itself solve all workload portability, licensing or AI dependency problems.

Are egress fees still a major problem?

Egress fees remain relevant, but several major providers have changed transfer policies in response to regulatory and market pressure. The larger issue is whether customers can move workloads, applications, licences, security controls and AI systems without excessive cost or risk.

What should businesses ask cloud vendors now?

Businesses should ask about exit support, data export formats, proprietary service dependencies, AI model portability, software licensing across clouds, multicloud support, transfer charges, audit logs, security controls and contract terms aligned with the Data Act.

Should companies avoid hyperscalers because of the DMA?

No. Hyperscalers offer strong infrastructure and services. The better strategy is to use them deliberately, understand dependency risks and design portability where it matters most.

How could the DMA affect European cloud providers?

European cloud providers may benefit if switching becomes easier, software licensing becomes fairer and public procurement favours interoperability and sovereignty. They still need strong products, capacity, security and support to win customers.

What is the link between cloud regulation and digital sovereignty?

Digital sovereignty depends partly on whether European businesses and governments can choose secure, reliable infrastructure without excessive dependency on a few foreign-controlled providers. Competition policy, cloud capacity and public procurement all shape that goal.

How does the AI Act differ from the DMA?

The AI Act regulates AI risks, safety, transparency and obligations for certain AI systems and models. The DMA regulates gatekeeper conduct and market contestability. A single AI product can raise issues under both laws.

What is the biggest risk for SMEs?

SMEs often adopt cloud and AI tools through credits, bundles or default software subscriptions without fully understanding future switching costs. Their biggest risk is becoming dependent on one stack before they have the resources to manage exit or multicloud options.

What should public-sector buyers do differently?

Public-sector buyers should classify workloads by sensitivity, require portability evidence, assess sovereign hosting needs, demand clear data and AI governance, and avoid contracts that make future migration unrealistic.

Could stronger regulation reduce innovation?

It could, if rules are poorly designed: badly targeted obligations can create security, compliance or product-fragmentation problems. The better regulatory approach focuses on concrete bottlenecks such as unfair access, discriminatory defaults, switching barriers, licensing restrictions and anti-competitive bundling.

What should companies do during the next 12 to 18 months?

Companies should review cloud contracts, map AI dependencies, identify critical data assets, negotiate exit terms, evaluate multicloud or sovereign options, monitor DMA investigations and design new AI systems with model, data and workflow portability in mind.
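One way to start the dependency-mapping step is a simple workload inventory that flags proprietary-service dependencies without an exit plan. The data structure and the risk rule below are illustrative assumptions for a first-pass review, not a standard methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    """Minimal inventory record for one cloud workload (illustrative fields)."""
    name: str
    provider: str
    proprietary_services: list = field(default_factory=list)  # e.g. a managed AI API
    exit_plan: bool = False

def portability_risk(w: Workload) -> str:
    """Crude illustrative rule: proprietary dependencies with no exit plan = high risk."""
    if w.proprietary_services and not w.exit_plan:
        return "high"
    if w.proprietary_services:
        return "medium"
    return "low"

inventory = [
    Workload("analytics-pipeline", "HyperscalerA", ["managed-llm-api"]),
    Workload("web-frontend", "HyperscalerA", [], exit_plan=True),
]
for w in inventory:
    print(w.name, portability_risk(w))
```

Even a crude classification like this makes it easier to decide where to negotiate exit terms first and where multicloud or sovereign options are worth the extra cost.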

Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency


This article is an original analysis supported by the sources cited below.

EU rules reining in Big Tech will now target cloud services and AI, regulators say
Reuters report on the European Commission’s DMA review focus on cloud services and artificial intelligence.

Review highlights Digital Markets Act remains fit for purpose and has positive impact
European Commission press release on the first DMA review and its future enforcement priorities.

DMA Review Q&A
European Commission questions and answers on the first DMA review, including cloud and AI priorities.

The Digital Markets Act
European Commission overview of the DMA’s purpose, gatekeeper framework and core platform service approach.

DMA designated gatekeepers
European Commission portal listing designated DMA gatekeepers and their core platform services.

Commission launches market investigations on cloud computing services under the Digital Markets Act
European Commission press release announcing cloud investigations into AWS, Microsoft Azure and the DMA’s application to cloud markets.

Commission seeks feedback on measures to ensure interoperability with Google’s Android under the Digital Markets Act
European Commission announcement on proposed Android interoperability measures for competing AI services.

Google gets pointers from EU regulators on helping AI rivals access services
Reuters report on the Commission’s proposed Android measures and Google’s response.

Regulation (EU) 2022/1925 on contestable and fair markets in the digital sector
Official EUR-Lex text of the Digital Markets Act.

Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence
Official EUR-Lex text of the EU Artificial Intelligence Act.

Regulation (EU) 2023/2854 on harmonised rules on fair access to and use of data
Official EUR-Lex text of the EU Data Act.

Data Act
European Commission policy page explaining the Data Act, including switching between data-processing providers.

Cloud computing
European Commission policy page on cloud and edge objectives, the Digital Decade and the proposed Cloud and AI Development Act.

AI continent
European Commission page on the AI Continent Action Plan, AI factories, gigafactories and AI investment.

AI Factories
European Commission page explaining AI Factories, EuroHPC compute access and support for AI start-ups and SMEs.

Apply AI Strategy
European Commission page on sectoral AI adoption, technological sovereignty and SME support.

Cloud services market investigation
UK Competition and Markets Authority case page on the public cloud infrastructure services investigation.

Cloud services market investigation summary of final decision
CMA summary setting out findings on cloud competition, concentration, AI and market remedies.

Cloud services market study final report
Ofcom final report on the UK cloud services market study and competition concerns.

Cloud market share trends
Synergy Research Group analysis of global cloud infrastructure market shares and concentration.

GenAI helps drive quarterly cloud revenues to $119 billion as growth rate jumped yet again in Q4
Synergy Research Group analysis of 2025 cloud infrastructure spending and generative AI demand.

Free data transfer out to internet when moving out of AWS
AWS announcement on waiving data transfer-out charges for customers moving outside AWS.

New for the U.K. and EU: no-cost multicloud Data Transfer Essentials
Google Cloud announcement on no-cost multicloud data transfers for EU and UK customers.

Azure data transfer fees
Microsoft Azure documentation on at-cost data transfer for European customers in interoperable cloud scenarios.

Commission finds Apple and Meta in breach of the Digital Markets Act
European Commission announcement of DMA non-compliance decisions and fines against Apple and Meta.

Gatekeepers publish updated reports on DMA compliance
European Commission announcement on gatekeepers’ updated DMA compliance and profiling reports.