The European Commission has placed the EU’s delay in developing and adopting AI at the heart of the continent’s competitiveness problem. It has launched several initiatives to address this, including the buildout of AI infrastructure backed by a substantial amount of public funding.
Specifically, EUR 10 billion (split between the EU and hosting Member States) has been invested in AI factories, large data centres optimized for AI training. An additional EUR 20 billion in public-private funding (a third of it public) has been committed to building AI gigafactories, even larger clusters of AI-optimized computers. This infrastructure is meant to boost cutting-edge AI development and finally bring Europe to the forefront of the global AI race.
There’s no doubt that Europe needs more compute. However, this is no magic pill for its competitiveness problem, and the strategy raises many uncomfortable questions. Are the (giga)factories designed to support the current AI ecosystem? Will they help Europe catch up? Can they maintain the continent’s technological sovereignty agenda? What research priorities will they advance?
Here’s what we at CEPS found in the eye of the storm.
A doppelganger network of hubs
The AI factories are envisioned as dynamic ecosystems where compute, data, and talent converge. However, their locations prioritize a wide geographic distribution and proximity to established, energy-efficient infrastructure—especially existing EuroHPC sites—over existing AI ‘hubs of excellence’.
This approach makes sense on some levels. AI training can be done remotely, so placing compute infrastructure next to talent isn't strictly necessary. A site chosen to optimize for available land, affordable energy, and pre-existing high-performance computing may thus serve each country's interests well.
Yet, the Commission’s narrative about creating vibrant ecosystems that attract talent is fundamentally misguided. Talent won’t migrate to the factories; the factories must come to the talent. That’s why the network of AI supercomputers should be federated and allow for seamless remote access, fully supporting European researchers and innovators.
Where will the energy come from?
Electricity and water costs have become major bottlenecks for building data centres, and they weigh even more heavily on where the gigafactories should be located than on the smaller factories. While spreading the factories across many countries may be sustainable, this is conditional on sites optimizing for energy efficiency within each country's borders. Even so, this approach may fall short of meeting the energy demands of the 100,000-chip supercomputers.
To reconcile competitiveness with sustainability, the EU should concentrate the gigafactories in countries that offer the cleanest power at the lowest price. The energy question also challenges the provisional plans to use gigafactories for inference, the process of running a trained AI model. Inference requires low latency and proximity to end-users, so gigafactories sited in remote, energy-rich areas may be poorly placed to serve it.
Avoiding the Nvidia dependency trap
Perhaps the most striking feature of the gigafactory plan is the almost exclusive reliance on Nvidia for chips. This dependency wouldn’t be fatal for the factories’ sovereignty—provided factory users have full control over the software orchestration.
Unfortunately, operational control may not be guaranteed if AI models are developed using CUDA, Nvidia's proprietary software platform for running AI workloads on its GPUs. As an intermediary layer of the AI stack, CUDA can lock subsequent model development into Nvidia hardware.
The solution goes beyond just diversifying GPU suppliers; the factories should prioritize open-source solutions linked to other tools for public benefit. European investments should also focus on alternative computing approaches that don’t rely solely on GPUs, along with strategic partnerships for securing increasingly crucial components like memory chips. Otherwise, this initiative risks replicating existing tech dependencies.
The wrong AI at the wrong scale
At its core, the AI (giga)factory plan reflects a leadership problem. Europe seems to be emulating big tech’s approach to AI, along with its ultimate goal of achieving artificial general intelligence. Yet this comes amid daunting gaps in private investment and a very different social model.
Moreover, the massive investments flooding into generative AI in the U.S. and China make it challenging for Europe to catch up. However, the technology’s continued unreliability and its increasing energy demands present Europe with an opportunity to compete through alternative solutions that align with its values—potentially involving neuro-symbolic algorithms, neuromorphic chips, or smaller, specialized models.
Crucially, AI factories must incorporate technical work on AI safety. Whatever trajectory AI research takes, unreliable and unsustainable AI cannot be deployed at scale, whether in business, critical infrastructure, or public services.
Fine-tuning the gigafactory plan
To ensure that the gigafactories boost Europe’s competitiveness and sovereignty, a much sharper strategy is essential.
First, serve the compute to the talent. The network of gigafactories must function as a cloud, with mandated interoperability and collaboration. Infrastructure sites on their own will not replace existing research and innovation hubs.
Second, reconcile competitiveness with sustainability. Gigafactories should be concentrated in regions with abundant renewable energy, powered by competitively priced electricity. Energy consumption remains a significant concern.
Third, re-evaluate whether the gigafactories should support both training and inference. If their envisioned scale is fixed, the resulting siting choices could complicate inference or require additional investment in fibre connectivity. These trade-offs should be recognized early on.
Fourth, ensure that the infrastructure is future-proof. Secure GPU supply from multiple vendors and invest in partnerships to guarantee the availability of crucial components. Prioritize open-source infrastructure solutions that pool resources from multiple vendors—this will be critical for reducing dependencies.
Finally, define strategic goals and long-term missions. What specific AI capabilities does Europe need? What alternative research directions hold promise? How can we build trustworthy solutions? The recently launched Apply AI strategy and the newly announced Frontier AI initiative could steer gigafactories toward breakthroughs in safety and reliability necessary for widescale AI adoption.
At a time when the EU is under pressure to become competitive and sovereign in AI, it should abandon grandiose visions of AGI leadership. The best Europe can aim for is an alternative AI growth model, one that emphasizes reliability and aligns with the true needs of industries, public services, and researchers.
About the Author:
Nicoleta Kyosovska is a Research Assistant in the Global Governance, Regulation, Innovation and Digital Economy (GRID) Unit, working on Artificial Intelligence.