Eridu exits stealth with massive Series A to tackle AI networking limits
Eridu, a new entrant in the infrastructure space for large-scale artificial intelligence, has emerged from stealth with a more than $200 million Series A round aimed at breaking what it calls AI’s “network wall.” The financing is led by investment firm Socratic, with participation from renowned investor John Doerr, semiconductor leader MediaTek, and deep-tech backer Eclipse.
Targeting the bottleneck behind frontier AI models
As hyperscalers and frontier AI labs race to train ever-larger foundation models, the industry has poured billions into GPU clusters. Yet the company argues that networking, not compute, is rapidly becoming the limiting factor. This "network wall" emerges when thousands of accelerators must exchange data at extreme speed, overwhelming legacy data center networking architectures.
Eridu is designing a new class of GPU-scale networking tailored specifically for large-scale distributed training and inference. By rethinking how data moves between accelerators, storage, and memory, the startup aims to significantly reduce latency, increase bandwidth, and improve overall cluster efficiency.
Backers with deep hardware and cloud experience
The round's backers bring substantial experience in semiconductors, cloud infrastructure, and AI systems. Veteran investor John Doerr has long supported category-defining infrastructure companies, while MediaTek adds chip design and manufacturing expertise. Eclipse is known for funding complex, capital-intensive deep tech ventures.
With more than $200 million in fresh capital, Eridu is expected to scale its engineering team, build out reference deployments with leading hyperscalers, and validate its architecture with top-tier AI research labs. The company is positioning itself as a foundational player in the next generation of AI infrastructure, where networking performance will be as critical as raw compute power.
Implications for hyperscalers and AI labs
If Eridu succeeds in delivering GPU-scale networking at cloud scale, hyperscale cloud providers and advanced AI labs could train larger models faster and at lower total cost. This could accelerate progress in areas such as generative AI, autonomous systems, and scientific computing, while reshaping how data centers are architected for the AI era.