
Panthalassa raises $140M to put AI data centers in the ocean
Oregon-based Panthalassa closes $140M Series B led by Thiel, with Doerr, Benioff, and Levchin on the cap table. The plan: floating compute nodes powered by ocean waves, cooled by ocean water.
Portland-based Panthalassa closed a $140M Series B led by Peter Thiel, with John Doerr, Marc Benioff's TIME Ventures, and Max Levchin's SciFi Ventures participating [GeekWire][press release].
── What shipped ──
Panthalassa, founded in 2016, builds floating offshore compute platforms. The pitch is two problems solved at once: ocean-wave kinetic energy for power, and cold ocean water for cooling. The platforms run inference on already-trained AI models — not training, which has different power and bandwidth needs [Tom's Hardware].
The Series B funds a pilot manufacturing facility near Portland. The first deployment, the Ocean-3 pilot node series, targets the northern Pacific in 2026, with commercial deployments planned for 2027 [press release].
── Why it matters ──
Data centers are the rate-limiting input to AI scaling. The bottleneck stack is land, power interconnect, water, and grid permitting — every one of which has multi-year lead times in major US markets. Panthalassa's bet is that the ocean is a free, mostly unregulated power-and-cooling environment that can be operated outside terrestrial grid politics.
Three honest questions:
- Bandwidth. Inference is latency-sensitive. Subsea fibre is well-understood, but offshore-to-shore links add round-trip time. Panthalassa needs to show it can stay within the latency budget for non-trivial inference loads.
- Maintenance. Hardware in salt-water environments has historically been a graveyard. The capex story works only if the lifecycle math survives ocean conditions.
- Permitting. "Mostly unregulated" understates the reality. International maritime, environmental, and shipping-lane rules are intricate and slow.
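The latency question lends itself to a rough back-of-envelope check. A sketch of the arithmetic, where the offshore distances and the fibre refractive index are illustrative assumptions rather than figures from Panthalassa:

```python
# Back-of-envelope: extra round-trip time added by an offshore-to-shore
# fibre hop. All inputs are illustrative assumptions, not Panthalassa figures.

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47        # typical refractive index of silica optical fibre

def added_rtt_ms(offshore_km: float) -> float:
    """Round-trip propagation delay over `offshore_km` of subsea fibre, in ms."""
    speed_km_s = C_VACUUM_KM_S / FIBRE_INDEX  # ~204,000 km/s in glass
    return 2 * offshore_km / speed_km_s * 1000

for km in (50, 200, 500):
    print(f"{km:>4} km offshore -> ~{added_rtt_ms(km):.2f} ms added RTT")
```

Propagation alone is small: even 500 km of fibre adds under 5 ms of round trip, well inside a typical interactive-inference budget. The open question is the rest of the path, such as landing-station hops, queuing, and weather-exposed link reliability, which this sketch deliberately ignores.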
── Editor's take ──
The investor list does most of the signalling. Thiel-Doerr-Benioff-Levchin together is not a casual cap table; it is a specific bet that AI infrastructure scaling cannot continue on land at current trajectories. The pilot in 2026 is the real test. If Ocean-3 holds latency under a useful threshold for paying inference customers, this stops being a curiosity. If it doesn't, this is a footnote.