by Jeremy Fraenkel, CEO & Co-founder, Fundamental
Introducing First Principles

The gap between what AI researchers know and what the rest of the world hears about is getting wider. New architectures, new scaling results, and new claims about what's possible seem to arrive weekly. But the conversations that matter most still tend to happen behind closed doors: researchers challenging assumptions, questioning dominant narratives, wrestling with what these systems can and can't actually do. Those exchanges happen in labs, over coffee, between people who've spent years thinking deeply about these problems.
We want to change that. We're launching First Principles, a video series where leaders across Fundamental sit down with notable figures in their respective fields for the kind of conversation that usually stays private. Technically grounded, honest, and unscripted. No slides, no talking points. Just two experts going deep on the ideas that are actually shaping where this field goes next.
The series begins with research. Our Chief Science Officer, Marta Garnelo, will host conversations with leading AI researchers, starting with an episode that sets the tone for everything First Principles aims to be.
Episode One: Wojciech Czarnecki
Marta's first guest is Wojciech Czarnecki, a researcher best known for his work on multi-agent reinforcement learning, including AlphaStar, DeepMind's landmark StarCraft II project. Marta and Wojciech worked together at DeepMind for close to a decade, and this conversation picks up threads they've been pulling on for years.
They cover a lot of ground. First, Wojciech makes a compelling case for why the field's obsession with scale is missing something important: the difference between what a neural network can approximate and what it can represent. Even something as basic as multiplication is beyond what a standard MLP can represent exactly. A ReLU MLP, for instance, computes a piecewise linear function, while x·y is quadratic, so the network can drive its error arbitrarily low on a bounded region but never hit zero. That distinction has real consequences for how models extrapolate, generalize, and ultimately perform on problems that matter.
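To make the distinction concrete, here is a minimal sketch of the idea, assuming PyTorch. The architecture, training range, and test points are illustrative choices of ours, not from the episode: a small ReLU network trained to multiply two numbers gets close inside its training range, but its piecewise linear structure shows up the moment you step outside it.

```python
# A minimal sketch: a ReLU MLP can approximate x * y on a bounded region,
# but cannot represent it exactly, and extrapolates poorly beyond the
# region it was trained on.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small ReLU MLP: two inputs (x, y) -> one output, meant to learn x * y.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on pairs drawn uniformly from [-1, 1]^2.
for step in range(5000):
    xy = torch.rand(256, 2) * 2 - 1
    target = (xy[:, 0] * xy[:, 1]).unsqueeze(1)
    loss = nn.functional.mse_loss(model(xy), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    inside = torch.tensor([[0.5, 0.5]])   # within the training range
    outside = torch.tensor([[3.0, 3.0]])  # outside it
    # Near 0.25, but never exactly: the loss approaches zero without reaching it.
    print(model(inside).item(), "vs true", 0.25)
    # Typically far from 9.0: ReLU nets extrapolate linearly, x * y does not.
    print(model(outside).item(), "vs true", 9.0)
```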
From there, they dig into multi-agent reinforcement learning, not as a problem to be solved, but as a tool. Wojciech describes how setting up the right kind of competitive, self-play dynamic creates a natural curriculum that sidesteps some of reinforcement learning's hardest challenges around exploration. As he puts it: "you'll never learn chess if your only opponent is Garry Kasparov." But if you can play both sides and grow alongside yourself, something powerful emerges.
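A toy sketch can make the curriculum idea tangible. Everything here is an illustrative assumption of ours rather than anything from the episode: rock-paper-scissors stands in for the game, a simple gradient update stands in for the learning algorithm, and the opponent is a periodically refreshed snapshot of the agent's own past self, so the opposition is never Kasparov-grade, just one step behind.

```python
# A toy self-play loop (NumPy only): the agent trains against a frozen copy
# of its recent self, so the difficulty of its "curriculum" grows with its
# own skill.
import numpy as np

# Rock-paper-scissors payoff for the row player.
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([1.0, 0.0, 0.0])  # start biased toward "rock"
opponent = logits.copy()            # frozen snapshot of a past self
avg_policy = np.zeros(3)

for step in range(3000):
    p, q = softmax(logits), softmax(opponent)
    # Expected payoff of each action against the frozen past self.
    action_values = PAYOFF @ q
    # Exact gradient of p . (A q) w.r.t. the logits: shift probability toward
    # actions that beat the opponent the agent used to be.
    logits += 0.1 * p * (action_values - p @ action_values)
    avg_policy += softmax(logits)
    if step % 100 == 99:
        opponent = logits.copy()    # the curriculum hardens as the agent improves

# Individual iterates cycle (each new self counters the last), but their
# average drifts toward the uniform equilibrium [1/3, 1/3, 1/3].
print(avg_policy / 3000)
```

Even in this tiny setting the dynamic Wojciech describes is visible: the agent never faces an opponent far beyond its level, yet the chase against its own past self keeps pushing it forward.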
They wrap with a topic close to Wojciech's heart: what happens when machine learning meets video games, not as a research testbed, but as a way to make games themselves fundamentally better. He paints a picture of game worlds that adapt and evolve daily, where NPCs learn to move naturally through complex terrain and no two players' experiences are quite the same. Importantly, Wojciech feels strongly that this is not the "AI replaces artists" story. It's an opportunity to train agents that make environments feel less scripted and more alive.
Watch the full episode below.
First Principles reflects something core to how we think at Fundamental: the most important advances in AI come from asking the right questions at the foundational level. Those questions deserve more airtime, openly, rigorously, and in public. Research is just the beginning: future episodes will expand across the business, with Fundamental leaders hosting conversations in engineering, applied AI, and commercial strategy. We hope you'll join the conversation.