Building the World’s Most Powerful Foundation Models for Tabular Data

Our research is focused on a foundational challenge in machine learning: building scalable, general-purpose models for real-world tabular data.

We prioritize the following themes:

Meta-learning
Architecture research
Generative modeling
Large-scale systems
Code generation
Structured data

WHITEPAPER

Developing Foundation Models for Real-World Tabular Data

Many landmark breakthroughs in supervised deep learning can be distilled into tabular prediction problems. Historically, however, each advancement has required immense, specialized resources.

We propose a paradigm shift: the development of a universal predictor that leverages shared experience across billions of examples to adapt to novel tasks via in-context learning.

Our objective is to build a foundation model for structured data where previous breakthroughs become mere queries to a single system. In this paper, we argue that current foundation model architectures are ill-suited for this task and outline our approach to solving it. This work serves as the research manifesto for Fundamental.
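
The in-context framing above can be made concrete with a small, purely illustrative sketch. The code below is not Fundamental's model: the distance-weighted voting is a hypothetical stand-in for the forward pass of a learned foundation model, and the function name is invented for illustration. What it shows is the interface the abstract describes: labelled context rows and unlabelled query rows go in, predictions come out, and adaptation to a new task happens entirely through the context rather than through per-task training or gradient updates.

import numpy as np

def predict_in_context(X_context, y_context, X_query, temperature=1.0):
    """Toy stand-in for an in-context tabular classifier.

    A real foundation model would replace the distance-weighted voting
    below with a learned forward pass, but the interface is the same:
    labelled context rows plus unlabelled query rows in, predictions out,
    with no per-task training or gradient updates.
    """
    X_context = np.asarray(X_context, dtype=float)
    X_query = np.asarray(X_query, dtype=float)
    y_context = np.asarray(y_context)
    classes = np.unique(y_context)

    predictions = []
    for x in X_query:
        # Squared Euclidean distance from the query row to every context row.
        d = np.sum((X_context - x) ** 2, axis=1)
        # Softmax over negative distances (shifted for numerical stability):
        # closer context rows receive more weight.
        w = np.exp(-(d - d.min()) / temperature)
        w /= w.sum()
        # Sum the weights per class and predict the heaviest class.
        scores = np.array([w[y_context == c].sum() for c in classes])
        predictions.append(classes[np.argmax(scores)])
    return np.array(predictions)

# Usage: a tiny synthetic task handled purely through its context.
rng = np.random.default_rng(0)
X_ctx = rng.normal(size=(40, 3))
y_ctx = (X_ctx[:, 0] + X_ctx[:, 1] > 0).astype(int)
X_new = rng.normal(size=(5, 3))
print(predict_in_context(X_ctx, y_ctx, X_new))

The point of the sketch is the call signature rather than the voting rule: replacing the toy heuristic with a large network pretrained across many tables is what would turn this interface into the kind of universal predictor described above.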

Authored by

Marta Garnelo

Chief Science Officer

Research interests include meta-learning, multi-agent reinforcement learning, and generative modelling.

Wojciech Marian Czarnecki

Founding Advisor

Research interests include learning theory, deep reinforcement learning, and open-ended learning systems.

Meet Some of the Team

Powering Fundamental

Kevin Scaman, PhD

ML Researcher

Before joining Fundamental, Kevin was a research scientist at Inria Paris and a part-time Associate Professor at École Polytechnique. His research in machine learning spans from theoretical advances in key aspects of deep learning, including robustness, decentralized learning, and training via non-convex optimization, to practical implementations of graph neural networks extending their expressive power and robustness.

Yuval Azoulay

Founding Engineer

Before Fundamental, Yuval built and operated systems that power modern AI in production, from core infrastructure and performance-sensitive runtimes to user-facing product experiences.
At AI21 Labs, he worked on training and serving AI models, where he operated the GPU fleets that powered them and owned the scheduling system behind the core training infrastructure.

Alexandre Perez, PhD

ML Researcher

Before Fundamental, Alex spent five years in academic machine learning research at Inria and McGill University. His work spanned prediction with missing values, probabilistic modeling, uncertainty quantification, and model evaluation. As a visiting researcher at Stanford University, he extended this work to decision-making under uncertainty.

Víctor Vila

MLOps Engineer

Prior to joining Fundamental, Víctor progressed from Data Scientist to MLOps Team Lead in the logistics sector, where he built demand forecasting models and architected the infrastructure to serve them at scale. He later joined Huckleberry Labs to own company-wide ML operations, driving personalized child sleep solutions via tabular models, reinforcement learning, and large language models.

Research Environment

Fundamental is built around rigorous research, careful experimentation, and long-term technical ambition. We aim to create an environment where foundational questions can be explored deeply, and where research transitions thoughtfully into real-world systems. 

Crucially, we care as much about who we are as what we build; we’re a tight-knit team that prioritizes mutual respect, kindness, and a shared sense of purpose.

Join Us

We are looking for exceptional researchers and engineers who identify with our vision and values. For researchers interested in foundational problems at the intersection of machine learning, structured data, and decision systems, Fundamental is a place to work on problems that matter.

First Principles

A series of interviews between our Chief Science Officer and key players in the research community.

Beyond Bigger Models: From StarCraft to Representation

Marta Garnelo, Chief Science Officer
Wojciech Czarnecki, Founding Advisor

Fundamental Technologies Inc.

Copyright © 2026

All rights reserved
