General Intuition Secures $134M to Train AI with Game Clips

A new AI lab is leveraging billions of video game clips to teach agents spatial reasoning.

General Intuition, a new AI research lab spun out from Medal, has raised $133.7 million in seed funding. The company is using a vast dataset of video game clips to train AI agents in spatial-temporal reasoning, aiming for applications in gaming, drones, and robotics.

By Mark Ellison

October 16, 2025

4 min read

Why You Care

Ever wonder if playing video games could actually make AI smarter? Imagine an AI that understands how objects move through space the way you do. This isn't science fiction anymore. General Intuition, a new AI lab, just secured a massive $133.7 million in seed funding, according to the announcement. The lab is using billions of video game clips to teach AI agents spatial reasoning. Why should you care? This work could lead to smarter robots and drones, affecting your daily life in unexpected ways.

What Actually Happened

Medal, a popular platform for sharing video game clips, has launched a new frontier AI research lab named General Intuition. The new venture aims to build foundation models and AI agents that understand how objects and entities move through space and time, a concept known as spatial-temporal reasoning, as detailed in the blog post. General Intuition believes Medal's dataset, which grows by 2 billion videos per year from 10 million monthly active users, is uniquely suited to this training. The company claims this dataset surpasses alternatives like Twitch or YouTube for training agents. The startup raised $133.7 million in seed funding, with Khosla Ventures and General Catalyst leading the round, as mentioned in the release.

Why This Matters to You

This new approach to AI training could have significant implications for various industries. If AI agents can understand spatial reasoning better, they can perform complex tasks more effectively. Think of it as giving AI a more intuitive grasp of the physical world. For example, autonomous vehicles could navigate unpredictable environments with greater precision. Drones used in search and rescue operations could identify and track targets more reliably.

Potential Applications of Spatial-Temporal AI

Application Area | Benefit for You
Gaming | More realistic non-player characters (NPCs) and immersive virtual worlds
Drones | Enhanced navigation for delivery, inspection, and emergency services
Robotics | Smarter industrial robots and more capable home assistants
Autonomous Vehicles | Improved safety and decision-making in complex driving scenarios

Pim de Witte, CEO of Medal and General Intuition, explained the unique value of gaming data. He stated, "When you play video games, you essentially transfer your perception, usually through a first-person view of the camera, to different environments." In other words, the AI learns from a human perspective. How might more intuitive AI agents change your interactions with technology in the next five years?

The Surprising Finding

Here’s an interesting twist: General Intuition’s model can already understand environments it wasn’t specifically trained on. What’s more, it can correctly predict actions within these new environments, the team revealed. This capability comes purely from visual input. The agents only see what a human player would see. They move through space by following controller inputs. This is surprising because AI often struggles with generalization to new, unseen data. The fact that gamers tend to upload very positive or negative examples also creates a valuable training dataset. This “selection bias towards precisely the kind of data you actually want to use for training work” is a key advantage, according to Pim de Witte. This challenges the common assumption that AI needs perfectly balanced, curated datasets to learn effectively.

What Happens Next

General Intuition plans to use its new funding to expand its team of researchers and engineers. The goal is to train a general agent that can interact with the world around it, the company reports. Initial applications are expected in gaming and search-and-rescue drones, which suggests we might see practical implementations within the next 12 to 18 months. Imagine a drone that can autonomously navigate a collapsed building while understanding its unstable surroundings; for you, this means potentially safer and more efficient emergency responses. Because the approach relies only on visual input and controller commands, it can naturally transfer to physical systems such as robotic arms, drones, and autonomous vehicles, as detailed in the blog post. That opens up possibilities for more automation across many sectors, and your future interactions with smart devices could become much more intuitive.
