We have tried rule-based systems (they break in the real world), end-to-end deep learning (it hallucinates), and large language models (they lack physics). But a new architecture is emerging from the labs that might finally crack the code.
It is called DEVA-3.
Current AVs rely on "predictive models" that assume other drivers are rational. DEVA-3 simulates irrational behavior. It can predict the "jerk" who cuts across three lanes without a blinker because it has seen that episode 10,000 times in training data. Wayve and Ghost Autonomy are rumored to be testing DEVA-3 variants on public roads in London right now.
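To make the difference concrete, here is a minimal sketch of the idea, not DEVA-3's actual architecture: instead of committing to the single most "rational" trajectory, a world model samples many rollouts, so a rare-but-observed behavior like the three-lane cutter survives in the predicted distribution. All behavior names and probabilities below are illustrative assumptions.

```python
import random

# Hypothetical learned behavior distribution for a nearby driver.
# A rationality-assuming planner keeps only the argmax; a world model
# keeps the whole distribution, jerks included.
BEHAVIORS = {
    "keeps_lane": 0.90,          # the rational prior most planners assume
    "gentle_lane_change": 0.08,
    "cuts_three_lanes": 0.02,    # irrational, but present in training data
}

def sample_futures(n_rollouts: int, seed: int = 0) -> dict:
    """Monte-Carlo rollouts over the learned behavior distribution."""
    rng = random.Random(seed)
    counts = {behavior: 0 for behavior in BEHAVIORS}
    outcomes, weights = zip(*BEHAVIORS.items())
    for _ in range(n_rollouts):
        counts[rng.choices(outcomes, weights=weights)[0]] += 1
    return counts

counts = sample_futures(10_000)
# With 10,000 rollouts, the cut-across maneuver shows up often enough
# that a downstream planner has to budget for it.
```

The point of the sketch: a planner that consumes only the most likely future never plans for the jerk; one that consumes the full sample set does.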
If you work in autonomy, robotics, or simulation, stop fine-tuning LLMs. Start looking at world models.
For the last decade, the holy grail of robotics and autonomous driving has been a simple question: How do we teach machines to predict the future?
Researchers showed the model a snow-covered road it had never encountered and asked: "What happens next?"
The model hallucinated cars sliding, pedestrians walking cautiously, and brake lights flashing. It had never seen snow, but it had learned friction and low-traction behavior from dry roads. It generalized the concept of slipperiness.
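That kind of generalization is possible when a model learns friction as a continuous variable rather than memorizing scenes. A back-of-the-envelope sketch, using textbook constant-deceleration kinematics and assumed friction coefficients (nothing here comes from DEVA-3 itself):

```python
# Stopping distance d = v0^2 / (2 * mu * g): a model that fit this
# relationship on dry roads (mu around 0.8) can extrapolate to snow
# (mu around 0.2) without ever having seen a snowy frame.

def braking_distance(v0_mps: float, mu: float, g: float = 9.81) -> float:
    """Stopping distance in meters for initial speed v0 and friction mu."""
    return v0_mps ** 2 / (2 * mu * g)

dry = braking_distance(v0_mps=20.0, mu=0.8)   # regime seen in training
snow = braking_distance(v0_mps=20.0, mu=0.2)  # never observed, but implied
print(f"dry: {dry:.1f} m, snow: {snow:.1f} m")
```

Halving the friction coefficient twice quadruples the stopping distance; "slipperiness" falls straight out of the learned relationship, which is the generalization the snow experiment demonstrates.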