Alpamayo: open-source AI models and tools for safer, faster reasoning-based assisted driving

Published: 2026-01-12



CES – January 5, 2026, Pacific Time – NVIDIA unveiled the NVIDIA Alpamayo series of open-source AI models, simulation tools and datasets to advance the development of safe and reliable reasoning-based assisted driving vehicles.


Intelligent vehicles must operate safely under complex, ever-changing driving conditions. Rare, intricate scenarios known as long-tail cases remain one of the biggest hurdles for assisted driving systems. Traditional assisted driving architectures decouple perception from planning, which often limits scalability when the system is confronted with sudden or anomalous situations. While recent advances in end-to-end learning have produced significant breakthroughs, handling these long-tail extreme events still requires models capable of safe causal reasoning – especially when encountering scenarios beyond the scope of the model's training data.


The Alpamayo series introduces Chain-of-Thought (CoT)-based Vision-Language-Action (VLA) reasoning models, infusing human-like thinking into assisted driving decision-making. These systems can reason step by step through rare or novel scenarios, enhancing both driving performance and interpretability – a critical component in building safety and trust for intelligent vehicles. The core technology is underpinned by the NVIDIA Halos safety system.
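To make the idea concrete, the pairing of a planned trajectory with the reasoning chain that produced it might be modeled as a simple data structure. This is a hypothetical sketch for illustration only – the class, field names and rendering are assumptions, not NVIDIA's actual output schema:

```python
from dataclasses import dataclass

@dataclass
class CoTDrivingDecision:
    """Hypothetical container for a chain-of-thought driving decision.

    A CoT-based VLA model pairs its planned trajectory with the
    intermediate reasoning steps that led to it, so every decision
    can be inspected and explained after the fact.
    """
    reasoning: list[str]                   # ordered natural-language reasoning steps
    trajectory: list[tuple[float, float]]  # planned (x, y) waypoints in metres

def explain(decision: CoTDrivingDecision) -> str:
    """Render the reasoning chain as a numbered explanation."""
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(decision.reasoning))
    return f"Reasoning:\n{steps}\nWaypoints: {len(decision.trajectory)}"

# Example: a rare "ball rolls into the road" long-tail scenario.
decision = CoTDrivingDecision(
    reasoning=[
        "A ball has rolled into the road ahead.",
        "A child may follow the ball; treat the area as high risk.",
        "Reduce speed and prepare to stop.",
    ],
    trajectory=[(0.0, 0.0), (4.5, 0.0), (8.0, 0.1)],
)
print(explain(decision))
```

The point of the structure is interpretability: the trajectory alone says *what* the vehicle will do, while the attached reasoning chain records *why*.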


Jensen Huang, founder and CEO of NVIDIA, stated: “The ChatGPT moment for physical AI has arrived. Machines are now capable of understanding the real world, reasoning about it and acting on that reasoning. Robotaxis will be among the first to benefit. Alpamayo endows intelligent vehicles with reasoning capabilities, enabling them to handle rare scenarios, navigate complex environments safely and explain their driving decisions – laying the foundation for safe, scalable autonomous driving.”


A Complete, Open Ecosystem for Reasoning-Based Autonomous Driving


Alpamayo unifies three pillars – open models, simulation frameworks and datasets – into a single, open ecosystem, empowering any automotive developer or research team to build on its foundations.


Alpamayo is not a model deployed directly on vehicles. Instead, it serves as a large-scale teacher model that developers can optimize and distill into the core foundation of a complete assisted driving stack.
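The teacher/student pattern mentioned here is standard knowledge distillation: a compact student model is trained to match the large teacher's softened output distribution. The following is a minimal, generic sketch of the distillation loss – it illustrates the technique in general, not NVIDIA's training pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimising this pushes the compact student model to mimic the
    large teacher model's behaviour ("soft targets"). A temperature
    above 1 spreads probability mass so the student also learns the
    teacher's relative preferences among wrong answers.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero exactly when the student reproduces the teacher's distribution, and grows as the two diverge; in practice it is combined with a task loss on ground-truth labels.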


At CES, NVIDIA launched three key components:


① Alpamayo 1: the industry’s first CoT-based VLA reasoning model designed for the assisted driving research community, now available on Hugging Face. Built on a 10-billion-parameter architecture, the model generates driving trajectories from video input while exposing its reasoning path, clearly demonstrating the logic behind every decision. Developers can either distill Alpamayo 1 into a more compact runtime model for on-vehicle deployment or use it as the foundational architecture for building assisted driving development tools such as reasoning-based evaluators and automated annotation systems. Alpamayo 1 ships with open model weights and open-source inference scripts. Future models in the series will feature larger parameter counts, more refined reasoning capabilities, more flexible input/output methods and expanded commercial options.


② AlpaSim: a fully open-source, end-to-end simulation framework for high-fidelity assisted driving development, now publicly available on GitHub. The framework offers realistic sensor modeling, configurable traffic dynamics and a scalable closed-loop testing environment, supporting rapid validation and strategy optimization.
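"Closed-loop" here means the policy's own outputs feed back into the simulated world, unlike open-loop replay of logged data. The skeleton below sketches that pattern in a toy one-dimensional world – it is an illustration of the concept, not AlpaSim's actual API:

```python
import random

def closed_loop_rollout(policy, steps=50, seed=0):
    """Minimal closed-loop testing skeleton (a sketch of the pattern,
    not a real simulator): each step, the simulator feeds an
    observation to the driving policy, applies the returned
    acceleration, and advances the world state, so the policy's own
    decisions shape the situations it is evaluated on.
    """
    rng = random.Random(seed)
    position, speed = 0.0, 0.0
    log = []
    for _ in range(steps):
        obs = {
            "position": position,
            "speed": speed,
            "lead_gap": 30.0 + rng.uniform(-5.0, 5.0),  # toy noisy sensor reading
        }
        accel = policy(obs)                  # policy closes the loop
        speed = max(0.0, speed + accel * 0.1)
        position += speed * 0.1
        log.append((position, speed))
    return log

# Toy policy: accelerate at 1 m/s^2 until reaching a 10 m/s cruise speed.
cruise = lambda obs: 1.0 if obs["speed"] < 10.0 else 0.0
log = closed_loop_rollout(cruise)
```

The value of closing the loop is that a policy error compounds: a bad action changes the next observation, surfacing failure modes that replaying fixed logs can never expose.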


③ Open Physical AI Datasets: NVIDIA provides diverse, large-scale open datasets for assisted driving. These datasets contain over 1,700 hours of driving data, covering a wide range of geographic regions and environmental conditions, including the rare and complex real-world extreme scenarios essential for advancing reasoning architectures. The datasets are now open for use on Hugging Face.
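A typical first step with such a corpus is filtering clip-level metadata down to the rare conditions a reasoning model most needs to see. The record fields and condition labels below are illustrative assumptions, not the published dataset's schema:

```python
def select_long_tail(records, conditions):
    """Filter hypothetical driving-clip metadata to rare conditions.

    `records` is a list of dicts with illustrative fields ("id",
    "condition", "hours"); `conditions` is the set of condition
    labels to keep.
    """
    return [r for r in records if r["condition"] in conditions]

clips = [
    {"id": "a", "condition": "clear", "hours": 2.0},
    {"id": "b", "condition": "heavy_rain", "hours": 0.5},
    {"id": "c", "condition": "night_fog", "hours": 0.25},
]
rare = select_long_tail(clips, {"heavy_rain", "night_fog"})
print(sum(r["hours"] for r in rare))  # total hours of rare-condition data
```

Curating such long-tail subsets is what lets a reasoning model train disproportionately on the extreme scenarios the article highlights, rather than on the abundant easy miles.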


Together, these tools form a self-reinforcing development loop for building reasoning-based assisted driving stacks.