News

Meta today introduced V-JEPA 2, a 1.2-billion-parameter world model trained primarily on video to support robotic systems.
During Nvidia GTC Paris at VivaTech, the company’s technology conference, Nvidia showcased Nvidia Drive, an autonomous ...
Meta has released V-JEPA 2, a powerful new AI world model designed to help robots and self-driving cars better understand and ...
Model helps machines reason like humans using raw video data, no labels required. Meta has launched a new artificial ...
Meta challenges rivals with V-JEPA 2, its new open-source AI world model. By learning from video, it aims to give robots physical common sense for advanced, real-world tasks.
One popular technique is to use pre-trained LLMs and VLMs as components in modular systems for task planning ... action models (VLAs) from the ground up to directly generate robot control actions.
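The modular approach described above can be sketched roughly as follows. This is a minimal illustration, not any specific system: `llm_plan` is a hypothetical stub standing in for a real call to a pre-trained LLM/VLM planner, and the skill names and their outputs are invented for the example.

```python
from typing import Callable, Dict, List

def llm_plan(task: str) -> List[str]:
    """Hypothetical planner stub; a real system would query a pre-trained
    LLM or VLM here to decompose the task into primitive steps."""
    canned_plans = {
        "put the cup in the sink": ["locate cup", "grasp cup", "move to sink", "release"],
    }
    return canned_plans.get(task, [])

# Low-level skills the planner can sequence; each stands in for a robot
# control primitive and returns a status string for the log.
SKILLS: Dict[str, Callable[[], str]] = {
    "locate cup": lambda: "cup at (0.4, 0.2)",
    "grasp cup": lambda: "gripper closed",
    "move to sink": lambda: "arm at sink",
    "release": lambda: "gripper open",
}

def run_task(task: str) -> List[str]:
    """Execute each planned step with its low-level skill, collecting a log."""
    return [SKILLS[step]() for step in llm_plan(task)]

log = run_task("put the cup in the sink")
print(log)
```

A VLA model, by contrast, would replace both the planner and the skill table with a single network mapping observations and instructions directly to control actions.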
Given the costs and risks of training robots directly in physical environments, roboticists usually use simulated environments to train their control models before deploying them in the real world.
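The sim-before-real workflow can be illustrated with a toy example. The simulator, controller, and random-search trainer below are all invented for the sketch; real pipelines use full physics simulators and learned policies, but the structure is the same: evaluate candidate controllers entirely in simulation, then deploy only the best one.

```python
import random

class PointSim:
    """Toy simulator: a 1-D 'robot' must reach a target position."""
    def __init__(self, target: float = 5.0, horizon: int = 20):
        self.target, self.horizon = target, horizon

    def rollout(self, gain: float) -> float:
        """Run one simulated episode with a proportional controller;
        return the accumulated distance-to-target cost (lower is better)."""
        pos, cost = 0.0, 0.0
        for _ in range(self.horizon):
            action = gain * (self.target - pos)   # proportional control
            pos += max(-1.0, min(1.0, action))    # clipped actuator
            cost += abs(self.target - pos)
        return cost

def train_in_sim(sim: PointSim, trials: int = 200, seed: int = 0):
    """Random search over controller gains; all trials run in simulation,
    so no physical hardware is risked during training."""
    rng = random.Random(seed)
    best_gain, best_cost = 0.0, sim.rollout(0.0)
    for _ in range(trials):
        gain = rng.uniform(0.0, 2.0)
        cost = sim.rollout(gain)
        if cost < best_cost:
            best_gain, best_cost = gain, cost
    return best_gain, best_cost

sim = PointSim()
gain, cost = train_in_sim(sim)
print(f"best gain {gain:.2f}, sim cost {cost:.1f}")
```

Only after this loop converges would the resulting controller be transferred to a real robot, which is where the sim-to-real gap becomes the central challenge.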