CVPR 2026

Workshop on Autonomous Driving

Wednesday, June 3, 2026

Colorado Convention Center, Denver, CO

About

The CVPR 2026 Workshop on Autonomous Driving (WAD) brings together leading researchers and engineers from academia and industry to discuss the latest advances in autonomous driving. Now in its 9th year, the workshop has been continuously evolving with this rapidly changing field and now covers all areas of autonomy, including perception, behavior prediction and motion planning. In this full-day workshop, our keynote speakers will provide insights into the ongoing commercialization of autonomous vehicles, as well as progress in related fundamental research areas. Furthermore, we will host a series of technical benchmark challenges to help quantify recent advances in the field, and invite authors of accepted workshop papers to present their work.

News

  • [Apr 8] The workshop will take place on Wednesday, June 3.
  • [Apr 1] The KITScenes LongTail Challenge is now online.
  • [Mar 27] The 2026 Argoverse Scene Flow challenge is now online.
  • [Mar 23] Final decisions have been released to authors, and accepted papers are now listed on the website.
  • [Mar 20] Final decisions will now be released on Monday, March 23 to accommodate reviewing extensions.
  • [Mar 3] Our paper track is now closed. Thanks to everyone submitting their work!
  • [Feb 24] We have extended the workshop paper submission deadline to Monday, March 2, 2026.
  • [Feb 23] The 2026 Argoverse Scenario Mining challenge is now online.
  • [Jan 29] We released our call for papers. Papers are due by Friday, February 27, 2026.
  • [Dec 20] The workshop got accepted. More updates to follow soon.
Call for Papers

Important Dates

  • Workshop paper submission deadline: Monday, March 2, 2026 (23:59 PST) (extended from Friday, February 27, 2026)
  • Notification to authors: Monday, March 23, 2026 (extended from Friday, March 20, 2026)
  • Camera ready papers and copyright forms due: Friday, April 10, 2026

Topics Covered

We invite submissions of original research contributions in machine perception, computer vision, prediction, planning and simulation related to autonomous vehicles, such as (but not limited to):

  • Foundational models for autonomous driving.
  • Vision language models (VLMs) and large language models (LLMs) for solving autonomous vehicle related tasks such as prediction or planning.
  • Autonomous navigation and exploration based on camera, laser, radar or related measurements.
  • Embodied AI for autonomous driving.
  • Sensor fusion and multi-modal perception algorithms for scene understanding.
  • Bird’s eye view methods for autonomous driving, such as BEV-based 3D detection, BEV segmentation, occupancy grids, HD-maps, and topological lane graphs.
  • Vision-based driving assistance, driver monitoring and advanced interfaces.
  • Sensor simulation, neural rendering / NeRFs, 3D Gaussian Splatting, generative models for 3D assets or driving environments.
  • Diffusion models for prediction and planning.
  • Mapless autonomous driving.
  • Cooperative perception and planning based on vehicle-to-everything (V2X) / vehicle-to-vehicle communication.
  • Transfer learning and domain adaptation in the autonomous vehicle domain.
  • Simulation for autonomous driving.
  • Online sensor calibration.
  • SLAM and 3D reconstruction algorithms.
  • Validation and interpretability of autonomous systems.
  • Adversarial learning, adversarial attacks, robustness and handling of uncertainty in autonomous systems.

Presentation Guidelines

All accepted papers will be presented as posters. The guidelines for the posters are the same as at the main conference.

Submission Guidelines

  • We solicit short papers on autonomous vehicle topics.
  • Submitted manuscripts should follow the CVPR 2026 paper template.
  • The page limit is 8 pages (excluding references).
  • We do not accept dual submissions.
  • Submissions will be rejected without review if they:
    • exceed 8 pages (excluding references)
    • violate the double-blind policy or the dual-submission policy
  • Accepted papers will be linked on the workshop webpage and included in the main conference proceedings.
  • Papers will be peer reviewed under a double-blind policy and must be submitted online.

Submission Instructions

Submit your papers through CMT: https://cmt3.research.microsoft.com/WAD2026

Acknowledgement

The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.

Tentative Schedule

Follow the livestream on the CVPR Virtual Website. Recordings will be published after the workshop.

09:15am – 09:30am  Opening Remarks
09:30am – 10:00am  Keynote 1 (Title: To be announced)
10:00am – 10:30am  Keynote 2 (Title: To be announced)
10:30am – 11:00am  CVPR AM Coffee Break
11:00am – 11:30am  Keynote 3 (Title: To be announced)
11:30am – 12:00pm  Dataset Challenges
12:00pm – 01:30pm  Lunch Break & Poster Session (Poster Location: To be announced)
01:30pm – 02:00pm  Keynote 4 (Title: To be announced)
02:00pm – 02:30pm  Keynote 5 (Title: To be announced)
02:30pm – 03:00pm  CVPR PM Coffee Break
03:00pm – 03:30pm  Keynote 6 (Title: To be announced)
03:30pm – 04:00pm  Keynote 7 (Title: To be announced)
04:00pm – 05:00pm  Lightning Talks (To be announced)
05:00pm – 05:05pm  Closing Remarks
Challenges

Please note: Challenges are not directly affiliated with the workshop. If you have any questions regarding a dataset challenge or encounter any issues, please contact the challenge organizers directly.

Argoverse Scenario Mining and LiDAR Scene Flow Challenges

The workshop will host the Argoverse 2026 challenges for Scenario Mining and LiDAR Scene Flow. To participate and for more information, visit the Argoverse website.

KITScenes LongTail Challenge

The workshop will host the KITScenes LongTail Challenge, which focuses on the few-shot generalization of end-to-end driving models (e.g., VLAs and VLMs) in long-tail scenarios. The dataset is available on Hugging Face. Further details on prizes, metrics and helper functions can be found in the submission space.

Accepted Papers

BePo: Dual Representation for 3D Occupancy Prediction

Authors: Yunxiao Shi, Hong Cai, Jisoo Jeong, Yinhao Zhu, Shizhong Han, Amin Ansari, Fatih Porikli

CCLSTM: Coupled Convolutional Long-Short Term Memory Network for Occupancy Flow Forecasting

Authors: Peter Lengyel

CogAD: Cognitive-Hierarchy Guided End-to-End Autonomous Driving

Authors: Zhennan Wang, Jianing Teng, Canqun Xiang, Kangliang Chen, Xing Pan, Lu Deng, Weihao Gu

Edge-Efficient Vision-Language Models for Autonomous Driving Using Distillation and RAG-Based Connectors

Authors: Alexandra Chiu, Tanvi Aggarwal, Qidao Lian, Sabyasachi Gupta, Kevin Nowka

InCaRPose: In-Cabin Relative Camera Pose Estimation Model and Dataset

Authors: Felix Stillger, Lukas Hahn, Frederik Hasecke, Tobias Meisen

Localization-Guided Foreground Augmentation in Autonomous Driving

Authors: Jiawei Yong, Deyuan Qu, Qi Chen, Kentaro Oguchi, Shintaro Fukushima

R3D2: Realistic 3D Asset Insertion via Diffusion for Autonomous Driving Simulation

Authors: William Ljungbergh, Bernardo Taveira, Wenzhao Zheng, Adam Tonderski, Chensheng Peng, Fredrik Kahl, Christoffer Petersson, Michael Felsberg, Kurt Keutzer, Masayoshi Tomizuka, Wei Zhan

SafeDrive: Improving Adverse-Weather Robustness in Autonomous Driving via Geometry-Aware Diffusion Augmentation

Authors: Syeda Fiza Rubab, Arslan Abdul Ghaffar, Ingyu Lee, Gyu Sang Choi

Traffic Scene Generation from Natural Language Description for Autonomous Vehicles with Large Language Model

Authors: Bo-Kai Ruan, Hao-Tang Tsui, Yung-Hui Li, Hong-Han Shuai

When Does Adaptive Guidance Help? Belief-Aware Privileged Distillation for Autonomous Driving Under Partial Observability

Authors: Mehmet Haklidir

Reviewers

Name – Affiliation
Aakanksha Aakanksha – Indian Institute of Technology Madras
Abdelrahman O Ali – Photon Smart
Ahmed Abdelrahman – University of Central Florida (UCF)
Alberto G Rodriguez Salgado – Technische Universität München
Alessandro Paolo Capasso – Ambarella/VisLab
Alexander Bienemann – University of the Bundeswehr Munich
Alperen Degirmenci – NVIDIA
Alperen Kantarcı – Goethe University Frankfurt
Anastasia Bolovinou – ICCS
Angelos Amanatiadis – Democritus University of Thrace
Anton Kuznietsov – TU Darmstadt
Arash Akbari – Northeastern University
Arslan Abdul Ghaffar – Yeungnam University
Benedikt Alt – Robert Bosch GmbH
Bharatesh Chakravarthi – Arizona State University
Bikram Adhikari – Driver Research Institute
Bingyin Zhao – National University of Singapore
Bo-Kai Ruan – National Yang Ming Chiao Tung University
Bolin Zhou – China Automotive Technology and Research Center Co., Ltd.
Ce Zhang – Virginia Tech
Cem Tarhan – Togg
Chieh-Chih Wang – NCTU
Deepak Ravishankar – NVIDIA
Deyuan Qu – Toyota
Dianwei Chen – University of Maryland
Douglas B. Cavalcante – IPT
Edmund K Chao – University of California, Los Angeles
Ehsan Ahmadi – University of Alberta
Elahe Yahyapour – University of Massachusetts Amherst
Eun Sang Cha – Korea University
Fanta Camara – University of York
Federico Camarda – Heudiasyc
Felix Stillger – Bergische Universität Wuppertal
Flavia Sofia Acerbo – KU Leuven
Frederik Lenard Hasecke – Aptiv
Gaël Parfait Atheupe Gatcheu – Ensta
Gibran Ali – Virginia Tech Transportation Institute
Giorgio C Buttazzo – Scuola Superiore Sant'Anna
Gyu Sang Choi – Yeungnam University
Haotian Cao – National University of Defense Technology
Hojin Ahn – Korea Advanced Institute of Science and Technology (KAIST)
Ingyu Lee – Yeungnam University
Javad Zolfaghari Bengar – Computer Vision Center
Jenny Schmalfuss – University of Stuttgart
Jialei Chen – Nagoya University
Jiawei Yong – Toyota Motor Corporation
Jingde Chen – NVIDIA
Johannes Betz – Technical University of Munich
Kailun Yang – Hunan University
Kuderna-Iulian Benta – Babes-Bolyai University
Lu Cao – Honda Research Institute Japan
Mahan Rafidashti – Chalmers University of Technology
Mahmut Yurt – Stanford University
Marcello Ceresini – VisLab
Mathieu Cocheteux – Université de Technologie de Compiègne
Md Zafar Anwar – Mercedes-Benz R&D North America
Michael Brunner – Reutlingen University
Michael Hubbertz – Bergische Universität Wuppertal
Naa Korkoi Addo – University of Limerick
Peter Lengyel – aiMotive
Rafid Mahmood – NVIDIA
Rahul Bhadani – Vanderbilt University
Rajeev Yasarla – Qualcomm AI Research
Royden Wagner – KIT
Ruihao Zeng – The University of Sydney
Runheng Zuo – Shanghai Jiao Tong University
Ruphan Swaminathan – Ottonomy Inc
Sabyasachi Gupta – Texas A&M University
Shounak Sural – Carnegie Mellon University
Shuai Zheng – Cruise LLC
Shuxuan Guo – EPFL
Simon de Moreau – Mines Paris - PSL University
Suraj Bhardwaj – BharAI Lab
Syeda Fiza Rubab – Yeungnam University
Tamás Matuszka – aiMotive
Tobias Meisen – University of Wuppertal
Vibashan VS – Johns Hopkins University
Vikram Anantha – Lexington High School
Weichao Zhuang – Southeast University
Weitao Zhou – Tsinghua University
Xiangrui Zeng – Huazhong University of Science and Technology
Xiaokai Bai – Zhejiang University
Xin Zhou – Tongji University
Xinglong Sun – Stanford University
Xuesong Bai – Beihang University
Xuming He – ShanghaiTech University
Xunjiang Gu – University of Toronto
Yezhi Shen – Purdue University
Yihan Zhong – The Hong Kong Polytechnic University
Yilun Chen – Chinese University of Hong Kong
Yisheng An – Chang'an University
Yu Han – Lixiang
Yug Ajmera – Waymo
Yunheng Xu – Anhui University
Yunxiao Shi – Qualcomm AI Research
Yuxiao Cao – Huazhong University of Science and Technology
Zhennan Wang – Peng Cheng Laboratory
Zi Wang – NVIDIA

Contact

cvpr.wad@gmail.com

Background photo of Denver, licensed under CC BY-NC 4.0 (link)