NuRisk: A VQA Dataset for Agent-Level Risk Assessment in Autonomous Driving

Anonymous Authors

Framework of NuRisk. Multi-modal inputs are processed into BEV scenes and risk metrics to enable conversation-based VQA with chain-of-thought reasoning, supporting risk evaluation, benchmarking, fine-tuning, and safety-critical scenario analysis.

Abstract

Understanding risk in autonomous driving requires not only perception and prediction but also high-level reasoning about agent behavior and context. Current VLM-based methods primarily ground agents in static images and provide qualitative judgments, lacking the spatio-temporal reasoning needed to capture how risks evolve over time. To address this gap, we propose NuRisk, a comprehensive VQA dataset comprising 2.9K scenarios and 1.1M agent-level samples, built on real-world data from nuScenes and Waymo and complemented with safety-critical scenarios from the CommonRoad simulator. The dataset provides Bird's-Eye-View (BEV) based sequential images with quantitative, agent-level risk annotations, enabling spatio-temporal reasoning. We benchmark well-known VLMs across different prompting techniques and find that they fail to perform explicit spatio-temporal reasoning, peaking at 33% accuracy with high latency. To address these shortcomings, our fine-tuned 7B VLM agent improves accuracy to 41% and reduces latency by 75%, demonstrating explicit spatio-temporal reasoning capabilities that proprietary models lacked. While this represents a significant step forward, the modest accuracy underscores the profound difficulty of the task, establishing NuRisk as a critical benchmark for advancing spatio-temporal reasoning in autonomous driving.

Results

Table 1: Vision-Only Performance

| Model + Technique | MAE↓ | QWK↑ | Acc↑ | Time↓ |
|---|---|---|---|---|
| **Proprietary Models** | | | | |
| Gemini-2.5-Flash (Baseline) | 1.91 | 0.49 | 0.15 | 38.29 |
| Baseline + CP | 1.23 | 0.89 | 0.33 | 45.88 |
| Baseline + CP + CoT | 1.20 | 0.87 | 0.30 | 96.67 |
| Baseline + CP + CoT + ICL | 1.20 | 0.88 | 0.32 | 107.32 |
| Gemini-2.5-Pro (Baseline) | 2.00 | 0.49 | 0.15 | 40.13 |
| Baseline + CP | 1.25 | 0.88 | 0.33 | 45.24 |
| Baseline + CP + CoT | 1.15 | 0.88 | 0.31 | 95.43 |
| Baseline + CP + CoT + ICL | 1.22 | 0.86 | 0.30 | 106.55 |
| Qwen-VL-Plus (Baseline) | 0.63 | 0.55 | 0.13 | 8.81 |
| Baseline + CP | 0.58 | 0.62 | 0.17 | 35.81 |
| Baseline + CP + CoT | 0.62 | 0.59 | 0.16 | 37.40 |
| Baseline + CP + CoT + ICL | 0.62 | 0.69 | 0.22 | 48.22 |
| Qwen-VL-Max (Baseline) | 1.26 | 0.04 | 0.22 | 13.54 |
| Baseline + CP | 0.96 | 0.05 | 0.16 | 14.20 |
| Baseline + CP + CoT | 1.20 | -0.05 | 0.12 | 26.54 |
| Baseline + CP + CoT + ICL | 1.02 | 0.07 | 0.18 | 22.18 |
| GPT-5-Mini (Baseline) | 1.16 | 0.14 | 0.30 | 27.67 |
| Baseline + CP | 1.23 | 0.07 | 0.25 | 40.49 |
| Baseline + CP + CoT | 1.14 | 0.06 | 0.25 | 47.65 |
| Baseline + CP + CoT + ICL | 1.24 | 0.05 | 0.25 | 53.34 |
| **Open-Source Models** | | | | |
| InternVL3-8B (Baseline) | 0.62 | 0.56 | 0.14 | 6.55 |
| Baseline + CP | 0.55 | 0.70 | 0.20 | 7.15 |
| Baseline + CP + CoT | 0.54 | 0.70 | 0.21 | 11.88 |
| Baseline + CP + CoT + ICL | 0.33 | 0.72 | 0.19 | 11.68 |
| Qwen2.5-VL-7B (Baseline) | 1.87 | 0.51 | 0.14 | 11.98 |
| Baseline + CP | 1.88 | 0.46 | 0.12 | 11.60 |
| Baseline + CP + CoT | 1.76 | 0.58 | 0.18 | 16.66 |
| Baseline + CP + CoT + ICL | 1.53 | 0.68 | 0.22 | 19.64 |

Performance comparison of prompting strategies. Arrows indicate if higher (↑) or lower (↓) values are better. CP: Contextual Prompting, CoT: Chain-of-Thought, ICL: In-Context Learning.
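As a reading aid for the metric columns, here is a minimal pure-Python sketch of how MAE and QWK can be computed for ordinal predictions. It assumes risk annotations are integer levels `0..n_levels-1`; this encoding is an illustrative assumption for the sketch, not a statement of the dataset's exact label format.

```python
def mae(y_true, y_pred):
    """Mean absolute error between predicted and annotated risk levels."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def quadratic_weighted_kappa(y_true, y_pred, n_levels):
    """Cohen's kappa with quadratic weights for ordinal labels 0..n_levels-1.

    A disagreement (i, j) is penalized by (i - j)^2, so near-misses on the
    risk scale cost less than large errors; 1.0 is perfect agreement,
    0.0 is chance-level, and negative values are worse than chance.
    """
    n = len(y_true)
    # Observed confusion matrix.
    obs = [[0] * n_levels for _ in range(n_levels)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    # Marginal histograms of true and predicted labels.
    hist_t = [sum(obs[i]) for i in range(n_levels)]
    hist_p = [sum(obs[i][j] for i in range(n_levels)) for j in range(n_levels)]
    num = den = 0.0
    for i in range(n_levels):
        for j in range(n_levels):
            w = (i - j) ** 2 / (n_levels - 1) ** 2
            num += w * obs[i][j]                  # observed weighted disagreement
            den += w * hist_t[i] * hist_p[j] / n  # expected under independence
    return 1.0 - num / den
```

Accuracy, by contrast, treats the levels as plain classes, which is why a model can score low MAE (predictions near the true level) while its exact-match accuracy stays modest.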

Table 2: Physics-Enhanced Performance

| Model + Technique + Modality | MAE↓ | QWK↑ | Acc↑ | Time↓ |
|---|---|---|---|---|
| **Proprietary Models** | | | | |
| Gemini-1.5-Pro (Text) | - | - | 0.83 | 25.00 |
| Gemini-2.5-Flash (Baseline) (Single) | 1.08 | 0.44 | 0.11 | 7.66 |
| Baseline + CP (Single+Text) | 0.21 | 0.83 | 0.91 | 24.51 |
| Baseline + CP + CoT (Single+Text) | 0.20 | 0.84 | 0.91 | 32.63 |
| Baseline + CP + CoT + ICL (Single+Text) | 0.22 | 0.82 | 0.90 | 31.57 |
| Gemini-2.5-Flash (Baseline) (Multi) | 1.91 | 0.49 | 0.15 | 38.29 |
| Baseline + CP (Multi+Text) | 0.20 | 0.84 | 0.92 | 26.76 |
| Baseline + CP + CoT (Multi+Text) | 0.20 | 0.84 | 0.92 | 34.72 |
| Baseline + CP + CoT + ICL (Multi+Text) | 0.19 | 0.85 | 0.92 | 40.55 |
| **Open-Source Models** | | | | |
| InternVL3-8B (Baseline) (Single) | 1.65 | 0.14 | 0.13 | 1.36 |
| Baseline + CP (Single+Text) | 1.55 | 0.20 | 0.20 | 1.54 |
| Baseline + CP + CoT (Single+Text) | 0.95 | 0.44 | 0.39 | 21.14 |
| Baseline + CP + CoT + ICL (Single+Text) | 1.23 | 0.39 | 0.30 | 32.84 |
| InternVL3-8B (Baseline) (Multi) | 0.62 | 0.56 | 0.14 | 6.55 |
| Baseline + CP (Multi+Text) | 1.71 | 0.18 | 0.16 | 2.01 |
| Baseline + CP + CoT (Multi+Text) | 1.13 | 0.30 | 0.33 | 23.17 |
| Baseline + CP + CoT + ICL (Multi+Text) | 1.18 | 0.40 | 0.38 | 33.18 |
| Qwen2.5-VL-7B (Baseline) (Single) | 1.95 | 0.12 | 0.12 | 2.63 |
| Baseline + CP (Single+Text) | 1.70 | 0.18 | 0.18 | 2.96 |
| Baseline + CP + CoT (Single+Text) | 1.71 | 0.18 | 0.18 | 13.02 |
| Baseline + CP + CoT + ICL (Single+Text) | 1.43 | 0.27 | 0.18 | 44.91 |
| Qwen2.5-VL-7B (Baseline) (Multi) | 1.87 | 0.51 | 0.14 | 11.98 |
| Baseline + CP (Multi+Text) | 1.78 | 0.16 | 0.16 | 6.50 |
| Baseline + CP + CoT (Multi+Text) | 1.55 | 0.20 | 0.25 | 22.95 |
| Baseline + CP + CoT + ICL (Multi+Text) | 1.58 | 0.20 | 0.23 | 33.18 |

Performance with physics-enhanced inputs. CP: Contextual Prompting, CoT: Chain-of-Thought, ICL: In-Context Learning. Multi/Single refers to sequential/single-image input; Text indicates additional physics-based textual context.

NuRisk VLM Agent Fine-tuning Architecture

NuRisk VLM Agent Fine-tuning Architecture.

Performance Radar Chart

Performance comparison of different VLM approaches on the NuRisk dataset.

TODO

  • ☐ 🚀 Release Code
  • ☐ 📊 Release Dataset
  • ☐ 🏆 Release Checkpoints

BibTeX