Tool-Augmented Spatiotemporal Reasoning for Streamlining Video Question Answering Task
Abstract
A spatiotemporal reasoning framework enhances multimodal large language models for video question answering by strategically scheduling tools to improve spatial and temporal understanding.
The Video Question Answering (VideoQA) task serves as a critical playground for evaluating whether foundation models can effectively perceive, understand, and reason about dynamic real-world scenarios. However, existing Multimodal Large Language Models (MLLMs) struggle to simultaneously model spatial relationships within video frames and understand the causal dynamics of temporal evolution on complex, reasoning-intensive VideoQA tasks. In this work, we equip MLLMs with a comprehensive and extensible Video Toolkit to enhance their spatiotemporal reasoning capabilities while balancing the quantity and diversity of tools. To better control the tool invocation sequence and avoid toolchain shortcut issues, we propose a Spatiotemporal Reasoning Framework (STAR) that strategically schedules temporal and spatial tools, thereby progressively localizing the key area in the video. Our STAR framework enhances GPT-4o using lightweight tools, achieving an 8.2% gain on VideoMME and 4.6% on LongVideoBench. We believe our proposed Video Toolkit and STAR framework mark an important step towards building autonomous and intelligent video analysis assistants. The code is publicly available at https://github.com/fansunqi/VideoTool.
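To make the scheduling idea concrete, below is a minimal Python sketch of a temporal-then-spatial tool pipeline of the kind the abstract describes. The tool names (`temporal_ground`, `spatial_ground`, `answer_with_mllm`) and their signatures are illustrative assumptions for exposition only; they are not the actual API of the released Video Toolkit.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical tool stubs -- the real Video Toolkit in the paper's repo
# (https://github.com/fansunqi/VideoTool) may expose different interfaces.

@dataclass
class Clip:
    start_sec: float
    end_sec: float

def temporal_ground(question: str, video_path: str) -> Clip:
    """Stub temporal tool: return the time segment most relevant to the question."""
    return Clip(start_sec=12.0, end_sec=18.0)

def spatial_ground(question: str, video_path: str, clip: Clip) -> List[Tuple[float, float, float, float]]:
    """Stub spatial tool: return normalized bounding boxes of key regions within the clip."""
    return [(0.2, 0.3, 0.6, 0.8)]  # (x1, y1, x2, y2)

def answer_with_mllm(question: str, evidence: dict) -> str:
    """Stub MLLM call: answer the question using the localized spatiotemporal evidence."""
    return f"Answer grounded in {evidence}"

def star_pipeline(question: str, video_path: str) -> str:
    """Illustrative temporal-then-spatial schedule: first narrow the time span,
    then localize regions inside it, and only then query the MLLM, so the
    model cannot shortcut past either tool."""
    clip = temporal_ground(question, video_path)         # step 1: when does it happen
    boxes = spatial_ground(question, video_path, clip)   # step 2: where in the frame
    evidence = {"clip": clip, "boxes": boxes}
    return answer_with_mllm(question, evidence)          # step 3: answer from evidence

if __name__ == "__main__":
    print(star_pipeline("What does the person pick up after opening the fridge?",
                        "example_video.mp4"))
```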
Community
Tool-augmented VideoQA system, accepted to the NeurIPS 2025 main track.
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- VTimeCoT: Thinking by Drawing for Video Temporal Grounding and Reasoning (2025)
- Video-QTR: Query-Driven Temporal Reasoning Framework for Lightweight Video Understanding (2025)
- Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence (2025)
- CrossVid: A Comprehensive Benchmark for Evaluating Cross-Video Reasoning in Multimodal Large Language Models (2025)
- VideoChat-M1: Collaborative Policy Planning for Video Understanding via Multi-Agent Reinforcement Learning (2025)
- Enhancing Temporal Understanding in Video-LLMs through Stacked Temporal Attention in Vision Encoders (2025)
- Vidi2: Large Multimodal Models for Video Understanding and Creation (2025)