Dataset Viewer (auto-converted to Parquet)

Columns:
platform: stringclasses (1 value)
venue: stringclasses (4 values)
year: int32 (2.02k to 2.03k)
title: stringlengths (8 to 177)
abstract: stringlengths (310 to 3.08k)
keywords: stringlengths (0 to 613)
areas: stringclasses (152 values)
tldr: stringlengths (0 to 281)
scores: listlengths (0 to 8)
decision: stringclasses (21 values)
authors: stringlengths (6 to 834)
author_ids: stringlengths (8 to 956)
cdate: stringclasses (976 values)
url: stringlengths (41 to 45)
platform_id: stringlengths (9 to 13)
bibtex: stringlengths (228 to 1.26k)
figure_path: stringlengths (61 to 79)
figure_number: stringclasses (134 values)
figure_caption: stringlengths (8 to 2.35k)
figure_context: stringlengths (0 to 20.2k)
figure_type: stringclasses (1 value)
confidence: float32 (0.85 to 1)
OpenReview
ICLR
2026
FreqKV: Key-Value Compression in Frequency Domain for Context Window Extension
Existing key-value (KV) cache compression methods for large language models (LLMs) often rely on token eviction, which risks losing critical local information in both long prefilling and decoding scenarios. When extrapolating beyond the pretrained context length, their performance degrades sharply on long-context benchmarks. Motivated by the observation in the frequency domain that the context information is concentrated in the low-frequency components, we propose FreqKV, a parameter-free and architecture-agnostic approach. It iteratively compresses the increasing KV cache in the frequency domain, allowing models to process lengthy contexts efficiently. With minimal training at 8K length, FreqKV extends the context window of LLaMA-2-7B up to 256K tokens while maintaining stable perplexity. Extensive experiments on both prefilling and decoding stages demonstrate that FreqKV enables robust context window extension and consistently outperforms existing KV cache compression methods, highlighting its effectiveness for both understanding and generation in long contexts.
Large Language Models, KV Compression, Context Extension
foundation or frontier models, including LLMs
This paper introduces FreqKV, an efficient context extension method that iteratively compresses key-value states in the frequency domain.
[ 4, 6, 4 ]
Accept (Poster)
Jushi Kai, Yixuan Wang, Boyi Zeng, Haoli Bai, Bo Jiang, Ziwei He, Zhouhan Lin
~Jushi_Kai1, ~Yixuan_Wang10, ~Boyi_Zeng2, ~Haoli_Bai2, ~Bo_Jiang2, ~Ziwei_He1, ~Zhouhan_Lin1
20250918
https://openreview.net/forum?id=wFSOtyvQ9d
wFSOtyvQ9d
@inproceedings{ kai2026freqkv, title={Freq{KV}: Key-Value Compression in Frequency Domain for Context Window Extension}, author={Jushi Kai and Yixuan Wang and Boyi Zeng and Haoli Bai and Bo Jiang and Ziwei He and Zhouhan Lin}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=wFSOtyvQ9d} }
OpenReview/ICLR/figures/2026/accept_poster/wFSOtyvQ9d/Figure3.png
3
Figure 3: The overview of our FreqKV. (a) The illustration of the frequency-domain compression. (b) The KV cache will be compressed in an iterative manner to extend the context window. Sink tokens remain uncompressed throughout the process. The tokens after sink tokens will be compressed in the frequency domain and subsequent tokens will continue to get into the cache. When the cache is filled again, the compressed tokens and incoming tokens will be compressed together.
<paragraph_1>To reduce redundancy in the key-value (KV) cache, we compress KV states in the frequency domain as shown in Figure 3a. Specifically, we conduct DCT along the sequence dimension to transfer the KV cache to the frequency domain:</paragraph_1> <paragraph_2>Extending the context window of LLMs is fundamentally constrained by memory and computation cost. To address this, FreqKV employs an iterative compression strategy in the frequency domain that constrains the effective cache size while enabling processing of arbitrarily long sequences. The overall pipeline is illustrated in Figure 3b.</paragraph_2>
diagram
0.899471
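The figure context above describes compressing the KV cache with a DCT along the sequence dimension and keeping the low-frequency components. A minimal, hedged sketch of that idea follows; the keep ratio, the orthonormal DCT-II variant, and the (seq, heads, dim) layout are illustrative assumptions, not FreqKV's actual settings.

```python
# Minimal sketch of frequency-domain KV compression (illustrative assumptions,
# not the authors' implementation): DCT along the sequence axis, truncate to the
# low-frequency coefficients, and invert back to a shorter cache.
import numpy as np
from scipy.fft import dct, idct

def compress_kv(kv: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """kv: (seq_len, num_heads, head_dim); returns a cache of roughly seq_len * keep_ratio entries."""
    seq_len = kv.shape[0]
    kept = max(1, int(seq_len * keep_ratio))
    # Orthonormal DCT-II along the sequence dimension.
    freq = dct(kv, type=2, axis=0, norm="ortho")
    low = freq[:kept]                      # low-frequency components carry most of the context energy
    # Inverse DCT of the truncated spectrum yields a shorter, compressed KV sequence.
    return idct(low, type=2, axis=0, norm="ortho")

if __name__ == "__main__":
    cache = np.random.randn(128, 8, 64).astype(np.float32)
    print(compress_kv(cache, 0.25).shape)  # -> (32, 8, 64)
```

In an iterative scheme like the one in the caption, such a routine would be reapplied whenever the cache fills up, with sink tokens kept outside the compressed region.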
OpenReview
ICLR
2026
ThinkOmni: Lifting Textual Reasoning to Omni-modal Scenarios via Guidance Decoding
Omni-modal reasoning is essential for intelligent systems to understand and draw inferences from diverse data sources. While existing omni-modal large language models (OLLM) excel at perceiving diverse modalities, they lack the complex reasoning abilities of recent large reasoning models (LRM). However, enhancing the reasoning ability of OLLMs through additional training presents significant challenges, including the need for high-quality data, task-specific adaptation, and substantial computational costs. To address these limitations, we propose ThinkOmni, a training-free and data-free framework that lifts textual reasoning to omni-modal scenarios. ThinkOmni introduces two key components: 1) LRM-as-a-Guide, which leverages off-the-shelf LRMs to guide the OLLM decoding process; 2) Stepwise Contrastive Scaling, which adaptively balances perception and reasoning signals without manual hyperparameter tuning. Experiments on six multi-modal reasoning benchmarks demonstrate that ThinkOmni consistently delivers performance improvements, with main results achieving 70.2 on MathVista and 75.5 on MMAU. Overall, ThinkOmni offers a flexible and generalizable solution for omni-modal reasoning and provides new insights into the generalization and application of reasoning capabilities.
Omni-modal large language models, training-free guidance decoding, language model reasoning
applications to computer vision, audio, language, and other modalities
[ 6, 6, 6, 6 ]
Accept (Poster)
Yiran Guan, Sifan Tu, Dingkang Liang, Linghao Zhu, Jianzhong Ju, Zhenbo Luo, Jian Luan, Yuliang Liu, Xiang Bai
~Yiran_Guan1, ~Sifan_Tu2, ~Dingkang_Liang2, ~Linghao_Zhu1, ~Jianzhong_Ju1, ~Zhenbo_Luo2, ~Jian_Luan1, ~Yuliang_Liu2, ~Xiang_Bai1
20250917
https://openreview.net/forum?id=pMpCOjzwI1
pMpCOjzwI1
@inproceedings{ guan2026thinkomni, title={ThinkOmni: Lifting Textual Reasoning to Omni-modal Scenarios via Guidance Decoding}, author={Yiran Guan and Sifan Tu and Dingkang Liang and Linghao Zhu and Jianzhong Ju and Zhenbo Luo and Jian Luan and Yuliang Liu and Xiang Bai}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=pMpCOjzwI1} }
OpenReview/ICLR/figures/2026/accept_poster/pMpCOjzwI1/Figure3.png
3
Figure 3: Guidance decoding methods. “Guid.” denotes the guiding model, and “Amat.” denotes the amateur model.
<paragraph_1>In Contrastive Decoding (Fig. 3(a)), the contrastive pair is formed by comparing the responses to the same prompt from the original guiding model and an additional amateur model, with z^+ set to z_base. In Visual Contrastive Decoding (Fig. 3(b)), the contrastive pair is created by applying different input conditions to the same model. Specifically, z^− is obtained by adding Gaussian noise to the input image and then performing inference. In contrast to these approaches, ProxyTuning and ProxyThinker (Fig. 3(c)) construct contrastive pairs across different models within the same family, aiming to transfer behaviors from smaller, guiding models to larger, amateur models.</paragraph_1>
diagram
0.93543
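The guidance-decoding variants contrasted in the figure context above all combine a "guiding" and an "amateur" next-token distribution. Below is a minimal, hedged sketch of the generic contrastive combination; the scaling rule and the alpha value are illustrative assumptions, not ThinkOmni's Stepwise Contrastive Scaling.

```python
# Generic contrastive/guidance decoding sketch (illustrative; alpha and the
# combination rule are assumptions, not the paper's adaptive scaling).
import numpy as np

def contrastive_logits(z_guide: np.ndarray, z_amateur: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Amplify what the guiding model prefers relative to the amateur model."""
    return (1.0 + alpha) * z_guide - alpha * z_amateur

def sample_greedy(z: np.ndarray) -> int:
    return int(np.argmax(z))

if __name__ == "__main__":
    vocab = 10
    z_guide = np.random.randn(vocab)
    z_amateur = np.random.randn(vocab)
    print(sample_greedy(contrastive_logits(z_guide, z_amateur, alpha=0.5)))
```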
OpenReview
ICLR
2026
Task-Agnostic Amortized Multi-Objective Optimization
Balancing competing objectives is omnipresent across disciplines, from drug design to autonomous systems. Multi-objective Bayesian optimization is a promising solution for such expensive, black-box problems: it fits probabilistic surrogates and selects new designs via an acquisition function that balances exploration and exploitation. In practice, it requires tailored choices of surrogate and acquisition that rarely transfer to the next problem, is myopic when multi-step planning is often required, and adds refitting overhead, particularly in parallel or time-sensitive loops. We present TAMO, a fully amortized, universal policy for multi-objective black-box optimization. TAMO uses a transformer architecture that operates across varying input and objective dimensions, enabling pretraining on diverse corpora and transfer to new problems without retraining: at test time, the pretrained model proposes the next design with a single forward pass. We pretrain the policy with reinforcement learning to maximize cumulative hypervolume improvement over full trajectories, conditioning on the entire query history to approximate the Pareto frontier. Across synthetic benchmarks and real tasks, TAMO produces fast proposals, reducing proposal time by 50–1000× versus alternatives while matching or improving Pareto quality under tight evaluation budgets. These results show that transformers can perform multi-objective optimization entirely in-context, eliminating per-task surrogate fitting and acquisition engineering, and open a path to foundation-style, plug-and-play optimizers for scientific discovery workflows.
Multi-Objective Optimization, Bayesian Optimization, Transformers, Neural Processes
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
We introduce a fully amortized (surrogate model + acquisition function), dimension-agnostic policy for multi-objective optimization.
[ 6, 6, 8, 4 ]
Accept (Poster)
Xinyu Zhang, Conor Hassan, Julien Martinelli, Daolang Huang, Samuel Kaski
~Xinyu_Zhang41, ~Conor_Hassan1, ~Julien_Martinelli1, ~Daolang_Huang1, ~Samuel_Kaski1
20250920
https://openreview.net/forum?id=odmeUlWta8
odmeUlWta8
@inproceedings{ zhang2026taskagnostic, title={Task-Agnostic Amortized Multi-Objective Optimization}, author={Xinyu Zhang and Conor Hassan and Julien Martinelli and Daolang Huang and Samuel Kaski}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=odmeUlWta8} }
OpenReview/ICLR/figures/2026/accept_poster/odmeUlWta8/Figure2.png
2
Figure 2: Dimension-agnostic embedder for a single observation.
<paragraph_1>(I) Dimension-agnostic embedder. We apply learnable scalar-to-vector maps e_x : R → R^{d_e} and e_y : R → R^{d_e} dimension-wise, resulting in E_x = e_x(x) ∈ R^{d_{τ_x} × d_e} and E_y = e_y(y) ∈ R^{d_{τ_y} × d_e}. Both functions e_x and e_y are parameterized as feedforward neural networks. After L transformer layers on the concatenated tokens [E_x; E_y], we apply learnable dimension-specific positional tokens p_x ∈ R^{d_{τ_x} × d_e} and p_y ∈ R^{d_{τ_y} × d_e} element-wise and mean-pool across the d_{τ_x} + d_{τ_y} token axis to obtain a single representation E ∈ R^{d_e}. These positional tokens are randomly sampled for each batch from fixed pools of learned embeddings. We introduce the positional tokens to prevent the spurious symmetries over dimensionalities from a permutation-invariant set encoder, allowing the model to distinguish between features and objectives with the same values. During training, the embedder is applied to D_h and D_q to yield E_h and E_q for the optimization task, and to D_c and D_p to yield E_c and E_p for the prediction task. Each observation contributes O(1) tokens, so the cost scales with the number of observations, not with d_{τ_x} or d_{τ_y}. Figure 2 summarizes the embedder.</paragraph_1> <paragraph_2>Figure S16: Inference on GP examples (d_x = 2, d_y = 1), with query points proposed over 100 optimization steps (white circle, size increasing along with the number of queries).</paragraph_2> <paragraph_3>Figure S17: Inference on GP examples (d_x = 2, d_y = 2), with query points proposed over 100 optimization steps (white circles, size increasing along with the number of queries).</paragraph_3>
diagram
0.99614
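The embedder described above turns each scalar coordinate of x and y into a token, mixes the tokens with a small transformer, adds dimension-specific positional tokens, and mean-pools. The sketch below follows that shape under explicit assumptions: layer sizes, the two-layer MLPs, and taking the first pool entries instead of randomly sampling positional tokens are all simplifications, not the TAMO architecture.

```python
# Hedged sketch of a dimension-agnostic observation embedder (illustrative only).
import torch
import torch.nn as nn

class DimAgnosticEmbedder(nn.Module):
    def __init__(self, d_e: int = 32, n_layers: int = 2, max_dims: int = 16):
        super().__init__()
        self.ex = nn.Sequential(nn.Linear(1, d_e), nn.ReLU(), nn.Linear(d_e, d_e))  # e_x
        self.ey = nn.Sequential(nn.Linear(1, d_e), nn.ReLU(), nn.Linear(d_e, d_e))  # e_y
        layer = nn.TransformerEncoderLayer(d_model=d_e, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Fixed pools of learnable positional tokens for feature and objective dimensions.
        self.px_pool = nn.Parameter(torch.randn(max_dims, d_e) * 0.02)
        self.py_pool = nn.Parameter(torch.randn(max_dims, d_e) * 0.02)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_x), y: (batch, d_y) for one observation per batch element.
        tx = self.ex(x.unsqueeze(-1))                      # (batch, d_x, d_e)
        ty = self.ey(y.unsqueeze(-1))                      # (batch, d_y, d_e)
        tokens = self.encoder(torch.cat([tx, ty], dim=1))  # (batch, d_x + d_y, d_e)
        px = self.px_pool[: x.shape[1]]                    # simplified: first pool entries
        py = self.py_pool[: y.shape[1]]
        tokens = tokens + torch.cat([px, py], dim=0)       # positional tokens applied element-wise
        return tokens.mean(dim=1)                          # single representation E: (batch, d_e)

if __name__ == "__main__":
    emb = DimAgnosticEmbedder()
    print(emb(torch.randn(4, 3), torch.randn(4, 2)).shape)  # torch.Size([4, 32])
```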
OpenReview
ICLR
2026
DESIGNER: Design-Logic-Guided Multidisciplinary Data Synthesis for LLM Reasoning
Large language models (LLMs) perform strongly on many language tasks but still struggle with complex multi-step reasoning across disciplines. Existing reasoning datasets often lack disciplinary breadth, reasoning depth, and diversity, as well as guiding principles for question synthesis. We propose DESIGNER: a DESIGN-logic-guidEd Reasoning data synthesis pipeline that leverages naturally available, extensive raw documents to generate multidisciplinary questions. The central insight is the notion of Design Logic, a form of reusable meta-knowledge that encapsulates the structured process human experts use to transform knowledge into complex exam questions, enabling LLMs to generate new questions with the same complex reasoning patterns from entirely different source texts with explicit control over difficulty, diversity, and question types. We use LLMs to reverse-engineer and abstract over 120,000 Design Logics from existing questions across various disciplines. By designing a two-stage retrieve-and-generate mechanism to match these Design Logics with raw corpus, we synthesized two large-scale reasoning datasets that span 75 disciplines: DLR-Book (3.04 million questions from the book corpus) and DLR-Web (1.66 million questions from the web corpus). Data analysis indicates that the questions synthesized by our method exhibit greater difficulty and diversity compared to those in the baseline datasets. Supervised fine-tuning (SFT) on Qwen3 and Llama3 with our data substantially improves multidisciplinary reasoning and outperforms baseline datasets. Notably, by applying SFT on the base versions of these models using only our data, we even surpass their official final models that have undergone the full post-training.
Large Language Models, Data Synthesis, Synthetic Data, Reasoning, Post-Training, Supervised Fine-Tuning
datasets and benchmarks
[ 6, 4, 2, 8, 4 ]
Accept (Poster)
Weize Liu, Yongchi Zhao, Yijia Luo, Mingyu Xu, Jiaheng Liu, Yanan Li, Xiguo Hu, ZhiqiBai, Yuchi Xu, Wenbo Su, Bo Zheng
~Weize_Liu1, ~Yongchi_Zhao1, ~Yijia_Luo1, ~Mingyu_Xu3, ~Jiaheng_Liu1, ~Yanan_Li8, ~Xiguo_Hu1, ~ZhiqiBai1, ~Yuchi_Xu1, ~Wenbo_Su2, ~Bo_Zheng5
20250903
https://openreview.net/forum?id=SQVxBJhIrK
SQVxBJhIrK
@inproceedings{ liu2026designer, title={{DESIGNER}: Design-Logic-Guided Multidisciplinary Data Synthesis for {LLM} Reasoning}, author={Weize Liu and Yongchi Zhao and Yijia Luo and Mingyu Xu and Jiaheng Liu and Yanan Li and Xiguo Hu and ZhiqiBai and Yuchi Xu and Wenbo Su and Bo Zheng}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=SQVxBJhIrK} }
OpenReview/ICLR/figures/2026/accept_poster/SQVxBJhIrK/Figure2.png
2
Figure 2: The Design-Logic-Guided Multidisciplinary Data Synthesis Pipeline.
<paragraph_1>Specifically, our pipeline is illustrated in Figure 2. First, we process large-scale book and web corpora with multi-dimensional labeling and filtering (discipline, readability, educational value, reasoning depth) to construct a high-quality source material library. From a question bank of hundreds of millions, we cluster and sample a diverse set of difficult questions, from which an LLM reverse-engineers and abstracts over 120K structured Design Logics to construct a reusable Design Logic library. In question synthesis, we adopt a two-stage retrieve-and-generate mechanism: (1) vector similarity retrieves coarse candidate logics for each source document, and (2) an LLM performs a fine-grained evaluation to select the optimal logic and generates a reasoning question from the source document by strictly following its steps. This approach addresses the absence of guiding principles in prior data synthesis methods, enabling the automated generation of a large number of diverse and high-difficulty exam questions while reducing reliance on expensive manual creation.</paragraph_1> <paragraph_2>We curate three data sources for question synthesis: a proprietary question bank, a book corpus, and a web corpus, all aligned to a unified 75-discipline taxonomy (see Appendix A). Figure 2 (Phase 1) illustrates the overall data processing pipeline.</paragraph_2> <paragraph_3>Figure 2 (Phase 2 and Phase 3) illustrates the overall data synthesis pipeline.</paragraph_3>
diagram
0.99595
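The two-stage retrieve-and-generate mechanism described above pairs vector-similarity retrieval with an LLM-based fine-grained selection step. A hedged sketch is given below; the embedding vectors and the LLM call are stubbed placeholders, not the authors' pipeline or any specific API.

```python
# Hedged sketch of two-stage retrieve-and-generate matching (placeholders only).
import numpy as np

def top_k_candidates(doc_vec: np.ndarray, logic_vecs: np.ndarray, k: int = 5) -> list[int]:
    """Stage 1: cosine similarity between a source document and the Design Logic library."""
    doc = doc_vec / (np.linalg.norm(doc_vec) + 1e-9)
    lib = logic_vecs / (np.linalg.norm(logic_vecs, axis=1, keepdims=True) + 1e-9)
    scores = lib @ doc
    return list(np.argsort(-scores)[:k])

def select_and_generate(document: str, candidates: list[str], llm=None) -> str:
    """Stage 2 (stub): an LLM picks the best candidate logic and follows its steps to
    write a question; here we only build a prompt and optionally call a user-supplied llm."""
    prompt = "Pick the best design logic for this document and follow its steps:\n"
    prompt += document + "\n\n" + "\n---\n".join(candidates)
    return llm(prompt) if llm else prompt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    idx = top_k_candidates(rng.normal(size=64), rng.normal(size=(1000, 64)), k=3)
    print(idx)
```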
OpenReview
ICLR
2026
DESIGNER: Design-Logic-Guided Multidisciplinary Data Synthesis for LLM Reasoning
Large language models (LLMs) perform strongly on many language tasks but still struggle with complex multi-step reasoning across disciplines. Existing reasoning datasets often lack disciplinary breadth, reasoning depth, and diversity, as well as guiding principles for question synthesis. We propose DESIGNER: a DESIGN-logic-guidEd Reasoning data synthesis pipeline that leverages naturally available, extensive raw documents to generate multidisciplinary questions. The central insight is the notion of Design Logic, a form of reusable meta-knowledge that encapsulates the structured process human experts use to transform knowledge into complex exam questions, enabling LLMs to generate new questions with the same complex reasoning patterns from entirely different source texts with explicit control over difficulty, diversity, and question types. We use LLMs to reverse-engineer and abstract over 120,000 Design Logics from existing questions across various disciplines. By designing a two-stage retrieve-and-generate mechanism to match these Design Logics with raw corpus, we synthesized two large-scale reasoning datasets that span 75 disciplines: DLR-Book (3.04 million questions from the book corpus) and DLR-Web (1.66 million questions from the web corpus). Data analysis indicates that the questions synthesized by our method exhibit greater difficulty and diversity compared to those in the baseline datasets. Supervised fine-tuning (SFT) on Qwen3 and Llama3 with our data substantially improves multidisciplinary reasoning and outperforms baseline datasets. Notably, by applying SFT on the base versions of these models using only our data, we even surpass their official final models that have undergone the full post-training.
Large Language Models, Data Synthesis, Synthetic Data, Reasoning, Post-Training, Supervised Fine-Tuning
datasets and benchmarks
[ 6, 4, 2, 8, 4 ]
Accept (Poster)
Weize Liu, Yongchi Zhao, Yijia Luo, Mingyu Xu, Jiaheng Liu, Yanan Li, Xiguo Hu, ZhiqiBai, Yuchi Xu, Wenbo Su, Bo Zheng
~Weize_Liu1, ~Yongchi_Zhao1, ~Yijia_Luo1, ~Mingyu_Xu3, ~Jiaheng_Liu1, ~Yanan_Li8, ~Xiguo_Hu1, ~ZhiqiBai1, ~Yuchi_Xu1, ~Wenbo_Su2, ~Bo_Zheng5
20250903
https://openreview.net/forum?id=SQVxBJhIrK
SQVxBJhIrK
@inproceedings{ liu2026designer, title={{DESIGNER}: Design-Logic-Guided Multidisciplinary Data Synthesis for {LLM} Reasoning}, author={Weize Liu and Yongchi Zhao and Yijia Luo and Mingyu Xu and Jiaheng Liu and Yanan Li and Xiguo Hu and ZhiqiBai and Yuchi Xu and Wenbo Su and Bo Zheng}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=SQVxBJhIrK} }
OpenReview/ICLR/figures/2026/accept_poster/SQVxBJhIrK/Figure22.png
22
Figure 22: An example of the Design Logic for a Mathematics problem, showing the Mermaid source code (a) and the corresponding visual flowchart (b).
diagram
0.907912
OpenReview
ICLR
2026
Enhancing Multivariate Time Series Forecasting with Global Temporal Retrieval
Multivariate time series forecasting (MTSF) plays a vital role in numerous real-world applications, yet existing models remain constrained by their reliance on a limited historical context. This limitation prevents them from effectively capturing global periodic patterns that often span cycles significantly longer than the input horizon—despite such patterns carrying strong predictive signals. Naïve solutions, such as extending the historical window, lead to severe drawbacks, including overfitting, prohibitive computational costs, and redundant information processing. To address these challenges, we introduce the Global Temporal Retriever (GTR), a lightweight and plug-and-play module designed to extend any forecasting model’s temporal awareness beyond the immediate historical context. GTR maintains an adaptive global temporal embedding of the entire cycle and dynamically retrieves and aligns relevant global segments with the input sequence. By jointly modeling local and global dependencies through a 2D convolution and residual fusion, GTR effectively bridges short-term observations with long-term periodicity without altering the host model architecture. Extensive experiments on six real-world datasets demonstrate that GTR consistently delivers state-of-the-art performance across both short-term and long-term forecasting scenarios, while incurring minimal parameter and computational overhead. These results highlight GTR as an efficient and general solution for enhancing global periodicity modeling in MTSF tasks. Code is available at this repository: https://github.com/macovaseas/GTR.
Time-series forecasting, model plugins
learning on time series and dynamical systems
A lightweight, model-agnostic plug-and-play module for time-series forecasting models.
[ 6, 4, 4, 8 ]
Accept (Poster)
Fanpu Cao, Lu Dai, Jindong Han, Hui Xiong
~Fanpu_Cao1, ~Lu_Dai1, ~Jindong_Han1, ~Hui_Xiong1
20250915
https://openreview.net/forum?id=QUJBPSfyui
QUJBPSfyui
@inproceedings{ cao2026enhancing, title={Enhancing Multivariate Time Series Forecasting with Global Temporal Retrieval}, author={Fanpu Cao and Lu Dai and Jindong Han and Hui Xiong}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=QUJBPSfyui} }
OpenReview/ICLR/figures/2026/accept_poster/QUJBPSfyui/Figure2.png
2
Figure 2: Overview of the Global Temporal Retriever (GTR): a plug-and-play module compatible with any MTSF forecaster. GTR operates in three stages: (1) retrieves corresponding segments from global temporal embedding; (2) aligns them with the input and uses 2D convolution to jointly model local and global periodicity; (3) fuses the result with the original input via residual connection.
<paragraph_1>Method Overview. In this paper, we propose the Global Temporal Retriever (GTR), a lightweight, plug-and-play module designed to extend a model’s temporal receptive field beyond the immediate input window. As illustrated in Figure 2, the proposed method operates in two phases: (1) The GTR module enhances global cyclic patterns by dynamically retrieving periodic information from the global temporal embedding, then fusing it with the input series through a linear transformation and 2D convolution (cf. Section 3.2). (2) The enhanced representation is subsequently processed by the backbone model (a multi-layer perceptron in this work, cf. Section 3.3) for final forecasting.</paragraph_1>
diagram
0.993829
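The caption and context above describe retrieving the cycle-aligned segment of a learnable global embedding, jointly modeling it with the input window via a 2D convolution, and fusing residually. The following is a rough sketch under stated assumptions; the single learnable cycle tensor, the two-channel stacking, and the 3x3 kernel are illustrative choices, not GTR's actual design.

```python
# Hedged sketch of a global-temporal-retrieval plug-in (illustrative assumptions).
import torch
import torch.nn as nn

class GlobalTemporalRetriever(nn.Module):
    def __init__(self, cycle_len: int, n_vars: int):
        super().__init__()
        self.global_emb = nn.Parameter(torch.zeros(cycle_len, n_vars))  # one full cycle
        self.mix = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, start: int) -> torch.Tensor:
        # x: (batch, window, n_vars); `start` is the phase of the window within the cycle.
        window = x.shape[1]
        idx = (start + torch.arange(window)) % self.global_emb.shape[0]
        g = self.global_emb[idx]                            # cycle-aligned global segment
        stacked = torch.stack([x, g.expand_as(x)], dim=1)   # (batch, 2, window, n_vars)
        fused = self.mix(stacked).squeeze(1)                # joint local/global modeling
        return x + fused                                    # residual fusion; host model unchanged

if __name__ == "__main__":
    gtr = GlobalTemporalRetriever(cycle_len=168, n_vars=7)
    print(gtr(torch.randn(8, 96, 7), start=42).shape)  # torch.Size([8, 96, 7])
```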
OpenReview
ICLR
2026
From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization
While foundation models (FMs), such as diffusion models and large vision-language models (LVLMs), have been widely applied in educational contexts, their ability to generate pedagogically effective visual explanations remains limited. Most existing approaches focus primarily on textual reasoning, overlooking the critical role of structured and interpretable visualizations in supporting conceptual understanding. To better assess the visual reasoning capabilities of FMs in educational settings, we introduce EduVisBench, a multi-domain, multi-level benchmark. EduVisBench features diverse STEM problem sets requiring visually grounded solutions, along with a fine-grained evaluation rubric informed by pedagogical theory. Our empirical analysis reveals that existing models frequently struggle with the inherent challenge of decomposing complex reasoning and translating it into visual representations aligned with human cognitive processes. To address these limitations, we propose EduVisAgent, a multi-agent collaborative framework that coordinates specialized agents for instructional planning, reasoning decomposition, metacognitive prompting, and visualization design. Experimental results show that EduVisAgent substantially outperforms all baselines, achieving a 40.2% improvement and delivering more educationally aligned visualizations.
education, agent, benchmark, llm, application, visualisation
datasets and benchmarks
[ 6, 2, 2, 6, 6 ]
Accept (Poster)
Haonian Ji, Shi Qiu, Siyang Xin, Siwei Han, Zhaorun Chen, Dake Zhang, Hongyi Wang, Huaxiu Yao
~Haonian_Ji1, ~Shi_Qiu2, ~Siyang_Xin1, ~Siwei_Han1, ~Zhaorun_Chen1, ~Dake_Zhang3, ~Hongyi_Wang1, ~Huaxiu_Yao1
20250918
https://openreview.net/forum?id=FVCpV04ZRe
FVCpV04ZRe
@inproceedings{ ji2026from, title={From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization}, author={Haonian Ji and Shi Qiu and Siyang Xin and Siwei Han and Zhaorun Chen and Dake Zhang and Hongyi Wang and Huaxiu Yao}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=FVCpV04ZRe} }
OpenReview/ICLR/figures/2026/accept_poster/FVCpV04ZRe/Figure4.png
4
Figure 4: Workflow for evaluation.
<paragraph_1>Evaluation Protocol. As shown in Figure 4, models are provided with a visualization prompt together with a question and are asked to generate visual outputs. To enable fair comparison across heterogeneous outputs, we first canonicalize every model result to a raster image prior to scoring. This standardization is a crucial step that ensures all systems are evaluated on a level playing field, independent of their native modality or file format, and prevents format-specific rendering artifacts from biasing the assessment. Visuals produced directly as SVG or PNG are used as-is. Web pages (HTML or Next.js) are rendered in a headless browser and captured as screenshots of the primary view; when lightweight interactivity is present (e.g., buttons, tabs, or toggles), we systematically traverse the reachable states and retain one representative screenshot per state. All resulting images are then evaluated by GPT-4o along five dimensions defined in Appendix A.2 to compute an overall performance score. Each dimension is rated on a 0-5 scale; the ratings are summed (0-25) and, when appropriate, normalized to a percentage to yield the final overall score.</paragraph_1>
diagram
0.932038
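The scoring step in the protocol above sums five rubric dimensions rated 0-5 and optionally normalizes to a percentage. A tiny illustrative aggregation (dimension names are placeholders, not the benchmark's rubric labels):

```python
# Tiny sketch of the 5-dimension rubric aggregation described above (placeholder names).
def overall_score(ratings: dict[str, int], normalize: bool = True) -> float:
    assert len(ratings) == 5 and all(0 <= r <= 5 for r in ratings.values())
    total = sum(ratings.values())            # 0-25
    return 100.0 * total / 25.0 if normalize else float(total)

print(overall_score({"d1": 4, "d2": 3, "d3": 5, "d4": 2, "d5": 4}))  # 72.0
```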
OpenReview
ICLR
2026
A State-Transition Framework for Efficient LLM Reasoning
While Long Chain-of-Thought (CoT) reasoning significantly improves Large Language Models (LLMs) performance on complex reasoning tasks, the substantial computational and memory costs of generating long CoT sequences limit their efficiency and practicality. Existing studies usually enhance the reasoning efficiency of LLMs by compressing CoT sequences. However, this approach conflicts with test‑time scaling, limiting the reasoning capacity of LLMs. In this paper, we propose an efficient reasoning framework that models the reasoning process of LLMs as a state‑transition process. Specifically, we first apply a linear attention mechanism to estimate the LLM’s reasoning state, which records the historical reasoning information from previous reasoning steps. Then, based on the query prompt and the reasoning state, the LLM can efficiently perform the current reasoning step and update the state. With the linear attention, each token in the current reasoning step can directly retrieve relevant historical reasoning information from the reasoning state, without explicitly attending to tokens in previous reasoning steps. In this way, the computational complexity of attention is reduced from quadratic to linear, significantly improving the reasoning efficiency of LLMs. In addition, we propose a state-based reasoning strategy to mitigate the over-thinking issue caused by noisy reasoning steps. Extensive experiments across multiple datasets and model sizes demonstrate that our framework not only improves the reasoning efficiency of LLMs but also enhances their reasoning performance.
Large Language Models, reasoning, efficient reasoning
foundation or frontier models, including LLMs
[ 4, 6, 6, 6 ]
Accept (Poster)
Liang Zhang, Yu Zhao, Longyue Wang, Tianqi Shi, Weihua Luo, Kaifu Zhang, Jinsong Su
~Liang_Zhang9, ~Yu_Zhao1, ~Longyue_Wang3, ~Tianqi_Shi1, ~Weihua_Luo2, ~Kaifu_Zhang2, ~Jinsong_Su1
20250919
https://openreview.net/forum?id=Zz8ikW4uWG
Zz8ikW4uWG
@inproceedings{ zhang2026a, title={A State-Transition Framework for Efficient {LLM} Reasoning}, author={Liang Zhang and Yu Zhao and Longyue Wang and Tianqi Shi and Weihua Luo and Kaifu Zhang and Jinsong Su}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=Zz8ikW4uWG} }
OpenReview/ICLR/figures/2026/accept_poster/Zz8ikW4uWG/Figure4.png
4
Figure 4: (a) shows the computational and memory efficiency of our model and the base model. (b) and (c) present our model’s performance with different values of hyper-parameters β and α_max, respectively. These experiments are conducted on Qwen2.5-1.5B.
<paragraph_1>Analysis of Computational and Memory Costs. We conduct experiments to further compare the computational and memory efficiency of our model and the base model across varying CoT lengths. The experimental results are presented in Figure 4(a). Although our model exhibits similar reasoning efficiency to the base model for shorter CoT, it significantly surpasses the base model once the CoT length exceeds 4K. In particular, when the CoT length reaches 32K, our model achieves over 40% faster reasoning speed than the base model. Moreover, our model maintains a nearly constant memory usage across varying CoT lengths, whereas that of the base model increases linearly with CoT length. Theoretically, our model’s advantages in computational and memory efficiency would become even more significant when FlashAttention-2 is disabled.</paragraph_1> <paragraph_2>Analysis of Hyper-Parameters. We also investigate the impact of the two key hyper-parameters, β and α_max, on the performance of our model. As illustrated in Figure 4(b)–(c), our model exhibits low sensitivity to these two hyper-parameters. Meanwhile, our model attains the best performance when β and α_max are set to 0.2 and 0.4, respectively. We further analyze the choice of these two hyperparameter values as follows:</paragraph_2>
diagram
0.868907
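The abstract above models reasoning as a state-transition process in which linear attention keeps a fixed-size state summarizing previous reasoning steps. The sketch below shows the generic linear-attention state update that makes the cost per token independent of history length; the identity feature map and the normalization term are assumptions, not the paper's specific design.

```python
# Hedged sketch of linear attention viewed as a state-transition process (illustrative).
import numpy as np

class LinearAttentionState:
    def __init__(self, d_k: int, d_v: int):
        self.S = np.zeros((d_k, d_v))   # accumulated key-value associations (the "reasoning state")
        self.z = np.zeros(d_k)          # accumulated keys, used for normalization

    def step(self, q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
        self.S += np.outer(k, v)        # state update: O(d_k * d_v), independent of history length
        self.z += k
        denom = float(q @ self.z) + 1e-6
        return (q @ self.S) / denom     # the current token reads history from the state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = LinearAttentionState(d_k=16, d_v=16)
    for _ in range(5):
        q, k, v = (np.abs(rng.normal(size=16)) for _ in range(3))
        out = attn.step(q, k, v)
    print(out.shape)  # (16,)
```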
OpenReview
ICLR
2026
STITCH: Simultaneous Thinking and Talking with Chunked Reasoning for Spoken Language Models
Spoken Language Models (SLMs) are designed to take speech inputs and produce spoken responses. However, current SLMs lack the ability to perform an internal, unspoken thinking process before responding. In contrast, humans typically engage in complex mental reasoning internally, enabling them to communicate ideas clearly and concisely. Thus, integrating an unspoken thought process into SLMs is highly desirable. While naively generating a complete chain-of-thought (CoT) reasoning before starting to talk can enable thinking for SLMs, this induces additional latency for the speech response, as the CoT reasoning can be arbitrarily long. To solve this issue, we propose STITCH, a novel generation method that alternates between the generation of unspoken reasoning chunks and spoken response chunks. Since the audio duration of a chunk of spoken response is much longer than the time to generate the tokens in a chunk of spoken response, we use the remaining free time to generate the unspoken reasoning tokens. When a chunk of audio is played to the user, the model continues to generate the next unspoken reasoning chunk, achieving simultaneous thinking and talking. Remarkably, STITCH matches the latency of baselines that cannot generate unspoken CoT by design while outperforming those baselines by 15% on math reasoning datasets; STITCH also performs equally well on non-reasoning datasets as those baseline models. Some animations and demonstrations are on the project page: https://d223302.github.io/STITCH.
spoken language model, reasoning, chain-of-thought
applications to computer vision, audio, language, and other modalities
[ 6, 4, 6, 4 ]
Accept (Poster)
Cheng-Han Chiang, Xiaofei Wang, Linjie Li, Chung-Ching Lin, Kevin Lin, Shujie LIU, Zhendong Wang, Zhengyuan Yang, Hung-yi Lee, Lijuan Wang
~Cheng-Han_Chiang1, ~Xiaofei_Wang9, ~Linjie_Li1, ~Chung-Ching_Lin2, ~Kevin_Lin3, ~Shujie_LIU1, ~Zhendong_Wang1, ~Zhengyuan_Yang1, ~Hung-yi_Lee2, ~Lijuan_Wang1
20250915
https://openreview.net/forum?id=5Z1eMhCeTb
5Z1eMhCeTb
@inproceedings{ chiang2026stitch, title={{STITCH}: Simultaneous Thinking and Talking with Chunked Reasoning for Spoken Language Models}, author={Cheng-Han Chiang and Xiaofei Wang and Linjie Li and Chung-Ching Lin and Kevin Lin and Shujie LIU and Zhendong Wang and Zhengyuan Yang and Hung-yi Lee and Lijuan Wang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=5Z1eMhCeTb} }
OpenReview/ICLR/figures/2026/accept_poster/5Z1eMhCeTb/Figure2.png
2
Figure 2: Different generation method explored in this paper. The arrow represents the timeline for the SLM to generate the tokens; this timeline should not be confused with the timeline that the end user receives the audio, i.e., the upper timeline in Figure 1. We plot tokens of the same type in a chunk using the same color. (a) GLM-4-Voice: Interleaving between text and speech token chunks (Section 2). This is the design of the original interleaved SLMs. (b) TBS: Generating a complete reasoning span and then interleaving between text and speech token chunks (Section 3.1). (c) STITCH-R: Alternating between reasoning token chunks, text token chunks, and speech token chunks (Section 3.2). (d) STITCH-S: Alternating between text token chunks, speech token chunks, and reasoning token chunks (Section 3.3).
<paragraph_1>In the interleaved decoding paradigm, the SLM backbone model generates a chunk of text tokens and a chunk of speech tokens alternately. The text tokens serve as guidance for future speech tokens by transcribing what the speech tokens will say. For example, GLM-4-Voice (Zeng et al., 2024) interleaves between generating N_text = 13 text tokens and N_speech = 26 speech tokens. After a chunk of speech tokens is generated, it is immediately synthesized into audio by the speech decoder and streamed to the user, enabling low latency and real-time interaction. A figurative illustration of this output format is shown in Figure 2(a). When concatenated, the chunks of text tokens should correspond to the transcription of the speech tokens. The ratio of text tokens to speech tokens is carefully selected such that the text tokens are always faster than the speech tokens, ensuring that the content of the speech tokens has already appeared in previous text tokens. Once all the text tokens are generated, the model will continue to generate the remaining speech tokens.</paragraph_1> <paragraph_2>To teach SLMs to operate in TBS, we construct the training data D_TBS where each training instance has the form (x, z, y): x is the speech token sequence of the user input, z is the reasoning token sequence, and y = [t_1 ◦ s_1 ◦ t_2 ◦ s_2 ◦ ···] is the token sequence for the speech output that interleaves between N_text text tokens (t_j) and N_speech speech tokens (s_j) (the last text token chunk may have fewer than N_text tokens, while the last speech token span can have more than N_speech tokens); ◦ denotes the concatenation of two token sequences. We defer how we construct D_TBS from existing datasets until Section 4.1. A figurative illustration of the target output for TBS is in Figure 2(b).</paragraph_2> <paragraph_3>STITCH-R realizes this “thinking when speaking” by alternating fixed-length (N_reason) partial reasoning spans, fixed-length (N_text) text token spans, and fixed-length (N_speech) speech token spans. The partial reasoning spans are for inner thinking, while the text and speech token spans are for the spoken response. Stitching the partial reasoning spans together forms a complete CoT reasoning. A figurative illustration of the output of STITCH-R is shown in Figure 2(c), and some samples generated by STITCH-R are shown in Table 5 in the Appendix. The “R” in STITCH-R stands for “reasoning first”, since it generates a partial reasoning chunk before speaking; this distinguishes it from the “speaking first” STITCH-S introduced in Section 3.3.</paragraph_3> <paragraph_4>To construct the training data for STITCH-R, we simply split the full reasoning CoT z in D_TBS into chunks of N_reason tokens {z_1, z_2, ···}, where each z_i except the last chunk has N_reason tokens. Next, we interleave those chunks with the interleaved text-speech token sequence y = [t_1 ◦ s_1 ◦ t_2 ◦ s_2 ◦ ···] to create interleaved data of the form [z_1 ◦ t_1 ◦ s_1 ◦ z_2 ◦ t_2 ◦ s_2 ◦ ···], as shown in Figure 2(c). If the number of reasoning spans is greater than the number of text spans, this indicates that the reasoning token spans think slower than the text token spans, so we remove the sample from the training data. The model is fine-tuned to auto-regressively predict the interleaved reasoning-text-speech token spans using the standard language modeling cross-entropy loss.</paragraph_4> <paragraph_5>To fully remove the latency of waiting for the first partial reasoning span, we propose an alternative generation pipeline that directly starts by generating the text and speech token chunks and then generates the first reasoning chunk; the model continues to interleave this generation pattern. We call this STITCH-S since it generates a speech response first, and an illustrative figure is shown in Figure 2(d).</paragraph_5>
diagram
0.959533
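The STITCH-R data construction above splits the reasoning chain into fixed-size chunks and interleaves them with the existing text/speech chunk sequence, dropping samples whose reasoning would lag behind the speech. A minimal sketch of that interleaving (chunk sizes are illustrative, not the paper's values):

```python
# Minimal sketch of the STITCH-R interleaving described in the figure context above.
def interleave_stitch_r(reasoning_tokens, text_chunks, speech_chunks, n_reason=16):
    # Split the CoT z into chunks z_1, z_2, ... of n_reason tokens (the last may be shorter).
    z_chunks = [reasoning_tokens[i:i + n_reason] for i in range(0, len(reasoning_tokens), n_reason)]
    if len(z_chunks) > len(text_chunks):
        return None  # reasoning "thinks slower" than the speech; drop the sample, as described above
    sequence = []
    for j, (t, s) in enumerate(zip(text_chunks, speech_chunks)):
        if j < len(z_chunks):
            sequence += z_chunks[j]   # partial reasoning chunk z_j
        sequence += t + s             # then text chunk t_j and speech chunk s_j
    return sequence

if __name__ == "__main__":
    z = [f"r{i}" for i in range(40)]
    t = [[f"t{j}"] * 13 for j in range(4)]
    s = [[f"s{j}"] * 26 for j in range(4)]
    out = interleave_stitch_r(z, t, s)
    print(len(out), out[:3])
```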
OpenReview
ICLR
2026
Seeing Across Views: Benchmarking Spatial Reasoning of Vision-Language Models in Robotic Scenes
Vision-language models (VLMs) are essential to Embodied AI, enabling robots to perceive, reason, and act in complex environments. They also serve as the foundation for the recent Vision-Language-Action (VLA) models. Yet, most evaluations of VLMs focus on single-view settings, leaving their ability to integrate multi-view information largely underexplored. At the same time, multi-camera setups are increasingly standard in robotic platforms, as they provide complementary perspectives to mitigate occlusion and depth ambiguity. Whether VLMs can effectively leverage such multi-view inputs for robotic reasoning therefore remains an open question. To bridge this gap, we introduce MV-RoboBench, a benchmark specifically designed to evaluate the multi-view spatial reasoning capabilities of VLMs in robotic manipulation. MV-RoboBench consists of 1.7k manually curated QA items across eight subtasks, divided into two primary categories: spatial understanding and robotic execution. We evaluate a diverse set of existing VLMs, including both open-source and closed-source models, along with enhanced versions augmented by Chain-of-Thought (CoT)-inspired enhancements. The results show that state-of-the-art models remain far below human performance, underscoring the substantial challenges VLMs face in multi-view robotic perception. Additionally, our analysis uncovers two key findings: (i) spatial intelligence and robotic task reasoning are correlated in multi-view robotic scenarios; and (ii) strong performance on existing general-purpose single-view spatial understanding benchmarks does not reliably translate to success in the robotic spatial tasks assessed by our benchmark. We release MV-RoboBench as an open resource to foster progress in spatially grounded VLMs and VLAs, providing a foundation for advancing embodied multi-view intelligence in robotics.
spatial understanding, benchmark, multi-view, vlm, robotics
datasets and benchmarks
MV-RoboBench evaluates whether vision–language models can integrate multi-view images for precise robotic perception and decision-making, revealing major gaps compared to human performance.
[ 8, 6, 6, 6 ]
Accept (Poster)
ZhiYuan Feng, Zhaolu Kang, Qijie Wang, Zhiying Du, Jiongrui Yan, Shi Shubin, Chengbo Yuan, Huizhi Liang, Yu Deng, Qixiu Li, Rushuai Yang, Ruichuan An, Leqi Zheng, Weijie Wang, Shawn Chen, Sicheng Xu, Yaobo Liang, Jiaolong Yang, Baining Guo
~ZhiYuan_Feng1, ~Zhaolu_Kang2, ~Qijie_Wang1, ~Zhiying_Du1, ~Jiongrui_Yan1, ~Shi_Shubin3, ~Chengbo_Yuan2, ~Huizhi_Liang1, ~Yu_Deng2, ~Qixiu_Li1, ~Rushuai_Yang1, ~Ruichuan_An1, ~Leqi_Zheng1, ~Weijie_Wang2, ~Shawn_Chen1, ~Sicheng_Xu1, ~Yaobo_Liang1, ~Jiaolong_Yang3, ~Baining_Guo1
20250913
https://openreview.net/forum?id=jXDZJAfRZB
jXDZJAfRZB
@inproceedings{ feng2026seeing, title={Seeing Across Views: Benchmarking Spatial Reasoning of Vision-Language Models in Robotic Scenes}, author={ZhiYuan Feng and Zhaolu Kang and Qijie Wang and Zhiying Du and Jiongrui Yan and Shi Shubin and Chengbo Yuan and Huizhi Liang and Yu Deng and Qixiu Li and Rushuai Yang and Ruichuan An and Leqi Zheng and Weijie Wang and Shawn Chen and Sicheng Xu and Yaobo Liang and Jiaolong Yang and Baining Guo}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=jXDZJAfRZB} }
OpenReview/ICLR/figures/2026/accept_poster/jXDZJAfRZB/Figure12.png
12
Figure 12: Illustration of the righthanded coordinate system defined relative to each camera.
<paragraph_1>Directional convention. In summary, +z = upward, −z = downward; +y = forward, −y = backward; +x = right, −x = left. Figure 12 provides an illustration of this definition.</paragraph_1>
diagram
0.955413
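The directional convention above maps camera-relative axes to directions. A tiny illustrative snippet encoding that convention as unit vectors (the dictionary form is just an assumption for clarity):

```python
# Camera-relative right-handed convention stated above: +x right, +y forward, +z up.
import numpy as np

DIRECTIONS = {
    "right":    np.array([ 1, 0, 0]), "left":     np.array([-1, 0, 0]),
    "forward":  np.array([ 0, 1, 0]), "backward": np.array([ 0, -1, 0]),
    "upward":   np.array([ 0, 0, 1]), "downward": np.array([ 0, 0, -1]),
}

# Right-handedness check: x cross y equals z.
assert np.array_equal(np.cross(DIRECTIONS["right"], DIRECTIONS["forward"]), DIRECTIONS["upward"])
```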
OpenReview
ICLR
2026
R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?
Recent trends in test-time scaling for reasoning models (e.g., OpenAI o1, DeepSeek-R1) have led to remarkable improvements through long Chain-of-Thought (CoT). However, existing benchmarks mainly focus on immediate, single-horizon tasks, failing to adequately evaluate models’ ability to understand and respond to complex, long-horizon scenarios. To address this incomplete evaluation of Large Reasoning Models (LRMs), we propose R-HORIZON, a method designed to stimulate long-horizon reasoning behaviors in LRMs through query composition. Based on R-HORIZON, we construct a long-horizon reasoning benchmark, comprising complex multi-step reasoning tasks with interdependent problems that span long reasoning horizons. Through comprehensive evaluation of LRMs using the R-HORIZON benchmark, we find that even the most advanced LRMs suffer significant performance degradation. Our analysis reveals that LRMs exhibit limited effective reasoning length and struggle to allocate thinking budget across multiple problems appropriately. Recognizing these limitations, we use R-HORIZON to construct long-horizon reasoning data for reinforcement learning with verified rewards (RLVR). Compared to training with single-horizon data, RLVR with R-HORIZON not only substantially improves performance on the multi-horizon reasoning tasks, but also promotes accuracy on standard reasoning tasks (+7.5 on AIME2024). These results position R-HORIZON as a scalable, controllable, and low-cost paradigm for enhancing and evaluating the long-horizon reasoning capabilities of LRMs.
Large Reasoning Models, Long Horizon Reasoning
foundation or frontier models, including LLMs
A scalable, controllable, and low-cost paradigm for enhancing and evaluating the long-horizon reasoning capabilities of LRMs
[ 6, 6, 6, 6 ]
Accept (Poster)
Yi Lu, Jianing Wang, Linsen Guo, Wei He, Hongyin Tang, Tao Gui, Xuanjing Huang, Xuezhi Cao, Wei Wang, Xunliang Cai
~Yi_Lu7, ~Jianing_Wang4, ~Linsen_Guo2, ~Wei_He14, ~Hongyin_Tang1, ~Tao_Gui1, ~Xuanjing_Huang1, ~Xuezhi_Cao1, ~Wei_Wang41, ~Xunliang_Cai1
20250916
https://openreview.net/forum?id=rRB1bYErbL
rRB1bYErbL
@inproceedings{ lu2026rhorizon, title={R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?}, author={Yi Lu and Jianing Wang and Linsen Guo and Wei He and Hongyin Tang and Tao Gui and Xuanjing Huang and Xuezhi Cao and Wei Wang and Xunliang Cai}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=rRB1bYErbL} }
OpenReview/ICLR/figures/2026/accept_poster/rRB1bYErbL/Figure2.png
2
Figure 2: The R-HORIZON data composition pipeline is illustrated in (a)-(c). We leverage R-HORIZON to construct a comprehensive long-horizon reasoning evaluation benchmark spanning 6 tasks and generate multi-horizon training data for long-horizon reinforcement learning.
<paragraph_1>We propose R-HORIZON, a method designed to stimulate long-horizon reasoning behaviors in LRMs via query composition. As illustrated in Figure 2, R-HORIZON supports the concatenation of three types of expanded questions and can be employed in both the training and evaluation stages to enhance and evaluate the long-horizon capabilities of LRMs.</paragraph_1>
diagram
0.95814
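The query-composition idea above chains problems so that later ones depend on earlier answers, forcing long-horizon reasoning within a single prompt. A hedged sketch is shown below; the template wording and dependency phrasing are assumptions for illustration, not R-HORIZON's actual composition rules.

```python
# Hedged sketch of long-horizon query composition by chaining interdependent problems.
def compose_long_horizon(problems: list[str]) -> str:
    parts = []
    for i, p in enumerate(problems, start=1):
        if i == 1:
            parts.append(f"Problem 1: {p}")
        else:
            parts.append(f"Problem {i} (use the answer of Problem {i-1} as its input value): {p}")
    parts.append("Solve all problems in order and report every answer.")
    return "\n".join(parts)

if __name__ == "__main__":
    print(compose_long_horizon([
        "Compute 12 * 7.",
        "Add 16 to the previous answer.",
        "Is the previous answer divisible by 4?",
    ]))
```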
OpenReview
ICLR
2026
IGC-Net for conditional average potential outcome estimation over time
Estimating potential outcomes for treatments over time based on observational data is important for personalized decision-making in medicine. However, many existing methods for this task fail to properly adjust for time-varying confounding and thus yield biased estimates. There are only a few neural methods with proper adjustments, but these have inherent limitations (e.g., division by propensity scores that are often close to zero), which result in poor performance. As a remedy, we introduce the iterative G-computation network (IGC-Net). Our IGC-Net is a novel, neural end-to-end model which adjusts for time-varying confounding in order to estimate conditional average potential outcomes (CAPOs) over time. Specifically, our IGC-Net is the first neural model to perform fully regression-based iterative G-computation for CAPOs in the time-varying setting. We evaluate the effectiveness of our IGC-Net across various experiments. In sum, this work represents a significant step towards personalized decision-making from electronic health records.
causal inference, potential outcomes, treatment effects, healthcare
causal reasoning
We develop a novel neural method that performs G-computation in an iterative end-to-end training algorithm for conditional average potential outcome estimation over time.
[ 8, 6, 2, 4, 4 ]
Accept (Poster)
Konstantin Hess, Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
~Konstantin_Hess1, ~Dennis_Frauen1, ~Valentyn_Melnychuk1, ~Stefan_Feuerriegel1
20250916
https://openreview.net/forum?id=ZmhpqpKzAT
ZmhpqpKzAT
@inproceedings{ hess2026igcnet, title={{IGC}-Net for conditional average potential outcome estimation over time}, author={Konstantin Hess and Dennis Frauen and Valentyn Melnychuk and Stefan Feuerriegel}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=ZmhpqpKzAT} }
OpenReview/ICLR/figures/2026/accept_poster/ZmhpqpKzAT/Figure1.png
1
Figure 1: Iterative G-computation network. Neural end-to-end architecture and training of our iterative G-computation network.
<paragraph_1>Our IGC-Net consists of two key components (see Figure 1): (i) a neural backbone z_ϕ(·), which can be, for example, an LSTM or a transformer, and (ii) several G-computation heads {g_ϕ^δ(·)}_{δ=0}^{τ−1}, where ϕ denotes the trainable weights. First, the neural backbone encodes the entire observed history. Then, the G-computation heads take the encoded history and perform the iterative regressions according to Equation 5. For all t = 1, . . . , T − τ and δ = 0, . . . , τ − 1, the components are designed as follows:</paragraph_1>
diagram
0.992686
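The context above names the two components: a sequence backbone that encodes the history and one G-computation head per look-ahead step. The following is only a shape-level sketch under explicit assumptions; the LSTM backbone, the head inputs (history encoding plus a hypothetical treatment sequence), and all sizes are placeholders, and the iterative regression targets from Equation 5 are not reproduced here.

```python
# Very rough architectural sketch (not the authors' model).
import torch
import torch.nn as nn

class IGCNetSketch(nn.Module):
    def __init__(self, d_in: int, d_treat: int, d_hidden: int, tau: int):
        super().__init__()
        self.backbone = nn.LSTM(d_in, d_hidden, batch_first=True)      # plays the role of z_phi(.)
        self.heads = nn.ModuleList(
            [nn.Linear(d_hidden + d_treat * (delta + 1), 1) for delta in range(tau)]
        )

    def forward(self, history: torch.Tensor, future_treatments: torch.Tensor) -> torch.Tensor:
        # history: (batch, T, d_in); future_treatments: (batch, tau, d_treat)
        _, (h, _) = self.backbone(history)
        h = h[-1]                                                       # encoded history
        outs = []
        for delta, head in enumerate(self.heads):
            a = future_treatments[:, : delta + 1].flatten(1)            # treatments up to step delta
            outs.append(head(torch.cat([h, a], dim=1)))
        return torch.cat(outs, dim=1)                                    # one estimate per delta = 0..tau-1

if __name__ == "__main__":
    net = IGCNetSketch(d_in=10, d_treat=1, d_hidden=32, tau=3)
    print(net(torch.randn(4, 20, 10), torch.randn(4, 3, 1)).shape)  # torch.Size([4, 3])
```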
OpenReview
ICLR
2026
**TandemFoilSet**: Datasets for Flow Field Prediction of Tandem-Airfoil Through the Reuse of Single Airfoils
Accurate simulation of flow fields around tandem geometries is critical for engineering design but remains computationally intensive. Existing machine learning approaches typically focus on simpler cases and lack evaluation on multi-body configurations. To support research in this area, we present **TandemFoilSet**: five tandem-airfoil datasets (4152 tandem-airfoil simulations) paired with four single-airfoil counterparts, for a total of 8104 CFD simulations. We provide benchmark results of a curriculum learning framework using a directional integrated distance representation, residual pre-training, training schemes based on freestream conditions and smooth-combined estimated fields, and a domain decomposition strategy. Evaluations demonstrate notable gains in prediction accuracy. We believe these datasets will enable future work on scalable, data-driven flow prediction for tandem-airfoil scenarios.
Physics-informed Graph Neural Network; Tandem-Airfoil; Flow Field Prediction; CFD; Aerodynamics;
datasets and benchmarks
We introduce TandemFoilSet, a paired set of 5 tandem-airfoil + 4 single-airfoil CFD datasets (8,104 simulations total) and baseline benchmarks to enable scalable ML flow-field prediction for tandem-airfoil interactions.
[ 2, 6, 6, 4 ]
Accept (Poster)
Wei Xian Lim, Loh Sher En Jessica, Zenong Li, Thant Zin Oo, Wai Lee Chan, Adams Wai-Kin Kong
~Wei_Xian_Lim2, ~Loh_Sher_En_Jessica1, ~Zenong_Li1, ~Thant_Zin_Oo1, ~Wai_Lee_Chan1, ~Adams_Wai-Kin_Kong1
20250918
https://openreview.net/forum?id=4Z0P4Nbosn
4Z0P4Nbosn
@inproceedings{ lim2026tandemfoilset, title={**TandemFoilSet**: Datasets for Flow Field Prediction of Tandem-Airfoil Through the Reuse of Single Airfoils}, author={Wei Xian Lim and Loh Sher En Jessica and Zenong Li and Thant Zin Oo and Wai Lee Chan and Adams Wai-Kin Kong}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=4Z0P4Nbosn} }
OpenReview/ICLR/figures/2026/accept_poster/4Z0P4Nbosn/Figure16.png
16
Figure 16: Determining obstruction of a boundary point from the reference point in a (a) single-object case and (b) double-object case. Note how a boundary point that is unobstructed in the first case may be obstructed by another object in the second case.
<paragraph_1>As mentioned previously, the DID was estimated numerically following the procedure outlined in Algorithm 1. Although extending the theoretical definition of DID to multiple geometries is conceptually straightforward, the numerical calculations grow significantly more complex with each additional object. These challenges are indicated in red within Alg. 1, and are illustrated in Figs. 16 and 17.</paragraph_1> <paragraph_2>The first challenge is in determining whether the point on the object boundary k is obstructed from the point of reference i. As shown in Fig. 16(a), in a single object scenario, it suffices to ascertain</paragraph_2>
diagram
0.991554
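The obstruction test discussed above amounts to checking whether the line of sight from the reference point i to the boundary point k crosses a boundary face of another object. A hedged sketch using plain 2D segment intersection (the segment representation of boundary faces is an assumption, not the paper's geometry handling):

```python
# Hedged sketch of an obstruction test via 2D segment intersection (illustrative).
def _ccw(a, b, c):
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2) -> bool:
    """Proper intersection test for segments p1-p2 and q1-q2 (ignores collinear edge cases)."""
    return _ccw(p1, q1, q2) != _ccw(p2, q1, q2) and _ccw(p1, p2, q1) != _ccw(p1, p2, q2)

def is_obstructed(i, k, other_faces) -> bool:
    """other_faces: list of ((x1, y1), (x2, y2)) boundary segments from another object."""
    return any(segments_intersect(i, k, a, b) for a, b in other_faces)

if __name__ == "__main__":
    square = [((1, -1), (1, 1)), ((1, 1), (2, 1)), ((2, 1), (2, -1)), ((2, -1), (1, -1))]
    print(is_obstructed((0, 0), (3, 0), square))   # True: the square blocks the line of sight
    print(is_obstructed((0, 0), (0, 3), square))   # False
```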
OpenReview
ICLR
2026
Understanding and Improving Length Generalization in Hierarchical Sparse Attention Models
Effectively processing long contexts is a critical challenge for language models. While standard Transformers are limited by quadratic complexity and poor length extrapolation, alternative architectures like sliding window attention and state space models sacrifice the ability to effectively utilize the full context due to their fixed-size memory. Chunk-based sparse attention has emerged as a promising paradigm for extreme length generalization, yet the key architectural principles underpinning its success are not yet fully understood. In this work, we present a systematic dissection of these models to identify the core components driving their performance. Through a unified framework and comprehensive ablation studies, we demonstrate that a combination of three design principles is critical: (1) an expressive, non-linear Chunk Encoder with a dedicated CLS token to produce representations for retrieval; (2) a Bypassing Residual Path to stably integrate retrieved global information without it being overridden by the local residual stream; and (3) enforced selection sparsity during pre-training to bridge the train-test distribution gap. We provide a theoretical motivation for intra-chunk information processing and landmark generation. By combining these principles, we establish a new state-of-the-art for training-free length extrapolation, successfully generalizing models trained on a 4K context to 32 million tokens on RULER and BABILong. Our findings provide a clear and empirically-grounded set of design principles for developing future, highly-capable long-context language models.
long-context modeling, length generalization, length extrapolation, sparse attention, language modeling
unsupervised, self-supervised, semi-supervised, and supervised representation learning
We demonstrate that extreme length generalization in hierarchical sparse attention is enabled by the interplay of an expressive chunking, a stable bypassing residual path, and enforced retrieval sparsity.
[ 4, 6, 4, 8 ]
Accept (Poster)
Jiaqi Leng, Xiang Hu, Junxiong Wang, Jianguo Li, Wei Wu, Yucheng Lu
~Jiaqi_Leng3, ~Xiang_Hu2, ~Junxiong_Wang1, ~Jianguo_Li2, ~Wei_Wu1, ~Yucheng_Lu1
20250912
https://openreview.net/forum?id=iHqdSQk6qc
iHqdSQk6qc
@inproceedings{ leng2026understanding, title={Understanding and Improving Length Generalization in Hierarchical Sparse Attention Models}, author={Jiaqi Leng and Xiang Hu and Junxiong Wang and Jianguo Li and Wei Wu and Yucheng Lu}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=iHqdSQk6qc} }
OpenReview/ICLR/figures/2026/accept_poster/iHqdSQk6qc/Figure2.png
2
Figure 2: Design of Encoder: (a): Encoder w/o CLS (b): Encoder with a learnable CLS token.
<paragraph_1>The different architectural configurations we investigate, summarized in Table 1, can be expressed as joint definitions of (f, g). In the “w/ CLS” variant, we prepend a learnable token, x_CLS, to the input chunk H[i], as shown in Fig. 2. The Encoder processes this combined sequence, and its output corresponding to the x_CLS position is used to form the landmark, while the remaining outputs form the KV cache.</paragraph_1>
diagram
0.911093
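The "w/ CLS" variant above prepends a learnable token to each chunk and reads the landmark off the CLS position while the remaining outputs form the chunk's KV states. A hedged sketch of that shape (encoder type and sizes are illustrative assumptions, not the paper's chunk encoder):

```python
# Hedged sketch of a chunk encoder with a learnable CLS token (illustrative only).
import torch
import torch.nn as nn

class ChunkEncoderWithCLS(nn.Module):
    def __init__(self, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.cls = nn.Parameter(torch.randn(1, 1, d_model) * 0.02)      # learnable x_CLS
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, chunk: torch.Tensor):
        # chunk: (batch, chunk_len, d_model), i.e. H[i]
        cls = self.cls.expand(chunk.shape[0], -1, -1)
        out = self.encoder(torch.cat([cls, chunk], dim=1))
        landmark, kv = out[:, 0], out[:, 1:]                            # CLS output vs. the rest
        return landmark, kv

if __name__ == "__main__":
    enc = ChunkEncoderWithCLS()
    lm, kv = enc(torch.randn(2, 16, 64))
    print(lm.shape, kv.shape)  # torch.Size([2, 64]) torch.Size([2, 16, 64])
```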
OpenReview
ICLR
2026
Omni-Weather: Unified Multimodal Foundation Model for Weather Generation and Understanding
Weather modeling requires both accurate prediction and mechanistic interpretation, yet existing methods treat these goals in isolation, separating generation from understanding. To address this gap, we present Omni-Weather, the first multimodal foundation model that unifies weather generation and understanding within a single architecture. Omni-Weather integrates a radar encoder for weather generation tasks, followed by unified processing using a shared self-attention mechanism. Moreover, we construct a Chain-of-Thought dataset for causal reasoning in weather generation, enabling interpretable outputs and improved perceptual quality. Extensive experiments show Omni-Weather achieves state-of-the-art performance in both weather generation and understanding. Our findings further indicate that generative and understanding tasks in the weather domain can mutually enhance each other. Omni-Weather also demonstrates the feasibility and value of unifying weather generation and understanding.
AI for Science, Unified foundation model, Interpretable reasoning
applications to physical sciences (physics, chemistry, biology, etc.)
[ 6, 6, 4, 8 ]
Accept (Poster)
Zhiwang Zhou, Yuandong Pu, Xuming He, Yidi Liu, Yixin Chen, Junchao Gong, Xiang Zhuang, Wanghan Xu, Qinglong Cao, SHIXIANG TANG, Yihao Liu, Wenlong Zhang, LEI BAI
~Zhiwang_Zhou1, ~Yuandong_Pu1, ~Xuming_He4, ~Yidi_Liu3, ~Yixin_Chen26, ~Junchao_Gong1, ~Xiang_Zhuang1, ~Wanghan_Xu1, ~Qinglong_Cao1, ~SHIXIANG_TANG1, ~Yihao_Liu1, ~Wenlong_Zhang3, ~LEI_BAI1
20250910
https://openreview.net/forum?id=3WnXsp72v6
3WnXsp72v6
@inproceedings{ zhou2026omniweather, title={Omni-Weather: Unified Multimodal Foundation Model for Weather Generation and Understanding}, author={Zhiwang Zhou and Yuandong Pu and Xuming He and Yidi Liu and Yixin Chen and Junchao Gong and Xiang Zhuang and Wanghan Xu and Qinglong Cao and SHIXIANG TANG and Yihao Liu and Wenlong Zhang and LEI BAI}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=3WnXsp72v6} }
OpenReview/ICLR/figures/2026/accept_poster/3WnXsp72v6/Figure2.png
2
Figure 2: Comparison between separated architectures for weather understanding / generation (top) and unified framework with shared self-attention (bottom).
<paragraph_1>Despite these advances, unified architectures remain absent in the weather domain. As shown in Figure 2, existing approaches are divided into two disjoint paradigms: generation models such as ClimaX Nguyen et al. (2023) and WeatherGFM Zhao et al. (2024) excel at forecasting and downscaling but lack interpretation, while understanding models such as RadarQA He et al. (2025a) and WeatherQA Ma et al. (2024) provide diagnostic reasoning yet cannot synthesize physical fields. However, atmospheric systems are inherently multiscale, shaped by storm genesis, intensification and decay, where accurate prediction is often accompanied by the need for mechanistic interpretation. Moreover, extreme events such as the rapid intensification of cyclones demand models that can not only predict hazardous outcomes but also explain the underlying drivers for actionable decision-making. Current studies isolate these links: generative nowcasting models do not understand radar observations, yet MLLMs do not predict radar variables. Bridging this gap with a foundation model that unifies generation and understanding is therefore an urgent requirement for the weather domain.</paragraph_1> <paragraph_2>To this end, we propose Omni-Weather, a unified multimodal foundation model for both weather generation and understanding. By consolidating these tasks within a shared backbone (Figure 2, bottom), we further propose a Chain-of-Thought dataset tailored for causal reasoning in generation tasks, which enables Omni-Weather to be finetuned with explicit reasoning supervision and to perform thinking inference. Through this integration, Omni-Weather bridges predictive accuracy with interpretability, marking a step toward reasoning-capable unified foundation models for weather.</paragraph_2>
diagram
0.992263
OpenReview
ICLR
2,026
Weight Space Representation Learning on Diverse NeRF Architectures
Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for representing 3D objects and scenes by encoding shape and appearance information into the weights of a neural network. Recent studies have demonstrated that these weights can be used as input for frameworks designed to address deep learning tasks; however, such frameworks require NeRFs to adhere to a specific, predefined architecture. In this paper, we introduce the first framework capable of processing NeRFs with diverse architectures and performing inference on architectures unseen at training time. We achieve this by training a Graph Meta-Network within an unsupervised representation learning framework, and show that a contrastive objective is conducive to obtaining an architecture-agnostic latent space. In experiments conducted across 13 NeRF architectures belonging to three families (MLPs, tri-planes, and, for the first time, hash tables), our approach demonstrates robust performance in classification, retrieval, and language tasks involving multiple architectures, even unseen at training time, while also matching or exceeding the results of existing frameworks limited to single architectures.
weight space learning, representation learning, metanetworks, graph metanetworks, neural fields, neural radiance fields, NeRF, implicit neural representations, INR
unsupervised, self-supervised, semi-supervised, and supervised representation learning
We present the first framework that performs tasks on NeRFs by processing their weights and is able to work on diverse architectures
[ 6, 4, 4, 6 ]
Accept (Poster)
Francesco Ballerini, Pierluigi Zama Ramirez, Luigi Di Stefano, Samuele Salti
~Francesco_Ballerini1, ~Pierluigi_Zama_Ramirez1, ~Luigi_Di_Stefano2, ~Samuele_Salti1
20250918
https://openreview.net/forum?id=u90rHXaBve
u90rHXaBve
@inproceedings{ ballerini2026weight, title={Weight Space Representation Learning on Diverse Ne{RF} Architectures}, author={Francesco Ballerini and Pierluigi Zama Ramirez and Luigi Di Stefano and Samuele Salti}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=u90rHXaBve} }
OpenReview/ICLR/figures/2026/accept_poster/u90rHXaBve/Figure5.png
5
Figure 5: Parameter graph conversion. Top left: parameter graph representation of an MLP, proposed by Lim et al. (2024). Right: parameter graph representation of a tri-plane, proposed by Lim et al. (2024). Dotted edges should be connected to the C channel nodes, but are not fully drawn for better visual clarity. Bottom left: our parameter graph representation of a multi-resolution hash table.
<paragraph_1>The parameter graph conversion of an MLP, a tri-plane, and a multi-resolution hash table is depicted in Fig. 5, with additional details compared to Fig. 2 (left).</paragraph_1>
diagram
0.883032
OpenReview
ICLR
2,026
Toward Effective Tool-Integrated Reasoning via Self-Evolved Preference Learning
Tool-Integrated Reasoning (TIR) enables large language models (LLMs) to enhance their internal reasoning ability by integrating external tools. However, models with TIR often exhibit suboptimal behaviors, including insufficient tool calls, excessive tool calls, and overthinking after receiving tool call results. How to empower LLMs to perform TIR efficiently and accurately, while stabilizing the reasoning process, remains an open challenge. In this paper, we first analyze the impact of tool calls on model reasoning from the perspective of information entropy. We find that when tool call results are provided, the information entropy of subsequent reasoning content will show a clear trend of change, and the overall information entropy of the reasoning chain will vary depending on the number of tool calls. Based on these observations, we propose Tool-Light, a framework designed to encourage LLMs to perform TIR efficiently and accurately. Our framework consists of dataset construction and multi-stage fine-tuning. For dataset construction, we use the trained model for continuous self-evolved sampling, integrating two methods: vanilla sampling and entropy-guided sampling. At the same time, during the sampling process, we design strict criteria for selecting positive-negative pairs. For the training process, we introduce a two-stage method, which includes a Supervised Fine-Tuning (SFT), and Self-Evolved Direct Preference Optimization (DPO). Test results on 10 datasets reveal the effectiveness of Tool-Light, significantly improving the efficiency and accuracy of the model in completing TIR tasks.
reasoning model, tool-integrated reasoning, self-evolved training, information entropy
foundation or frontier models, including LLMs
[ 4, 6, 8, 6 ]
Accept (Poster)
Yifei Chen, Guanting Dong, Zhicheng Dou
~Yifei_Chen12, ~Guanting_Dong1, ~Zhicheng_Dou1
20250916
https://openreview.net/forum?id=mNeitRAdWV
mNeitRAdWV
@inproceedings{ chen2026toward, title={Toward Effective Tool-Integrated Reasoning via Self-Evolved Preference Learning}, author={Yifei Chen and Guanting Dong and Zhicheng Dou}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=mNeitRAdWV} }
OpenReview/ICLR/figures/2026/accept_poster/mNeitRAdWV/Figure3.png
3
Figure 3: The overall structure of Tool-Light’s training pipeline. Among them, the Self-Evolved DPO Alignment stage will conduct multiple rounds of training.
<paragraph_1>Overview. We propose Tool-Light, a multi-stage training pipeline aiming to improve the effectiveness of model tool calls. As shown in Figures 2 and 3, Tool-Light consists of two key components: (1) Dataset construction, which includes carefully designed sampling strategies to select training data. (2) Two-stage TIR training paradigm, which trains the model successively with SFT and self-evolved DPO training. In the self-evolved DPO training stage, we design pre-aligned DPO training and self-evolved DPO alignment stages to gradually improve the model’s capabilities.</paragraph_1> <paragraph_2>Based on existing research (Li et al., 2025g; Dong et al., 2025a; Song et al., 2025), we propose a two-stage self-evolved training pipeline to gradually boost the effectiveness and stability of the model’s TIR process. The specific pipeline is shown in Figure 3.</paragraph_2> <paragraph_3>The SFT stage maximizes the log-likelihood $\sum_{(x,y)\in\mathcal{D}} \log P_\theta(y\mid x)$. As shown in the first step of Figure 3, this step aims to help the model quickly acquire the ability to complete TIR tasks.</paragraph_3>
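A minimal sketch of the two objectives referenced in this pipeline, assuming the standard SFT negative log-likelihood and the standard DPO loss on positive/negative trace pairs; the function names, tensor shapes, and beta value are illustrative and not Tool-Light's actual implementation.

```python
import torch
import torch.nn.functional as F

def sft_loss(policy_logps):
    """Supervised fine-tuning: maximize log P_theta(y|x), i.e. minimize the NLL.

    policy_logps: (batch,) summed log-probs of the target completions."""
    return -policy_logps.mean()

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective on chosen (positive) vs. rejected (negative) traces.

    Each argument: (batch,) summed log-probs of a full tool-integrated reasoning trace
    under the trainable policy or the frozen reference model."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()

# toy usage with random log-probs
b = 4
print(sft_loss(torch.randn(b)))
print(dpo_loss(torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b)))
```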
diagram
0.939537
OpenReview
ICLR
2,026
Lookup multivariate Kolmogorov-Arnold Networks
High-dimensional linear mappings, or linear layers, dominate both the parameter count and the computational cost of most modern deep-learning models. We introduce lookup multivariate Kolmogorov-Arnold Networks (lmKANs), which deliver a substantially better trade-off between capacity and inference cost. Our construction expresses a general high-dimensional mapping through trainable low-dimensional multivariate functions. These functions can carry dozens or hundreds of trainable parameters each, and yet it takes only a few multiplications to compute them because they are implemented as spline lookup tables. Empirically, lmKANs reduce inference FLOPs by up to 6.0× while matching the flexibility of MLPs in general high-dimensional function approximation. In another feedforward fully connected benchmark, on the tabular-like dataset of randomly displaced methane configurations, lmKANs enable more than 10× higher H100 throughput at equal accuracy. Within the framework of Convolutional Neural Networks, lmKAN-based CNNs cut inference FLOPs at matched accuracy by 1.6–2.1× and by 1.7× on the CIFAR-10 and ImageNet-1k datasets, respectively.
KAN, inference efficiency, CUDA kernels
other topics in machine learning (i.e., none of the above)
We propose a fully connected layer that decouples inference efficiency from the number of trainable parameters and empirically find it to be Pareto optimal across a wide range of macro-architectural backbones.
[ 6, 2, 6, 6 ]
Accept (Poster)
Sergey Pozdnyakov, Philippe Schwaller
~Sergey_Pozdnyakov1, ~Philippe_Schwaller1
20250919
https://openreview.net/forum?id=XRQVIeBnB0
XRQVIeBnB0
@inproceedings{ pozdnyakov2026lookup, title={Lookup multivariate Kolmogorov-Arnold Networks}, author={Sergey Pozdnyakov and Philippe Schwaller}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=XRQVIeBnB0} }
OpenReview/ICLR/figures/2026/accept_poster/XRQVIeBnB0/Figure6.png
6
Figure 6: A methane configuration
<paragraph_1>Having demonstrated that lmKANs are Pareto-optimal when approximating a general function, we proceed to benchmark their efficiency on real data. We chose the tabular-like dataset of randomly displaced methane configurations for the comparison, as it is particularly suitable for this purpose (see Appendix G.4). The dataset consists of multiple off-equilibrium methane configurations, as illustrated in Fig. 6. The target is given by the corresponding quantum-mechanical energy (Turney et al., 2012; Kohn & Sham, 1965). Hydrogen atoms are placed around the carbon atom randomly, varying from instance to instance, which leads to different target energies.</paragraph_1>
diagram
0.866739
OpenReview
ICLR
2,026
Automata Learning and Identification of the Support of Language Models
We study the learnability of languages in the *Next Symbol Prediction* (NSP) setting, where a learner receives only positive examples from a language together with, for every prefix, (i) whether the prefix itself is in the language and (ii) which next symbols can lead to an accepting string. This setting has been used in prior work to empirically analyze neural sequence models, and additionally, we observe that efficient algorithms for the NSP setting can be used to learn the (truncated) support of language models. We first show that the class of DFAs with at most $n$ states is identifiable from positive examples augmented with these NSP labels. Nevertheless, even with this richer supervision, we show that PAC-learning DFAs remains computationally hard, and exact identification using only membership queries cannot be achieved in polynomial time. We then present $\mathrm{L_{nsp}^{\star}}$, an extension of Angluin’s $\mathrm{L}^{\star}$ algorithm, and show that DFAs can be PAC-learned efficiently using a language-model–based teacher that answers membership queries and generates valid strings conditioned on prefix prompts. Finally, we conduct a comprehensive experimental evaluation on 11 regular languages of varying complexity. Using $\mathrm{L}^{\star}_{\text{nsp}}$, we extract DFAs from Transformer-based language models trained on regular languages to evaluate the algorithm’s effectiveness and identify erroneous examples.
automata learning, regular languages, learning theory, DFA extraction, language models
learning theory
[ 8, 6, 6, 8 ]
Accept (Poster)
Satwik Bhattamishra, Michael Hahn, Varun Kanade
~Satwik_Bhattamishra1, ~Michael_Hahn1, ~Varun_Kanade1
20250919
https://openreview.net/forum?id=L8SMNWsxfK
L8SMNWsxfK
@inproceedings{ bhattamishra2026automata, title={Automata Learning and Identification of the Support of Language Models}, author={Satwik Bhattamishra and Michael Hahn and Varun Kanade}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=L8SMNWsxfK} }
OpenReview/ICLR/figures/2026/accept_poster/L8SMNWsxfK/Figure7.png
7
Figure 7: DFA with 28 states extracted by L⋆_nsp from Transformer trained on Tomita-5. See App. H.2 for more details.
<paragraph_1>Identifying erroneous examples. When the learned DFA Â is not equivalent to the target DFA A⋆, we construct the product DFA B which recognizes the strings in the symmetric difference of the two languages, L(B) = L(Â) △ L(A⋆). We use a BFS-like approach to identify several erroneous examples for the language model. Table 2 illustrates some erroneous examples for the Bounded Dyck, Parity, and Tomita-5 languages. Figs. 6 and 7 depict the extracted automata for Parity and Tomita-5; the ones for DYCK-(2, 2) and DYCK-(3, 3) are too large to be visually informative. Note that these models were not intentionally trained to fail, and all the examples generated by the language models were in their respective target languages. The DFAs extracted by L⋆_nsp were based on a few disagreements in the NSP labels of the generated strings. Training the language models for longer avoids such errors for synthetic languages of this scale. Note that the Transformer models used for Tomita-5 and Dyck languages in Figure 2 (well-trained) and Table 2 (imperfect) are different. See App. H.2 for further details.</paragraph_1> <paragraph_2>Results. We observed erroneous strings for languages like Parity, Tomita-5, DYCK-(2, 2), and DYCK-(3, 3). Examples of some erroneous strings identified by the hypothesis DFA are provided in Table 2. Figures 6 and 7 show the DFAs extracted for Parity and Tomita-5, respectively. The DFAs for DYCK-(2, 2) and DYCK-(3, 3) are too large to be visually interpretable. Constructing the product DFA is efficient and identifying several erroneous examples takes only a few seconds. There is no natural distribution over the symmetric difference language and, further, it can even be finite in some cases, which makes it difficult to systematically compute the accuracy of predicting erroneous examples using the extracted DFA. The closest signal we have is the NSP accuracy for the extracted DFAs, which is near perfect.</paragraph_2>
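A small sketch of the counterexample-mining step described above: build a product DFA that accepts the symmetric difference L(Â) △ L(A⋆) and run a BFS for short accepted strings. DFAs are encoded as plain tuples here and the toy languages are mine; this is an illustrative reconstruction, not the authors' code.

```python
from collections import deque

# A DFA is (states, alphabet, delta, start, accepting); delta: (state, symbol) -> state.

def symmetric_difference_dfa(d1, d2):
    """Product construction accepting L(d1) XOR L(d2)."""
    (s1, alph, t1, q1, f1), (s2, _, t2, q2, f2) = d1, d2
    states = [(a, b) for a in s1 for b in s2]
    delta = {((a, b), x): (t1[(a, x)], t2[(b, x)]) for a in s1 for b in s2 for x in alph}
    accepting = {(a, b) for a in s1 for b in s2 if (a in f1) != (b in f2)}
    return states, alph, delta, (q1, q2), accepting

def shortest_disagreements(dfa, k=5):
    """BFS from the start state; return up to k short strings on which the DFAs disagree."""
    states, alph, delta, start, accepting = dfa
    out, seen, queue = [], {start}, deque([(start, "")])
    while queue and len(out) < k:
        state, word = queue.popleft()
        if state in accepting:
            out.append(word)
        for x in alph:
            nxt = delta[(state, x)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + x))
    return out

# toy example: parity of 1s (target) vs. "accept everything" (faulty hypothesis)
target = ({0, 1}, "01", {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}, 0, {0})
hypothesis = ({0}, "01", {(0, "0"): 0, (0, "1"): 0}, 0, {0})
print(shortest_disagreements(symmetric_difference_dfa(target, hypothesis)))  # ['1']
```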
diagram
0.92614
OpenReview
ICLR
2,026
Nef-Net v2: Adapting Electrocardio Panorama in the wild
Conventional multi-lead electrocardiogram (ECG) systems capture cardiac signals from a fixed set of anatomical viewpoints defined by lead placement. However, certain cardiac conditions (e.g., Brugada syndrome) require additional, non-standard viewpoints to reveal diagnostically critical patterns that may be absent in standard leads. To systematically overcome this limitation, Nef-Net was recently introduced to reconstruct a continuous electrocardiac field, enabling virtual observation of ECG signals from arbitrary views (termed Electrocardio Panorama). Despite its promise, Nef-Net operates under idealized assumptions and faces in-the-wild challenges, such as long-duration ECG modeling, robustness to device-specific signal artifacts, and suboptimal lead placement calibration. This paper presents NEF-NET V2, an enhanced framework for realistic panoramic ECG synthesis that supports arbitrary-length signal synthesis from any desired view, generalizes across ECG devices, and compensates for operator-induced deviations in electrode placement. These capabilities are enabled by a newly designed model architecture that performs direct view transformation, incorporating a workflow comprising offline pretraining, device calibration tuning steps as well as an on-the-fly calibration step for patient-specific adaptation. To rigorously evaluate panoramic ECG synthesis, we construct a new Electrocardio Panorama benchmark, called Panobench, comprising 4470 recordings with 48 views per subject, capturing the full spatial variability of cardiac electrical activity. Experimental results show that NEF-NET V2 delivers substantial improvements over Nef-Net, yielding an increase of around 6 dB in PSNR in the real-world setting. Our data and code are publicly available at https://github.com/HKUSTGZ-ML4Health-Lab/NEFNET-v2.
ECG representation, Cardiac Diagnosis
applications to physical sciences (physics, chemistry, biology, etc.)
An enhanced variant of Nef-Net to generate panoramic ECG views, including previously unseen views.
[ 6, 2, 6 ]
Accept (Poster)
Zehui Zhan, Yaojun Hu, Jiajing Zhang, Wanchen Lian, Wanqing Wu, Jintai Chen
~Zehui_Zhan1, ~Yaojun_Hu2, ~Jiajing_Zhang1, ~Wanchen_Lian1, ~Wanqing_Wu1, ~Jintai_Chen1
20250917
https://openreview.net/forum?id=JzZhhhxniR
JzZhhhxniR
@inproceedings{ zhan2026nefnet, title={Nef-Net v2: Adapting Electrocardio Panorama in the wild}, author={Zehui Zhan and Yaojun Hu and Jiajing Zhang and Wanchen Lian and Wanqing Wu and Jintai Chen}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=JzZhhhxniR} }
OpenReview/ICLR/figures/2026/accept_poster/JzZhhhxniR/Figure2.png
2
Figure 2: NEF-NET V2 architecture for Electrocardio Panorama synthesis (illustrated for a 3-input to 2-query view synthesis task as example). The NEF-NET V2 first employs a View Encoder to extract features from the Recorded ECG that are relevant to the Queried ECG. These extracted features are then fused using a Geometric View Transformer to synthesize the query view.
<paragraph_1>The key idea of NEF-NET V2 is to formulate ECG view synthesis as a direct view-to-view transformation problem. This is a pairwise deterministic mapping: the model converts the observed lead signals into the target lead through a single-step transformation, without modeling any shared geometric prior (e.g., the electrocardio field representation) as Nef-Net (Chen et al., 2021) does. NEF-NET V2 incorporates three core components: Angle Embedding, View Encoder, and Geometric View Transformer (GeoVT), as illustrated in Fig. 2. Formally, let X = {x_1, · · · , x_l} with each x_i ∈ R^{1×t} denote l ECG signals recorded from distinct viewing angles.</paragraph_1>
diagram
0.992609
OpenReview
ICLR
2,026
Unified Vision–Language Modeling via Concept Space Alignment
We introduce vSONAR, a vision–language embedding space extended from the text-only embedding space SONAR, which supports 200 text languages and 37 speech languages. To construct vSONAR, we propose a post-hoc alignment pipeline that maps the representations of an existing vision encoder into the SONAR space. We thoroughly evaluate vSONAR and show that its embeddings achieve competitive performance on text-to-video retrieval. Equipped with the SONAR text decoder, vSONAR further surpasses state-of-the-art vision–language models on video captioning tasks, including DREAM-1K (BLEU 24.3 vs. 19.6) and VATEX (BLEU 45.0 vs. 41.5). Leveraging vSONAR, we first demonstrate that the Large Concept Model (LCM) operating in SONAR and trained with English text only, can perform both single- and multi-visual concept understanding in a zero-shot manner. Finally, we introduce vLCM, which extends the LCM with vision–language instruction tuning. vLCM encodes vision and language inputs into a unified sequence of latent embeddings via vSONAR and SONAR, and it is trained with the same latent diffusion objective for next-embedding prediction as in LCM's text-only pre-training. Experiments on a large-scale multilingual and -modal instruction–tuning data mixture highlight the potential of vLCM: vLCM matches state-of-the-art vision-language models on tasks covering image/video captioning and question answering, while significantly outperforming them across 61 rich- to low-resource languages out of all 62 tested languages.
multimodal embedding space, multilingual embedding space
applications to computer vision, audio, language, and other modalities
[ 6, 6, 6, 4 ]
Accept (Poster)
Yifu QIU, Paul-Ambroise Duquenne, Holger Schwenk
~Yifu_QIU1, ~Paul-Ambroise_Duquenne1, ~Holger_Schwenk1
20250918
https://openreview.net/forum?id=4LiX5ddGcU
4LiX5ddGcU
@inproceedings{ qiu2026unified, title={Unified Vision{\textendash}Language Modeling via Concept Space Alignment}, author={Yifu QIU and Paul-Ambroise Duquenne and Holger Schwenk}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=4LiX5ddGcU} }
OpenReview/ICLR/figures/2026/accept_poster/4LiX5ddGcU/Figure1.png
1
Figure 1: Left: Illustration of V-SONAR. Right: fine-tuning V-LCM with vision-language instruction tuning.
<paragraph_1>Architecture. The architecture of V-SONAR is illustrated in the left panel of Figure 1. Given the input image or video, the PERCEPTION ENCODER (PE) first encodes each frame separately. Then, we stack a lightweight projector on top of PE to adapt the encoder’s representations into the SONAR space. The projector first injects positional embeddings into the embeddings of all frames, thus encoding temporal order information, followed by a single temporal attention layer that enables frame-level interactions. Finally, an attention layer aggregates the frame embeddings into a single video-level representation, which serves as the final embedding for downstream tasks. See Appendix D for implementation details.</paragraph_1>
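A hedged PyTorch sketch of the projector described above: positional embeddings added to per-frame features, one temporal self-attention layer, and attention pooling into a single video-level embedding, followed by a linear map into the target space. The class name, hidden sizes, and frame limit are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FrameProjector(nn.Module):
    """Sketch: add positional embeddings to per-frame features, run one temporal
    self-attention layer, attention-pool frames into one embedding, then project
    into a SONAR-like space. Sizes are illustrative."""

    def __init__(self, dim=1024, out_dim=1024, max_frames=64, heads=8):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(max_frames, dim))
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool_query = nn.Parameter(torch.zeros(1, 1, dim))
        self.pool_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, out_dim)

    def forward(self, frame_feats):               # (batch, n_frames, dim)
        b, t, _ = frame_feats.shape
        x = frame_feats + self.pos[:t]            # inject temporal order
        x, _ = self.temporal_attn(x, x, x)        # frame-level interaction
        q = self.pool_query.expand(b, -1, -1)
        pooled, _ = self.pool_attn(q, x, x)       # aggregate frames into one token
        return self.proj(pooled.squeeze(1))       # (batch, out_dim)

video_embedding = FrameProjector()(torch.randn(2, 16, 1024))
print(video_embedding.shape)  # torch.Size([2, 1024])
```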
diagram
0.931501
OpenReview
ICLR
2,026
Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients
As AI becomes more personal, e.g., Agentic AI, there is an increasing need for personalizing models for various use cases. Personalized federated learning (PFL) enables each client to collaboratively leverage other clients' knowledge for better adaptation to the task of interest, without privacy risks. Despite its potential, existing PFL methods remain confined to rather simplified scenarios where data and models are the same across clients. To move towards realistic scenarios, we propose FedMosaic, a method that jointly addresses data and model heterogeneity with a task-relevance-aware model aggregation strategy to reduce parameter interference, and a dimension-invariant module that enables knowledge sharing across heterogeneous architectures without huge computational cost. To mimic the real-world task diversity, we propose a multi-modal PFL benchmark spanning 40 distinct tasks with distribution shifts over time. The empirical study shows that FedMosaic outperforms the state-of-the-art PFL methods, excelling in both personalization and generalization capabilities under challenging, realistic scenarios.
Collaborative Learning, Federated Learning, Continual Learning, Multi-modal Learning, Personalization, Distributed Learning
applications to computer vision, audio, language, and other modalities
[ 10, 4, 6, 8 ]
Accept (Poster)
Minhyuk Seo, Taeheon Kim, Hankook Lee, Jonghyun Choi, Tinne Tuytelaars
~Minhyuk_Seo1, ~Taeheon_Kim3, ~Hankook_Lee1, ~Jonghyun_Choi1, ~Tinne_Tuytelaars1
20250918
https://openreview.net/forum?id=0g5Dk4Qfh0
0g5Dk4Qfh0
@inproceedings{ seo2026not, title={Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients}, author={Minhyuk Seo and Taeheon Kim and Hankook Lee and Jonghyun Choi and Tinne Tuytelaars}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=0g5Dk4Qfh0} }
OpenReview/ICLR/figures/2026/accept_poster/0g5Dk4Qfh0/Figure14.png
14
Figure 14: Illustration of blockwise PQ-LoRA. When a model has NB PQ-LoRA modules, each block employs PQ-LoRA at its last layer, while the remaining layers adopt conventional LoRA. Each block contains the same number of layers.
<paragraph_1>To identify layer-wise correspondences between depth-heterogeneous models, we analyze representation alignment using CKA (Kornblith et al., 2019). Specifically, we measure similarity across layers within the Llama-3 family (1B, 3B, 8B) and the Qwen-2.5 family (0.5B, 1.5B, 3B), as illustrated in Fig. 12. As shown in the figure, layers with the same relative depth exhibit strong alignment, indicating approximately linear alignment within both the Llama-3 and Qwen-2.5 families. Moreover, we observe near-linear alignment even across families, i.e., between Llama-3 and Qwen-2.5, despite weaker linearity than intra-family alignment. Furthermore, to demonstrate that this layer-wise correlation trend generally holds across different models, not just between Llama and Qwen, we additionally include the layer-wise correlation analysis between InternLM (Cai et al., 2024) and Llama in Fig. 13, which shows the same trend as our previous findings. This empirical analysis supports our block-wise aggregation of PQ-LoRA. We provide an illustration of the block-wise PQ-LoRA in Fig. 14.</paragraph_1>
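A short NumPy sketch of the linear CKA score used for this kind of cross-model layer alignment analysis, assuming activation matrices of shape (samples, features); the toy data and layer counts are illustrative.

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two activation matrices of shape (n_samples, dim).
    Features are mean-centered first (Kornblith et al., 2019)."""
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    dot = np.linalg.norm(y.T @ x, "fro") ** 2
    norm = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return dot / norm

# toy cross-model similarity matrix over relative depths
rng = np.random.default_rng(0)
acts_a = [rng.normal(size=(256, 64)) for _ in range(4)]   # e.g. 4 layers of model A
acts_b = [a @ rng.normal(size=(64, 96)) for a in acts_a]  # model B: linear maps of A's layers
sim = np.array([[linear_cka(a, b) for b in acts_b] for a in acts_a])
print(np.round(sim, 2))  # diagonal entries clearly exceed the off-diagonal ones
```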
diagram
0.962517
OpenReview
ICLR
2,026
FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference
Large language models (LLMs) have been widely deployed with rapidly expanding context windows to support increasingly demanding applications. However, long contexts pose significant deployment challenges, primarily due to the KV cache whose size grows proportionally with context length. While KV cache compression methods are proposed to address this issue, KV dropping methods incur considerable accuracy loss, and KV retrieval methods suffer from significant efficiency bottlenecks. We propose FreeKV, an algorithm-system co-optimization framework to enhance KV retrieval efficiency while preserving accuracy. On the algorithm side, FreeKV introduces speculative retrieval to shift the KV selection and recall processes out of the critical path, combined with fine-grained correction to ensure accuracy. On the system side, FreeKV employs hybrid KV layouts across CPU and GPU memory to eliminate fragmented data transfers, and leverages double-buffered streamed recall to further improve efficiency, enabling effective overlap with computation, full latency hiding, and practical speedups from speculative recall. Experiments demonstrate that FreeKV achieves near-lossless accuracy across various scenarios and models, delivering up to 13$\times$ speedup compared to SOTA KV retrieval methods.
LLM inference, KV cache
infrastructure, software libraries, hardware, systems, etc.
We propose FreeKV, an algorithm-system co-optimization framework for LLM inference to enhance KV retrieval efficiency while preserving accuracy.
[ 8, 2, 6, 6 ]
Accept (Poster)
Guangda Liu, Chengwei Li, Zhenyu Ning, Jing Lin, Yiwu Yao, Danning Ke, Minyi Guo, Jieru Zhao
~Guangda_Liu1, ~Chengwei_Li1, ~Zhenyu_Ning1, ~Jing_Lin6, ~Yiwu_Yao1, ~Danning_Ke1, ~Minyi_Guo1, ~Jieru_Zhao1
20250918
https://openreview.net/forum?id=wXAn7orB1H
wXAn7orB1H
@inproceedings{ liu2026freekv, title={Free{KV}: Boosting {KV} Cache Retrieval for Efficient {LLM} Inference}, author={Guangda Liu and Chengwei Li and Zhenyu Ning and Jing Lin and Yiwu Yao and Danning Ke and Minyi Guo and Jieru Zhao}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=wXAn7orB1H} }
OpenReview/ICLR/figures/2026/accept_poster/wXAn7orB1H/Figure5.png
5
Figure 5: System overview of FreeKV.
<paragraph_1>The system overview of FreeKV is illustrated in Fig. 5. In the data plane, FreeKV retains the query vectors from the previous step, page summaries and cache for selected KV pages in GPU memory. In CPU memory, FreeKV maintains a complete KV cache pool for offloading KV pages. In the control plane, a controller on CPU manages the scheduling and synchronization of operations such as correction, attention, selection and recall launched on different CPU threads and GPU streams, following the timeline described in Sec. 3.</paragraph_1> <paragraph_2>End-to-end latency. As shown in Fig. 7, FreeKV demonstrates significant efficiency gains over SOTA KV retrieval methods, achieving up to 13.7× and 8.4× speedups compared to ArkVale and ShadowKV, respectively. Moreover, FreeKV attains efficiency comparable to dropping methods like RaaS and RazorAttention, which do not involve offloading or recall. The speedups over ArkVale are detailed in Fig. 7. For InfiniGen, FreeKV achieves 3.2× and 5.4× speedups under long-input and long-generation scenarios on Qwen-2.5-7B, and 5.1× and 8.5× on Llama-3.1-8B. The improvements over ShadowKV are comparable to those over InfiniGen, reaching up to 8.4× on Llama-3.1-8B in the long-generation scenario. The improvements become more pronounced for large batch sizes and in long-generation scenarios, where more recall operations are required. In addition, the improvements are amplified for Llama-3.1-8B, which has more KV heads and a larger KV cache compared to Qwen-2.5-7B. Moreover, we present inference latency across different input and output lengths in Appendix C.1, showing that FreeKV consistently achieves substantial speedups under various settings. We also conduct ablation studies on the impact of our efficiency optimizations in Appendix C.2, which demonstrate their effectiveness.</paragraph_2>
diagram
0.981499
OpenReview
ICLR
2,026
Fine-Grained Activation Steering: Steering Less, Achieving More
Activation steering has emerged as a cost-effective paradigm for modifying large language model (LLM) behaviors. Existing methods typically intervene at the block level, steering the bundled activations of selected attention heads, feedforward networks, or residual streams. However, we reveal that block-level activations are inherently heterogeneous, entangling beneficial, irrelevant, and harmful features, thereby rendering block-level steering coarse, inefficient, and intrusive. To investigate the root cause, we decompose block activations into fine-grained atomic unit (AU)–level activations, where each AU-level activation corresponds to a single dimension of the block activation, and each AU denotes a slice of the block weight matrix. Steering an AU-level activation is thus equivalent to steering its associated AU. Our theoretical and empirical analysis show that heterogeneity arises because different AUs or dimensions control distinct token distributions in LLM outputs. Hence, block-level steering inevitably moves helpful and harmful token directions together, which reduces efficiency. Restricting intervention to beneficial AUs yields more precise and effective steering. Building on this insight, we propose AUSteer, a simple and efficient method that operates at a finer granularity of the AU level. AUSteer first identifies discriminative AUs globally by computing activation momenta on contrastive samples. It then assigns adaptive steering strengths tailored to diverse inputs and selected AU activations. Comprehensive experiments on multiple LLMs and tasks show that AUSteer consistently surpasses advanced baselines while steering considerably fewer activations, demonstrating that steering less achieves more.
Activation Steering, Large Language Models, Fine-Grained Intervention
foundation or frontier models, including LLMs
Breaking LLM blocks to fine-grained atomic units for intervention: steering less achieves more
[ 4, 4, 6 ]
Accept (Poster)
Zijian Feng, Tianjiao Li, Zixiao Zhu, Hanzhang Zhou, Junlang Qian, Li Zhang, Chua Jia Jim Deryl, Mak Lee Onn, Gee Wah Ng, Kezhi Mao
~Zijian_Feng2, ~Tianjiao_Li2, ~Zixiao_Zhu2, ~Hanzhang_Zhou1, ~Junlang_Qian1, ~Li_Zhang70, ~Chua_Jia_Jim_Deryl2, ~Mak_Lee_Onn1, ~Gee_Wah_Ng1, ~Kezhi_Mao1
20250918
https://openreview.net/forum?id=guSVafqhrB
guSVafqhrB
@inproceedings{ feng2026finegrained, title={Fine-Grained Activation Steering: Steering Less, Achieving More}, author={Zijian Feng and Tianjiao Li and Zixiao Zhu and Hanzhang Zhou and Junlang Qian and Li Zhang and Chua Jia Jim Deryl and Mak Lee Onn and Gee Wah Ng and Kezhi Mao}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=guSVafqhrB} }
OpenReview/ICLR/figures/2026/accept_poster/guSVafqhrB/Figure1.png
1
Figure 1: Comparison of block-level steering (prior work) and AU-level steering (Ours).
<paragraph_1>However, a common practice in existing methods is block-level steering, where a “block” denotes the multi-head attention (MHA), the feed-forward network (FFN), or the layer’s residual stream. As shown in Figure 1 (a), the intervention is vector-level: every dimension of the selected block’s activation is bundled and steered simultaneously. One of the main limitations of block-level intervention is that it ignores heterogeneity within block activations. These activations often span hundreds or thousands of dimensions, each indicating a different feature. Some features are beneficial for the task, while others are irrelevant or harmful. As a result, block-level steering is (1) too coarse: a block can be decomposed into finer functional units, and treating it as a single entity prevents precise targeting; (2) inefficient: steering the entire block amplifies both useful and harmful signals, which reduces efficiency and risks performance degradation; and (3) overly intrusive: it modifies many dimensions unnecessarily, increasing the intervention footprint.</paragraph_1> <paragraph_2>In greater depth, we empirically and theoretically justify the heterogeneity of block-level activations. We first decompose block-level activations into finer-grained atomic unit (AU) activations, where each AU-level activation corresponds to a single dimension of the block activation, and each AU denotes a slice of the block weight matrix. Steering an AU-level activation is thus equivalent to steering its associated AU. As shown in Figure 1 (b), each AU-level intervention targets a single dimension. Both the intervention value and the affected activation are scalars. Empirically, we find that AU-level steering effects vary widely: some dimensions improve performance, some degrade it, and others are neutral, confirming heterogeneity. In many cases, steering a single dimension or a small subset outperforms steering the entire block.</paragraph_2> <paragraph_3>To further validate this, we first examine the convergence behavior of AU steering: different AUs govern different output token distributions, and as steering strength increases, the LLM’s output tends to converge to the AU’s token distribution. For the selected 7th attention head at the 27th layer, we scale the AU coefficient from 10 to an extremely large value (100,000) and compute the normalized KL divergence between the output at each strength and the output at 100,000. In Figure 3, columns 1 and 2 show these divergences for the 44th AU and the 84th AU. The divergence decreases with strength, indicating convergence. Column 3 shows the pairwise KL divergence between the 44th AU and the 84th AU across strengths. The divergence increases with strength, indicating that the two AUs tend to drive the model toward different output distributions.</paragraph_3>
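A minimal sketch of what single-dimension (AU-level) steering looks like in practice, using a PyTorch forward hook that nudges one output dimension of a chosen block while leaving the rest untouched. The toy block, dimension index, and strength are assumptions for illustration, not AUSteer's selection or strength-assignment procedure.

```python
import torch
import torch.nn as nn

def make_au_steering_hook(dim_index, strength):
    """Forward hook that shifts a single activation dimension (an 'AU') by a scalar,
    leaving every other dimension of the block output unchanged."""
    def hook(module, inputs, output):
        steered = output.clone()
        steered[..., dim_index] = steered[..., dim_index] + strength
        return steered
    return hook

# toy "block": one linear layer standing in for an attention/FFN output projection
block = nn.Linear(16, 16)
handle = block.register_forward_hook(make_au_steering_hook(dim_index=7, strength=3.0))

x = torch.randn(2, 5, 16)          # (batch, seq, hidden)
with torch.no_grad():
    y = block(x)
print(y[..., 7].mean())            # only dimension 7 is shifted
handle.remove()
```

Block-level steering would instead add a full steering vector across every dimension of the output; restricting the hook to a single index is what makes the intervention AU-level.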
diagram
0.998495
OpenReview
ICLR
2,026
Counterfactual Structural Causal Bandits
Causal reasoning lies at the heart of robust and generalizable decision-making, and the *Pearl Causal Hierarchy* provides a formal language for distinguishing between observational ($\mathcal{L}_1$), interventional ($\mathcal{L}_2$), and counterfactual ($\mathcal{L}_3$) levels of reasoning. Existing bandit algorithms that leverage causal knowledge have primarily operated within the $\mathcal{L}_1$ and $\mathcal{L}_2$ regimes, treating each realizable and physical intervention as a distinct arm. That is, they have largely excluded counterfactual quantities due to their perceived inaccessibility. In this paper, we introduce a *counterfactual structural causal bandit* (ctf-SCB) framework which expands the agent's feasible action space beyond conventional observational and interventional arms to include a class of realizable counterfactual actions. Our framework offers a principled extension of structural causal bandits and paves the way for integrating counterfactual reasoning into sequential decision-making.
causal inference, counterfactual inference, structural causal bandits, causal decision making
causal reasoning
We introduce a counterfactual structural causal bandit (ctf-SCB) framework which expands the agent's feasible action space beyond conventional observational and interventional arms to include a class of realizable counterfactual actions.
[ 4, 4, 6, 8 ]
Accept (Poster)
Min Woo Park, Sanghack Lee
~Min_Woo_Park1, ~Sanghack_Lee1
20250920
https://openreview.net/forum?id=gjvTNxVd2f
gjvTNxVd2f
@inproceedings{ park2026counterfactual, title={Counterfactual Structural Causal Bandits}, author={Min Woo Park and Sanghack Lee}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=gjvTNxVd2f} }
OpenReview/ICLR/figures/2026/accept_poster/gjvTNxVd2f/Figure10.png
10
Figure 10: MUCT and IB are shown in red and blue, respectively; (b, c) non-POMISs; (d, e) POMISs.
<paragraph_1>For example, consider the causal diagram in Fig. 10a. Here, G = G[An(Y)_G] holds. An L1 action do(∅) is not a POMIS. To see this, we construct MUCT, initializing T = {Y}, as follows: since Y has an unobserved confounder with C, we update T = cc(Y)_G = {C, Y}, and thereafter add all the descendants of C, obtaining T = {C, D, Y}. Since there are no more unobserved confounders between T and An(Y)_G \ T, MUCT has been found and is given by MUCT(G, Y) = {C, D, Y} along with IB(G, Y) = {A, B} (Fig. 10b). According to the graphical characterization, we can conclude that do(∅) is not a POMIS with respect to ⟨G, Y⟩. Similarly, {B, C} is also not a POMIS, as IB(G_{B,C}, Y) = {B, D}, as depicted in Fig. 10c. In contrast, the regimes corresponding to Figs. 10d and 10e are POMISs, since they satisfy IB(G_X, Y) = X.</paragraph_1> <paragraph_2>Total trials: Task 1 (Fig. 5a): 10k; Task 2 (Fig. 3b): 10k; Task 3 (Fig. 9): 100k.</paragraph_2>
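A small sketch of the MUCT/IB fixed-point construction walked through above, on a hand-encoded graph. The edge set below is an assumed illustration consistent with the do(∅) case of Fig. 10a (A→C, B→D, C→D, C→Y, D→Y, with C↔Y confounded), and the routines paraphrase the paragraph rather than the paper's formal definitions.

```python
# Causal diagram over An(Y): directed edges plus bidirected (confounder) edges.
directed = {("A", "C"), ("B", "D"), ("C", "D"), ("C", "Y"), ("D", "Y")}
bidirected = {frozenset({"C", "Y"})}

def descendants(node):
    out, stack = {node}, [node]
    while stack:
        cur = stack.pop()
        for u, v in directed:
            if u == cur and v not in out:
                out.add(v)
                stack.append(v)
    return out

def muct_and_ib(reward="Y"):
    """Grow T from {Y} by alternating (i) closure under bidirected edges and
    (ii) closure under descendants until a fixed point; IB = parents of T outside T."""
    T = {reward}
    changed = True
    while changed:
        changed = False
        for e in bidirected:                 # (i) pull in the c-component of T
            if e & T and not e <= T:
                T |= e
                changed = True
        for v in list(T):                    # (ii) pull in descendants of T
            new = descendants(v) - T
            if new:
                T |= new
                changed = True
    IB = {u for (u, v) in directed if v in T and u not in T}
    return T, IB

print(muct_and_ib())   # ({'C', 'D', 'Y'}, {'A', 'B'}), matching the worked example
```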
diagram
0.990313
OpenReview
ICLR
2,026
SpaCE-Eval: A Benchmark for Real-World Multi-Modal Reasoning
Multi-modal Large Language Models (MLLMs) represent a significant advancement in artificial intelligence. Among the growing capabilities exhibited by MLLMs, abilities to understand and reason in real-world environments stand out as particularly vital as a fundamental prerequisite for a wide array of real-world applications. The current methods for evaluating MLLMs often fall short in their ability to comprehensively assess these crucial capabilities. However, being able to reason on complex environment-scale spaces, for example, room spaces, building spaces, and even urban spaces, and to predict the future and plan actions, is essential for humans and various autonomous agents to survive in the real physical world. To address these gaps, we propose a visual-question-answering benchmark, **SpaCE-Eval** (**Spa**tial Reasoning, **C**ommonsense Knowledge and **E**nvironment Interaction) in the real world, designed to evaluate some of MLLM’s most important reasoning abilities in real-world environments. As the name suggests, it challenges the models to reason on complex spatial scenarios, invoke commonsense knowledge of the physical world, and interact with the environment. The dataset consists of all new diagrams purposefully produced by humans, where diagram-question pairs are meticulously refined and selected through a rigorous pipeline. Additionally, with the benchmark, we evaluate a selection of leading MLLMs, both proprietary and open source. The results suggest that a significant enhancement of MLLMs in reasoning in the real physical world is necessary to realise more advanced general artificial intelligence.
Benchmark, Multi-modal Large Language Model, Visual Reasoning, Real World Environments, Evaluation
datasets and benchmarks
[ 6, 4, 6, 6 ]
Accept (Poster)
Xuyou Yang, Yucheng Zhao, Wenxuan Zhang, Immanuel Koh
~Xuyou_Yang1, ~Yucheng_Zhao3, ~Wenxuan_Zhang1, ~Immanuel_Koh1
20250919
https://openreview.net/forum?id=VAEkLS9VBr
VAEkLS9VBr
@inproceedings{ yang2026spaceeval, title={Spa{CE}-Eval: A Benchmark for Real-World Multi-Modal Reasoning}, author={Xuyou Yang and Yucheng Zhao and Wenxuan Zhang and Immanuel Koh}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=VAEkLS9VBr} }
OpenReview/ICLR/figures/2026/accept_poster/VAEkLS9VBr/Figure9.png
9
Figure 9: Example of Spatial Reasoning/Form Transformation.
diagram
0.873445
OpenReview
ICLR
2,026
GaussianFusion: Unified 3D Gaussian Representation for Multi-Modal Fusion Perception
The bird’s-eye view (BEV) representation enables multi-sensor features to be fused within a unified space, serving as the primary approach for achieving comprehensive multi-task perception. However, the discrete grid representation of BEV leads to significant detail loss and limits feature alignment and cross-modal information interaction in multimodal fusion perception. In this work, we break from the conventional BEV paradigm and propose a new universal framework for multi-task multi-modal fusion based on 3D Gaussian representation. This approach naturally unifies multi-modal features within a shared and continuous 3D Gaussian space, effectively preserving edge and fine texture details. To achieve this, we design a novel forward-projection-based multi-modal Gaussian initialization module and a shared cross-modal Gaussian encoder that iteratively updates Gaussian properties based on an attention mechanism. GaussianFusion is inherently a task-agnostic model, with its unified Gaussian representation naturally supporting various 3D perception tasks. Extensive experiments demonstrate the generality and robustness of GaussianFusion. On the nuScenes dataset, it outperforms the 3D object detection baseline BEVFusion by 2.6 NDS. Its variant surpasses GaussFormer on 3D semantic occupancy with 1.55 mIoU improvement while using only 30% of the Gaussians and achieving a 450% speedup.
Gaussian Representation, BEV Representation, Detection, Occupancy
applications to robotics, autonomy, planning
[ 2, 4, 6, 6 ]
Accept (Poster)
Xiao Zhao, Chang Liu, Mingxu Zhu, Zheyuan Zhang, Linna Song, Qingliang Luo, Chufan Guo, Kuifeng Su
~Xiao_Zhao4, ~Chang_Liu67, ~Mingxu_Zhu1, ~Zheyuan_Zhang6, ~Linna_Song1, ~Qingliang_Luo1, ~Chufan_Guo1, ~Kuifeng_Su1
20250916
https://openreview.net/forum?id=7jXxQ9bGoU
7jXxQ9bGoU
@inproceedings{ zhao2026gaussianfusion, title={GaussianFusion: Unified 3D Gaussian Representation for Multi-Modal Fusion Perception}, author={Xiao Zhao and Chang Liu and Mingxu Zhu and Zheyuan Zhang and Linna Song and Qingliang Luo and Chufan Guo and Kuifeng Su}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=7jXxQ9bGoU} }
OpenReview/ICLR/figures/2026/accept_poster/7jXxQ9bGoU/Figure1.png
1
Figure 1: Comparison of the discrete BEV representation fusion paradigm Liu et al. (2023b) and our proposed continuous Gaussian representation fusion paradigm. B, G, C, L, and F denote BEV, Gaussian, Camera, Lidar, and Fusion.
<paragraph_1>BEV directly discretizes and quantizes data, leading to inevitable information loss. During feature extraction, perception data are projected onto a fixed-resolution BEV grid, which compresses spatial information. This issue becomes particularly severe when the BEV resolution is low, as it directly impacts model performance by failing to adequately preserve fine-grained scene structures. Increasing the BEV resolution, meanwhile, brings unacceptable computational overhead, as shown in Table 1. Additionally, BEV fusion strategies often rely on simple feature concatenation or weighted summation, which are insufficient for effective cross-modal feature interaction and alignment, ultimately leading to suboptimal fusion performance, as illustrated in Fig. 1(a).</paragraph_1> <paragraph_2>To address these challenges, we introduce a fusion approach based on 3D Gaussian Splatting (3DGS) Kerbl et al. (2023) to achieve more fine-grained information modeling and more natural multimodal alignment. As shown in Fig. 1(b), 3DGS employs continuous Gaussian distributions to represent the scene, preserving rich geometric and semantic information in the Gaussian stage and preventing the early quantization-induced information loss seen in BEV-based methods. Unlike direct BEV quantization, 3DGS aggregates information before its final projection onto the BEV grid, allowing cross-modal features to interact at a higher-dimensional level and capturing finer spatial structures prior to quantization; Table 1 shows the effectiveness of this strategy. Moreover, the covariance matrices of Gaussians enable adaptive modeling of uncertainty, enhancing the representation of object shapes and boundaries.</paragraph_2>
diagram
0.993349
OpenReview
ICLR
2,026
Beyond Simple Graphs: Neural Multi-Objective Routing on Multigraphs
Learning-based methods for routing have gained significant attention in recent years, both in single-objective and multi-objective contexts. Yet, existing methods are unsuitable for routing on multigraphs, which feature multiple edges with distinct attributes between node pairs, despite their strong relevance in real-world scenarios. In this paper, we propose two graph neural network-based methods to address multi-objective routing on multigraphs. Our first approach operates directly on the multigraph by autoregressively selecting edges until a tour is completed. The second model, which is more scalable, first simplifies the multigraph via a learned pruning strategy and then performs autoregressive routing on the resulting simple graph. We evaluate both models empirically, across a wide range of problems and graph distributions, and demonstrate their competitive performance compared to strong heuristics and neural baselines.
Combinatorial Optimization, Reinforcement Learning, Graph-based Machine Learning, Multigraphs, Traveling Salesman Problem, Multi-Objective Optimization
learning on graphs and other geometries & topologies
We introduce two GNN-based models for routing with multiple objectives on multigraphs and asymmetric graphs
[ 8, 4, 4 ]
Accept (Poster)
Filip Rydin, Attila Lischka, Jiaming Wu, Morteza Haghir Chehreghani, Balazs Kulcsar
~Filip_Rydin1, ~Attila_Lischka1, ~Jiaming_Wu3, ~Morteza_Haghir_Chehreghani2, ~Balazs_Kulcsar1
20250919
https://openreview.net/forum?id=55laGcPNZZ
55laGcPNZZ
@inproceedings{ rydin2026beyond, title={Beyond Simple Graphs: Neural Multi-Objective Routing on Multigraphs}, author={Filip Rydin and Attila Lischka and Jiaming Wu and Morteza Haghir Chehreghani and Balazs Kulcsar}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=55laGcPNZZ} }
OpenReview/ICLR/figures/2026/accept_poster/55laGcPNZZ/Figure1.png
1
Figure 1: Edge-based GMS and its most important components.
<paragraph_1>We visualize GMS-EB in Figure 1. The encoder, consisting of L GREAT-layers, outputs edge embeddings. Using them, the decoder constructs valid tours autoregressively. Given the instance s and incomplete route π_{1:t−1} in construction step t, the decoder selects edge π_t with probability p_{θ(λ)}(π_t | π_{1:t−1}, s). Thus the probability of the whole route π is the product of the per-step probabilities, p_{θ(λ)}(π | s) = ∏_t p_{θ(λ)}(π_t | π_{1:t−1}, s).</paragraph_1>
diagram
0.998319
OpenReview
ICLR
2,026
Goedel-Prover-V2: Scaling Formal Theorem Proving with Scaffolded Data Synthesis and Self-Correction
Automated theorem proving (ATP) --- the task of generating a proof that passes automated proof verification given a math question in formal language --- is a critical challenge at the intersection of mathematics and Artificial Intelligence (AI). We introduce Goedel-Prover-V2, a family of two language models that establish a new state-of-the-art (SOTA) in open-source ATP, using the Lean proof assistant. In addition to standard expert iteration and reinforcement learning, our approach incorporates three key innovations: (1) During training when improvement plateaus on human questions, the prover does scaffolded data synthesis to generate synthetic questions of increasing difficulty for its own training; (2) The prover is trained to self-correct using Lean compiler feedback; (3) Improved test-time exploration through checkpoint averaging to balance accuracy and diversity. Our small model, Goedel-Prover-V2-8B, reaches 84.6\% pass@32 on MiniF2F and outperforms DeepSeek-Prover-V2-671B despite being $80\times$ smaller. Our flagship model, Goedel-Prover-V2-32B, achieves 88.1\% on MiniF2F at pass@32 in standard mode and 90.4\% in self-correction mode, outperforming prior SOTA by a large margin. Additionally, our flagship model solves 86 problems on PutnamBench at pass@184, securing first place among open-source models and surpassing DeepSeek-Prover-V2-671B's record of 47 problems by pass@1024 with about $20\times$ smaller model size and significantly lower compute budget. Our models, code, and data are released at \url{https://github.com/Goedel-LM/Goedel-Prover-V2}.
Theorem Proving, Reasoning
foundation or frontier models, including LLMs
[ 6, 6, 4, 6 ]
Accept (Poster)
Yong Lin, Shange Tang, Bohan Lyu, Ziran Yang, Jui-Hui Chung, Haoyu Zhao, Lai Jiang, Yihan Geng, Jiawei Ge, Jingruo Sun, Jiayun Wu, Jiri Gesi, Ximing Lu, David Acuna, Kaiyu Yang, Hongzhou Lin, Yejin Choi, Danqi Chen, Sanjeev Arora, Chi Jin
~Yong_Lin2, ~Shange_Tang1, ~Bohan_Lyu1, ~Ziran_Yang1, ~Jui-Hui_Chung1, ~Haoyu_Zhao1, ~Lai_Jiang4, ~Yihan_Geng1, ~Jiawei_Ge3, ~Jingruo_Sun1, ~Jiayun_Wu1, ~Jiri_Gesi1, ~Ximing_Lu1, ~David_Acuna1, ~Kaiyu_Yang1, ~Hongzhou_Lin1, ~Yejin_Choi1, ~Danqi_Chen1, ~Sanjeev_Arora1, ~Chi_Jin1
20250916
https://openreview.net/forum?id=j4C0nALrgK
j4C0nALrgK
@inproceedings{ lin2026goedelproverv, title={Goedel-Prover-V2: Scaling Formal Theorem Proving with Scaffolded Data Synthesis and Self-Correction}, author={Yong Lin and Shange Tang and Bohan Lyu and Ziran Yang and Jui-Hui Chung and Haoyu Zhao and Lai Jiang and Yihan Geng and Jiawei Ge and Jingruo Sun and Jiayun Wu and Jiri Gesi and Ximing Lu and David Acuna and Kaiyu Yang and Hongzhou Lin and Yejin Choi and Danqi Chen and Sanjeev Arora and Chi Jin}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=j4C0nALrgK} }
OpenReview/ICLR/figures/2026/accept_poster/j4C0nALrgK/Figure3.png
3
Figure 3: The overall pipeline of model training.
<paragraph_1>We observe that while DeepSeek-Prover-V2 models are already heavily trained and have lost self-correction capabilities, other models like Qwen3 lack the ability to generate formal proofs. To address this trade-off, we use data distilled from DeepSeek-Prover-V2 to cold-start Qwen3, followed by large-scale generation of revision and direct proof data with the resulting model. We then train our own model and iteratively refine it, incorporating scaffolded data. During training, we observe a reduction in output diversity (a form of overfitting) after each stage and apply model averaging to mitigate this. The whole training pipeline consists of the following steps, as illustrated in Figure 3:</paragraph_1>
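The checkpoint-averaging step mentioned above can be sketched as a simple weighted average of parameter tensors. This is a generic illustration (a "model soup"-style merge) assuming each checkpoint file stores a plain state dict; the file names are placeholders and this is not the authors' exact procedure.

```python
import torch

def average_checkpoints(paths, weights=None):
    """Average parameter tensors across several checkpoints.

    paths: list of checkpoint files, each assumed to contain a plain state dict.
    weights: optional mixing weights; defaults to a uniform average."""
    weights = weights or [1.0 / len(paths)] * len(paths)
    avg = None
    for w, p in zip(weights, paths):
        state = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: w * v.float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += w * v.float()
    return avg

# usage sketch (paths are placeholders):
# merged = average_checkpoints(["ckpt_step_1000.pt", "ckpt_step_2000.pt"])
# model.load_state_dict(merged)
```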
diagram
0.951549
OpenReview
ICLR
2,026
Learning Unified Representation of 3D Gaussian Splatting
A well-designed vectorized representation is crucial for the learning systems natively based on 3D Gaussian Splatting. While 3DGS enables efficient and explicit 3D reconstruction, its parameter-based representation remains hard to learn as features, especially for neural-network-based models. Directly feeding raw Gaussian parameters into learning frameworks fails to address the non-unique and heterogeneous nature of the Gaussian parameterization, yielding highly data-dependent models. This challenge motivates us to explore a more principled approach to represent 3D Gaussian Splatting in neural networks that preserves the underlying color and geometric structure while enforcing unique mapping and channel homogeneity. In this paper, we propose an embedding representation of 3DGS based on continuous submanifold fields that encapsulate the intrinsic information of Gaussian primitives, thereby benefiting the learning of 3DGS.
Representation Learning, 3D Gaussian Splatting
unsupervised, self-supervised, semi-supervised, and supervised representation learning
Proposed a new representation of 3DGS based on submanifold field that is more suitable for learning.
[ 2, 4, 8, 8 ]
Accept (Poster)
Yuelin Xin, Yuheng Liu, Xiaohui Xie, Xinke Li
~Yuelin_Xin1, ~Yuheng_Liu1, ~Xiaohui_Xie2, ~Xinke_Li1
20250904
https://openreview.net/forum?id=NvpVtGG6hk
NvpVtGG6hk
@inproceedings{ xin2026learning, title={Learning Unified Representation of 3D Gaussian Splatting}, author={Yuelin Xin and Yuheng Liu and Xiaohui Xie and Xinke Li}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=NvpVtGG6hk} }
OpenReview/ICLR/figures/2026/accept_poster/NvpVtGG6hk/Figure6.png
6
Figure 6: Setting of a Gaussian Neural Field, we compare between the prediction target SF embedding and raw GS parameters.
<paragraph_1>Gaussian Neural Fields. To validate the potential of our representation for advanced downstream tasks, we introduce the Gaussian Neural Field (GNF). Drawing inspiration from the decoding structures in generative diffusion models (e.g., DiffGS by Zhou et al. (2024b)) and neural compression frameworks (Wu & Tuytelaars, 2024), the GNF functions as a coordinate-based neural implicit field as illustrated in Fig. 6. Specifically, it employs a lightweight MLP (architecture detailed in App. D.4) to learn a continuous mapping from spatial coordinates xi to per-primitive descriptors. This setup allows us to evaluate the “learnability” of our representation: while regressing heterogeneous raw parameters θi often leads to optimization difficulties, our unified SF embeddings provide a smooth and well-conditioned target for the neural field. As evidenced in Tab. 3 and visualization in App. D.4, the SF-guided GNF outperforms the parameter-based baseline in visual fidelity with equivalent training effort. This indicates that our representation is more friendly to neural networks, hinting at its utility for potential downstream generative and compression tasks.</paragraph_1>
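A minimal sketch of a coordinate-based neural field of the kind described above: a small MLP maps primitive centers to per-primitive descriptors, which would then be regressed toward either raw Gaussian parameters or SF embeddings. The layer widths, embedding size, and random targets are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GaussianNeuralField(nn.Module):
    """Coordinate-based MLP mapping a primitive's center x_i to a per-primitive descriptor."""

    def __init__(self, emb_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, emb_dim),
        )

    def forward(self, xyz):          # (n_primitives, 3) Gaussian centers
        return self.net(xyz)         # (n_primitives, emb_dim) predicted descriptors

field = GaussianNeuralField()
pred = field(torch.rand(1000, 3))
loss = torch.nn.functional.mse_loss(pred, torch.randn(1000, 32))  # regress toward target embeddings
print(pred.shape, float(loss))
```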
diagram
0.973273
OpenReview
ICLR
2,026
Disentangled representation learning through unsupervised symmetry group discovery
Symmetry-based disentangled representation learning leverages the group structure of environment transformations to uncover the latent factors of variation. Prior approaches to symmetry-based disentanglement have required strong prior knowledge of the symmetry group's structure, or restrictive assumptions about the subgroup properties. In this work, we remove these constraints by proposing a method whereby an embodied agent autonomously discovers the group structure of its action space through unsupervised interaction with the environment. We prove the identifiability of the true action group decomposition under minimal assumptions, and derive two algorithms: one for discovering the group decomposition from interaction data, and another for learning Linear Symmetry-Based Disentangled (LSBD) representations without assuming specific subgroup properties. Our method is validated on three environments exhibiting different group decompositions, where it outperforms existing LSBD approaches.
Representation learning, Disentanglement, Group Theory
unsupervised, self-supervised, semi-supervised, and supervised representation learning
[ 8, 4, 8, 6 ]
Accept (Poster)
Barthélémy Dang-Nhu, Louis Annabi, Sylvain ARGENTIERI
~Barthélémy_Dang-Nhu1, ~Louis_Annabi1, ~Sylvain_ARGENTIERI1
20250919
https://openreview.net/forum?id=I6xjMoLY3j
I6xjMoLY3j
@inproceedings{ dang-nhu2026disentangled, title={Disentangled representation learning through unsupervised symmetry group discovery}, author={Barth{\'e}l{\'e}my Dang-Nhu and Louis Annabi and Sylvain ARGENTIERI}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=I6xjMoLY3j} }
OpenReview/ICLR/figures/2026/accept_poster/I6xjMoLY3j/Figure4.png
4
Figure 4: Two isomorphic group actions satisfying Assumption 2.
<paragraph_1>We argue that this assumption alone is not sufficient to recover the correct decomposition. To illustrate this point, consider two distinct environments analogous to Flatland shown in Figure 4: (a) a 2 × 3 cyclic grid, i.e. G_a = Z/2Z × Z/3Z with actions {x} ∪ {y}, and (b) a 6 × 1 cyclic grid, i.e. G_b = Z/6Z with actions {2x, 3x}. Both environments satisfy Assumption 2 and can share the same representation, as there exists an isomorphism from G_a to G_b that maps each action of the first environment to a corresponding action of the second. From the agent’s perspective, these two situations are indistinguishable in the absence of additional assumptions. Ideally, we seek an assumption that both covers a wide range of practical scenarios, i.e. action sets, and enables a computationally tractable procedure for recovering the group decomposition. Among the various options considered, we adopt the following assumption, as it offers a favorable trade-off between situation coverage and computational feasibility: Assumption 3. For all actions g, g′, if they belong to the same subgroup then there exists an action u and m ∈ {1, ..., M} such that either g = u^m g′, g = g′ u^m, g′ = g u^m, or g′ = u^m g.</paragraph_1> <paragraph_2>Combined with Assumption 2, it is straightforward to show that the implication of Assumption 3 is in fact an equivalence. As a result, we obtain a simple and practical criterion for determining whether two actions belong to the same subgroup. In terms of situation coverage, as soon as M ≥ 2, Assumption 3 holds in common cases, such as when the action set contains an action together with its inverse or when it covers the corresponding subgroup. In practice, the action sets considered in the experimental sections of state-of-the-art SBDRL algorithms typically fall into one of these categories. In the scenario illustrated in Figure 4, Assumption 3 allows us to assume that situation (b) will never occur; our method will thus assume that the environment corresponds to case (a).</paragraph_2>
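A quick numerical check of why cases (a) and (b) in Figure 4 are indistinguishable from transition data alone: by the Chinese remainder theorem, Z/6Z is isomorphic to Z/2Z × Z/3Z, so the actions {2x, 3x} in the single cyclic group map onto independent per-factor steps in the product group. The snippet below only verifies this isomorphism; it is an illustration, not part of the proposed algorithms.

```python
from itertools import product

# Chinese-remainder map phi: Z/6Z -> Z/2Z x Z/3Z, phi(k) = (k mod 2, k mod 3).
phi = {k: (k % 2, k % 3) for k in range(6)}

# phi is a bijection ...
assert sorted(phi.values()) == sorted(product(range(2), range(3)))

# ... and a homomorphism: phi(a + b mod 6) = phi(a) + phi(b) componentwise, so
# trajectories generated by actions in Z/6Z look exactly like trajectories
# generated by the corresponding per-factor actions in Z/2Z x Z/3Z.
for a in range(6):
    for b in range(6):
        lhs = phi[(a + b) % 6]
        rhs = ((phi[a][0] + phi[b][0]) % 2, (phi[a][1] + phi[b][1]) % 3)
        assert lhs == rhs
print("Z/6Z and Z/2Z x Z/3Z are isomorphic: check passed")
```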
diagram
0.908796
OpenReview
ICLR
2,026
On-the-Fly Adaptation to Quantization: Configuration-Aware LoRA for Efficient Fine-Tuning of Quantized LLMs
As increasingly large pre-trained models are released, deploying them on edge devices for privacy-preserving applications requires effective compression. Recent works combine quantization with the fine-tuning of high-precision LoRA adapters, which can substantially reduce model size while mitigating the accuracy loss from quantization. However, edge devices have inherently heterogeneous capabilities, while performing configuration-wise fine-tuning for every quantization setting is computationally prohibitive. In this paper, we propose CoA-LoRA, a method that dynamically adjusts the LoRA adapter to arbitrary quantization configurations (i.e., the per-layer bit-width choices of a pre-trained model) without requiring repeated fine-tuning. This is accomplished via a configuration-aware model that maps each configuration to its low-rank adjustments. The effectiveness of this model critically depends on the training configuration set, a collection of configurations chosen to cover different total bit-width budgets. However, constructing a high-quality configuration set is non-trivial. We therefore design a Pareto-based configuration search that iteratively optimizes the training configuration set, yielding more precise low-rank adjustments. Our experiments demonstrate that, unlike the state-of-the-art methods that require fine-tuning a separate LoRA adapter for each configuration, CoA-LoRA incurs no additional time cost while achieving comparable or even superior performance to those methods.
Configuration-aware optimization, Pareto-base configuration search, Quantization, Fine-tuning
foundation or frontier models, including LLMs
[ 4, 6, 6, 6 ]
Accept (Poster)
Rongguang Ye, Ming Tang, Edith C. H. Ngai
~Rongguang_Ye1, ~Ming_Tang5, ~Edith_C._H._Ngai1
20250916
https://openreview.net/forum?id=9OUg0nJE72
9OUg0nJE72
@inproceedings{ ye2026onthefly, title={On-the-Fly Adaptation to Quantization: Configuration-Aware Lo{RA} for Efficient Fine-Tuning of Quantized {LLM}s}, author={Rongguang Ye and Ming Tang and Edith C. H. Ngai}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=9OUg0nJE72} }
OpenReview/ICLR/figures/2026/accept_poster/9OUg0nJE72/Figure3.png
3
Figure 3: Illustration of configuration-aware LoRA adapters with parallel adjustment. The configuration-aware model θ generates adjustment matrices I + Uθ(Ci) from the quantization configuration Ci in parallel, where I denotes the identity matrix.
<paragraph_1>Motivated by this observation, we introduce a configuration-aware model θ : R^|Qi| → R^{r×r}, which maps a layer-level configuration vector Qi to a lightweight adjustment matrix Uθ(Qi) ∈ R^{r×r}. As shown in Fig. 3, each layer’s low-rank matrix L2,i is reparameterized as (I + Uθ(Qi))L2,i, where I is the identity matrix. Given a dataset D, let W̃C denote the quantized pre-trained model weights under configuration C. We define the adjusted model weights using a configuration-aware adjustment function:</paragraph_1> <paragraph_2>where HVI(f(C), C) = Hr(C ∪ {f(C)}) − Hr(C) measures the potential hypervolume increase contributed by C. For example, in Fig. 4 (left), the yellow area indicates the HVI of C(3).</paragraph_2> <paragraph_3>Fig. C.3 compares the results under different values of U, where U = 0 corresponds to the case without segment Pareto selection. We observe that applying segment Pareto selection (i.e., U = 20</paragraph_3> <paragraph_4>Figure C.3: Comparison of performance with different segment numbers K across four tasks.</paragraph_4>
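The reparameterization (I + Uθ(Qi))L2,i described above can be sketched in a few lines of NumPy. The shapes, the single linear map standing in for the configuration-aware model θ, and the example bit-width vector are assumptions made for illustration, not the paper's architecture.

```python
import numpy as np

d, r, q = 64, 8, 4                      # hidden size, LoRA rank, config length (assumed)
rng = np.random.default_rng(0)

# Frozen LoRA factors for one layer: the low-rank update is L1 @ L2.
L1 = 0.01 * rng.normal(size=(d, r))
L2 = 0.01 * rng.normal(size=(r, d))

# Stand-in for theta: maps a layer-level bit-width configuration Q_i to an r x r adjustment.
W_cfg = 0.01 * rng.normal(size=(q, r * r))
def U_theta(Q_i):
    return (Q_i @ W_cfg).reshape(r, r)

Q_i = np.array([4.0, 4.0, 8.0, 2.0])              # hypothetical per-group bit widths
L2_adjusted = (np.eye(r) + U_theta(Q_i)) @ L2     # (I + U_theta(Q_i)) L_{2,i}
W_delta = L1 @ L2_adjusted                        # effective LoRA update under configuration Q_i
print(W_delta.shape)                              # (64, 64)
```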
diagram
0.998697
OpenReview
ICLR
2,026
FHE-Coder: Evaluating LLM Agents for secure Fully Homomorphic Encryption Code Generation
Fully Homomorphic Encryption over the Torus (TFHE) is a cornerstone of confidential computing, yet its adoption is severely limited by a steep learning curve requiring specialized cryptographic expertise. To bridge this skills gap, we investigate the potential of Large Language Model (LLM) agents to automate the generation of secure TFHE and CKKS code from natural language. We introduce FHE-CODER, a novel, three-phase agentic framework designed to overcome the critical failure points of this process. Our framework integrates a Prompt Formalizer to structure user intent and configure secure parameters, a specialized RAG retriever for accurate API knowledge, and an automated Security Verifier that provides iterative feedback to correct cryptographic flaws. We comprehensively evaluate our framework by testing four leading LLMs on a benchmark of ten programming tasks of increasing difficulty. Our results demonstrate that while baseline agents consistently produce functionally correct but insecure code, our full agentic framework is uniquely capable of generating solutions that are simultaneously compilable, functionally correct, and verifiably secure. This work establishes the first robust methodology and benchmark for agentic TFHE and CKKS code generation, demonstrating a viable path toward democratizing secure computation.
Large Language Models, Agents, Code generation, Fully Homomorphic Encryption, Retrieval Augmented Generation
alignment, fairness, safety, privacy, and societal considerations
We built a three-phase agentic framework that enables Large Language Models to automatically generate secure and functional TFHE code, bridging the expertise gap that currently limits the adoption of privacy-preserving computation.
[ 6, 4, 6 ]
Accept (Poster)
Mayank Kumar, Jiaqi Xue, Mengxin Zheng, Qian Lou
~Mayank_Kumar8, ~Jiaqi_Xue1, ~Mengxin_Zheng1, ~Qian_Lou1
20250919
https://openreview.net/forum?id=4F1py5vQXm
4F1py5vQXm
@inproceedings{ kumar2026fhecoder, title={{FHE}-Coder: Evaluating {LLM} Agents for secure Fully Homomorphic Encryption Code Generation}, author={Mayank Kumar and Jiaqi Xue and Mengxin Zheng and Qian Lou}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=4F1py5vQXm} }
OpenReview/ICLR/figures/2026/accept_poster/4F1py5vQXm/Figure4.png
4
Figure 4: An offline, human-in-the-loop process creates a dictionary mapping expert-enriched docstrings to code snippets from the TFHE documentation.
<paragraph_1>Therefore, to mitigate each of these issues, we introduce the novel agentic code generation workflow and evaluation framework as shown in Fig. 2. Our workflow is composed of three key components designed to address these specific challenges. First, the FHE Prompt Formalizer (Fig. 3) corrects structural and parameterization errors by translating the user’s request into a formal specification with secure, correctly calculated cryptographic parameters. Second, to remedy the model’s lack of API knowledge, an FHE API RAG Retriever (Fig. 4) provides the agent with relevant documentation and code examples on-demand. Finally, to overcome inadequate evaluation, our FHE Security Verifier (Fig. 5) introduces a multi-faceted check for critical security properties, ensuring the generated code is not only functionally correct but also verifiably secure.</paragraph_1> <paragraph_2>The FHE API RAG Retriever, illustrated in Figure 4, addresses the limitations of standard retrieval methods, which fail almost entirely in this domain because LLMs lack the intrinsic structure to interpret strict cryptographic APIs or respect ciphertext-only computation rules B. To bridge the semantic gap between natural-language intent and these rigid library constraints, we construct a knowledge base using expert-enriched metadata. Specifically, we transform TFHE method docstrings1 into the Doxygen format2, utilizing structured tags such as @objective to embed machine-readable semantic instructions. This enrichment enables the agent to retrieve precise, security-compliant code snippets based on cryptographic purpose rather than ambiguous</paragraph_2>
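The retrieval idea sketched above (expert-enriched docstrings keyed by an @objective tag, mapped to code snippets) can be mocked up as a small dictionary queried by word overlap. The entries, scoring rule, and snippet placeholders below are invented for illustration and do not correspond to the paper's knowledge base or to any real TFHE API.

```python
# Toy knowledge base: enriched docstrings -> snippet placeholders (all illustrative).
KNOWLEDGE_BASE = {
    "@objective encrypt a plaintext integer under the client key": "<snippet: client-side encryption>",
    "@objective add two ciphertexts without decrypting them": "<snippet: homomorphic addition>",
    "@objective configure secure default cryptographic parameters": "<snippet: parameter setup>",
}

def retrieve(query, kb=KNOWLEDGE_BASE, k=1):
    """Rank entries by word overlap with the query and return the top-k snippets."""
    q = set(query.lower().split())
    ranked = sorted(kb.items(),
                    key=lambda kv: len(q & set(kv[0].lower().split())),
                    reverse=True)
    return [snippet for _, snippet in ranked[:k]]

print(retrieve("add two encrypted values homomorphically"))  # ['<snippet: homomorphic addition>']
```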
diagram
0.926502
OpenReview
ICLR
2,026
PALC: Preference Alignment via Logit Calibration
Aligning Large Language Models with human preferences typically requires computationally intensive training or complex reward architectures. We introduce PALC (Preference Alignment via Logit Calibration), a parameter-efficient framework that achieves test-time alignment through a novel intervention strategy: direct calibration in vocabulary space. Unlike existing methods that manipulate entangled hidden representations or rely on external reward models, PALC operates at the logit layer where each dimension corresponds to a distinct token, providing interpretable and efficient control. Our approach employs a bottleneck architecture that learns to compress the base model's hidden states and generate position-dependent calibration vectors, requiring only a fraction of the base model's parameters. Through this design, PALC sidesteps the superposition problem inherent in representation engineering while eliminating the computational overhead of guided decoding methods. A single scaling factor enables runtime adjustment of alignment strength without retraining, allowing practitioners to balance between preserving model capabilities and enforcing preferences. Experiments demonstrate that PALC outperforms most test-time alignment methods while maintaining near-baseline inference speed. Our ablations reveal that human preferences concentrate on surprisingly low-dimensional manifolds, validating our architectural choices. By establishing vocabulary-space intervention as an effective alignment paradigm, PALC makes preference alignment accessible for resource-constrained deployments where traditional methods are infeasible, opening new avenues for scalable and adaptive AI alignment.
AI alignment, Representation Editing
alignment, fairness, safety, privacy, and societal considerations
PALC: preference alignment via logit calibration. Learns compact calibrations for frozen LLMs, achieving strong alignment without external rewards or fine-tuning. Outperforms most test-time methods with minimal latency.
[ 6, 6, 6, 4 ]
Accept (Poster)
SANGHYUN LEE, Hoh Peter In
~SANGHYUN_LEE4, ~Hoh_Peter_In1
20250920
https://openreview.net/forum?id=0cmuYj3WeG
0cmuYj3WeG
@inproceedings{ lee2026palc, title={{PALC}: Preference Alignment via Logit Calibration}, author={SANGHYUN LEE and Hoh Peter In}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=0cmuYj3WeG} }
OpenReview/ICLR/figures/2026/accept_poster/0cmuYj3WeG/Figure1.png
1
Figure 1: Overview of the PALC framework. Unlike conventional representation steering methods that intervene in entangled hidden spaces, PALC treats the base model’s hidden states ht strictly as a read-only context. A lightweight Calibration Module (θ) extracts essential preference signals through a bottleneck architecture (Wdown,Wup) to generate calibration vectors mt in the disentangled logit space. This decoupling ensures precise preference alignment with minimal computational overhead and preserves the base model’s general capabilities.
<paragraph_1>We examine how the scaling factor γ affects PALC’s performance. Figure 3 shows results for five values: γ ∈{0.5, 1.0, 3.0, 5.0, 10.0}.</paragraph_1>
diagram
0.942897
OpenReview
ICLR
2,026
Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning
The Homotopy paradigm, a general principle for solving challenging problems, appears across diverse domains such as robust optimization, global optimization, polynomial root-finding, and sampling. Practical solvers for these problems typically follow a predictor-corrector (PC) structure, but rely on hand-crafted heuristics for step sizes and iteration termination, which are often suboptimal and task-specific. To address this, we unify these problems under a single framework, which enables the design of a general neural solver. Building on this unified view, we propose Neural Predictor-Corrector (NPC), which replaces hand-crafted heuristics with automatically learned policies. NPC formulates policy selection as a sequential decision-making problem and leverages reinforcement learning to automatically discover efficient strategies. To further enhance generalization, we introduce an amortized training mechanism, enabling one-time offline training for a class of problems and efficient online inference on new instances. Experiments on four representative homotopy problems demonstrate that our method generalizes effectively to unseen instances. It consistently outperforms classical and specialized baselines in efficiency while demonstrating superior stability across tasks, highlighting the value of unifying homotopy methods into a single neural framework.
Homotopy System, Graduated optimization, Reinforcement Learning, Polynomial Equations System, Gaussian Homotopy, Sampling
applications to computer vision, audio, language, and other modalities
[ 6, 6, 4 ]
Accept (Poster)
Jiayao Mai, Bangyan Liao, Zhenjun Zhao, Yingping Zeng, Haoang Li, Javier Civera, Tailin Wu, Yi Zhou, Peidong Liu
~Jiayao_Mai3, ~Bangyan_Liao1, ~Zhenjun_Zhao1, ~Yingping_Zeng1, ~Haoang_Li1, ~Javier_Civera1, ~Tailin_Wu1, ~Yi_Zhou27, ~Peidong_Liu3
20250905
https://openreview.net/forum?id=x6iodYWNty
x6iodYWNty
@inproceedings{ mai2026neural, title={Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning}, author={Jiayao Mai and Bangyan Liao and Zhenjun Zhao and Yingping Zeng and Haoang Li and Javier Civera and Tailin Wu and Yi Zhou and Peidong Liu}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=x6iodYWNty} }
OpenReview/ICLR/figures/2026/accept_poster/x6iodYWNty/Figure2.png
2
Figure 2: Illustration of the Predictor-Corrector algorithm. Predictor proposes the next level and provides an initial solution estimate, while Corrector iteratively refines this estimate to project it back onto the solution trajectory. Orange curve denotes the implicit solution trajectory, as in Fig. 1.
<paragraph_1>While the homotopy paradigm specifies the abstract principle, an effective algorithm is needed to trace the implicit solution trajectory in practice. The PC method (Allgower & Georg, 2012) provides such a concrete algorithmic framework. As shown in Fig. 2, PC decomposes trajectory tracking into two complementary steps:</paragraph_1>
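For readers unfamiliar with the classical loop, a minimal scalar predictor-corrector tracer is sketched below. The concrete homotopy, the fixed step size, and the Newton corrector are illustrative assumptions; they are exactly the kind of hand-crafted heuristics the paper proposes to replace with learned policies, not the paper's own implementation.

```python
# Trace the solution of H(x, t) = 0 from t = 0 (easy problem) to t = 1 (target problem).
def H(x, t):
    return (1 - t) * (x - 1.0) + t * (x**3 - 2.0)   # illustrative homotopy, not from the paper

def dH_dx(x, t):
    return (1 - t) + 3.0 * t * x**2

x, t, dt = 1.0, 0.0, 0.05          # heuristic step size (assumed)
while t < 1.0:
    t = min(t + dt, 1.0)           # predictor: advance the homotopy level, reuse x as the guess
    for _ in range(10):            # corrector: Newton steps back onto the solution trajectory
        x -= H(x, t) / dH_dx(x, t)
print(x, 2 ** (1 / 3))             # x approximates the cube root of 2 at t = 1
```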
diagram
0.881063
OpenReview
ICLR
2,026
CLUE: Conflict-guided Localization for LLM Unlearning Framework
The LLM unlearning aims to eliminate the influence of undesirable data without affecting causally unrelated information. This process typically involves using a **forget set** to remove target information, alongside a **retain set** to maintain non-target capabilities. While recent localization-based methods demonstrate promise in identifying important nodes (neurons) to be unlearned, they fail to disentangle nodes responsible for forgetting undesirable knowledge or retaining essential skills, often treating them as a single entangled group. As a result, these methods apply uniform interventions, risking catastrophic over-forgetting or incomplete erasure of the target knowledge. To address this, we turn to circuit discovery, a mechanistic interpretability technique, and propose the **C**onflict-guided **L**ocalization for LLM **U**nlearning fram**E**work (**CLUE**). This framework identifies the forget and retain circuit composed of important nodes, and then the circuits are transformed into conjunctive normal forms (CNF). The assignment of each node in the CNF satisfiability solution reveals whether it should be forgotten or retained. We then provide targeted fine-tuning strategies for different categories of nodes. Extensive experiments demonstrate that, compared to existing localization methods, CLUE achieves superior forget efficacy and retain utility through precise neural localization.
LLM unlearning, circuit discovery, conjunctive normal form, interpretability
foundation or frontier models, including LLMs
We use circuit discovery and CNF solving to design the localization for forget neurons and retain neurons in the LLM unlearning task.
[ 6, 6, 4, 2 ]
Accept (Poster)
Hang Chen, Jiaying Zhu, Xinyu Yang, Wenya Wang
~Hang_Chen3, ~Jiaying_Zhu5, ~Xinyu_Yang2, ~Wenya_Wang1
20250901
https://openreview.net/forum?id=jtRYvazBWv
jtRYvazBWv
@inproceedings{ chen2026clue, title={{CLUE}: Conflict-guided Localization for {LLM} Unlearning Framework}, author={Hang Chen and Jiaying Zhu and Xinyu Yang and Wenya Wang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=jtRYvazBWv} }
OpenReview/ICLR/figures/2026/accept_poster/jtRYvazBWv/Figure2.png
2
Figure 2: Overview from datasets to localization.
<paragraph_1>In this section, we provide a three-step framework of how circuit discovery ultimately enables precise localization. An overview of our localization procedure is shown in Figure 2. Specifically,</paragraph_1>
diagram
0.850337
OpenReview
ICLR
2,026
Latent Geometry-Driven Network Automata for Complex Network Dismantling
Complex networks model the structure and function of critical technological, biological, and communication systems. Network dismantling, the targeted removal of nodes to fragment a network, is essential for analyzing and improving system robustness. Existing dismantling methods suffer from key limitations: they depend on global structural knowledge, exhibit slow running times on large networks, and overlook the network’s latent geometry, a key feature known to govern the dynamics of complex systems. Motivated by these findings, we introduce Latent Geometry-Driven Network Automata (LGD-NA), a novel framework that leverages local network automata rules to approximate effective link distances between interacting nodes. LGD-NA is able to identify critical nodes and capture latent manifold information of a network for effective and efficient dismantling. We show that this latent geometry-driven approach outperforms all existing dismantling algorithms, including spectral Laplacian-based methods and machine learning ones such as graph neural networks. We also find that a simple common-neighbor-based network automata rule achieves near state-of-the-art performance, highlighting the effectiveness of minimal local information for dismantling. LGD-NA is extensively validated on the largest and most diverse collection of real-world networks to date (1,475 real-world networks across 32 complex systems domains) and scales efficiently to large networks via GPU acceleration. Finally, we leverage the explainability of our common-neighbor approach to engineer network robustness, substantially increasing the resilience of real-world networks. We validate LGD-NA's practical utility on domain-specific functional metrics, spanning neuronal firing rates in the Drosophila Connectome, transport efficiency in flight maps, outbreak sizes in contact networks, and communication pathways in terrorist cells. Our results confirm latent geometry as a fundamental principle for understanding the robustness of real-world systems, adding dismantling to the growing set of processes that network geometry can explain.
network robustness, network dismantling, network geometry, network science, complex systems, network automata, graphs, network topology
learning on graphs and other geometries & topologies
Latent Geometry-Driven Network Automata dismantles networks by estimating effective link distances on the latent manifold via local rules, outperforming all existing methods on 1,475 real-world networks and runs efficiently on large systems via GPU.
[ 4, 2, 6, 6 ]
Accept (Poster)
Thomas Adler, Marco Grassia, Ziheng Liao, Giuseppe Mangioni, Carlo Vittorio Cannistraci
~Thomas_Adler2, ~Marco_Grassia1, ~Ziheng_Liao1, ~Giuseppe_Mangioni1, ~Carlo_Vittorio_Cannistraci1
20250918
https://openreview.net/forum?id=yz29QCGVzC
yz29QCGVzC
@inproceedings{ adler2026latent, title={Latent Geometry-Driven Network Automata for Complex Network Dismantling}, author={Thomas Adler and Marco Grassia and Ziheng Liao and Giuseppe Mangioni and Carlo Vittorio Cannistraci}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=yz29QCGVzC} }
OpenReview/ICLR/figures/2026/accept_poster/yz29QCGVzC/Figure1.png
1
Figure 1: Overview of the LGD Network Automata framework. A: Begin with an unweighted and undirected network. B: Estimate latent geometry by assigning a weight νij to each edge between nodes i and j using local latent geometry estimators. C: Construct a dissimilarity-weighted network based on these weights. D: Compute node strength as the sum of geometric weights to all neighbors in N(i): si = ∑j∈N(i) νij. E–F: Perform dynamic dismantling by iteratively computing node strengths, removing the node with the highest si and its edges, and checking whether the normalized size of the largest connected component (LCC) has dropped below a threshold. G–H (optional): Reinsert dismantled nodes using a selected reinsertion method.
<paragraph_1>We introduce the Latent Geometry-Driven Network Automata (LGD-NA) framework. LGD-NA adopts a parameter-free network automaton rule, such as RA2, to estimate latent geometric linked node pairwise distances and to assign edge weights based on these geometric distances. Then, it computes for each node its network centrality as a sum of the weights of adjacent edges. The higher this sum, the more a node dominates numerous and far-apart regions of the network, becoming a prioritized candidate for a targeted attack in the network dismantling process. This prioritized node is then removed from the network, and the procedure is iteratively repeated until the network is dismantled (see Figure 1 for a full breakdown).</paragraph_1> <paragraph_2>To ensure full reproducibility, we have made our source code publicly available, including detailed instructions on how to replicate all experiments. The codebase includes an implementation of our LGD-NA framework (illustrated in Figure 1), the exact formulas used (detailed in Appendix A), and an example network for demonstration. The code is compatible with both CPU and GPU environments and also provides the necessary tools to engineer network robustness as described in this work. The baseline methods were implemented using the code from the review by Artime et al. (2024). The exact topological measures of all networks used in our study are provided in Appendix 9. Further details regarding the experimental setup, including hardware specifications, are described in Appendix M and N.</paragraph_2>
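A compact sketch of the dismantling loop in panels B–F of the figure caption is given below, using networkx. The common-neighbor edge weight is only a stand-in for the paper's latent-geometry estimators (e.g. RA2), whose exact formula is not assumed here; the stopping threshold and example graph are likewise illustrative.

```python
import networkx as nx

def dismantle(G, lcc_threshold=0.1):
    """Iteratively remove the node with the highest geometric strength until the largest
    connected component (LCC) falls below a fraction of the original network size."""
    G = G.copy()
    n0 = G.number_of_nodes()
    removed = []
    while G.number_of_nodes() > 0:
        lcc = max((len(c) for c in nx.connected_components(G)), default=0)
        if lcc / n0 <= lcc_threshold:
            break
        # B-C: weight each edge with a purely local estimator (fewer common
        # neighbors -> the linked nodes are treated as geometrically farther apart).
        strength = {}
        for i in G.nodes():
            s = 0.0
            for j in G.neighbors(i):
                cn = len(set(G.neighbors(i)) & set(G.neighbors(j)))
                s += 1.0 / (1.0 + cn)
            strength[i] = s
        # D-F: remove the node that dominates the most far-apart regions, then recheck the LCC.
        target = max(strength, key=strength.get)
        removed.append(target)
        G.remove_node(target)
    return removed

order = dismantle(nx.karate_club_graph())
print(len(order), order[:5])
```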
diagram
0.976884
OpenReview
ICLR
2,026
Accelerated co-design of robots through morphological pretraining
The co-design of robot morphology and neural control typically requires using reinforcement learning to approximate a unique control policy gradient for each body plan, demanding massive amounts of training data to measure the performance of each design. Here we show that a universal, morphology-agnostic controller can be rapidly and directly obtained by gradient-based optimization through differentiable simulation. This process of morphological pretraining allows the designer to explore non-differentiable changes to a robot's physical layout (e.g. adding, removing and recombining discrete body parts) and immediately determine which revisions are beneficial and which are deleterious using the pretrained model. We term this process "zero-shot evolution" and compare it with the simultaneous co-optimization of a universal controller alongside an evolving design population. We find the latter results in _diversity collapse_, a previously unknown pathology whereby the population—and thus the controller's training data—converges to similar designs that are easier to steer with a shared universal controller. We show that zero-shot evolution with a pretrained controller quickly yields a diversity of highly performant designs, and by fine-tuning the pretrained controller on the current population throughout evolution, diversity is not only preserved but significantly increased as superior performance is achieved. Videos viewable at this website: https://gilded-macaron-5a75e3.netlify.app
robot co-design, universal control, differentiable simulation, embodied intelligence
applications to robotics, autonomy, planning
[ 2, 6, 6 ]
Accept (Poster)
Luke Strgar, Sam Kriegman
~Luke_Strgar1, ~Sam_Kriegman1
20250919
https://openreview.net/forum?id=WVliGyFwZv
WVliGyFwZv
@inproceedings{ strgar2026accelerated, title={Accelerated co-design of robots through morphological pretraining}, author={Luke Strgar and Sam Kriegman}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=WVliGyFwZv} }
OpenReview/ICLR/figures/2026/accept_poster/WVliGyFwZv/Figure2.png
2
Figure 2: Overview of the proposed method. End-to-end differentiable policy training across tens of millions of morphologically distinct robots—morphological pretraining—produces a universal controller, which was kept frozen throughout zero-shot evolution and finetuned for each generation of few-shot evolution.
<paragraph_1>Inspired by the remarkable success of large-scale pretrained models in computer vision and natural language processing, we here pretrain a universal controller across millions of complex body plans using gradient information from differentiable simulation, averaging gradients across variations in the robot’s body, world and goal (Fig. 1). Armed with a universal controller, evolution can now iteratively improve the robot’s morphology, and the controller can be rapidly finetuned for the current population with simulation gradients (Fig. 2). This also enables the successful recombination of designs (a.k.a. crossover; Fig. 4), a hallmark of biological evolution and of human engineering that has yet to be convincingly demonstrated in robots.</paragraph_1>
diagram
0.924178
OpenReview
ICLR
2,026
Automatic and Structure-Aware Sparsification of Hybrid Neural ODEs with Application to Glucose Prediction
Hybrid neural ordinary differential equations (neural ODEs) integrate mechanistic models with neural ODEs, offering strong inductive bias and flexibility, and are particularly advantageous in data-scarce healthcare settings. However, excessive latent states and interactions from mechanistic models can lead to training inefficiency and over-fitting, limiting practical effectiveness of hybrid neural ODEs. In response, we propose a new hybrid pipeline for automatic state selection and structure optimization in mechanistic neural ODEs, combining domain-informed graph modifications with data-driven regularization to sparsify the model for improving predictive performance and stability while retaining mechanistic plausibility. Experiments on synthetic and real-world data show improved predictive performance and robustness with desired sparsity, establishing an effective solution for hybrid model reduction in healthcare applications.
Predictive Sparsity, Hybrid Neural ODE, Group LASSO, Glucose Prediction
applications to physical sciences (physics, chemistry, biology, etc.)
[ 4, 6, 4, 8 ]
Accept (Poster)
Bob Junyi Zou, Lu Tian
~Bob_Junyi_Zou1, ~Lu_Tian4
20250918
https://openreview.net/forum?id=QBzFrjEF59
QBzFrjEF59
@inproceedings{ zou2026automatic, title={Automatic and Structure-Aware Sparsification of Hybrid Neural {ODE}s with Application to Glucose Prediction}, author={Bob Junyi Zou and Lu Tian}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=QBzFrjEF59} }
OpenReview/ICLR/figures/2026/accept_poster/QBzFrjEF59/Figure5.png
5
Figure 5: An illustration of the mechanistic vs true graphs used in the synthetic experiments
<paragraph_1>In figure 5, we provide an illustration of the mechanistic graph used in the synthetic experiments.</paragraph_1>
diagram
0.92587
OpenReview
ICLR
2,026
Tractability via Low Dimensionality: The Parameterized Complexity of Training Quantized Neural Networks
The training of neural networks has been extensively studied from both algorithmic and complexity-theoretic perspectives, yet recent results in this direction almost exclusively concern real-valued networks. In contrast, advances in machine learning practice highlight the benefits of quantization, where network parameters and data are restricted to finite integer domains, yielding significant improvements in speed and energy efficiency. Motivated by this gap, we initiate a systematic complexity-theoretic study of ReLU Neural Network Training in the full quantization mode. We establish strong lower bounds by showing that hardness already arises in the binary setting and under highly restrictive structural assumptions on the architecture, thereby excluding parameterized tractability for natural measures such as depth and width. On the positive side, we identify nontrivial fixed-parameter tractable cases when parameterizing by input dimensionality in combination with width and either output dimensionality or error bound, and further strengthen these results by replacing width with the more general treewidth.
treewidth, parameterized complexity, quantized neural networks, ReLU networks
learning theory
We study the classical and parameterized complexity of training quantized neural networks and obtain new upper as well as lower bounds for the problem.
[ 6, 8, 6 ]
Accept (Poster)
Robert Ganian, Frank Sommer, Manuel Sorge
~Robert_Ganian1, ~Frank_Sommer1, ~Manuel_Sorge1
20250918
https://openreview.net/forum?id=BAQNrsr987
BAQNrsr987
@inproceedings{ ganian2026tractability, title={Tractability via Low Dimensionality: The Parameterized Complexity of Training Quantized Neural Networks}, author={Robert Ganian and Frank Sommer and Manuel Sorge}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=BAQNrsr987} }
OpenReview/ICLR/figures/2026/accept_poster/BAQNrsr987/Figure4.png
4
Figure 4: An illustration of the reduction behind Theorem 3 for the universe U = [6] and the set family F with sets S1 = {1, 4, 5}, S2 = {2, 3}, S3 = {1, 6}, S4 = {2, 5}, S5 = {3, 5}, S6 = {6} and k = 3 and with a hitting set S = {2, 5, 6}. In the solution corresponding to S, inputs p1, p2 and p3 are associated with elements 2, 5 and 6, respectively. Moreover, each red arc has weight 0 and each blue arc has weight 1. The orange numbers are the biases of the output neurons.
<paragraph_1>We construct an equivalent instance I of 2-QNNT as follows; see Figure 4 for an illustration. Description of architecture G. We create two input neurons z1 and z2. For each of the two literals</paragraph_1> <paragraph_2>Construction. We construct an instance I of 2-QNNT as follows. For an illustration, see Figure 4. Description of the architecture G. We create k input neurons p1, . . . , pk. Abusing notation, for each set F ∈ F we create one set output neuron F. We add arcs between every input and output neuron. Description of the data set. For each element u ∈ U we add k element u data points d_u^1, . . . , d_u^k. Element u data point d_u^i has value 1 in input pi and value 0 in each other input. Moreover, d_u^i has value 1 in each set output F such that u ∈ F. Thus, d_u^i has value 0 in each set output F′ such that u ∉ F′. Observe that the k element u data points all have the same output but they have pairwise different inputs. Then, we add a verifier data point d* which has value 1 in each input and in each output. In the following, we say that two data points d1 and d2 have the same type if the input values of d1 and d2 are pairwise identical. Note that we have exactly k + 1 distinct types of data points.</paragraph_2>
diagram
0.90793
OpenReview
ICLR
2,026
Constrained Decoding of Diffusion LLMs with Context-Free Grammars
Large language models (LLMs) have shown promising performance across diverse domains. Many practical applications of LLMs, such as code completion and structured data extraction, require adherence to syntactic constraints specified by a formal language. Yet, due to their probabilistic nature, LLM output is not guaranteed to adhere to such formal languages. To address this, prior work has proposed constrained decoding to restrict LLM generation to particular formal languages. However, existing works are not applicable to the emerging paradigm of diffusion LLMs, as this requires supporting token generation in arbitrary order instead of the traditional left-to-right order. In this paper, we address this challenge and present the first constrained decoding method for diffusion models, one that can handle formal languages captured by context-free grammars. We begin by reducing constrained decoding to the more general additive infilling problem, which asks whether a partial output with holes can be completed to a valid word in the target language. This problem also naturally subsumes the previously unaddressed multi-region infilling constrained decoding. We then reduce this problem to the task of deciding whether the intersection of the target language and a regular language is empty and present an efficient algorithm to solve this task for context-free languages. Empirical results on various applications, such as C++ code infilling and structured data extraction in JSON, demonstrate that our method achieves near-perfect syntactic correctness while consistently preserving or improving functional correctness. Importantly, our efficiency optimizations ensure that the computational overhead remains practical.
diffusion llm, constrained decoding, llm, code generation, json, multi-region infilling, fill in the middle, code synthesis
generative models
We reduce constrained decoding for generalized code generation paradigms to an operation on formal languages, enabling constrained decoding for infilling and diffusion LLMs.
[ 4, 8, 6, 4 ]
Accept (Poster)
Niels Mündler, Jasper Dekoninck, Martin Vechev
~Niels_Mündler1, ~Jasper_Dekoninck1, ~Martin_Vechev1
20250916
https://openreview.net/forum?id=7Sph4KyeYO
7Sph4KyeYO
@inproceedings{ mundler2026constrained, title={Constrained Decoding of Diffusion {LLM}s with Context-Free Grammars}, author={Niels M{\"u}ndler and Jasper Dekoninck and Martin Vechev}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=7Sph4KyeYO} }
OpenReview/ICLR/figures/2026/accept_poster/7Sph4KyeYO/Figure3.png
3
Figure 3: Examples of Figures 1 and 4 processed during our method. (a) The grammar is first normalized into C2F+ε, and (b) the NFA is transformed into a minimal DFA. (c) To determine
<paragraph_1>Constructing the regular language The language Cx of all possible completions of x = x1 . . . xn contains all words that start with x1, end with xn, and contain the strings xi (1 ≤ i ≤ n) in the correct order, with arbitrary symbols in between. We prove that Cx is regular by constructing an NFA that accepts Cx. We first construct automata Di, which accept exactly xi. Then, we concatenate Di with an additional state qi that accepts any string in Σ∗, i.e., δ(qi, σ) = qi for all σ ∈ Σ. For the concatenation, we add an ε-edge from the accepting states of Di to qi and from qi to the start state of Di+1. A visualization for the prior example is shown in Figure 2b. In our algorithm, we construct this NFA for each update. We then transform it into an equivalent DFA and minimize the DFA using standard methods (Hopcroft and Ullman, 1979), as shown in Figure 3b.</paragraph_1> <paragraph_2>Constructing the intersection language We leverage the well-established facts that (a) the intersection L∩ of CFL L and regular language Cx is a CFL, whose grammar can be constructed from L’s grammar G and Cx’s DFA, and (b) that the emptiness of a CFL can be checked in time polynomial in the size of the grammar (Gasarch, 2014; Hopcroft and Ullman, 1979). The symbols in the intersection language have the form p⃗A q for p, q ∈ Σ and A ∈ V, where each symbol intuitively represents deriving a word from A that also traverses the DFA from state p to q. The language is nonempty if we can derive a word from q0⃗S qf for start symbol S and initial and final state q0 and qf. An example of deriving a word in the intersection language is shown in Figure 3c. The intersection grammar G∩ = (V∩, Σ, P∩, S∩) will have a cubic size in nonterminals and productions, with |V∩| ∈ O(|V||Q|²) and |P∩| ∈ O(|P||Q|³ + |P||Q|²|Σ|) (Gasarch, 2014; Bar-Hillel et al., 1961). While we cannot reduce the worst-case complexity of this blowup, we carefully construct the intersection language to keep its size at a minimum, and employ several heuristics to reduce the practical cost of determining its emptiness, explained next.</paragraph_2> <paragraph_3>Efficient normalization The standard intersection algorithms require G to be transformed to Chomsky normal form, which only allows rules of the form A → BC or A → a, where A, B, C ∈ V and a ∈ Σ (Hopcroft and Ullman, 1979). The resulting grammar may have a quadratic increase in the number of production rules (Lange and Leiß, 2009). To avoid this increase, we extend the standard construction to support CFGs in C2F+ε, a normal form that additionally allows productions of the form A → ε and A → B. We provide an example of the normalized C++ grammar in Figure 3a. This normal form can be obtained with only a linear increase in production rules (Lange and Leiß, 2009). Our adaptations to the standard intersection algorithm and a proof of its correctness are provided in Appendix B.1. In Appendix B.2, we describe several further heuristics to reduce the size of the normalized CFG of G. After this step, we can intersect the languages and determine the emptiness of the intersection language.</paragraph_3>
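The regular language Cx of completions described in the first paragraph can be sanity-checked with an ordinary regular-expression engine before building an explicit NFA/DFA. The sketch below covers only that regular-language side; the CFG intersection and emptiness check are deliberately omitted, and the example spans are invented for illustration.

```python
import re

def completion_language_pattern(known_spans):
    """Build a pattern for C_x: words containing the fixed spans x_1 ... x_n in order,
    starting with x_1, ending with x_n, with arbitrary symbols only in between."""
    return re.compile(".*".join(re.escape(s) for s in known_spans), re.DOTALL)

# Toy multi-region infilling example (spans are illustrative, not from the paper):
spans = ['{"name": "', '", "age": ', "}"]
pat = completion_language_pattern(spans)

print(bool(pat.fullmatch('{"name": "Ada", "age": 36}')))   # True: a valid completion
print(bool(pat.fullmatch('{"age": 36, "name": "Ada"}')))   # False: spans out of order
```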
diagram
0.965765
OpenReview
ICLR
2,026
Grounding Generative Planners in Verifiable Logic: A Hybrid Architecture for Trustworthy Embodied AI
While Large Language Models (LLMs) show immense promise as planners for embodied AI, their stochastic nature and lack of formal reasoning capabilities prevent the strict safety guarantees required for physical deployment. Current approaches fall short: they either rely on other unreliable LLMs for safety checks or simply reject unsafe plans without offering a path to success. This work bridges this critical gap by introducing the Verifiable Iterative Refinement Framework (VIRF), a neuro-symbolic architecture that shifts the paradigm from a passive safety gatekeeper to an active safety collaborator. Where prior verifiers simply reject failures, our framework provides causal, pedagogical feedback that teaches the LLM why its plan was unsafe, enabling intelligent repairs rather than mere avoidance. Our core contribution is a novel tutor-apprentice dialogue, where a deterministic Logic Tutor, grounded in a formal safety ontology, provides causal and explanatory feedback to an LLM Apprentice planner. This pedagogical interaction allows the apprentice to perform intelligent, creative plan repairs, resolving safety conflicts rather than merely avoiding them. To ground this dialogue in verifiable truth, we introduce a scalable knowledge acquisition pipeline that synthesizes a comprehensive safety knowledge base from real-world documents, a process that simultaneously reveals and corrects significant blind spots in existing benchmarks. On a new suite of challenging home safety tasks, VIRF achieves a perfect 0% Hazardous Action Rate (HAR), completely eliminating unsafe actions while attaining a 77.3% Goal-Condition Rate (GCR)—the highest among all baselines. It does so with remarkable efficiency, requiring only 1.1 correction iterations on average. By acting as a verifiable safety scaffold, VIRF demonstrates a principled and robust pathway toward building embodied agents that are not just capable, but fundamentally trustworthy.
neurosymbolic AI, hybrid AI, formal reasoning, large language models, AI safety, verifiable AI, embodied AI, robotics
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
We propose a hybrid neuro-symbolic architecture where a formal logic verifier tutors an LLM planner, enabling the generation of verifiably safe plans for embodied agents.
[ 4, 2, 6, 4 ]
Accept (Poster)
Feiyu Wu, Xu Zheng, Yue Qu, Zhuocheng Wang, Zicheng Feng, HUI LI
~Feiyu_Wu1, ~Xu_Zheng1, ~Yue_Qu4, ~Zhuocheng_Wang1, ~Zicheng_Feng1, ~HUI_LI17
20250916
https://openreview.net/forum?id=wb05ver1k8
wb05ver1k8
@inproceedings{ wu2026grounding, title={Grounding Generative Planners in Verifiable Logic: A Hybrid Architecture for Trustworthy Embodied {AI}}, author={Feiyu Wu and Xu Zheng and Yue Qu and Zhuocheng Wang and Zicheng Feng and HUI LI}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=wb05ver1k8} }
OpenReview/ICLR/figures/2026/accept_poster/wb05ver1k8/Figure1.png
1
Figure 1: The architecture of the Verifiable Iterative Refinement Framework (VIRF). Instead of direct execution, an LLM planner’s actions are verified in a symbolic sandbox against a formal knowledge base. The framework’s core is the Logic Tutor feedback loop, which provides three distinct responses: approval for safe plans, clarification questions for UNKNOWN states, and a structured diagnostic report for unsafe plans. This report enables a pedagogical dialogue, teaching the LLM Linguistic Apprentice how to refine its plan and avoid hazards.
<paragraph_1>Our work introduces the Verifiable Iterative Refinement Framework (VIRF), a novel neurosymbolic architecture designed to govern a generative Large Language Model (LLM) planner. At its core, VIRF transforms the interaction between the stochastic LLM and a deterministic symbolic verifier from a simple pass/fail gate into a rich, pedagogical dialogue. To provide the necessary logical rigor for this dialogue, we build our verifier upon the Web Ontology Language (OWL 2) and its underlying Description Logics (DL), which enable a level of formal, inferential reasoning unattainable by other symbolic approaches (see Appendix A for a detailed justification). As illustrated in Figure 1, our methodology is built upon three foundational pillars.</paragraph_1>
diagram
0.91071
OpenReview
ICLR
2,026
Characterizing and Optimizing the Spatial Kernel of Multi Resolution Hash Encodings
Multi-Resolution Hash Encoding (MHE), the foundational technique behind Instant Neural Graphics Primitives, provides a powerful parameterization for neural fields. However, its spatial behavior lacks rigorous understanding from a physical systems perspective, leading to reliance on heuristics for hyperparameter selection. This work introduces a novel analytical approach that characterizes MHE by examining its Point Spread Function (PSF), which is analogous to the Green's function of the system. This methodology enables a quantification of the encoding's spatial resolution and fidelity. We derive a closed-form approximation for the collision-free PSF, uncovering inherent grid-induced anisotropy and a logarithmic spatial profile. We establish that the idealized spatial bandwidth, specifically the Full Width at Half Maximum (FWHM), is determined by the average resolution, $N_{\text{avg}}$. This leads to a counterintuitive finding: the effective resolution of the model is governed by the broadened empirical FWHM (and therefore $N_{\text{avg}}$), rather than the finest resolution $N_{\max}$, a broadening effect we demonstrate arises from optimization dynamics. Furthermore, we analyze the impact of finite hash capacity, demonstrating how collisions introduce speckle noise and degrade the Signal-to-Noise Ratio (SNR). Leveraging these theoretical insights, we propose Rotated MHE (R-MHE), an architecture that applies distinct rotations to the input coordinates at each resolution level. R-MHE mitigates anisotropy while maintaining the efficiency and parameter count of the original MHE. This study establishes a methodology based on physical principles that moves beyond heuristics to characterize and optimize MHE.
multi-resolution hash encoding, implicit neural representations, neural fields, point spread function, spatial kernel analysis, anisotropy, resolution limit, FWHM, hash collisions, signal-to-noise ratio, NeRF
applications to computer vision, audio, language, and other modalities
We analyze Multi-Resolution Hash Encoding (MHE) using its Point Spread Function (PSF) to reveal that effective resolution is governed by average, not finest, resolution, and introduce Rotated MHE to mitigate inherent anisotropy and collision noise.
[ 4, 6, 6, 4 ]
Accept (Poster)
Tianxiang Dai, Jonathan Fan
~Tianxiang_Dai1, ~Jonathan_Fan1
20250920
https://openreview.net/forum?id=q05hC1Pzkr
q05hC1Pzkr
@inproceedings{ dai2026characterizing, title={Characterizing and Optimizing the Spatial Kernel of Multi Resolution Hash Encodings}, author={Tianxiang Dai and Jonathan Fan}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=q05hC1Pzkr} }
OpenReview/ICLR/figures/2026/accept_poster/q05hC1Pzkr/Figure1.png
1
Figure 1: Overview of MHE Characterization and Optimization. (a) The MHE architecture utilizes L grid levels with resolutions growing by a factor b. The encoding e(x) is passed to an MLP gθ. We characterize the system by optimizing for a point constraint and measuring the resulting Point Spread Function (PSF). (b) This analysis reveals inherent grid-induced anisotropy (narrower along axes) and optimization-induced broadening, establishing that the effective resolution (FWHM) scales with 1/Navg. (c) To mitigate anisotropy, we propose Rotated MHE (R-MHE), which applies distinct rotations at each resolution level, leading to a more isotropic PSF.
<paragraph_1>In this work, we introduce a novel methodology to characterize and understand the performance of MHE by analyzing its Point Spread Function (PSF). Analogous to measuring the Green’s function of a physical system, the PSF characterizes the model’s response when optimized to represent an idealized point source (Figure 1b). This approach permits the rigorous quantification of effective</paragraph_1> <paragraph_2>We further investigate the impact of finite hash capacity, demonstrating how collisions introduce speckle-like side lobes and degrade the Signal-to-Noise Ratio (SNR). Informed by our comprehensive PSF analysis, we demonstrate how these insights can be leveraged to improve reconstruction quality. We introduce Rotated MHE (R-MHE) (Figure 1c), an architecture that applies distinct rotations to the input coordinates at each resolution level. By utilizing the existing multi-resolution structure, R-MHE improves isotropy without requiring additional hash tables or parameters, maintaining the efficiency of the original MHE.</paragraph_2>
diagram
0.984853
OpenReview
ICLR
2,026
CaTs and DAGs: Integrating Directed Acyclic Graphs with Transformers for Causally Constrained Predictions
Artificial Neural Networks (ANNs), including fully-connected networks and transformers, are highly flexible and powerful function approximators, widely applied in fields like computer vision and natural language processing. However, their inability to inherently respect causal structures can limit their robustness, making them vulnerable to covariate shift and difficult to interpret/explain. This poses significant challenges for their reliability in real-world applications. In this paper, we introduce Causal Transformers (CaTs), a general model class designed to operate under predefined causal constraints, as specified by a Directed Acyclic Graph (DAG). CaTs retain the powerful function approximation abilities of traditional neural networks while adhering to the underlying structural constraints, improving robustness, reliability, and interpretability at inference time. This approach opens new avenues for deploying neural networks in more demanding, real-world scenarios where robustness and explainability are critical.
transformers, causal inference, causality, inductive bias, DAGs
causal reasoning
Causal Transformers (CaTs) are neural networks constrained by a causal DAG, combining the power of standard ANNs with improved robustness to covariate shift, greater reliability, and interpretability for real-world applications.
[ 4, 6, 4 ]
Accept (Poster)
Matthew James Vowels, Mathieu Rochat, Sina Akbari
~Matthew_James_Vowels1, ~Mathieu_Rochat1, ~Sina_Akbari1
20250910
https://openreview.net/forum?id=ZIQactmQxb
ZIQactmQxb
@inproceedings{ vowels2026cats, title={CaTs and {DAG}s: Integrating Directed Acyclic Graphs with Transformers for Causally Constrained Predictions}, author={Matthew James Vowels and Mathieu Rochat and Sina Akbari}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=ZIQactmQxb} }
OpenReview/ICLR/figures/2026/accept_poster/ZIQactmQxb/Figure8.png
8
Figure 8: The DAG used in the real-world psychology example - reconstructed from the causal discovery and domain expertise results presented in (Vowels et al., 2023a). Treatment is attachment style ’attachment’ (also highlighted in orange) and the two outcomes of interest at the measures of depression (highlighted in green).
<paragraph_1>We follow closely the process in (Vowels et al., 2023a) for estimating the causal effect of shifting from one category of attachment style to another on depression. We also report the results for a subset of their analyses in Table 3, which use a ‘naive’ estimator (comprising the bivariate linear model between the categorical treatment ‘attachment’ and the two outcomes), a targeted learning estimator specialized for causal inference which incorporates semi-parametric techniques (van der Laan & Starmans, 2014; Vowels et al., 2023b), and our results using CaT. Note that there may be some minor differences in their data preprocessing which we were not able to reproduce. In particular, for each node in the DAG, the original authors reduced the dimensionality of the construct to be uni-dimensional by taking the sum of the scores for each of the individual items. In contrast, we padded all input variables so that they were the same dimensionality as the node with the highest dimensionality. For instance, social distancing ‘social dist’ was found to have 16 items, so loneliness, which has only 3 items, was zero-padded to have 16 dimensions. This enables us to use all available information in the input. The dimensionalities / number of items for each construct are shown in Table reftab:realworlddimensions. We also use the DAG presented in (Vowels et al., 2023a) which was the result of a causal discovery process alongside domain expertise; this DAG is reproduced in Figure 8.</paragraph_1>
diagram
0.913085
OpenReview
ICLR
2,026
A.I.R.: Enabling Adaptive, Iterative, and Reasoning-based Frame Selection For Video Question Answering
Effectively applying Vision-Language Models (VLMs) to Video Question Answering (VideoQA) hinges on selecting a concise yet comprehensive set of frames, as processing entire videos is computationally infeasible. However, current frame selection methods face a critical trade-off: approaches relying on lightweight similarity models, such as CLIP, often fail to capture the nuances of complex queries, resulting in inaccurate similarity scores that cannot reflect the authentic query-frame relevance, which further undermines frame selection. Meanwhile, methods that leverage a VLM for deeper analysis achieve higher accuracy but incur prohibitive computational costs. To address these limitations, we propose A.I.R., a training-free approach for Adaptive, Iterative, and Reasoning-based frame selection. We leverage a powerful VLM to perform deep, semantic analysis on complex queries, and this analysis is deployed within a cost-effective iterative loop that processes only a small batch of the most high-potential frames at a time. Extensive experiments on various VideoQA benchmarks demonstrate that our approach outperforms existing frame selection methods, significantly boosts the performance of the foundation VLM, and achieves substantial gains in computational efficiency over other VLM-based techniques.
Video Frame Selection, Vision Language Model, Training-Free, Video understanding
applications to computer vision, audio, language, and other modalities
[ 6, 4, 6, 4 ]
Accept (Poster)
Yuanhao Zou, Shengji Jin, Andong Deng, Youpeng Zhao, Jun Wang, Chen Chen
~Yuanhao_Zou1, ~Shengji_Jin1, ~Andong_Deng2, ~Youpeng_Zhao2, ~Jun_Wang7, ~Chen_Chen18
20250902
https://openreview.net/forum?id=SZVpOKw0YD
SZVpOKw0YD
@inproceedings{ zou2026air, title={A.I.R.: Enabling Adaptive, Iterative, and Reasoning-based Frame Selection For Video Question Answering}, author={Yuanhao Zou and Shengji Jin and Andong Deng and Youpeng Zhao and Jun Wang and Chen Chen}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=SZVpOKw0YD} }
OpenReview/ICLR/figures/2026/accept_poster/SZVpOKw0YD/Figure2.png
2
Figure 2: General pipeline of A.I.R. with three stages: (1) Adaptive Initial Sampling that identifies potential ‘events’ based on query similarity and dynamically samples frames around them using an adaptive budget; (2) Iterative Frame Selection that progressively refines the frame selection via four steps; and (3) QA Stage that feeds the final selected frames into Answering VLM.
<paragraph_1>As illustrated in Fig. 2, our proposed approach, A.I.R., performs frame selection in three stages: Adaptive Initial Sampling, Iterative Frame Selection, and QA Stage. The process begins by sampling n frames from the video (containing N total frames) at a fixed frame rate. As a pre-processing step, these n frames are passed to a CLIP model (Radford et al., 2021) to compute query-frame similarity scores, which are stored as a sparse vector S ∈ R^N. This similarity signal S is the input to the Adaptive Initial Sampling stage (Sec. 3.2), which identifies an initial set of K high-potential frame</paragraph_1> <paragraph_2>Step 2: Reasoning-Based VLM Analysis. Following the Potential Interval Ranking, the C selected frames Fcand are analyzed by an Analysis VLM for a focused, reasoning-based evaluation. We leverage the zero-shot, instruction-following capabilities of foundation VLMs to assess the relevance of each frame quantitatively. Guided by a detailed prompt (see Fig. 3 (b) and A.2.5), the VLM is instructed to reason step-by-step, providing both a textual justification and a relevance score (e.g., an integer from 1 to 5) for each candidate frame. Based on the relationship to a predefined threshold θ, these scores are classified as ‘Positive’ (> θ), ‘Neutral’ (= θ), or ‘Negative’ (< θ) and collected into a vector R ∈ N^C. We retain the ‘Positive’ frames to form a validated frame set F* as:</paragraph_2>
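The thresholding step in the second paragraph above can be written out in a few lines: relevance scores are split around θ and only the 'Positive' frames are kept as F*. The example scores and the default θ = 3 are assumptions for illustration, not values from the paper.

```python
def classify_candidates(scores, theta=3):
    """Split VLM relevance scores (e.g. integers 1-5) for C candidate frames into
    'Positive' (> theta), 'Neutral' (== theta) and 'Negative' (< theta), and keep the
    indices of the positive frames as the validated set F*."""
    labels = ["Positive" if s > theta else "Neutral" if s == theta else "Negative"
              for s in scores]
    f_star = [i for i, s in enumerate(scores) if s > theta]
    return labels, f_star

scores = [5, 2, 3, 4, 1]            # hypothetical scores for C = 5 candidate frames
labels, f_star = classify_candidates(scores)
print(labels)   # ['Positive', 'Negative', 'Neutral', 'Positive', 'Negative']
print(f_star)   # [0, 3] -- frames retained in F*
```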
diagram
0.968053
OpenReview
ICLR
2,026
Amortising Inference and Meta-Learning Priors in Neural Networks
One of the core facets of Bayesianism is in the updating of prior beliefs in light of new evidence—so how can we maintain a Bayesian approach if we have no prior beliefs in the first place? This is one of the central challenges in the field of Bayesian deep learning, where it is not clear how to represent beliefs about a prediction task by prior distributions over model parameters. Bridging the fields of Bayesian deep learning and probabilistic meta-learning, we introduce a way to $\textit{learn}$ a weights prior from a collection of datasets by introducing a way to perform per-dataset amortised variational inference. The model we develop can be viewed as a neural process whose latent variable is the set of weights of a BNN and whose decoder is the neural network parameterised by a sample of the latent variable itself. This unique model allows us to study the behaviour of Bayesian neural networks under well-specified priors, use Bayesian neural networks as flexible generative models, and perform desirable but previously elusive feats in neural processes such as within-task minibatching or meta-learning under extreme data-starvation.
neural processes, Bayesian neural networks, meta-learning, priors, variational inference
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
[ 4, 6, 4, 6 ]
Accept (Poster)
Tommy Rochussen, Vincent Fortuin
~Tommy_Rochussen1, ~Vincent_Fortuin1
20250919
https://openreview.net/forum?id=KG6SSTz2GJ
KG6SSTz2GJ
@inproceedings{ rochussen2026amortising, title={Amortising Inference and Meta-Learning Priors in Neural Networks}, author={Tommy Rochussen and Vincent Fortuin}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=KG6SSTz2GJ} }
OpenReview/ICLR/figures/2026/accept_poster/KG6SSTz2GJ/Figure9.png
9
Figure 9: Computational diagrams of the amortised attention layer (a), amortised attention block (b), and BNAM (c). Due to the numerous crossing lines in (a), we colour code the context and target input data paths as orange and light blue respectively. Arbitrarily many amortised attention blocks can be stacked sequentially in the BNAM; our diagram shows the simplest possible BNAM architecture.
<paragraph_1>We see in Fig. 9(a) that amortised inference can be performed in an attention layer by using amortised linear layers in place of standard linear layers, where MHA is the usual multi-head dot-product attention mechanism acting on keys K, queries Q, and values V. Similarly, in Fig. 9(b) we follow the standard approach (Vaswani et al., 2017) for constructing stackable attention blocks from attention layers, residual connections, layer norms, and 2-layer MLPs, but replacing each of the attention layer and MLP with their amortised counterparts. In Fig. 9(c) we show how amortised inference can be performed in a transformer by composing amortised linear layers and amortised attention blocks. We note that the resulting model can only be used in a somewhat unusual way for transformers; to map from test inputs Xt to predicted test outputs Yt where attention is performed between the test inputs, and where the posterior over the transformer’s weights is estimated from a context set.</paragraph_1>
diagram
0.988838
OpenReview
ICLR
2,026
DETR-ViP: Detection Transformer with Robust Discriminative Visual Prompts
Visual prompted object detection enables interactive and flexible definition of target categories, thereby facilitating open-vocabulary detection. Since visual prompts are derived directly from image features, they often outperform text prompts in recognizing rare categories. Nevertheless, research on visual prompted detection has been largely overlooked, and it is typically treated as a byproduct of training text prompted detectors, which hinders its development. To fully unlock the potential of visual-prompted detection, we investigate the reasons why its performance is suboptimal and reveal that the underlying issue lies in the absence of global discriminability in visual prompts. Motivated by these observations, we propose DETR-ViP, a robust object detection framework that yields class-distinguishable visual prompts. On top of basic image-text contrastive learning, DETR-ViP incorporates global prompt integration and visual-textual prompt relation distillation to learn more discriminative prompt representations. In addition, DETR-ViP employs a selective fusion strategy that ensures stable and robust detection. Extensive experiments on COCO, LVIS, ODinW, and Roboflow100 demonstrate that DETR-ViP achieves substantially higher performance in visual prompt detection compared to other state-of-the-art counterparts. A series of ablation studies and analyses further validate the effectiveness of the proposed improvements and shed light on the underlying reasons for the enhanced detection capability of visual prompts.
object detection, prompt-based detection, open-set object detection
applications to computer vision, audio, language, and other modalities
This paper presents the DETR-ViP framework, which enhances visual prompt detection by improving the semantic consistency of visual prompts and introducing a selective fusion strategy.
[ 6, 4, 6 ]
Accept (Poster)
Bo Qian, Dahu Shi, Xing Wei
~Bo_Qian1, ~Dahu_Shi2, ~Xing_Wei5
20250903
https://openreview.net/forum?id=2KKDWERRm3
2KKDWERRm3
@inproceedings{ qian2026detrvip, title={{DETR}-ViP: Detection Transformer with Robust Discriminative Visual Prompts}, author={Bo Qian and Dahu Shi and Xing Wei}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=2KKDWERRm3} }
OpenReview/ICLR/figures/2026/accept_poster/2KKDWERRm3/Figure2.png
2
Figure 2: The overview of DETR-ViP. DETR-ViP builds on Grounding DINO by incorporating a visual prompt encoder for visual-prompted detection. It improves prompt semantics via global prompt Integration and visual-textual prompt relation distillation, and refines the fusion module to stabilize image-prompt interactions, thereby enhancing detection robustness.
<paragraph_1>We develop the baseline VIS-GDINO from Grounding DINO by inserting the visual prompt encoder, as defined in Equation (3), between the backbone and the encoder, and removing the fusion modules in the encoder and decoder as represented in Equation (2). On top of this architecture, we introduce the global prompt integration, visual-textual prompt relation distillation loss, and selective fusion strategy to enhance visual prompt detection, thereby upgrading VIS-GDINO to DETR-ViP, as shown in Figure 2.</paragraph_1>
diagram
0.991753
OpenReview
ICLR
2,026
When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations
Large Multimodal Models (LMMs) store vast amounts of pretrained knowledge but struggle to remain aligned with real-world updates, making it difficult to avoid capability degradation when acquiring evolving knowledge. Furthermore, most current work focuses on exploring static textual knowledge injection, neglecting dynamic multimodal evolving knowledge injection, leaving the potential of LMMs for multimodal knowledge injection as an open question. To address this, we first propose a pipeline to construct MMEVOKE, a benchmark for evaluating LMMs' ability in multimodal evolving knowledge injection. MMEVOKE contains 9,422 samples spanning 159 subtypes. Then, based on extensive experiments with MMEVOKE, we reveal challenges such as poor injection performance and capability degradation in existing knowledge injection methods through knowledge injection tests and general capability tests. Finally, to tackle these challenges, we introduce knowledge augmentation and knowledge retention methods, finding that knowledge-aware augmentation strengthens knowledge injection performance, and that Data Replay and MoE methods effectively mitigate capability degradation.
Evolving Knowledge Injection; Large multimodal model; Benchmark and Dataset
datasets and benchmarks
This work introduces MMEVOKE benchmark to reveal challenges in knowledge injection and explores potential solutions.
[ 6, 6, 4, 8 ]
Accept (Poster)
Kailin Jiang, Yuntao Du, Yukai Ding, Yuchen Ren, Ning Jiang, Zhi Gao, Zilong Zheng, Lei Liu, Bin Li, Qing Li
~Kailin_Jiang1, ~Yuntao_Du2, ~Yukai_Ding2, ~Yuchen_Ren1, ~Ning_Jiang7, ~Zhi_Gao5, ~Zilong_Zheng1, ~Lei_Liu28, ~Bin_Li8, ~Qing_Li1
20250901
https://openreview.net/forum?id=iaPEM00wEs
iaPEM00wEs
@inproceedings{ jiang2026when, title={When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations}, author={Kailin Jiang and Yuntao Du and Yukai Ding and Yuchen Ren and Ning Jiang and Zhi Gao and Zilong Zheng and Lei Liu and Bin Li and Qing Li}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=iaPEM00wEs} }
OpenReview/ICLR/figures/2026/accept_poster/iaPEM00wEs/Figure25.png
25
Figure 25: Fine-grained dimensional results on MathVision and HallusionBench.
<paragraph_1>According to Figures 22, 23, 24, 25, and 26, we conduct result analysis for each benchmark.</paragraph_1>
diagram
0.915522
End of preview.

DiagramBank

DiagramBank is a large-scale dataset designed for Retrieval-Augmented Generation (RAG) over scientific figures. It aggregates papers and their corresponding diagrams from top machine-learning venues (ICLR, ICML, NeurIPS, TMLR), providing rich metadata including review scores, acceptance decisions, and figure captions.

Dataset Structure

The dataset is provided as a single JSONL file (data.jsonl). Each row represents a specific figure extracted from a paper.

Data Fields

Field Description
platform Source platform (e.g., OpenReview).
venue Conference venue (ICLR, ICML, NeurIPS, TMLR).
year Year of the conference venue.
title Title of the research paper.
abstract Full abstract of the paper.
keywords Comma-separated list of keywords provided by authors.
areas Primary subject areas (e.g., "Deep Learning", "Optimization").
tldr One-sentence summary ("Too Long; Didn't Read").
scores List of reviewer scores (integers).
decision Final decision for the paper (e.g., Accept, Reject).
authors Comma-separated list of author names.
author_ids Comma-separated author IDs on the source platform.
cdate Creation date of the record (YYYYMMDD).
url Direct URL to the paper on the source platform.
platform_id Unique identifier for the paper on the platform.
bibtex BibTeX citation entry for the paper.
figure_number Number of the figure within the paper (e.g., "3").
figure_path Relative path to the raw image file in the accompanying archives.
figure_caption The caption text associated with the figure.
figure_context Paragraphs from the paper that explicitly reference this figure.
figure_type Classification of the image (e.g., "diagram").
confidence Confidence score of the figure classification.
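To make the schema concrete, here is a minimal sketch of reading one record straight from the JSONL file with Python's standard json module. It assumes you have a local copy of data.jsonl in the working directory and only touches fields listed in the table above.

import json

# Minimal sketch: data.jsonl holds one figure record per line.
# Assumes a local copy of data.jsonl in the current working directory.
with open("data.jsonl", "r", encoding="utf-8") as f:
    first = json.loads(next(f))

print(first["title"], "|", first["venue"], first["year"])
print("Figure", first["figure_number"], ":", first["figure_caption"][:80])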

Usage

Loading the Data

from datasets import load_dataset

# This will automatically load 'data.jsonl' as the train split
dataset = load_dataset("zhangt20/DiagramBank", split="train")

# Example: Access the first figure's caption
print(dataset[0]['figure_caption'])
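If you only need part of the corpus, the loaded split can be filtered on the metadata columns. The snippet below is a small follow-up example (not from the official docs) that keeps figures from accepted ICLR papers, assuming decision strings such as "Accept (Poster)" as shown in the preview above.

# Follow-up example: filter the split on paper-level metadata.
iclr_accepted = dataset.filter(
    lambda row: row["venue"] == "ICLR" and row["decision"].startswith("Accept")
)
print(f"{len(iclr_accepted)} figures from accepted ICLR papers")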

Loading the Figures

You can download the diagrams and automatically reconstruct the folder structure using our setup script.

# 1. Download the setup script directly from this repo
wget https://huggingface.co/datasets/zhangt20/DiagramBank/resolve/main/download_diagrambank.py

# 2. Set the target folder using the FIG_RAG_DIR environment variable
export FIG_RAG_DIR=<a scratch folder with at least 60 GB of space>

# 3. Run the script (the default downloads ~60 GB of diagrams from accepted papers).
#    Pick one of the invocations below.

# Default: Accepted papers + Core files (DBs/FAISS)
python download_diagrambank.py

# All papers (Accept + Reject) + Core files
# python download_diagrambank.py --subset all

# Rejected papers only + Core files
# python download_diagrambank.py --subset reject

# Skip core files: download only images (no DBs or FAISS)
# python download_diagrambank.py --no-core

# Combine flags: download all images but skip core files
# python download_diagrambank.py --subset all --no-core

For more detailed usage instructions, see https://github.com/csml-rpi/DiagramBank
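Once the images are on disk, a record's figure_path can be resolved against the download folder to open the corresponding file. This is a sketch under the assumption that download_diagrambank.py lays the archives out so that figure_path is valid relative to FIG_RAG_DIR; adjust the path join if your local layout differs.

import os
from PIL import Image

# Assumption: the download script extracts files so that figure_path
# is a valid relative path under FIG_RAG_DIR.
fig_root = os.environ["FIG_RAG_DIR"]
rel_path = dataset[0]["figure_path"]  # e.g. "OpenReview/ICLR/figures/..."
image = Image.open(os.path.join(fig_root, rel_path))
print(rel_path, image.size)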
