InterPrior: Scaling Generative Control for Physics-Based Human-Object Interactions
Abstract
A scalable framework called InterPrior learns a unified generative controller through imitation learning and reinforcement learning to enable humanoids to generalize loco-manipulation skills across diverse contexts while maintaining physically coherent whole-body coordination.
Humans rarely plan interactions with objects at the level of explicit whole-body movements. High-level intentions, such as affordances, define the goal, while coordinated balance, contact, and manipulation can emerge naturally from underlying physical and motor priors. Scaling such priors is key to enabling humanoids to compose and generalize loco-manipulation skills across diverse contexts while maintaining physically coherent whole-body coordination. To this end, we introduce InterPrior, a scalable framework that learns a unified generative controller through large-scale imitation pretraining and reinforcement-learning post-training. InterPrior first distills a full-reference imitation expert into a versatile, goal-conditioned variational policy that reconstructs motion from multimodal observations and high-level intent. While the distilled policy reconstructs training behaviors, it does not generalize reliably due to the vast configuration space of large-scale human-object interactions. To address this, we apply data augmentation with physical perturbations and then perform reinforcement-learning fine-tuning to improve competence on unseen goals and initializations. Together, these steps consolidate the reconstructed latent skills into a valid manifold, yielding a motion prior that generalizes beyond the training data; for example, it can incorporate new behaviors such as interactions with unseen objects. We further demonstrate its effectiveness for user-interactive control and its potential for real-robot deployment.
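To make the pretraining stage concrete, below is a minimal sketch of distilling an expert into a goal-conditioned variational policy. It is an illustrative assumption of how a CVAE-style distillation loss can look in PyTorch; the module names, network sizes, and KL weight are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: distill a full-reference imitation expert into a
# goal-conditioned variational policy. All names, sizes, and the KL weight
# are illustrative assumptions, not the paper's actual design.
import torch
import torch.nn as nn

class VariationalPolicy(nn.Module):
    def __init__(self, obs_dim, goal_dim, act_dim, latent_dim=64):
        super().__init__()
        # Encoder infers a latent skill z from the observation and high-level goal.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256), nn.ELU(),
            nn.Linear(256, 2 * latent_dim),  # outputs mean and log-variance
        )
        # Decoder maps (observation, latent skill) to a whole-body motor action.
        self.decoder = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs, goal):
        mu, logvar = self.encoder(torch.cat([obs, goal], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        action = self.decoder(torch.cat([obs, z], dim=-1))
        return action, mu, logvar

def distill_step(policy, optimizer, obs, goal, expert_action, kl_weight=1e-3):
    """One distillation step: match the expert's action while regularizing
    the latent toward a unit-Gaussian prior."""
    action, mu, logvar = policy(obs, goal)
    recon = (action - expert_action).pow(2).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    loss = recon + kl_weight * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, the expert actions would come from the full-reference imitation controller, typically queried along student rollouts (DAgger-style) rather than from a fixed dataset, though the paper's exact procedure may differ.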
Community
Distillation reconstructs motor skills, while RL fine-tuning interpolates and consolidates the latent space into a coherent skill manifold for versatile whole-body loco-manipulation.
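As a rough illustration of this two-stage view, the sketch below (an assumption on my part, not the paper's published recipe) keeps the distilled decoder frozen as a skill prior and lets RL fine-tuning optimize a high-level policy over latents, so the task reward reshapes which regions of the latent manifold are used.

```python
# Hypothetical sketch of RL post-training over a frozen skill decoder.
# The policy class, sizes, and training interface are illustrative assumptions.
import torch
import torch.nn as nn

def freeze(module):
    # Keep the distilled decoder fixed so it serves as the motion prior.
    for p in module.parameters():
        p.requires_grad_(False)
    return module

class HighLevelPolicy(nn.Module):
    """Proposes a latent skill z for the frozen decoder, given obs and goal."""
    def __init__(self, obs_dim, goal_dim, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256), nn.ELU(),
            nn.Linear(256, latent_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(latent_dim))

    def dist(self, obs, goal):
        mean = self.net(torch.cat([obs, goal], dim=-1))
        return torch.distributions.Normal(mean, self.log_std.exp())

def act(high_level, decoder, obs, goal):
    # Sample a latent skill and decode it to an action. Only the high-level
    # policy receives gradients during RL fine-tuning, so the reward
    # consolidates which latents map to valid, reusable skills.
    z = high_level.dist(obs, goal).rsample()
    return decoder(torch.cat([obs, z], dim=-1))
```

Any standard on-policy RL algorithm (e.g., PPO) could then optimize the high-level policy through this interface; the decoder here would be the one obtained from distillation.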
Similar papers, recommended automatically by the Librarian Bot via the Semantic Scholar API:
- HoRD: Robust Humanoid Control via History-Conditioned Reinforcement Learning and Online Distillation (2026)
- Decoupled Generative Modeling for Human-Object Interaction Synthesis (2025)
- HumanX: Toward Agile and Generalizable Humanoid Interaction Skills from Human Videos (2026)
- Learning Generalizable Hand-Object Tracking from Synthetic Demonstrations (2025)
- ZEST: Zero-shot Embodied Skill Transfer for Athletic Robot Control (2026)
- Learning Whole-Body Human-Humanoid Interaction from Human-Human Demonstrations (2026)
- Robust and Generalized Humanoid Motion Tracking (2026)