Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning

1MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
2Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
3Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative, Ningbo, China
4University of Chinese Academy of Sciences, Beijing, China
5Shenyang Institute of Computing Technology, Chinese Academy of Sciences, Shenyang, China
6Shenyang CASNC Technology Co., Ltd, Shenyang, China

*Denotes equal contribution     †Indicates corresponding author


ICCV 2025


Overview of our proposed framework. The key idea is to leverage distracting videos for semantic knowledge transfer, enabling the downstream agent to improve sample efficiency on unseen tasks.

Abstract

Training visual reinforcement learning (RL) agents in practical scenarios remains a significant challenge: RL agents suffer from low sample efficiency in environments with visual variations. While various approaches have attempted to alleviate this issue through disentangled representation learning, they typically learn from scratch, without prior knowledge of the world. This paper, in contrast, learns underlying semantic variations from distracting videos via offline-to-online latent distillation and flexible disentanglement constraints. To enable effective cross-domain semantic knowledge transfer, we introduce an interpretable model-based RL framework, dubbed Disentangled World Models (DisWM). Specifically, we pretrain an action-free video prediction model offline with disentanglement regularization to extract semantic knowledge from distracting videos. The disentanglement capability of the pretrained model is then transferred to the world model through latent distillation. For finetuning in the online environment, we exploit the knowledge of the pretrained model and introduce a disentanglement constraint to the world model. During the adaptation phase, incorporating actions and rewards from online environment interactions enriches the diversity of the data, which in turn strengthens disentangled representation learning. Experimental results validate the superiority of our approach on various benchmarks.
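To make the offline-to-online transfer pattern concrete, below is a minimal PyTorch sketch, not the authors' implementation: a frozen teacher encoder (standing in for the offline-pretrained action-free video model) supervises a student encoder (standing in for the world model's representation network) through a latent-matching loss, plus a simple covariance decorrelation penalty standing in for the paper's disentanglement constraint. The encoder architecture, the penalty, the checkpoint filename, and all hyperparameters are illustrative assumptions.

# Minimal sketch (not the authors' code) of offline-to-online latent
# distillation with a disentanglement penalty. Module names, latent
# sizes, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 32  # assumed size of the disentangled latent

class Encoder(nn.Module):
    """Maps a 64x64 RGB observation to a latent vector."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(latent_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs))

def decorrelation_penalty(z):
    """Crude disentanglement proxy: penalize off-diagonal covariance
    so each latent dimension varies independently of the others."""
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.pow(2).sum()

# Teacher: encoder of the action-free video model pretrained offline.
# Student: representation network of the world model finetuned online.
teacher, student = Encoder(), Encoder()
teacher.load_state_dict(torch.load("pretrained_video_model.pt"))  # assumed checkpoint
teacher.requires_grad_(False)

optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

def distillation_step(obs, beta=0.1):
    """One update: match the teacher's disentangled latents while
    keeping the student's own latents decorrelated."""
    with torch.no_grad():
        z_teacher = teacher(obs)
    z_student = student(obs)
    loss = F.mse_loss(z_student, z_teacher) + beta * decorrelation_penalty(z_student)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The sketch only captures the distill-and-regularize pattern described in the abstract; in the full framework the student additionally consumes actions and rewards from online interaction, which the paper reports further strengthens the disentangled representations.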

Evaluation

Showcases in Drawerworld (videos comparing DisWM, TD-MPC2, and ContextWM).

Showcases in DMC (DeepMind Control; videos comparing DisWM, TD-MPC2, and ContextWM).

BibTeX

@inproceedings{wang2025disentangled,
  title={Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning},
  author={Qi Wang and Zhipeng Zhang and Baao Xie and Xin Jin and Yunbo Wang and Shiyu Wang and Liaomo Zheng and Xiaokang Yang and Wenjun Zeng},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}

Acknowledgements

This website is adapted from the Nerfies template.