Reinforcing the Diffusion Chain of Lateral Thought with Diffusion Language Models
Paper: arXiv:2505.10446
We introduce the Large Language Diffusion with Ordered Unmasking (LLaDOU), which is trained by reinforcing a new reasoning paradigm named the Diffusion Chain of Lateral Thought (DCoLT) for diffusion language models.
Compared to standard CoT, DCoLT is distinguished by several notable features.
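To make the "ordered unmasking" idea concrete, here is a minimal toy sketch of the decoding loop a diffusion language model might run: start from a fully masked sequence and, at each step, commit the single prediction the model is most confident about. The `toy_logits` function is a hypothetical stand-in for a real denoising model, not LLaDOU's actual implementation.

```python
import random

MASK = "<mask>"

def toy_logits(tokens, vocab):
    # Hypothetical stand-in for a diffusion LM's denoising step:
    # for each still-masked position, return a (token, confidence) guess.
    return {i: (random.choice(vocab), random.random())
            for i, t in enumerate(tokens) if t == MASK}

def ordered_unmask(length, vocab):
    """Toy ordered-unmasking loop: begin fully masked and, per step,
    unmask only the highest-confidence position until none remain."""
    tokens = [MASK] * length
    order = []                      # positions in the order they were revealed
    while MASK in tokens:
        guesses = toy_logits(tokens, vocab)
        pos = max(guesses, key=lambda i: guesses[i][1])
        tokens[pos] = guesses[pos][0]
        order.append(pos)
    return tokens, order

tokens, order = ordered_unmask(6, ["a", "b", "c"])
```

In the actual DCoLT setting, the unmasking order itself is part of the model's reasoning trajectory and is shaped by reinforcement learning; this sketch only illustrates the confidence-driven, non-left-to-right generation order.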
LLaDOU-v0-Math is a math-specific model trained on GSM8K and MATH.
For inference code and detailed instructions, please refer to our GitHub page: maple-research-lab/LLaDOU.
Base model: GSAI-ML/LLaDA-8B-Instruct