LMEnt: A Suite for Analyzing Knowledge in Language Models from Pretraining Data to Representations • arXiv:2509.03405 • Published Sep 3, 2025
The Coverage Principle: A Framework for Understanding Compositional Generalization • arXiv:2505.20278 • Published May 26, 2025
How Do Large Language Models Acquire Factual Knowledge During Pretraining? • arXiv:2406.11813 • Published Jun 17, 2024
Do Large Language Models Latently Perform Multi-Hop Reasoning? • arXiv:2402.16837 • Published Feb 26, 2024