“Do Language Models Plan Ahead for Future Tokens?”, Wilson Wu, John X. Morris, Lionel Levine (2024-04-01):

Do transformers “think ahead” during inference at a given position?

It is known that transformers prepare information in the hidden states of the forward pass at time step t that is then used in future forward passes at t + τ. We posit two explanations for this phenomenon: pre-caching, in which off-diagonal gradient terms present in training lead the model to compute features at t that are irrelevant to the present inference task but useful for the future, and breadcrumbs, in which the features most relevant to time step t happen to already be the same as those that would most benefit inference at t + τ.

We test these hypotheses by training language models without propagating gradients to past timesteps, a scheme we formalize as myopic training.
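The paper does not spell out an implementation here, but one plausible way to realize "no gradients to past timesteps" in a decoder-only transformer is to detach the keys and values contributed by earlier positions inside every attention layer, so the loss at position t cannot backpropagate into computation done at positions before t. The sketch below (the helper name `myopic_causal_attention` and the exact detaching scheme are my assumptions, not the authors' stated method) illustrates the idea in PyTorch:

```python
import torch
import torch.nn.functional as F

def myopic_causal_attention(q, k, v):
    """Causal self-attention with no gradient flow to past timesteps.

    Keys/values from earlier positions are detached; each position's
    attention to itself keeps its gradient path. This is an assumed
    concretization of "myopic training", not the paper's verbatim code.

    q, k, v: (batch, heads, seq_len, head_dim)
    """
    T = q.size(-2)
    scale = q.size(-1) ** -0.5
    device = q.device

    causal = torch.tril(torch.ones(T, T, dtype=torch.bool, device=device))
    diag = torch.eye(T, dtype=torch.bool, device=device)

    # Attention scores: gradient flows only through the diagonal (present
    # timestep); scores against earlier keys use detached copies.
    scores_live = torch.matmul(q, k.transpose(-2, -1)) * scale
    scores_stop = torch.matmul(q, k.detach().transpose(-2, -1)) * scale
    scores = torch.where(diag, scores_live, scores_stop)
    scores = scores.masked_fill(~causal, float("-inf"))
    probs = F.softmax(scores, dim=-1)

    # Value mixing: past values are detached, the present one is not.
    probs_present = probs.masked_fill(~diag, 0.0)
    probs_past = probs.masked_fill(diag, 0.0)
    return torch.matmul(probs_present, v) + torch.matmul(probs_past, v.detach())
```

Substituting this for standard causal attention in every layer would block all cross-timestep gradient paths, since attention is the only place where position t reads from earlier positions; the ordinary (non-myopic) model is recovered by dropping the `detach()` calls.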

In a synthetic data setting, we find clear evidence for pre-caching. In the autoregressive language modeling setting, our experiments are more suggestive of the breadcrumbs hypothesis.