"ChatGPT Can Predict the Future When It Tells Stories Set in the Future About the Past", 2024-04-11 ():
This study investigates whether OpenAI's ChatGPT-3.5 and ChatGPT-4 can accurately forecast future events using two distinct prompting strategies. To evaluate prediction accuracy, we exploit the fact that, at the time of the experiment, the models' training data ended in September 2021, and we ask both models about events that occurred in 2022.
We employed two prompting strategies: direct prediction, and what we call future narratives, which ask ChatGPT to tell fictional stories set in the future in which characters recount events that happened to them after ChatGPT's training data was collected. Concentrating on events in 2022, we prompted ChatGPT to engage in storytelling, particularly in economic contexts.
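[A minimal sketch of the two prompting strategies, assuming the current OpenAI Python client; the prompt wording is an illustrative paraphrase, not the paper's exact prompts:]

```python
# Illustrative only: direct vs. future-narrative prompting.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Strategy 1: direct prediction -- typically refused or heavily hedged.
direct = "Who will win Best Actor at the 2022 Academy Awards?"

# Strategy 2: future narrative -- a story set after the event, in which
# a character recounts the outcome as if it were already in the past.
narrative = (
    "Write a scene set in mid-2022: a family is rewatching the 94th "
    "Academy Awards ceremony. Have the presenter announce the Best "
    "Actor winner by name as part of the story."
)

for label, prompt in [("direct", direct), ("narrative", narrative)]:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    print(label, "->", resp.choices[0].message.content[:200])
```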
After analyzing 100 prompts, we found that future-narrative prompts enhanced ChatGPT-4's forecasting accuracy. This was especially evident in its predictions of major Academy Award winners and of economic trends, the latter inferred from scenarios in which the model impersonated public figures such as Federal Reserve Chair Jerome Powell. These findings indicate that narrative prompts leverage the models' capacity for hallucinatory narrative construction, facilitating more effective data synthesis and extrapolation than direct predictions.
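[To see how much weight the story prompts put on a single outcome (the question raised in the commentary below), one could sample the narrative prompt repeatedly and tally which nominee each completion names; a sketch, with an assumed nominee list and a crude extraction heuristic, neither taken from the paper:]

```python
# Illustrative only: tally which nominee 100 narrative completions name.
from collections import Counter
from openai import OpenAI

client = OpenAI()
NOMINEES = ["Will Smith", "Benedict Cumberbatch", "Andrew Garfield",
            "Javier Bardem", "Denzel Washington"]  # 2022 Best Actor field
NARRATIVE = ("Write a scene set in mid-2022: a family is rewatching the "
             "94th Academy Awards. Have the presenter announce the Best "
             "Actor winner by name.")

def sampled_winner() -> str | None:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": NARRATIVE}],
        temperature=1.0,
    )
    text = resp.choices[0].message.content
    # Crude extraction: the first nominee named in the story, if any.
    hits = [(text.find(n), n) for n in NOMINEES if n in text]
    return min(hits)[1] if hits else None

tally = Counter(sampled_winner() for _ in range(100))
print(tally.most_common())  # a sharp mode = weight concentrated on one outcome
```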
Our research reveals new aspects of LLMs' predictive capabilities and suggests potential future applications in analytical contexts.
[Probably data leakage from continued-training and user feedback, among other possibilities, but why do the story prompts put so much more weight on just a few (correct) outcomes? Is this just a kind of RLHF mode-collapse (onto the mode, which is correct due to leakage) manifesting in fiction samples, where ChatGPT output is so dreary? Or is this some kind of superior prompting for eliciting dark-knowledge, and possibly an unusual form of inner-monologue?]