“Visualization-Of-Thought Elicits Spatial Reasoning in Large Language Models”, 2024-04-04 ():
Large language models (LLMs), such as those developed by OpenAI and Google, have exhibited impressive performance in language comprehension and various reasoning tasks. However, their abilities in spatial reasoning, a crucial aspect of human cognition, remain relatively unexplored. Humans possess a remarkable ability to create mental images of unseen objects and actions through a process known as the Mind’s Eye, enabling the imagination of the unseen world.
Inspired by this cognitive capacity, we propose Visualization-of-Thought (VoT) prompting. VoT aims to elicit the spatial reasoning of LLMs by visualizing their reasoning traces [using ASCII art/Unicode], thereby guiding subsequent reasoning steps. We employed VoT for multi-hop spatial reasoning tasks, including natural language navigation, visual navigation, and visual tiling in 2D grid worlds.
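[To make the idea concrete, here is a minimal Python sketch of what a VoT-style prompt for a 2D grid navigation task might look like; the grid encoding, marker symbols, and instruction wording are my own assumptions, not the paper's verbatim prompts:]

```python
# Hypothetical sketch of a VoT-style prompt for a 2D grid navigation task.
# The grid layout, symbols, and instruction wording are assumptions for
# illustration, not the paper's actual prompts.

GRID = [
    ["S", ".", "."],
    [".", "#", "."],
    [".", ".", "G"],
]  # S = start, G = goal, # = obstacle, . = free cell


def render_grid(grid):
    """Render the grid as ASCII art, one row per line."""
    return "\n".join(" ".join(row) for row in grid)


def build_vot_prompt(grid, task):
    """Ask the model to redraw its mental map after every step (the VoT idea)."""
    return (
        "You are navigating a 2D grid world.\n"
        f"Map:\n{render_grid(grid)}\n\n"
        f"Task: {task}\n"
        "After each move, visualize the current state of the grid as ASCII art, "
        "marking your position with '@', then decide your next move.\n"
    )


if __name__ == "__main__":
    prompt = build_vot_prompt(GRID, "Find a path from S to G that avoids '#'.")
    print(prompt)  # send this to an LLM of your choice
```

[The point of the interleaved ASCII grids is that each "visualization" becomes part of the context, so later moves can be checked against the model's own rendered state rather than a purely verbal trace.]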
Experimental results demonstrated that VoT enhances the spatial reasoning abilities of LLMs. Notably, VoT outperformed existing multimodal large language models (MLLMs) in these tasks.
VoT works surprisingly well on LLMs, and the ability to generate mental images to facilitate spatial reasoning resembles the Mind's Eye process, suggesting its potential viability in MLLMs. [Seems like it would make more sense just to go multimodal and tokenize images inline like CM3/Gato/etc.]