Abstract In this talk, I explore what effect the recent transformer revolution in AI may have on one of the most foundational debates in early cognitive neuroscience: the mental imagery debate. That debate pitted propositional, quasi-linguistic accounts of cognition (Pylyshyn) against pictorial theories positing genuine inner image representations (Kosslyn). What drove the debate, and caused problems for propositional accounts, were experiments on mental image rotation and scanning: how could a quasi-linguistic medium account for such phenomena? I will argue that the transformer revolution in AI and NLP offers a clue in the form of a proof of concept: language-style models can outperform models with explicit visual structure (such as convolutional neural networks) on many image-classification tasks, suggesting a possible answer to this generations-old quandary in cognitive neuroscience.