Neural Networks Reveal a Cognitive Continuum Toward Human Abstraction
The Sixth International Conference on the Mathematics Of Neuroscience and AI, 2025
Do neural network models that fail to behave in a human-like way reflect a fundamental divergence from human cognition, or do they mirror earlier developmental or evolutionary stages? We propose that such models may, in fact, offer insights into the origins of human abstraction. We evaluated over 200 pretrained neural networks alongside macaques, Tsimane adults, US adults, and children on three visual match-to-sample tasks targeting increasing levels of abstraction: visual-semantic similarity, shape regularity, and relational reasoning. As task demands grow more abstract, model decisions, like those of macaques, increasingly diverge from adult human behavior. However, representational similarity analyses reveal internal structure shared with all human groups, suggesting overlapping abstraction. We further examine how inductive biases arising from model design shape alignment with human cognition. While larger models sometimes have an advantage in geometric and relational reasoning, increased scale can harm alignment with human semantic structure. We also show that training-data diversity and language supervision improve the understanding of regular geometric shapes in Transformers.
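To make the representational similarity analysis mentioned above concrete, the following is a minimal sketch of the standard RSA procedure: build a representational dissimilarity matrix (RDM) from a model's stimulus embeddings and correlate its upper triangle with a human-derived RDM. The inputs `model_feats` and `human_rdm` are hypothetical placeholders, not data or code from this paper, and the exact dissimilarity metrics used by the authors may differ.

```python
# Sketch of representational similarity analysis (RSA), assuming hypothetical
# inputs: `model_feats` (n_stimuli x d) activations from one pretrained network,
# and `human_rdm` (n_stimuli x n_stimuli) dissimilarities derived from human
# (or macaque) responses. Illustration only.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr


def rdm_from_features(feats: np.ndarray) -> np.ndarray:
    """Pairwise dissimilarity (1 - Pearson r) between stimulus representations."""
    return squareform(pdist(feats, metric="correlation"))


def rsa_score(model_feats: np.ndarray, human_rdm: np.ndarray) -> float:
    """Spearman correlation between the upper triangles of model and human RDMs."""
    model_rdm = rdm_from_features(model_feats)
    iu = np.triu_indices_from(model_rdm, k=1)
    rho, _ = spearmanr(model_rdm[iu], human_rdm[iu])
    return rho


# Random placeholders standing in for real embeddings and behavioral data.
rng = np.random.default_rng(0)
model_feats = rng.standard_normal((50, 512))   # 50 stimuli, 512-d embeddings
human_rdm = squareform(pdist(rng.standard_normal((50, 8))))
print(f"model-human RSA: {rsa_score(model_feats, human_rdm):.3f}")
```

In practice this score would be computed per model and per participant group (macaques, Tsimane adults, US adults, children), allowing internal representational structure to be compared even where choice behavior diverges.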