
Data for "Geospatial Mechanistic Interpretability of Large Language Models"

dataset
posted on 2025-05-06, 08:06 authored by Stef De Sabbata, Stefano Mizzaro, Kevin Roitero

This repository contains the data used for the book chapter by De Sabbata et al. (2025), including the Free Gazetteer Data made available by GeoNames under CC BY 4.0, a file containing the names of the Italian provinces made available by Michele Tizzoni under CC BY 4.0, and data derived from them using Mistral-7B-Instruct-v0.2, which was made available by the Mistral AI Team under the Apache License 2.0. The code used to process the data is available via our related GitHub repository under the MIT Licence.
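
For orientation, the sketch below shows one way placename representations could be extracted from Mistral-7B-Instruct-v0.2 with the Hugging Face transformers library. The mean-pooling over tokens and the example placename are illustrative assumptions, not the exact code in the related GitHub repository.

    # Hypothetical sketch: deriving per-layer placename representations
    # from Mistral-7B-Instruct-v0.2. Pooling choice is an assumption.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
    model.eval()

    def placename_hidden_states(name: str) -> torch.Tensor:
        """Return per-layer hidden states averaged over the placename's tokens."""
        inputs = tokenizer(name, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs, output_hidden_states=True)
        # outputs.hidden_states: tuple of (1, seq_len, hidden_dim), one per layer
        return torch.stack([h.mean(dim=1).squeeze(0) for h in outputs.hidden_states])

    vectors = placename_hidden_states("Bologna")  # shape: (n_layers + 1, hidden_dim)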

The Author Accepted Manuscript of "Geospatial Mechanistic Interpretability of Large Language Models" is available on arXiv (arXiv:2505.03368).

De Sabbata, S., Mizzaro, S. and Roitero, K. (2025) “Geospatial mechanistic interpretability of large language models,” in Janowicz, K. et al. (eds.) Geography according to ChatGPT. IOS Press (Frontiers in artificial intelligence and applications).

Abstract: Large language models (LLMs) have demonstrated unprecedented capabilities across various natural language processing tasks. Their ability to process and generate viable text and code has made them ubiquitous in many fields, while their deployment as knowledge bases and "reasoning" tools remains an area of ongoing research. In geography, a growing body of literature has been focusing on evaluating LLMs' geographical knowledge and their ability to perform spatial reasoning. However, very little is still known about the internal functioning of these models, especially about how they process geographical information.

In this chapter, we establish a novel framework for the study of geospatial mechanistic interpretability -- using spatial analysis to reverse engineer how LLMs handle geographical information. Our aim is to advance our understanding of the internal representations that these complex models generate while processing geographical information -- what one might call "how LLMs think about geographic information" if such phrasing were not an undue anthropomorphism.

We first outline the use of probing in revealing internal structures within LLMs. We then introduce the field of mechanistic interpretability, discussing the superposition hypothesis and the role of sparse autoencoders in disentangling polysemantic internal representations of LLMs into more interpretable, monosemantic features.
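
As a purely illustrative sketch (not the chapter's trained configuration), a minimal sparse autoencoder of the kind discussed can be written in a few lines of PyTorch: a wide ReLU encoder, a linear decoder, and a reconstruction loss with an L1 penalty that encourages sparse, more monosemantic feature activations. The dimensions and penalty weight below are assumptions.

    # Minimal sparse-autoencoder sketch: decomposes a polysemantic hidden
    # state into a larger set of sparse features. Sizes are illustrative.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_model: int = 4096, d_features: int = 16384, l1_coeff: float = 1e-3):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_features)
            self.decoder = nn.Linear(d_features, d_model)
            self.l1_coeff = l1_coeff

        def forward(self, x: torch.Tensor):
            features = torch.relu(self.encoder(x))   # sparse, non-negative feature activations
            reconstruction = self.decoder(features)
            loss = ((reconstruction - x) ** 2).mean() + self.l1_coeff * features.abs().mean()
            return features, reconstruction, loss

    # One optimisation step on a batch of hidden states (stand-in data):
    sae = SparseAutoencoder()
    optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
    hidden_states = torch.randn(64, 4096)
    _, _, loss = sae(hidden_states)
    loss.backward()
    optimizer.step()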

In our experiments, we use spatial autocorrelation to show how features obtained for placenames display spatial patterns related to their geographic location and can thus be interpreted geospatially, providing insights into how these models process geographical information. We conclude by discussing how our framework can help shape the study and use of foundation models in geography.
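
For example, the spatial autocorrelation of a single feature's activations across placenames could be measured with Moran's I, as sketched below using the libpysal and esda packages; the k-nearest-neighbour weights and input file names are illustrative assumptions, not the chapter's exact setup.

    # Sketch: test one feature's activations for spatial autocorrelation.
    import numpy as np
    from libpysal.weights import KNN
    from esda.moran import Moran

    coords = np.load("placename_lonlat.npy")          # (n_places, 2) longitude/latitude (hypothetical file)
    activations = np.load("feature_activations.npy")  # (n_places,) one feature's activations (hypothetical file)

    w = KNN.from_array(coords, k=8)                    # spatial weights from 8 nearest neighbours
    w.transform = "r"                                  # row-standardise the weights
    mi = Moran(activations, w)
    print(f"Moran's I = {mi.I:.3f}, p-value = {mi.p_sim:.4f}")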
