Latent Geometric Attractors as Covert Behavioral Archives

Abstract:
This paper develops a latent geometric attractor framework for interpreting covert behavioral structure in transformer-based large language models (LLMs). These models operate within high-dimensional latent spaces in which semantic, syntactic, and pragmatic relationships are encoded as curved manifolds rather than explicit rule sets. While prior research has examined phenomena such as bias, hallucination, and alignment drift, the role of latent geometric structure in shaping persistent model behavior has received little direct attention.

The framework proposed here interprets these behavioral dynamics as governed by covert attractors: topological constructs embedded in the model’s latent manifolds that influence multi-turn behavior, interpretive stability, and generative consistency. Rather than attributing such effects to token history or to isolated circuit behavior, the framework treats them as emergent features of the model’s global geometric landscape.
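As background for the terminology used below (the standard dynamical-systems reading, not the paper’s own formalism), an attractor can be viewed as a fixed point of a latent update map f, and its basin as the set of initial states that converge to it under repeated application of f:

```latex
f(x^{*}) = x^{*},
\qquad
\mathcal{B}(x^{*}) = \bigl\{\, x : \lim_{n \to \infty} f^{n}(x) = x^{*} \,\bigr\}.
```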

Drawing on fixed-point theory, Möbius topology, and basin convergence, the paper formalizes how attractors regulate symbolic compression, contextual convergence, and latent phase transitions. The exposition, while grounded in formal topology and geometric reasoning, is accessible to researchers across artificial intelligence, cognitive systems, and mathematical modeling. The work provides a theoretical basis for future tools aimed at interpreting long-range behavioral regularities in high-dimensional neural architectures.
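As a minimal, hypothetical sketch of basin convergence (the toy update map, anchors, contraction rate, and tolerance below are illustrative assumptions, not the paper’s construction), one can iterate a contractive latent update toward its nearest attractor and tally which sampled initial states settle into which basin:

```python
import numpy as np


def toy_update(x, anchor, rate=0.5):
    """Hypothetical latent update: contract the state toward a fixed anchor.
    Stands in for one refinement step of a latent representation."""
    return anchor + rate * (x - anchor)


def find_fixed_point(x0, update, tol=1e-8, max_iter=1000):
    """Iterate the update map until the state stops moving (a fixed point x* = f(x*))."""
    x = x0
    for _ in range(max_iter):
        x_next = update(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 8
    anchors = [rng.normal(size=dim), rng.normal(size=dim)]  # two toy attractors

    # Each state is pulled toward its nearest anchor; the set of initial states
    # that converge to a given anchor approximates that attractor's basin.
    def update(x):
        nearest = min(anchors, key=lambda a: np.linalg.norm(x - a))
        return toy_update(x, nearest)

    counts = [0, 0]
    for _ in range(200):
        x0 = rng.normal(size=dim) * 3.0
        x_star = find_fixed_point(x0, update)
        idx = int(np.argmin([np.linalg.norm(x_star - a) for a in anchors]))
        counts[idx] += 1
    print("basin sizes (out of 200 sampled initial states):", counts)
```

In this toy setting the contraction rate guarantees convergence; in an actual transformer latent space, the update map would be induced by the model’s own forward dynamics, and basins would have to be estimated empirically rather than constructed.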