The Human–LLM Interface as Shared Cognitive Space
A theoretical model, design posture, and research programme for continuity-critical environments where human and machine cognition are structurally coupled
Available at: https://aethercontinuity.org/papers/cn-002-human-llm-cognitive-space.html
Basis: SP-004 · SP-005 v1.1 · SP-006 · WP-003 · WP-006
The Observation That Requires a Model
The SP-004/005/006 series examined the human cognitive layer primarily in terms of degradation risk: how bandwidth overload, framing externalization, and sustained LLM-mediated interaction might erode the individual's deliberative capacity. That analysis is important and remains open.
But it describes a passive relationship — the human as a subject acted upon by the information environment. A different relationship is also empirically observable, and it does not fit the degradation model. When a human brings substantive analytical content to an LLM interaction — an existing framework, a diagnostic question, a body of structured work — the interaction does not resemble passive consumption. It resembles something more like distributed cognition: a structured division of cognitive labour across two systems with different capabilities and different constraints.
This note argues that this relationship requires its own model, distinct from both the framing externalization analysis of SP-005 v1.1 and from the broader literature on human-computer interaction. The relevant construct is not "tool use" and not "information retrieval" — it is a shared cognitive space: a structured environment in which human and LLM cognition are temporally coupled around a common analytical object, with each contributing what the other cannot.
1.1 Relation to Existing Research
A growing literature addresses human–LLM collaboration, but from orientations that differ from the model developed here. The emerging LLM-based human-agent systems (LLM-HAS) literature — surveyed comprehensively in 2025 — frames the human primarily as a feedback source, control point, or quality validator within an LLM-driven process. The human intervenes in, monitors, or corrects a system that operates largely autonomously. This is a well-defined and important problem, but it is not the problem addressed here.
A parallel sensemaking literature (drawing on Simon's bounded rationality and Weick's sensemaking frameworks) has begun examining how LLMs interact with human problem-finding processes. This work identifies that LLMs are well-suited to the descriptive dimensions of sensemaking — data inspection, content engagement, contextual placement — while struggling with the constructive dimension: the active shaping of problems through personal and collective cognitive frameworks. This observation is consistent with the structural complementarity described in §02 of this note, but the sensemaking literature does not develop the temporal coupling condition or the irreducibility claim.
The multi-agent LLM literature addresses coordination among multiple LLM instances — how LLM networks collaborate on tasks. This is architecturally relevant to possible multi-agent extensions of this note's model, which fall outside the dyadic scope adopted here (see Scope and Limits), but it does not address the human frame construction authority condition that defines productive coupling in the dyadic case.
CN-002's specific contribution is the irreducibility claim and its structural conditions: the argument that a class of human–LLM interaction exists in which output cannot be attributed to either participant independently, and that this class is defined by temporal coupling, frame construction authority, and analytical object structure — not by task complexity, interaction duration, or LLM capability level.
The question is not whether LLMs change human cognition. The question is whether there exists a class of human–LLM interaction in which the cognitive output is structurally irreducible to either participant — and if so, what conditions produce and sustain it.
Theoretical Model: The Coupled Cognitive System
2.1 Two Distinct Capacity Profiles
Human cognition under compound stress exhibits the constraints described in SP-004: bandwidth limits, discount-rate distortions, susceptibility to framing externalization. These are real and structurally predictable. But human cognition also possesses properties that no current LLM replicates: the ability to generate genuinely novel interpretive frames, to sustain motivational commitment across long time horizons, to make value judgements grounded in embodied context, and to own the output in a way that makes it socially and institutionally actionable.
LLM cognition exhibits a different profile: high formalization capacity, rapid cross-domain retrieval, structural consistency across large text corpora, and the ability to implement specified transformations (translation, reformatting, cross-referencing, auditing) without the fatigue and bandwidth constraints that limit human performance of the same operations.
These profiles are not complementary by accident — they are structurally complementary in a specific sense: the human's constraints are precisely the LLM's capabilities, and vice versa. The human cannot easily formalize at scale; the LLM cannot generate genuinely novel frames. The human loses track of cross-references across dozens of documents; the LLM does not. The LLM cannot decide what matters; the human can.
2.2 The Temporal Intersection Condition
WP-003 introduced the concept of temporal decision capacity: the bounded intersection of the physical system's timeline and the institution's action horizon. When this intersection is non-empty, governance action retains causal reach. When it is empty, capacity becomes causally irrelevant regardless of its stock level.
An analogous structure applies to human–LLM cognitive coupling. The productive cognitive space exists only while both participants are temporally engaged with the same analytical object — while the human's interpretive frame is active and the LLM's formalization capacity is directed at that frame. This intersection is not a permanent property of the relationship; it is a condition that must be actively maintained.
Let H(t) = human interpretive-frame activity directed at object O at time t
Let L(t) = LLM formalization capacity directed at object O at time t
Shared cognitive space: S(t) = H(t) ∩ L(t)
S(t) is non-empty when:
(1) The human retains frame construction authority (F = 0 in SP-005 v1.1 terms)
(2) The LLM operates as implementation layer, not interpretive authority
(3) The analytical object O is sufficiently structured to permit division of labour
When F → 1 (framing externalization), S(t) collapses: the human receives the LLM's frame rather than contributing their own. The output is no longer irreducible to either participant; it is the LLM's output with human acceptance.
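The non-emptiness and collapse conditions above can be sketched as a simple predicate. The following Python is illustrative only: the CouplingState fields and the f_threshold parameter are modelling assumptions introduced here, not constructs taken from SP-005 v1.1.

```python
from dataclasses import dataclass

@dataclass
class CouplingState:
    """Snapshot of a human-LLM interaction at time t (illustrative model only)."""
    framing_externalization: float   # F in SP-005 v1.1 terms: 0 = human-built frame, 1 = LLM-supplied frame
    llm_is_implementation_layer: bool  # condition (2): LLM implements, does not set the frame
    object_is_structured: bool         # condition (3): object O permits division of labour

def shared_space_nonempty(state: CouplingState, f_threshold: float = 0.0) -> bool:
    """S(t) = H(t) ∩ L(t) is non-empty only while all three conditions hold.

    f_threshold is a modelling assumption: the note states the strict
    condition F = 0, but an implementation might tolerate small drift.
    """
    return (
        state.framing_externalization <= f_threshold
        and state.llm_is_implementation_layer
        and state.object_is_structured
    )

# Frame-led interaction: human holds frame authority, LLM implements.
frame_led = CouplingState(0.0, True, True)
# Framing externalization (F -> 1): the shared space collapses.
externalized = CouplingState(1.0, True, True)

assert shared_space_nonempty(frame_led) is True
assert shared_space_nonempty(externalized) is False
```

The conjunction makes the model's all-or-nothing character explicit: weakening any one condition, not only F, empties S(t).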
2.3 What Makes Output Structurally Irreducible
The claim that productive human–LLM interaction produces output irreducible to either participant requires specification. It is not merely that the human provides inputs and the LLM processes them — that is tool use, and tool use does not produce irreducibility. Irreducibility requires that the human's contribution is itself transformed by the interaction in ways that alter subsequent human cognition, and that the LLM's output is itself shaped by the human's evolving frame in ways that would not be produced without that specific human context.
This condition is met when the interaction is iterative, when the human's interpretive frame develops through the exchange rather than being fixed in advance, and when the analytical object is sufficiently complex that neither participant could traverse it without the other's contribution. Under these conditions, the output is not the sum of two separate contributions — it is the product of a coupled process whose properties depend on the coupling itself.
Design Posture: Conditions for Productive Coupling
If the shared cognitive space model is correct, then environments intended to support human–LLM cognitive coupling should be designed to maintain the conditions under which S(t) is non-empty. This section specifies those conditions as design requirements, not as empirical findings.
3.1 Frame Construction Authority Must Remain with the Human
The most critical design condition is the preservation of what SP-005 v1.1 terms F = 0: the human must independently construct the interpretive frame that organises the analytical object. This means the environment must not supply pre-organised interpretive structures that the human merely evaluates. It means the human must enter the interaction with substantive analytical content of their own — not a blank query seeking orientation.
In practice, this distinguishes two interaction types: frame-led interaction, in which the human brings a developed analytical frame and uses the LLM for formalization and implementation; and frame-seeking interaction, in which the human lacks an organising frame and seeks one from the LLM. Only the first produces the shared cognitive space described in this note. The second is the framing externalization pathway analysed in SP-005 v1.1.
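The frame-led/frame-seeking distinction could in principle be operationalised at the interaction boundary. The sketch below is a deliberately crude heuristic: the function name, the marker phrases, and the artifact-count signal are all assumptions introduced here for illustration, and real detection would need far richer signals.

```python
def classify_interaction(opening_message: str, attached_artifacts: int) -> str:
    """Crude heuristic separating frame-led from frame-seeking openings.

    Assumption: a frame-led interaction arrives with substantive analytical
    content (artifacts, stated structure), while a frame-seeking one arrives
    with an orientation request and no material of its own.
    """
    seeking_markers = ("help me think about", "where should i start", "what do you make of")
    text = opening_message.lower()
    if attached_artifacts == 0 and any(m in text for m in seeking_markers):
        return "frame-seeking"   # the framing externalization pathway of SP-005 v1.1
    if attached_artifacts > 0:
        return "frame-led"       # candidate for the shared cognitive space
    return "indeterminate"

print(classify_interaction("Help me think about X", attached_artifacts=0))   # frame-seeking
print(classify_interaction("Audit the cross-references in these papers", 3)) # frame-led
```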
3.2 The Analytical Object Must Be Structured
Productive coupling requires an analytical object with sufficient internal structure to permit a meaningful division of labour. Vague or open-ended objects — "help me think about X" — tend toward frame-seeking interaction. Structured objects — a body of working papers, a diagnostic framework, a set of cross-reference requirements — provide the material around which frame-led interaction can operate.
This has a direct implication for how continuity-critical environments should be designed. A research programme, a diagnostic system, or a continuity architecture that is itself structurally organised provides a natural anchor for productive human–LLM coupling. The structure of the object constrains the interaction in ways that preserve frame construction authority.
3.3 The Division of Labour Must Be Explicit
Productive coupling degrades when the division of labour becomes ambiguous — when it is unclear whether the human or the LLM is generating interpretive frames, making priority judgements, or deciding what matters. Explicit role separation — the human decides what to do, the LLM does it — maintains the conditions for irreducibility. This is the cognitive-layer analogue of the Guardian/Orchestrator separation in TN-002: one layer holds authority, the other holds optimization capability, and the boundary between them is enforced structurally.
A continuity-critical environment must not assume that role separation is maintained by individual vigilance. It must be structurally enforced — by the design of the interaction, the structure of the analytical object, and the explicit governance of what each participant contributes.
3.4 Temporal Continuity of the Analytical Object
The shared cognitive space is temporally bounded. Each interaction session constitutes a window during which the coupling is active. Between sessions, the human retains their interpretive frame (modified by the interaction); the LLM retains nothing without explicit architectural support. This asymmetry has a design consequence: the analytical object itself must carry the continuity that the LLM cannot. A structured, versioned, persistently accessible body of work — such as a research programme with explicit cross-references and governance documentation — functions as the memory layer that allows productive coupling to resume across sessions without frame reconstruction from zero.
This is precisely what a well-structured research archive provides: not merely a record of outputs, but a cognitive scaffold that makes the next session's coupling possible at the same level of analytical depth as the last.
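One minimal way to realise this memory-layer role is to make the analytical object itself explicitly serialisable, so that a new session can begin from the object rather than from a reconstructed frame. The record format and field names below are illustrative assumptions, not a specification.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AnalyticalObject:
    """A structured, versioned analytical object that carries the
    cross-session continuity the LLM itself cannot (fields illustrative)."""
    identifier: str
    version: str
    frame_summary: str  # the human-authored interpretive frame, stated explicitly
    cross_references: list = field(default_factory=list)  # e.g. ["SP-005 v1.1", "WP-003"]
    open_questions: list = field(default_factory=list)    # where the last session stopped

    def session_preamble(self) -> str:
        """Serialize the object so a new session can resume at depth,
        without rebuilding the frame from zero."""
        return json.dumps(asdict(self), indent=2)

obj = AnalyticalObject(
    identifier="CN-002",
    version="draft-3",
    frame_summary="Shared cognitive space: S(t) = H(t) ∩ L(t); human holds frame authority.",
    cross_references=["SP-005 v1.1", "WP-003"],
    open_questions=["Formal conditions for irreducibility"],
)
preamble = obj.session_preamble()
```

Note that the frame summary is human-authored by construction: the scaffold persists the human's frame rather than the LLM's, which is what keeps resumption frame-led.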
Research Programme: From Model to Evidence
The model presented in §02 and the design posture in §03 are theoretical. The empirical questions they generate are resolvable in principle, and this section specifies them as a research programme. The programme proceeds in B → C → A order: theoretical development first, design specification second, empirical validation third.
4.1 Theoretical Development (B)
The shared cognitive space model requires formal development in at least three directions. First, the conditions for irreducibility need a more precise characterisation than provided here — specifically, what properties of the analytical object, the human's prior knowledge, and the interaction structure are necessary and sufficient for irreducible output. Second, the relationship between S(t) and the temporal decision capacity construct from WP-003 deserves formal exploration: if institutional decision capacity is a function of the temporal intersection of governance action and physical system timelines, is individual cognitive capacity in complex analytical tasks analogously a function of the temporal intersection of human frame-construction and LLM formalization? Third, the degradation pathways — from productive coupling to framing externalization — need to be modelled formally, including the conditions under which F transitions from 0 to 1 in iterative interaction.
4.2 Design Specification (C)
The design posture in §03 generates specific, testable design requirements for environments intended to support productive human–LLM coupling. These include: structural requirements for analytical objects (versioning, cross-referencing, governance documentation); interaction design requirements (role separation enforcement, frame-led vs. frame-seeking interaction detection); and session continuity requirements (what must be preserved across sessions to allow coupling to resume at depth). Each of these can be specified precisely enough to permit comparative evaluation across different environments.
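As a sketch of what "specified precisely enough to permit comparative evaluation" could look like, the three requirement families above can be encoded as a scoreable checklist. The requirement names and the fraction-satisfied scoring rule are assumptions introduced here for illustration only.

```python
# Illustrative encoding of the three requirement families as a checklist.
REQUIREMENTS = {
    "analytical_object": ["versioning", "cross_referencing", "governance_documentation"],
    "interaction_design": ["role_separation_enforcement", "frame_led_detection"],
    "session_continuity": ["frame_summary_persistence", "open_question_tracking"],
}

def evaluate_environment(features: set[str]) -> dict[str, float]:
    """Score an environment per requirement family (fraction satisfied),
    permitting comparative evaluation across environments."""
    return {
        family: sum(f in features for f in reqs) / len(reqs)
        for family, reqs in REQUIREMENTS.items()
    }

scores = evaluate_environment({"versioning", "cross_referencing", "role_separation_enforcement"})
```

Even this toy scoring makes environments comparable along the note's three axes, which is all the design-specification step requires.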
4.3 Empirical Validation (A)
The primary empirical case available is interaction sessions between a human with a developed analytical framework and an LLM directed at that framework. The relevant observational question is not "what did the LLM produce?" but "what is the structure of the cognitive division of labour, and does the output exhibit properties that neither participant would have produced independently?" Interaction logs from sessions of the kind that generated this note provide raw material for this analysis, with appropriate methodological care about the circularity of using the model to analyse the data that motivated it.
A secondary empirical question concerns longitudinal effects: does sustained engagement in frame-led human–LLM interaction alter the human's independent frame-construction capacity over time? This is the positive analogue of SP-006's degradation hypothesis — augmentation rather than substitution. It is equally open empirically, and equally important for the design of continuity-critical environments.
What This Note Does Not Claim
This note does not claim that human–LLM cognitive coupling is generally productive. The frame-seeking interaction pathway described in §3.1 is the more common case, and the framing externalization analysis of SP-005 v1.1 applies to it. The conditions for productive coupling — structured analytical object, frame construction authority preserved, explicit role separation — are demanding and are not met by most LLM use.
This note does not claim that the shared cognitive space model is empirically validated. It is a theoretical construct motivated by observable interaction patterns. Its validation requires the empirical programme described in §4.3.
This note does not claim that productive coupling is cognitively safe in the long run. The longitudinal question from SP-006 applies here too: sustained frame-led interaction with an LLM may augment independent frame-construction capacity, leave it unchanged, or erode it in ways not yet detectable. The design posture in §03 is conservative with respect to this uncertainty — it is designed to preserve frame construction authority regardless of the trajectory question's resolution.
This note should be revised if: (1) empirical analysis of interaction sessions demonstrates that output does not exhibit properties irreducible to either participant under the conditions specified, falsifying the shared cognitive space model; (2) formal theoretical development shows that the model collapses into an existing framework (distributed cognition, tool use, or cognitive offloading) without remainder; or (3) longitudinal evidence resolves the trajectory question from SP-006 in either direction, requiring revision of the design posture in §03.
Scope and Limits
This note addresses the human–LLM interface as a cognitive environment, not as a technical system. It does not address implementation architectures, specific LLM systems, or the computational properties of language models. The model is agnostic about the underlying mechanism by which LLMs produce outputs — the relevant property is the functional division of labour, not the mechanism that makes it possible.
The note focuses on dyadic interaction — one human, one LLM, one analytical object. Multi-human or multi-LLM configurations raise additional questions not addressed here. The continuity-critical applications most relevant to ACI's domains (energy infrastructure diagnostics, institutional decision support, situational awareness systems) are primarily dyadic in their analytical core, making this the appropriate starting scope.