Aether Continuity Institute Working Paper  ·  No. 013
Year  2026
Version  1.0
Series  WP
Open Working Draft
ACI Working Paper No. 013 · Domain D-3 · D-5

Distributed Authorship and Structural Coherence in Human–LLM Research Systems

Frame authority, the anchor-document model, and a three-level coherence hierarchy for n-human, m-LLM analytical environments

Cite as: Aether Continuity Institute (ACI), Working Paper No. 013, 2026.
Available at: https://aethercontinuity.org/papers/wp-013-distributed-authorship-coherence.html
v1.0 — Initial working draft. Empirical validation programme open.
D-3 · Institutional Decision Capacity
D-5 · Continuity Computing
Abstract

CN-002 established the theoretical model of the human–LLM shared cognitive space for the dyadic case: one human, one LLM, one structured analytical object. This paper extends that model to the general case of n humans and m LLMs working on the same analytical object across sessions. The extension introduces a new failure mode — frame authority fragmentation — that does not appear in the dyadic case and is not captured by existing multi-agent or institutional decision capacity frameworks.

The paper develops three propositions. First, coherence in distributed human–LLM systems is not a single property but a hierarchy of three distinct levels — terminological, frame, and output coherence — each of which is a necessary condition for the level above it. Second, frame authority in n-participant systems can be distributed through three structural models; this paper focuses on the anchor-document model (Model C), in which a structured governance document replaces personal authority hierarchies as the primary coherence mechanism. Third, the anchor-document model is falsifiable: specific observable conditions would demonstrate that document-based frame authority does not prevent coherence fragmentation. An empirical research programme based on these falsification conditions is specified.

§ 01

The Problem: From Dyad to Distributed System

CN-002 identified the structural conditions under which a human–LLM interaction produces output irreducible to either participant: the human retains frame construction authority (F = 0), the LLM operates as implementation layer, and the analytical object is sufficiently structured to permit a meaningful division of cognitive labour. Under these conditions, the shared cognitive space S(t) = H(t) ∩ L(t) is non-empty and productive coupling is possible.

This model applies cleanly to the dyadic case. When a single human brings a developed analytical framework to a single LLM directed at that framework, the conditions for productive coupling are definite and observable. Frame construction authority is unambiguous — there is only one human to hold it. The analytical object is defined by that human's prior work. The division of labour is clear: the human decides what matters, the LLM implements.

The extension to n humans and m LLMs introduces a set of problems that the dyadic model cannot address. When multiple humans bring different analytical perspectives to the same object, frame authority is no longer unambiguous — it must be distributed somehow, and the mechanism of distribution determines whether coherent output remains possible. When multiple LLMs operate on the same object with potentially different formalization tendencies, their outputs may compound divergence rather than reinforce a shared frame. When the system operates across sessions and the same object is worked on by different subsets of participants at different times, the temporal coupling condition S(t) = ∩ Hᵢ(t) ∩ Lⱼ(t) becomes increasingly difficult to maintain.

The central question of this paper: under what structural conditions does a distributed human–LLM system maintain coherence across participants and sessions — and what observable properties distinguish coherent from incoherent systems?

1.1 A New Failure Mode

The failure mode specific to distributed systems is frame authority fragmentation: the condition in which different participants operate with different implicit interpretive frames, none of which is dominant, and the system produces outputs that reflect the fragmentation rather than a coherent analytical contribution. This is distinct from the framing externalization failure mode described in SP-005 v1.1. Framing externalization is a dyadic failure: the human cedes frame authority to the LLM. Frame authority fragmentation is a distributed failure: frame authority is claimed by multiple participants simultaneously, producing incoherence rather than LLM dominance.

Frame authority fragmentation is not easily visible in the output. A system in this failure mode still produces text, still generates documents, still accumulates a body of work. The failure is structural: the output cannot be understood as a coherent contribution to a shared analytical project because there is no shared analytical project — there are n separate projects that happen to share a file system.

1.2 Relation to Existing Research

The multi-agent LLM literature has developed sophisticated protocols for coordinating LLM-to-LLM coherence. MetaGPT's Standard Operating Procedures (SOPs) prevent error propagation by modularizing task distribution. CIR3's collective intentional reading mechanism maintains coherence in multi-LLM question-answer generation. CollabStory examines sequential multi-LLM authorship and its effect on narrative coherence. These approaches address coordination among LLM instances but do not model the distribution of frame authority among human participants — which is the specific problem this paper addresses.

The LLM-HAS (human-agent systems) literature models the human primarily as a feedback source, control point, or quality validator. These frameworks assume a single human in a supervisory role and do not model the coordination problem that arises when multiple humans, each with legitimate frame-construction authority, contribute to the same analytical object.

§ 02

The Coherence Hierarchy

Coherence in distributed human–LLM systems is not a single property. This paper proposes a three-level hierarchy in which each level is a necessary condition for the level above it. The hierarchy is not merely descriptive — it has a structural implication: coherence can only be lost from the top down, and it can only be built from the bottom up.

2.1 Level 1 — Terminological Coherence (TC)

Terminological coherence is the condition in which the same concept carries the same operational meaning for all participants. This is the minimum necessary condition for any productive interaction. Without it, participants cannot verify whether they are in agreement or disagreement on any substantive point — apparent consensus may conceal divergent meanings, and apparent disagreement may reflect only terminological differences.

TC is the easiest level to measure and the easiest to lose. In a system with no explicit terminology governance, concepts drift across participants as each uses existing terms in new contexts and as LLMs generate terminologically adjacent but meaningfully distinct formulations. TC is also the only level that can in principle be restored purely through explicit negotiation — participants can agree on definitions without necessarily agreeing on analytical frames.

2.2 Level 2 — Frame Coherence (FC)

Frame coherence is the condition in which participants share the same implicit structure for organizing what the analytical object is, what questions it addresses, and how findings within it connect to each other. TC is necessary but not sufficient for FC: participants can use the same terminology while holding fundamentally different organizing frames. A team that agrees on what "decision capacity" means may still disagree on whether it is a property of individuals, institutions, or systems — and this disagreement is a frame-level, not a terminology-level, difference.

FC is harder to measure because frames are often implicit. Participants in a frame-incoherent system may not recognize the incoherence until they attempt to produce a joint output — at which point the incompatibility becomes visible. The FAI (Frame Authority Index) developed in §03 provides an observable proxy for FC.

2.3 Level 3 — Output Coherence (OC)

Output coherence is the condition in which the system's output is irreducible to the sum of individual participant contributions — the property identified in CN-002 as the defining characteristic of productive human–LLM coupling. TC and FC are necessary but not sufficient for OC: a system can maintain shared terminology and shared frames while still producing output that is simply the concatenation of individual contributions rather than an irreducible joint product.

OC requires not only that participants share a frame but that their contributions are genuinely interdependent — that each participant's output is shaped by the others' contributions in ways that alter what they would have produced independently.

Coherence hierarchy:
OC ⇒ FC ⇒ TC

(TC is necessary for FC; FC is necessary for OC)

For a system with n human participants H₁...Hₙ and m LLMs L₁...Lₘ:

TC: ∀i,j: definition(concept, Hᵢ) ≅ definition(concept, Hⱼ)
FC: ∀i,j: frame(O, Hᵢ, t) ≅ frame(O, Hⱼ, t) across sessions
OC: output(S) ≠ Σ output(Hᵢ) + Σ output(Lⱼ)

Shared cognitive space: S(t) = ∩ Hᵢ(t) ∩ Lⱼ(t)
S(t) is non-empty only when TC and FC are maintained.

Structural implication

Coherence loss is top-down: OC fails first (output becomes reducible), then FC fails (frames diverge), then TC fails (terminology fragments). Coherence restoration is bottom-up: TC must be restored before FC can be rebuilt, and FC must be rebuilt before OC becomes possible. Interventions that target OC directly without restoring TC and FC will not succeed.
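The loss and restoration ordering can be sketched as a small check. This is a hypothetical illustration; the status-dictionary representation and function names are assumptions, not part of the model:

```python
# Coherence levels ordered bottom-up; each level is necessary for the next.
LEVELS = ["TC", "FC", "OC"]

def effective_levels(status):
    """Return the levels that actually hold, enforcing the hierarchy:
    a level counts as held only if every level below it also holds."""
    held = []
    for level in LEVELS:
        if not status.get(level, False):
            break  # a failure here invalidates everything above it
        held.append(level)
    return held

def restoration_order(status):
    """Levels still to rebuild, lowest first (restoration is bottom-up)."""
    return LEVELS[len(effective_levels(status)):]

# A system reporting OC while TC has failed is incoherent by definition:
print(effective_levels({"TC": False, "FC": True, "OC": True}))   # []
print(restoration_order({"TC": True, "FC": False, "OC": False})) # ['FC', 'OC']
```

The `break` encodes the structural implication directly: a claimed higher level without its lower levels does not count as coherence at all.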

§ 03

Frame Authority Distribution: The Anchor-Document Model

In the dyadic case, frame authority is unambiguous: the human holds it. In n-participant systems, frame authority must be distributed. Three structural models are possible: hierarchical (one designated human holds primary frame authority), egalitarian (all participants hold equal authority, with explicit coordination protocols), and anchor-document (a structured governance document holds frame authority, with human participants operating within the constraints it specifies).

This paper focuses on the anchor-document model (Model C) because it is the only model that scales without a fixed personal hierarchy. Hierarchical authority requires a stable designated authority figure whose participation cannot be assumed across sessions. Egalitarian authority requires explicit real-time coordination that becomes prohibitively costly as n grows, since pairwise frame negotiation alone scales with the number of participant pairs. The anchor-document model delegates frame authority to a persistent artifact that all participants — human and LLM — can access, query, and be audited against.

3.1 What an Anchor Document Must Contain

For an anchor document to function as the primary frame authority mechanism, it must satisfy four requirements. It must define the analytical object — what the system is studying, what questions it addresses, what is in and out of scope. It must specify terminology — the operational definitions of core concepts that all participants are expected to use consistently. It must articulate the frame — the implicit structure that organizes how findings connect to each other and to the analytical object. And it must specify what participants may and may not change autonomously — the governance layer that prevents individual participants from altering the frame without collective decision.

A document that satisfies the first two requirements but not the third and fourth functions as a reference document, not an anchor document. The governance layer is what makes the document an authority rather than a resource.
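The four requirements of §3.1 can be sketched as a minimal schema. Field names and the `is_anchor()` rule are illustrative assumptions, not a specification of ACI-STRUCTURE.md:

```python
from dataclasses import dataclass, field

@dataclass
class AnchorDocument:
    # The four requirements of §3.1; field names are assumptions.
    analytical_object: str                            # what the system studies, scope
    terminology: dict = field(default_factory=dict)   # term -> operational definition
    frame: str = ""                                   # how findings connect
    governance: dict = field(default_factory=dict)    # operation -> "fixed" | "permitted"

    def is_anchor(self) -> bool:
        """A document with object and terminology but no frame or
        governance layer is a reference document, not an anchor (§3.1)."""
        return bool(self.analytical_object and self.terminology
                    and self.frame and self.governance)

doc = AnchorDocument(
    analytical_object="ACI working-paper corpus",
    terminology={"decision capacity": "capacity to act effectively under pressure"},
    frame="findings connect through the D-3 core question",
    governance={"domain definitions": "fixed", "translation": "permitted"},
)
print(doc.is_anchor())  # True
```

The check makes the reference/anchor distinction mechanical: omit `frame` or `governance` and the same document fails `is_anchor()`.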

3.2 The Frame Authority Index (FAI)

The FAI provides an observable proxy for frame coherence at the participant level. It measures the degree to which frame construction authority remains with the human participants rather than being ceded to LLM instances.

FAI = w₁ · I(t) + w₂ · R(t) + w₃ · C(t)

I(t) = Initiative ratio:
  human-initiated new interpretive directions /
  all new interpretive directions in session

R(t) = Resistance ratio:
  human rejections of LLM-proposed frames /
  all LLM-proposed frames in session

C(t) = Structural control ratio:
  human-decided changes to analytical object /
  all changes to analytical object in session

w₁ + w₂ + w₃ = 1   (calibrated empirically; default: 1/3 each)
FAI ∈ [0, 1]
FAI → 1: frame authority maintained by human participants
FAI → 0: frame authority ceded to LLM instances

In the anchor-document model, the FAI is expected to be high across all human participants because the anchor document provides a structural constraint that directs LLM outputs toward the established frame. LLM instances operating within the anchor document's constraints are less likely to generate frame-displacing outputs, and human participants have an explicit reference for resisting such outputs when they occur.
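As a minimal sketch, the FAI can be computed directly from the three session ratios. Function and argument names are illustrative; the empty-denominator default of 1.0 is an assumption not specified in §3.2:

```python
def fai(human_initiated, total_initiatives,
        human_rejections, llm_frame_proposals,
        human_decided_changes, total_changes,
        weights=(1/3, 1/3, 1/3)):
    """Frame Authority Index: weighted sum of the three session ratios.
    An empty denominator defaults that ratio to 1.0 (no LLM-originated
    events means no authority was ceded) -- an assumption, not part of §3.2."""
    def ratio(num, den):
        return num / den if den else 1.0
    w1, w2, w3 = weights
    return (w1 * ratio(human_initiated, total_initiatives)       # initiative
            + w2 * ratio(human_rejections, llm_frame_proposals)  # resistance
            + w3 * ratio(human_decided_changes, total_changes))  # structural control

# Humans initiate 4 of 5 interpretive directions, reject 2 of 4
# LLM-proposed frames, and decide all 3 structural changes:
print(round(fai(4, 5, 2, 4, 3, 3), 3))  # 0.767
```

With the default equal weights the index is simply the mean of the three ratios, so it stays in [0, 1] by construction.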

3.3 The Anchor Document as Temporal Continuity Mechanism

CN-002 identified temporal continuity as a critical design requirement: between sessions, the human retains their evolving interpretive frame while the LLM retains nothing. The anchor document addresses this asymmetry directly. By externalizing the shared frame into a persistent, versioned artifact, it enables productive coupling to resume across sessions without requiring participants to reconstruct the shared frame from scratch. Each session begins with the anchor document as the shared context — the LLM instance is oriented to the current state of the analytical object, and human participants can verify that their frame remains consistent with the documented one.

This function is analogous to what WP-003 describes as institutional memory in the context of temporal decision capacity: the capacity to act effectively under pressure depends not only on having the right resources in the moment but on having preserved the interpretive structures that make those resources usable. An anchor document is the cognitive-layer equivalent of institutional memory — it preserves the frame across the gaps between sessions.

§ 04

Empirical Cases

Four cases are examined: ACI itself as the primary internal case, and three external cases drawn from the multi-agent and collaborative authorship literature. The cases are used to illustrate the coherence hierarchy and the conditions under which anchor-document-based frame authority succeeds or fails, not to validate the model empirically. Systematic empirical validation is specified in §06.

4.1 ACI as an Internal Case

Internal Case — Aether Continuity Institute

ACI's research programme has been developed through a series of human–LLM interaction sessions in which the human participant maintained frame construction authority while LLM instances performed formalization, translation, cross-referencing, and structural auditing. The analytical object — the body of working papers, supporting papers, and diagnostic assessments — grew from a single working paper to over thirty published documents across six research domains.

The anchor document (ACI-STRUCTURE.md) explicitly governs what LLM instances may and may not change autonomously: publication types, domain definitions, naming conventions, and cross-reference requirements are fixed; content production, translation, and formatting are permitted. This separation functions as a structural enforcement of frame authority — LLM instances cannot alter the frame without a documented human decision.

The coherence properties observable in ACI are: consistent terminology across documents (TC maintained), consistent domain framing across working papers authored in different sessions (FC maintained), and cross-document argument dependencies that would not exist without the iterative human-LLM coupling (OC evidenced but not yet independently validated).

4.2 MetaGPT: SOP-Based Role Protocols

External Case — MetaGPT (Hong et al., 2024)

MetaGPT formalizes role-based protocols by encoding Standard Operating Procedures (SOPs) that define each LLM agent's role through expert-level knowledge. Agents act as specialized operators who can verify each other's results, preventing error propagation through modularized task distribution.

Interpreted through the coherence hierarchy: MetaGPT's SOP structure functions as an anchor document at the LLM-to-LLM level — it defines terminology and task boundaries, achieving TC and partial FC among LLM instances. However, the human's role in MetaGPT is that of system designer rather than frame-construction participant. Once the SOPs are defined, the human exits the productive coupling. This is a different model from the anchor-document model developed here, in which humans remain active participants throughout.

4.3 CollabStory: Sequential Multi-LLM Authorship

External Case — CollabStory (Lee et al., 2025)

CollabStory examines multi-LLM sequential story generation, where up to five LLM agents write successive segments of a narrative. Each agent contributes without awareness of the others' generative processes, receiving only the prior text as context.

CollabStory provides a clear instance of a system without an anchor document. TC may be maintained within the shared narrative context, but FC is not structurally guaranteed — each LLM agent implicitly organizes the narrative around its own generative tendencies. The study finds coherence degradation as the number of agents increases, consistent with the prediction that S(t) = ∩ Lⱼ(t) narrows with more participants. This is the LLM-only analogue of the frame authority fragmentation failure mode.

4.4 Collaborative Gym: Asynchronous Human–LLM Task Collaboration

External Case — Collaborative Gym (Shao et al., 2024)

Collaborative Gym facilitates asynchronous interactions among humans, agents, and task environments across tasks including travel planning, data analysis, and academic writing. It evaluates both task outcomes and interaction quality.

Collaborative Gym represents a system where human frame authority is present but not structurally enforced. The human can intervene at any point but is not required to. In the tasks where human frame authority is exercised consistently, performance improves; in tasks where the human defers to the LLM agent, performance is more variable. This pattern is consistent with the FAI hypothesis: higher human initiative and resistance ratios correlate with higher output quality, even without an explicit anchor document.

§ 05

Design Principles for Anchor-Document Systems

The anchor-document model generates specific design requirements for systems intended to support coherent distributed human–LLM research. These are stated as structural requirements, not recommendations — systems that do not satisfy them should not be expected to maintain coherence at the FC and OC levels.

5.1 The Anchor Document Must Be Versioned and Accessible

An anchor document that is not versioned cannot function as a temporal continuity mechanism. If the document changes without a record of what changed and when, participants returning to the system after a gap cannot determine whether their prior frame is consistent with the current document. Versioning is not merely good practice — it is a structural requirement for the anchor-document model's temporal continuity function.

Accessibility means that all participants — human and LLM — can query the document at the start of each session. For LLM instances, this means the anchor document must be provided as explicit context, not assumed from prior interaction. For human participants, it means the document must be locatable, readable, and current.
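The session-start check this requirement implies can be sketched as follows; the version-log representation is an assumption:

```python
def changes_since(version_log, last_seen):
    """Entries after the participant's last-seen version, so a returning
    participant can verify their frame against what changed in the gap.
    Assumes last_seen appears in the log; otherwise nothing is reported."""
    seen = False
    pending = []
    for version, change in version_log:
        if seen:
            pending.append((version, change))
        if version == last_seen:
            seen = True
    return pending

log = [("1.0", "initial anchor document"),
       ("1.1", "terminology: 'decision capacity' definition tightened"),
       ("1.2", "governance: translation moved to 'permitted'")]
print(changes_since(log, "1.1"))  # only the 1.2 entry
```

An unversioned document makes this check impossible: there is no `last_seen` to compare against, which is precisely why versioning is a structural requirement rather than good practice.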

5.2 The Governance Layer Must Be Explicit

The anchor document's governance layer — the specification of what participants may and may not change autonomously — must be explicit rather than implied. An implied governance layer requires participants to infer what is and is not open for revision, which introduces frame authority ambiguity. An explicit governance layer removes this ambiguity by specifying categories of permitted and restricted operations directly.

This is the cognitive-layer analogue of the Guardian/Orchestrator role separation in TN-002: one layer holds authority over the frame, the other holds optimization capability within that frame, and the boundary between them is enforced structurally rather than by individual vigilance.

5.3 LLM Role Specialization Reduces Frame Arbitration Risk

In systems with multiple LLM instances, the risk that LLMs begin performing implicit frame arbitration — selecting among competing human frames rather than implementing within a shared frame — increases with the number of instances and the diversity of human participant frames. Explicit LLM role specialization (formalizer, auditor, cross-referencer) limits this risk by constraining each instance's operation to a defined function within the established frame.

Role specialization does not require different LLM models — it requires different operational constraints applied to the same model. A system in which one LLM instance is tasked with formalization and another with structural auditing, with explicit constraints on each, is more resistant to frame arbitration than a system in which any LLM instance can perform any operation.
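Role specialization can be sketched as an explicit per-role operation whitelist. Role names follow §5.3; the operation names are assumptions:

```python
# Per-role operation whitelist; roles from §5.3, operation names assumed.
ROLE_OPERATIONS = {
    "formalizer":       {"formalize", "translate"},
    "auditor":          {"structural_audit", "cross_reference_check"},
    "cross_referencer": {"cross_reference_check", "link_update"},
}

def permitted(role, operation):
    """An instance may perform only operations in its role's whitelist;
    frame arbitration appears in no whitelist, so it is always refused."""
    return operation in ROLE_OPERATIONS.get(role, set())

print(permitted("auditor", "structural_audit"))   # True
print(permitted("auditor", "frame_arbitration"))  # False
```

The constraint is operational, not model-specific: the same table can govern several instances of one model, which is exactly the point of §5.3.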

5.4 Coherence Level Monitoring Must Match Intervention Capacity

A system that monitors only output coherence — the highest level — cannot intervene effectively when coherence begins to fail at the terminological or frame level. By the time OC failure is visible, TC and FC may already be severely degraded, and restoration requires bottom-up reconstruction rather than targeted correction. Systems should monitor TC continuously (terminology drift is the earliest visible signal), FC at session boundaries (frame drift is visible in cross-session inconsistencies), and OC through periodic blind attribution testing.
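Continuous TC monitoring can be sketched as a definition diff against the anchor document. The normalized string comparison is a deliberately crude proxy; a real monitor would need semantic comparison:

```python
def terminology_drift(anchor_defs, participant_defs):
    """(participant, term) pairs whose working definition differs from the
    anchor document's -- the earliest observable coherence-loss signal.
    Exact string comparison is a crude stand-in for semantic equivalence."""
    drift = []
    for participant, defs in participant_defs.items():
        for term, definition in defs.items():
            anchored = anchor_defs.get(term)
            if anchored is not None and definition.strip().lower() != anchored.strip().lower():
                drift.append((participant, term))
    return drift

anchor = {"frame authority": "authority to decide what the object is and what matters"}
sessions = {
    "H1": {"frame authority": "authority to decide what the object is and what matters"},
    "H2": {"frame authority": "the right to edit any document"},
}
print(terminology_drift(anchor, sessions))  # [('H2', 'frame authority')]
```

Because TC drift is the earliest signal, a check of this shape belongs inside every session, while FC and OC checks run at session boundaries and audit intervals respectively.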

§ 06

Scope, Limits, and Falsification Conditions

This paper develops a theoretical framework for the anchor-document model of distributed human–LLM coherence. The empirical cases in §04 are illustrative, not validating. The falsification conditions below specify the observations that would require substantial revision of the framework.

6.1 Scope Limits

The framework applies to systems in which the analytical object is structured and persistent — a research programme, a diagnostic framework, a body of technical documentation. It does not apply to ephemeral interactions (single-session tasks without a persistent object), to systems where the analytical object is primarily quantitative (databases, simulation models), or to interactions where the human's primary contribution is data provision rather than frame construction. The Model C (anchor-document) focus excludes Models A and B from systematic treatment; a comparative analysis of all three models is a research programme item, not a finding of this paper.

6.2 Falsification Conditions

FC-1 — Hierarchy Falsification

If TC does not predict FC — that is, if systems with high terminological consistency routinely exhibit frame incoherence — the proposed hierarchy is false. TC and FC may be independent properties rather than hierarchically ordered ones, requiring a revised model.

FC-2 — FAI Falsification

If FAI does not correlate with output-level coherence as measured by the blind attribution test — that is, if high FAI does not predict that output is irreducible to individual participants — then frame authority is not the operative variable for OC. Something else determines whether output is irreducible, and the model requires revision.

FC-3 — Anchor Document Falsification

If systems with explicit anchor documents do not outperform systems without them on FC maintenance — that is, if the presence or absence of a governance document does not predict frame coherence across sessions — then the anchor-document model's core claim is false. Document-based frame authority does not function as claimed, and the mechanism requires re-examination.

FC-4 — Intersection Falsification

If coherence does not decrease with increasing n (number of participants) — that is, if S(t) = ∩ Hᵢ(t) ∩ Lⱼ(t) does not narrow as n grows — then the intersection model is incorrect. Coherence may be emergent rather than conjunctive, requiring a fundamentally different formal model.

FC-5 — Blind Attribution Falsification

If blind attribution of outputs to participants consistently succeeds at a rate significantly above chance (p < 0.05) in systems where the anchor-document model is correctly implemented — that is, if output remains reducible to individual participants even under optimal conditions — then OC as defined here may not be achievable in distributed human–LLM systems regardless of structural design. This would require revising the OC criterion itself.

6.3 Research Programme

The falsification conditions above define the empirical research programme. A suitable experimental platform would consist of a cloned structured analytical object (such as ACI's research repository) made available to n human participants and m LLM instances under controlled conditions. Three variants of the anchor-document model should be tested: with a full anchor document (governance layer explicit), with a partial anchor document (terminology defined but governance layer absent), and without an anchor document. FAI, TC, FC, and OC measures are recorded across sessions and compared across variants. The blind attribution test is administered by evaluators who have not participated in the sessions.

Pre-registration of analysis criteria before data collection is required to prevent the circularity risk identified in CN-002 §4.3 — the risk of using the model to analyse data that motivated the model. Evaluators should be blinded to the experimental condition (full/partial/no anchor document) during the attribution test.
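The three-variant design can be sketched as a pre-registered session plan. Condition and measure names follow §6.3; the record structure is an assumption:

```python
# Three conditions crossed with the per-session measures of §6.3.
CONDITIONS = ["full_anchor", "partial_anchor", "no_anchor"]
MEASURES = ["FAI", "TC", "FC", "OC"]

def session_plan(n_sessions):
    """One record per (condition, session); OC is scored by blind
    attribution, administered by non-participant evaluators."""
    return [{"condition": c, "session": s, "measures": list(MEASURES)}
            for c in CONDITIONS
            for s in range(1, n_sessions + 1)]

plan = session_plan(2)
print(len(plan))  # 6 records: 3 conditions x 2 sessions
```

Fixing the plan in a structure like this before data collection is one concrete form of the required pre-registration: the recorded measures and conditions cannot silently change mid-programme.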

§ 07

Relation to ACI Framework

WP-013 occupies a specific position within ACI's research programme. It extends CN-002's dyadic model to the distributed case, making it a direct theoretical successor. It applies the temporal decision capacity construct from WP-003 at the cognitive-system level rather than the institutional level — the anchor document performs the same function as institutional memory in WP-003's framework, preserving causal reach across the temporal gaps between action opportunities. It connects to WP-006's continuity computing architecture by specifying the cognitive-layer requirements that complement WP-006's computational-layer requirements.

The question of whether the failure modes identified here warrant a new research domain (D-7) or are adequately captured as an extension of D-3 remains open. This paper is written as a D-3 extension — institutional decision capacity applied to distributed human–LLM research systems. If the empirical programme specified in §06 produces findings that cannot be interpreted within D-3's core question ("under what conditions does governance lose causal influence over outcomes?"), a domain extension proposal would be warranted.

§ 08

Conclusion

Distributed human–LLM research systems face a failure mode — frame authority fragmentation — that does not appear in the dyadic case and is not addressed by existing multi-agent or institutional decision capacity frameworks. This failure mode is structural: it arises from the distribution of frame construction authority across participants and is not remedied by increasing LLM capability or human expertise.

The anchor-document model provides a structural mechanism for managing this failure mode without requiring a fixed personal authority hierarchy. By delegating frame authority to a versioned, accessible, governance-specified document, the model enables coherent distributed operation across participants and sessions. The three-level coherence hierarchy — terminological, frame, output — provides an observable, hierarchically ordered set of properties that can be monitored and restored.

The framework is falsifiable, and its falsification conditions are specified. Whether the anchor-document model functions as claimed is an empirical question. The answer will determine whether document-based governance is a sufficient mechanism for distributed human–LLM coherence or whether the problem requires a fundamentally different approach.

Version History
v1.0 · Mar 2026 · Initial working draft