Decision Capacity as a System Invariant Under Compound Stress
This paper introduces Continuity Computing as a formal analytical category: distributed systems whose primary invariant is the preservation of decision capacity under compound stress, rather than the maximisation of performance under normal conditions.
We argue that as infrastructure complexity increases and disruption durations extend, a structural selection pressure emerges that favours systems capable of sustained degraded operation over systems optimised for peak efficiency. This pressure is independent of design intent — it operates on systems that encounter it regardless of whether they were built with continuity as an objective.
The paper formalises this argument through a control-theoretic model of institutional decision capacity (adapted from IRCM v1.0), identifies five structural failure modes of continuity-oriented systems, and establishes the relationship between computational continuity and the Institutional Termination Time (ITT) framework of WP-003. The resulting vocabulary grounds the architectural specification in TN-002.
WP-001 establishes that continuity risk is not adequately characterised by capacity metrics alone. WP-003 establishes that institutions fail not when resources are exhausted but when decision capacity ceases to be causally relevant to outcomes. WP-004 extends this to technical systems: the structural variables that determine recovery capacity are variation, redundancy, and recovery time — not installed power.
This paper extends the argument one layer further: the computational infrastructure on which decisions are made is itself subject to the same failure dynamics as the systems it monitors. A decision layer that depends on centralised infrastructure inherits the fragility of that infrastructure. Under compound stress, the monitoring system and the system being monitored fail together.
The implication is architectural: decision-capable systems under stress conditions require a different design invariant than decision-capable systems under normal conditions. This invariant is duration — the ability to sustain function across a disruption window — not throughput, latency, or peak performance.
Continuity Computing: Distributed computational systems whose primary design invariant is the preservation of decision capacity under compound stress and sustained disruption — as distinct from systems designed to maximise performance, throughput, or efficiency under nominal operating conditions.
Three properties follow from this definition. First, Continuity Computing systems are evaluated on worst-case endurance, not average-case performance. Second, they are designed to operate in degraded isolation — without guaranteed access to external infrastructure. Third, decision outputs must remain auditable under stress: the system must be able to account for its decisions when external verification is unavailable.
These properties distinguish Continuity Computing from adjacent categories. Edge computing addresses latency and bandwidth; Continuity Computing addresses duration and isolation. Fault-tolerant computing addresses recovery from component failure; Continuity Computing addresses sustained function under environmental compound stress. Resilient computing addresses the ability to bounce back; Continuity Computing addresses the ability not to stop.
Continuity Computing is not a design school that emerges from intention. It emerges from adaptive pressure. Three structural pressures are identified.
| Pressure | Mechanism | WP-004 Variable | Observable indicator |
|---|---|---|---|
| P1 Infrastructure Complexity | Interdependence increases failure propagation. Each additional dependency creates a new failure pathway. Systems that buffer dependencies outperform systems that expose them. | Variable II · Redundancy | Increasing cascade frequency in critical infrastructure incidents. |
| P2 Duration Risk | Disruption durations are extending beyond the endurance envelope of systems designed for rapid recovery. Fast recovery assumes short disruptions. When disruptions are long, persistence becomes the relevant capability. | Variable III · Recovery Time | WP-001 Black Period concept; DA-001 S1 signal extending beyond single-hour events. |
| P3 Decision Load Concentration | Centralised decision infrastructure becomes a single point of failure under stress. Systems that distribute decision capacity reduce their exposure to this failure mode. | Variable I · Variation | WP-003 ITT precursor patterns; DA-001 S4 institutional substitution signal. |
These pressures operate independently of whether any actor intends to build a Continuity Computing system. A system exposed to all three pressures that survives will have acquired Continuity Computing properties — whether by design or by selection. A system that has not acquired them will not survive extended compound stress events.
The following model formalises decision capacity as a bounded state variable subject to two interacting stressors: structural drift and authority concentration. It is adapted from IRCM v1.0 and applied here to computational decision systems rather than governance institutions — though the structural isomorphism between the two is itself a finding of this paper (§06).
State:
M(t) ∈ [0,1] Decision capacity / legitimacy of decision output
Inputs:
d(t) ∈ [0,1] Structural drift (degradation of redundancy, connectivity, sensor integrity)
r(t) ∈ [0,1] Authority concentration (degree of centralisation of decision pathways)
Parameters:
α Drift sensitivity
β Concentration penalty (applied when r(t) > r_c)
γ Recovery inertia (rate at which capacity recovers toward 1.0)
r_c Structural safeguard threshold (constitutional / architectural limit)
M_crit Critical capacity floor (below which outputs enter advisory mode)
Discrete update (Δt = 1):
M_{k+1} = clamp[0,1]( M_k + γ(1 − M_k) − α·d_k − β·max(0, r_k − r_c) )
The recovery term γ(1 − M_k) encodes asymmetric recovery: capacity is lost rapidly under stress but restored only gradually.
Fixed-point for constant inputs:
M* = 1 − (α·d + β·max(0, r − r_c)) / γ
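The fixed point follows by setting M_{k+1} = M_k = M* in the update rule, assuming the clamp is inactive at equilibrium; a brief sketch in the model's notation:

```latex
\begin{align*}
M^* &= M^* + \gamma(1 - M^*) - \alpha d - \beta\,\max(0,\, r - r_c) \\
\gamma(1 - M^*) &= \alpha d + \beta\,\max(0,\, r - r_c) \\
M^* &= 1 - \frac{\alpha d + \beta\,\max(0,\, r - r_c)}{\gamma}
\end{align*}
```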
Setting M* = M_crit yields the safe concentration ceiling as a function of drift:
r_crit(d) = r_c + ( γ(1 − M_crit) − α·d ) / β
Interpretation:
— As drift d increases, r_crit decreases: a more degraded system can tolerate less centralisation.
— As recovery capacity γ decreases, r_crit decreases: slower recovery demands greater distribution.
— This boundary is a diagnostic instrument, not a control law. It identifies when the combination of drift and concentration places the system below the critical capacity floor.
The model produces two operating regimes. When M(t) ≥ M_crit and r(t) ≤ r_c, the system operates in authoritative mode: decisions are produced with full capacity and are causally effective. When M(t) < M_crit or r(t) > r_c, the system enters advisory mode: outputs are flagged as capacity-constrained and require external acknowledgement before escalation.
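The update rule, its fixed point, the safe concentration ceiling, and the two operating regimes can be sketched directly from the equations above. All parameter values below are hypothetical placeholders for illustration; the paper deliberately leaves calibration open.

```python
# Illustrative sketch of the M(t) model. Parameter values are hypothetical
# placeholders, not calibrated values (calibration is out of scope here).

def clamp01(x: float) -> float:
    """Clamp a value to the unit interval [0, 1]."""
    return max(0.0, min(1.0, x))

def step(M, d, r, alpha, beta, gamma, r_c):
    """One discrete update: M_{k+1} = clamp[0,1](M_k + γ(1−M_k) − α·d_k − β·max(0, r_k − r_c))."""
    return clamp01(M + gamma * (1.0 - M) - alpha * d - beta * max(0.0, r - r_c))

def fixed_point(d, r, alpha, beta, gamma, r_c):
    """Unclamped fixed point for constant inputs: M* = 1 − (α·d + β·max(0, r − r_c)) / γ."""
    return 1.0 - (alpha * d + beta * max(0.0, r - r_c)) / gamma

def r_crit(d, alpha, beta, gamma, r_c, M_crit):
    """Safe concentration ceiling: r_crit(d) = r_c + (γ(1 − M_crit) − α·d) / β."""
    return r_c + (gamma * (1.0 - M_crit) - alpha * d) / beta

def mode(M, r, M_crit, r_c):
    """Authoritative when M ≥ M_crit and r ≤ r_c; otherwise advisory."""
    return "authoritative" if (M >= M_crit and r <= r_c) else "advisory"

# Hypothetical parameters for illustration only.
alpha, beta, gamma, r_c, M_crit = 0.3, 0.5, 0.2, 0.6, 0.5

# Iterate the update under constant moderate drift (d = 0.1) and
# over-threshold concentration (r = 0.7 > r_c = 0.6).
M = 1.0
for _ in range(200):
    M = step(M, d=0.1, r=0.7, alpha=alpha, beta=beta, gamma=gamma, r_c=r_c)

# The trajectory settles at the analytic fixed point; because r > r_c,
# the system sits in advisory mode despite M remaining above M_crit.
print(round(M, 3), round(fixed_point(0.1, 0.7, alpha, beta, gamma, r_c), 3))
print(mode(M, 0.7, M_crit, r_c))
```

Note how the advisory classification here is triggered by the concentration safeguard alone: the regime boundary is a conjunction of both conditions, so distribution constraints bind even when raw capacity is adequate.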
The IRCM model was originally developed for institutional resilience analysis. Its application here to computational decision systems rests on the structural isomorphism identified in §06: both institutional and computational decision layers exhibit drift, concentration, and asymmetric recovery. The parameter values (α, β, γ, r_c, M_crit) are not calibrated in this paper — calibration requires domain-specific measurement and is outside the scope of a structural working paper. Readers applying the model should treat it as a diagnostic instrument requiring empirical parameter estimation for any specific system.
Five structural failure modes of Continuity Computing systems are identified. Each corresponds to a deterioration pathway in the M(t) model.
WP-003 defines Institutional Termination Time (ITT) as the point at which decision capacity ceases to be causally relevant to outcomes — not because resources are exhausted, but because the physical decision window and the institutional action horizon have become disjoint. The ITT framework was developed for governance institutions. This paper argues that the same structure applies to computational decision systems.
A computational decision system has an effective decision window: the interval during which a decision output can influence the system state it addresses. It has an action horizon: the interval within which the decision layer can produce a verified, auditable output. When the action horizon exceeds the decision window — because drift has slowed the system, because concentration has reduced its pathways, or because isolation has removed its data sources — the system retains the formal structure of a decision system while having lost its causal relevance. This is computational ITT.
Computational ITT is not a failure of computation. It is a failure of temporal alignment between the computational layer and the physical system it is intended to influence. The correct diagnostic question is not "is the system producing outputs?" but "are the outputs reaching their decision window?"
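The diagnostic question above reduces to a temporal comparison. A minimal sketch, with hypothetical function and parameter names and illustrative figures:

```python
# Minimal sketch of the computational ITT diagnostic described above.
# Names and example figures are illustrative assumptions, not a specification.

def computational_itt_reached(decision_window_s: float, action_horizon_s: float) -> bool:
    """Computational ITT: the action horizon (time to produce a verified,
    auditable output) exceeds the decision window (interval in which that
    output can still influence the system state it addresses)."""
    return action_horizon_s > decision_window_s

# A system can be fully functional yet causally irrelevant: it still
# produces outputs, but they arrive after the window has closed.
print(computational_itt_reached(decision_window_s=300.0, action_horizon_s=450.0))  # True
print(computational_itt_reached(decision_window_s=300.0, action_horizon_s=120.0))  # False
```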
The M(t) model operationalises this: M_crit is the minimum decision capacity required to maintain causal relevance. When M(t) falls below M_crit, the system enters advisory mode — not because it has stopped functioning, but because its outputs cannot be certified as causally effective without external acknowledgement. This is the computational analogue of WP-003's LR-Class B→C transition.
The structural isomorphism between institutional and computational ITT has a diagnostic implication: the same observable precursor signals apply to both. DA-001's S4 signal (institutional substitution — backup capacity normalised as primary) has a computational equivalent: decision systems designed for contingency operation that are normalised into routine primary decision roles without addressing the underlying capacity deficit that made them necessary.
The central claim of WP-001 — that power does not equal persistence — applies to computation as directly as it applies to energy systems. A computational system with high instantaneous throughput but no endurance architecture is not a Continuity Computing system. It is a system whose decision capacity degrades to zero when its power supply, network connectivity, or data sources are disrupted for longer than its buffer duration.
Duration, in the computational context, has four components:
| Component | Description | Failure condition | Continuity requirement |
|---|---|---|---|
| D1 Power endurance | Duration of operational power availability without external supply. | Grid failure, supply disruption | Local energy independence across Black Period envelope (WP-001) |
| D2 Data endurance | Duration of decision-relevant data availability without network access. | Connectivity loss, infrastructure failure | Locally held sensor data, cached state, offline inference capability |
| D3 Identity endurance | Duration of verifiable decision authority without central attestation infrastructure. | Trust infrastructure unavailability | Local trust anchor with deferred attestation reconciliation |
| D4 Audit endurance | Duration of decision auditability without external verification systems. | Logging infrastructure loss, synchronisation failure | Local immutable decision log with post-event reconciliation protocol |
A configuration that addresses D1 alone — local power — retains energy but loses decision validity when network connectivity fails. A configuration that addresses D1–D3 but not D4 may retain operational capability but lose the auditability required for the decision outputs to carry institutional weight. Continuity Computing requires all four components to be addressed across the same duration envelope.
The duration envelope is not a fixed number. It is determined by the compound stress profile of the operating environment — specifically, the expected Black Period length under the WP-001 framework. A system that cannot sustain all four duration components across the worst-case Black Period is not duration-capable for that environment.
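The all-four requirement can be expressed as a minimum over the components, checked against the worst-case Black Period. A sketch, assuming hypothetical component names and hour values:

```python
# Sketch of the duration-envelope check implied above: a system is
# duration-capable only if every component D1–D4 spans the worst-case
# Black Period. Component keys and hour values are illustrative assumptions.

def duration_capable(endurance_hours: dict, black_period_hours: float) -> bool:
    """Effective endurance is the minimum over D1–D4; all four must be assessed."""
    required = {"D1_power", "D2_data", "D3_identity", "D4_audit"}
    assert required <= endurance_hours.keys(), "all four components must be assessed"
    return min(endurance_hours[k] for k in required) >= black_period_hours

# A configuration strong on power but weak on audit endurance fails the
# check: the D4 component caps the whole envelope at 24 hours.
config = {"D1_power": 96.0, "D2_data": 72.0, "D3_identity": 72.0, "D4_audit": 24.0}
print(duration_capable(config, black_period_hours=72.0))  # False
```

The minimum-over-components structure is the point: improving any single component beyond the envelope buys nothing while another component falls short of it.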
In scope:
- Define Continuity Computing as a formal analytical category distinct from adjacent fields.
- Provide a control-theoretic model of decision capacity dynamics under drift and concentration stress.
- Identify five structural failure modes and map them to the M(t) model.
- Establish the relationship between computational decision capacity and the WP-003 ITT framework.
- Define the four components of computational duration and their failure conditions.
- Ground the architectural vocabulary for TN-002.

Out of scope:
- Calibrate model parameters (α, β, γ, r_c, M_crit) for any specific system — this requires domain-specific empirical measurement.
- Specify which technologies, protocols, or implementations satisfy the duration components — this is TN-002's scope.
- Evaluate the decision capacity of any existing system — diagnostic application requires instrument development beyond this paper.
- Predict failure timing for any specific system or environment.
- Replace engineering feasibility assessment, security evaluation, or operational risk analysis.
This paper formalises the structural observations recorded in DN-V0 (convergence observation), DN-V1 (selection pressures), and DN-V2 (failure modes). The DN documents are internal archival records. This paper is the first public working paper in Domain D-5. The DN documents are not cited as sources — they are the prior working notes from which this paper develops. Where DN-V2 identifies failure modes descriptively, this paper maps them to the formal M(t) model.
DN-V0 identified a convergence across cognitive, computational, physical, and institutional continuity domains. This paper addresses the computational layer. The broader convergence across all layers is acknowledged but outside the scope of this working paper.
The analytical claims of this paper should be considered falsified if any of the following conditions is demonstrated through empirical evidence:
| Condition | Claim falsified |
|---|---|
| FC-1 Centralised computational systems demonstrate sustained decision capacity across compound stress events at comparable duration to distributed architectures, without the duration components specified in §07. | The duration architecture claim — that distributed, locally anchored decision systems are necessary for compound stress environments. |
| FC-2 The M(t) model fails to produce bounded trajectories under empirical parameter estimation for documented system degradation cases, or its fixed-point structure is shown to be structurally incorrect. | The formal model's applicability to computational decision capacity dynamics. |
| FC-3 Documented compound stress events show that computational decision capacity degradation does not follow the asymmetric recovery pattern (slow recovery, fast decay) encoded in the γ(1 − M_k) term. | The asymmetric recovery assumption — a core structural feature of the model. |
| FC-4 The ITT framework of WP-003, when applied to computational decision systems, produces systematically different diagnostic outputs than when applied to institutional decision systems under comparable structural conditions, undermining the isomorphism claim of §06. | The structural isomorphism between institutional and computational ITT. |
Domain D-5 · Continuity Computing · WP-006 v1.0 · ACI · March 2026
Decision capacity is not a property of hardware. It is a property of temporal alignment between the computational layer and the physical system it is intended to influence.