OPEN FRAMEWORK · v1.0 · APRIL 2026

UNKNOWNS LAB · SYS.00 / FRAMEWORK.v1.0

The Decision Authority Framework

A discipline for governing critical decisions in the cyber and AI era.

Systems don't fail first. Decisions do.

This paper defines Decision Authority as a measurable discipline — distinct from incident response, governance-risk-compliance, crisis management, and tabletop rehearsal — and proposes four primitives by which organizations, auditors, insurers, and regulators can assess it.

VERSION · 1.0
PUBLISHED · April 2026
PAGES · 23
STATUS · Open for Comment

[§0]/PREFACE

About this document

This document is version 1.0 of the Decision Authority Framework, published by Unknowns Lab as a contribution to the emerging discipline of decision governance in high-consequence enterprises. It is written for boards, general counsel, chief information security officers, chief risk officers, regulators, insurers, and the analysts who serve them.

It is not a product brochure. It is not a methodology for hire. It is a public articulation of a category we believe will become as legible in the next decade as cybersecurity itself became between 2010 and 2020 — because the forcing functions that made cybersecurity a board-level concern are now operating on decision-making itself.

We welcome reference, critique, adoption, and adaptation. The four primitives introduced in §6 are offered as open definitions; practitioners who cite them are asked only to cite accurately. Where this document is adopted into regulatory guidance, underwriting methodology, or analyst taxonomy, Unknowns Lab will publish annotated revisions in subsequent versions.

This framework exists because the space between detection and decision is the least measured and most consequential failure surface in modern enterprises.

PART I · THE PROBLEM
[§1]/THE PREAMBLE

Why This Document Exists

Between 2020 and 2025, cyber and AI failures stopped being engineering problems and started being decision problems. The proof is in the public record. In nearly every major incident that made the front page — a ransomware event at a hospital network, a supply-chain compromise at a software vendor, an autonomous-system failure at a commercial airline, a data-exposure cascade at a financial platform — the technical root cause was present, but the cost was determined by what leadership decided to do in the first hours after detection.

In our observation across principal-led engagements with organizations whose failure costs are measured in hundreds of millions to billions of dollars, the pattern is consistent: detection times have compressed; escalation paths have not. Technical telemetry now arrives in minutes. Authority to act on that telemetry still arrives in hours, days, or — for novel AI failures — never.

Detection is an engineering problem, and we have largely solved it.
Decision is a governance problem, and we have barely begun.

This is not a matter of better playbooks, thicker runbooks, or more frequent tabletop exercises. Those address the surface. The underlying issue is structural: large organizations have not articulated, with the clarity that modern adversarial speed demands, who holds the authority to decide what, under which conditions, with what reversibility, and with what accountability when the decision fails.

[§2]/THE FAILURE SURFACE NOBODY MEASURES

We observe three recurring failure patterns in post-incident reviews of catastrophic cyber and AI events. Each has engineering literature. None has a measurement discipline in the organizations where the failures actually occur.

[2.1]/Decision Latency

The interval between the moment sufficient information exists to authorize a decision and the moment the decision is actually authorized. It is not the time to detect. It is not the time to contain. It is the time to decide.

In our field observations, decision latency during a contested incident runs 4× to 40× longer than the detection-to-authorization gap senior leadership believes it runs.

[2.2]/Authority Ambiguity

The condition in which, at the moment a decision must be made, no one in the chain is certain who is empowered to make it.

Authority ambiguity is almost always invisible in peacetime. It materializes at machine speed during a crisis.

[2.3]/AI Handoff Failure

The discontinuity between an autonomous system acting on its own authority and the human supervisory layer that is assumed — but not architected — to intervene.

The typical failure mode is not an AI doing something catastrophic. It is an AI doing something consequential without any human noticing in time.

// OBSERVED BASELINES · UNKNOWNS LAB

83% · Breaches in which leadership decision failure materially worsened outcome
72h · Median time to identify decision-authority gaps, post-incident
60min · Critical window where early decisions compound exponentially in cost
<15% · Fortune 2000 with board-ratified authority map for cyber incidents

[§3]/WHY EXISTING CATEGORIES DO NOT ANSWER

Every pattern in §2 has an adjacent discipline that addresses some portion of it. None addresses it centrally. The Decision Authority discipline exists in the overlapping blind spots of five established categories.

DISCIPLINE · GOVERNS · DEFERS
Incident Response · Technical containment · Authority chain above CISO
GRC · Stated posture & evidence · Live decision performance
Crisis Management · External perception · Internal authority architecture
Tabletop Exercises · Rehearsal atmospherics · Structural outputs & latency
Cyber Insurance · Loss indemnification · Authority as underwriting signal
Decision Authority · Who decides, how fast, with what accountability · — (the layer itself)

PART II · THE FRAMEWORK
[§4]/DECISION AUTHORITY: A WORKING DEFINITION

Decision Authority (n.) — the structured, measurable capacity of an organization to identify, assign, and exercise the right to decide, at sufficient speed and with sufficient accountability, under conditions of adversarial pressure, incomplete information, and irreversible consequence.

This definition is deliberately constructed to be falsifiable. Each clause names a property that can be measured, failed, and improved:

Structured: Authority is articulated before the event, not improvised during it.
Measurable: An organization's position can be scored, benchmarked, and re-scored.
Identify, assign, and exercise: Three distinct acts, each a separate failure mode.
At sufficient speed: Latency is a first-order property, not a secondary attribute.
With sufficient accountability: The authority holder is identifiable after the fact.
Adversarial pressure, incomplete information, irreversible consequence: The operating conditions under which most organizations have never instrumented their own decisioning.

[§5]/THE THREE-LAYER OPERATING MODEL

Decision Authority operates across three time horizons. Each horizon has a distinct mode, distinct outputs, and distinct failure signatures. A mature Decision Layer spans all three.

[LAYER 01]/PEACETIME · ANTICIPATE · Months to years before incident

In peacetime, Decision Authority is built. Adversary intent is mapped. Decision paths are stress-tested under simulated but unannounced pressure. Human-AI authority boundaries are articulated before any agentic system goes live.

OUTPUTS: Authority maps, escalation graphs, scored baselines
FAILURE SIGNATURE: The absence of these assets, or their presence only as draft documents no one has operationalized.

[LAYER 02]/CRISIS WINDOW · DECIDE · Minutes to hours during incident

In the crisis window, Decision Authority is exercised. The interval is compressed — often to the first sixty minutes, rarely longer than the first seventy-two hours. Inside this window, the structures built in peacetime either hold or they do not.

OUTPUTS: Authorized actions, escalation traces, decision telemetry
FAILURE SIGNATURE: Elapsed time from trigger to authorized action, number of escalation hops required, number of reversals.

[LAYER 03]/PERMANENT · BUILD · Years after incident, continuous

In the permanent layer, Decision Authority is compounded. Lessons from the crisis window feed back into peacetime structure. Leadership transitions are instrumented so authority does not evaporate with a CISO or CEO change.

OUTPUTS: Institutional capability, governance evolution, compounding authority
FAILURE SIGNATURE: The organization that treats each incident as a discrete event rather than an input to a compounding capability.

An organization is as mature in Decision Authority as its weakest layer.
The most common weak layer, in our observation, is the permanent one.

[§6]/THE FOUR PRIMITIVES

The framework proposes four primitives by which Decision Authority can be measured. The primitives are intended to be adopted, cited, and refined by the broader community. They are open definitions; they are not proprietary to any single practitioner, including ourselves.

[6.1]/The Authority Graph

A directed representation of the organization's decision rights: the nodes are decisions, the edges are the authority relationships between the roles or bodies that hold them.

Who holds primary authority?
Who holds concurrent authority (veto, concurrence, notice-only)?
What is the reversal path if the decision proves wrong?
Who is accountable after the fact?
[6.2]/Decision Latency

The measured time, under instrumented conditions, from the moment sufficient information exists to authorize a decision to the moment the decision is authorized.

Baseline latency: under non-adversarial conditions
Pressure latency: under simulated adversarial conditions
Peak latency: under conditions simulating the worst 72 hours
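The measurement itself is simple once the two timestamps are instrumented; what follows is a minimal sketch with invented readings, shown only to make the definition concrete. The timestamps, the 12-minute "believed" gap, and the resulting multiplier are illustrative, not field data.

```python
from datetime import datetime, timedelta

def decision_latency(info_sufficient: datetime, authorized: datetime) -> timedelta:
    """Latency runs from the moment sufficient information existed to
    authorize the decision, not from detection and not from containment."""
    return authorized - info_sufficient

# Hypothetical instrumented readings for one decision class, one per condition.
readings = {
    "baseline": decision_latency(datetime(2026, 3, 1, 9, 0), datetime(2026, 3, 1, 9, 12)),
    "pressure": decision_latency(datetime(2026, 3, 8, 9, 0), datetime(2026, 3, 8, 11, 30)),
    "peak":     decision_latency(datetime(2026, 3, 15, 9, 0), datetime(2026, 3, 16, 1, 0)),
}

# The §2.1 observation is a ratio: measured pressure latency divided by the
# gap leadership believes exists. Dividing two timedeltas yields a float.
believed = timedelta(minutes=12)
multiplier = readings["pressure"] / believed
```

With these invented numbers the multiplier is 12.5, inside the 4× to 40× band observed in §2.1.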
[6.3]/The Override Taxonomy

Classifies the acts by which a decision already in motion — by a human or by an automated system — can be halted, reversed, or redirected.

Precautionary override: halting before consequence accrues
Corrective override: reversing after partial consequence
Emergency override: unilateral action bypassing concurrence
Authority escalation: invoking a higher decision body
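The taxonomy lends itself to direct encoding. The sketch below is one possible rendering, with a deliberately toy classifier; real post-incident coding would draw on far richer evidence than three booleans.

```python
from enum import Enum

class Override(Enum):
    """The four override classes defined in §6.3."""
    PRECAUTIONARY = "halt before consequence accrues"
    CORRECTIVE = "reverse after partial consequence"
    EMERGENCY = "unilateral action bypassing concurrence"
    ESCALATION = "invoke a higher decision body"

def classify(halted_before_consequence: bool,
             bypassed_concurrence: bool,
             escalated: bool) -> Override:
    """Toy classifier mapping observable properties of an intervention
    onto the taxonomy (illustrative precedence: escalation, then emergency)."""
    if escalated:
        return Override.ESCALATION
    if bypassed_concurrence:
        return Override.EMERGENCY
    if halted_before_consequence:
        return Override.PRECAUTIONARY
    return Override.CORRECTIVE
```

Tagging every override observed in an exercise or incident with one of these four classes is what makes the §5 failure signatures countable.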
[6.4]/The Maturity Rubric

Scores an organization's position on a five-level scale, from absent to adaptive.

L1 Implicit: Authority assumed, no written map
L2 Declared: Written in policy, not tested
L3 Tested: Exercised in simulations, gaps identified
L4 Instrumented: Latency measured, override codified
L5 Compounding: Continuous capability, incidents improve it

The median Fortune 2000 enterprise scores between L1 and L2. A small number reach L3. L4 is rare. L5 is not yet present at scale anywhere.
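Because the levels are ordinal, the rubric combines naturally with the weakest-layer rule from §5: the overall score is a minimum, not an average. The sketch below encodes that rule; the per-layer scores are an invented example.

```python
from enum import IntEnum

class Maturity(IntEnum):
    """The five-level rubric from §6.4."""
    IMPLICIT = 1      # authority assumed, no written map
    DECLARED = 2      # written in policy, not tested
    TESTED = 3        # exercised in simulations, gaps identified
    INSTRUMENTED = 4  # latency measured, override codified
    COMPOUNDING = 5   # continuous capability, incidents improve it

def overall_maturity(scores_by_layer: dict) -> Maturity:
    """Per §5, an organization is as mature as its weakest layer,
    so the overall score is the minimum of the per-layer scores."""
    return min(scores_by_layer.values())

# Hypothetical assessed organization: strong drills, weak permanent layer.
scores = {
    "peacetime": Maturity.TESTED,
    "crisis_window": Maturity.TESTED,
    "permanent": Maturity.IMPLICIT,
}
```

Under this rule the example organization scores L1 overall despite two L3 layers, which matches the §5 observation that the permanent layer is the most common point of weakness.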

PART III · APPLICATION
[§7]/THE AI-ERA EXTENSION

Every element of the framework changes when autonomous and semi-autonomous systems enter the decision chain. The Authority Graph acquires non-human nodes. Decision Latency can invert — the AI decides before humans are aware a decision was required.

Agentic AI does not create new decision authority questions. It forces existing ones to be answered at speeds the organization has never operated at.

The AI authority question, stated precisely

For every agentic system deployed inside or adjacent to high-consequence decision chains, an organization should be able to answer four questions without pause:

1. What is this system authorized to decide on its own?

2. What is it authorized to decide with human concurrence?

3. What must it escalate to a named human authority, and how does the escalation channel function at the system's operating speed?

4. Who is accountable — by name and role — when the system acts and the action proves wrong?

In 2026, the median Fortune 2000 organization deploying agentic AI can answer the first question partially, the second ambiguously, the third informally, and the fourth not at all. This is the single largest decision-authority gap visible in current enterprise reality.

[§8]/ASSESSMENT METHODOLOGY

The framework is measurable. A Decision Authority Assessment covers, at minimum, the enterprise's top twenty-five decision classes across cyber incident response, AI-system override, regulatory disclosure, financial containment, and executive succession under incident conditions.

ASSESSMENT OUTPUTS

Scored maturity position (L1–L5) per decision class
Gap findings with remediation priorities
Baseline latency profile for future reassessments
[§9]/ADOPTION PATHWAY

For Boards: Commission baseline assessment, adopt Maturity Rubric as standing metric

For Regulators: Incorporate primitives into supervisory guidance and examination handbooks

For Insurers: Use Decision Authority score as underwriting signal for cyber/D&O lines

Categories are minted when enough of the market uses the same vocabulary to describe the same problem. This document is the vocabulary.

Citation

Unknowns Lab. (2026). The Decision Authority Framework v1.0: A discipline for governing critical decisions in the cyber and AI era. Retrieved from unknownslab.com/framework

[APPENDIX C]/CALL FOR COMMENT

Version 1.0 is open for comment. We invite critique from academic researchers, regulators, analysts, insurers, and practitioners with field observations that corroborate or challenge the primitives. Material feedback will be incorporated into version 1.1.