IRONLAW is the governance policy gate at the heart of Bastion. Here is what each of the seven rules does, why the ordering matters, and how they map to the compliance questions your legal and risk teams are already asking.
IRONLAW is not a marketing label. It is a normative doctrine — seven governance rules applied in a fixed order before any AI agent action is permitted to execute. Each rule answers a specific question that a compliance officer, regulator, or auditor might ask. Together they form an ordered policy gate that makes the difference between an AI system that has governance and one that merely claims it.
IRONLAW evaluates each rule in sequence. A rule failure at any point stops evaluation and denies the action — it does not fall through to the next rule. The ordering is intentional: earlier rules protect authority and intent integrity; later rules protect the audit record.
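The fail-fast ordering can be sketched as follows; the rule names, check signatures, and `Verdict` type here are illustrative, not Bastion's actual API:

```python
from typing import Callable, List, NamedTuple, Optional, Tuple

class Verdict(NamedTuple):
    allowed: bool
    failed_rule: Optional[str]  # name of the first rule that failed, if any

def evaluate(action: dict, rules: List[Tuple[str, Callable[[dict], bool]]]) -> Verdict:
    """Evaluate rules in a fixed order; the first failure denies the action."""
    for name, check in rules:
        if not check(action):
            # Short-circuit: later rules are never reached.
            return Verdict(False, name)
    return Verdict(True, None)
```

A failure on an early, cheap check means the later, more expensive checks never run, which is the front-loading property discussed below.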
Rule 1: Is the requesting principal authorized to direct this action?
Every action must originate from a principal with documented authority to request it. Authority is not implied by access. A user who can invoke the agent does not automatically have authority to direct every class of action the agent is capable of.
Compliance mapping: Principal authorization logs, delegation records, role-based access control documentation. In financial services, this maps to "who has signing authority for this transaction class." In healthcare, it maps to clinical privilege documentation. In federal contracting, it maps to delegation of authority orders.
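The authority-versus-access distinction can be sketched with a hypothetical authority table; the principals and action classes below are invented for illustration:

```python
# Hypothetical authority table. Access to the agent does not imply
# authority over every action class the agent can perform.
AUTHORITY = {
    "alice": {"read_report", "draft_email"},
    "bob":   {"read_report"},
}

def has_authority(principal: str, action_class: str) -> bool:
    """True only if this principal is documented as authorized for this class."""
    return action_class in AUTHORITY.get(principal, set())
```

Both principals can invoke the agent, but only one is authorized to direct every action class.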
Rule 2: Has the stated intent been altered between formation and execution?
The intent record created at the moment of invocation is the authoritative statement of what the principal authorized. IRONLAW verifies that the intent has not been modified — by the agent, by a middleware layer, or by any other mechanism — between the time the principal formed it and the time the action executes.
Compliance mapping: This is the tamper-evidence requirement. It is what your auditors mean when they ask whether logs can be modified. A hash of the intent record at creation time is compared to the hash at execution time. Any modification — even whitespace — fails this rule.
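The hash comparison might look like this sketch, assuming SHA-256 over a canonical JSON serialization of the intent record (Bastion's actual serialization is not specified here):

```python
import hashlib
import json

def intent_hash(intent: dict) -> str:
    """Hash a canonical serialization so key order and formatting cannot vary."""
    canonical = json.dumps(intent, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

The hash computed at intent formation is stored; at execution time the record is re-hashed and the two digests must match exactly.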
Rule 3: If this action were replayed with the same inputs and authorization, would it produce the same result?
Deterministic auditability requires that any recorded action can be replayed in an isolated environment and produce a verifiably identical outcome. Actions that cannot be replayed — because they depend on external state that has changed, or because the authorization context was not fully captured — fail this rule.
Compliance mapping: Federal and financial regulators increasingly require that AI systems be able to demonstrate deterministic behavior. This rule is the technical prerequisite for answering "would the system do this again under the same conditions?" It is also the prerequisite for meaningful incident reconstruction.
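One way to sketch replay verification, assuming the action's inputs are fully captured at record time and the underlying function is deterministic (the helper names are hypothetical):

```python
import hashlib
import json

def digest(value) -> str:
    """Stable fingerprint of a JSON-serializable value."""
    return hashlib.sha256(json.dumps(value, sort_keys=True).encode()).hexdigest()

def record(fn, inputs: dict) -> dict:
    """Capture everything needed to replay: the inputs and the output digest."""
    return {"inputs": inputs, "output_digest": digest(fn(**inputs))}

def replay_matches(fn, rec: dict) -> bool:
    """Re-run in isolation and verify a byte-identical outcome."""
    return digest(fn(**rec["inputs"])) == rec["output_digest"]
```

An action that reads external state not captured in `inputs` cannot satisfy this check, which is exactly the failure mode the rule exists to catch.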
Rule 4: Is the expected outcome class documented and within authorized scope?
Before the action executes, the expected class of outcome must be recorded against the intent. Actions whose outcomes are ambiguous — or whose outcome class was not specified at intent formation — fail this rule. After execution, the actual outcome is recorded against the expected outcome class for reconciliation.
Compliance mapping: This is the "what did you think would happen?" question. It is what risk committees ask when reviewing automated processes. In a legal context, it maps to the supervision requirement — the authorizing attorney, manager, or clinician must have specified what kind of result they were authorizing before the agent acted.
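A sketch of outcome-class reconciliation; the field names are hypothetical:

```python
def form_intent(action: str, expected_outcome_class: str) -> dict:
    """The expected outcome class is fixed at intent formation, before execution."""
    return {"action": action, "expected": expected_outcome_class, "actual": None}

def reconcile(intent: dict, actual_outcome_class: str) -> dict:
    """After execution, record the actual outcome class against the expectation."""
    intent["actual"] = actual_outcome_class
    intent["matched"] = (actual_outcome_class == intent["expected"])
    return intent
```

An intent formed without an expected outcome class would fail the rule before execution; a mismatch at reconciliation is evidence for the review that follows.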
Rule 5: Is there a valid warrant — a delegated authorization grant — covering this specific action?
Some actions require not just authority from the requesting principal but a documented warrant: an authorization grant from a higher-level principal that explicitly permits this class of action in this context. IRONLAW checks for the existence and validity of any required warrants before allowing execution.
Compliance mapping: This is the escalated-approval requirement. It is the analog of a court warrant in a law enforcement context, or a board resolution in a corporate context. For enterprise AI, it covers scenarios like "this agent can act on production data only when a warrant from the CISO has been issued."
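A warrant check might be sketched as follows, assuming a warrant carries an action-class set, a context, and an expiry (the field names are assumptions, not Bastion's schema):

```python
from datetime import datetime, timezone

def warrant_valid(warrant: dict, action_class: str, context: str) -> bool:
    """A warrant must cover this action class in this context and not be expired."""
    now = datetime.now(timezone.utc)
    return (action_class in warrant["action_classes"]
            and context == warrant["context"]
            and now < datetime.fromisoformat(warrant["expires"]))
```

In the CISO example above, the warrant's `context` would be the production environment and its `action_classes` the operations the grant permits there.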
Rule 6: Is the audit record intact and tamper-evident up to this point?
IRONLAW verifies the integrity of the hash chain in the intent ledger before permitting a new entry. If the chain has been broken — any prior record modified, deleted, or otherwise tampered with — the action is denied. The audit chain is only as valuable as its integrity; IRONLAW enforces that integrity at every write.
Compliance mapping: This is the chain-of-custody requirement. It is what forensic auditors check when reviewing whether a record is admissible. A hash chain that has been broken is not an audit trail — it is a set of unverified claims. IRONLAW ensures the chain is unbroken before adding new records to it.
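Chain verification can be sketched as recomputing every link from a genesis value; the exact fields Bastion hashes are not specified here:

```python
import hashlib

def entry_hash(prev_hash: str, payload: str) -> str:
    """Each entry's hash covers the previous entry's hash, linking the chain."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append(chain: list, payload: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"payload": payload, "hash": entry_hash(prev, payload)})

def chain_intact(chain: list) -> bool:
    """Recompute every link; any modified or deleted record breaks verification."""
    prev = "genesis"
    for entry in chain:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True
```

Because each hash covers its predecessor, editing any one record invalidates every hash after it; verification before each write is what makes the check cheap.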
Rule 7: Has the LLM been invoked within the bounds of its authorized role?
The final rule verifies that the language model's role in the action — the instructions it was given, the tools it was permitted to call, the output it was asked to produce — falls within the bounds of its authorized configuration. It is a check on whether the LLM itself has been used within scope, separate from whether the requesting principal has authority.
Compliance mapping: This rule addresses the "model misuse" scenario: an authorized user directing an authorized LLM to act outside the model's approved role definition. It is the analog of an employee using an approved tool in an unapproved way. In regulated industries, approved tool configurations must be documented and enforced — IRONLAW makes that enforcement automatic.
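A role-bounds check might be sketched against a hypothetical role definition listing approved tools and output kinds (the role contents are invented for illustration):

```python
# Hypothetical approved configuration: which tools and output kinds the
# model may use, independent of who is asking.
ROLE = {"allowed_tools": {"search", "summarize"}, "output_kind": "text"}

def within_role(requested_tools: set, output_kind: str, role: dict = ROLE) -> bool:
    """The request must be a subset of the approved tool set and output kind."""
    return requested_tools <= role["allowed_tools"] and output_kind == role["output_kind"]
```

Note that this check can fail even for a fully authorized principal: the authorized-user, out-of-scope-tool case is the "approved tool used in an unapproved way" scenario.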
The rules are evaluated in this specific sequence because each depends on the prior rule being satisfied. You cannot verify outcome accountability (Rule 4) before you know the intent is immutable (Rule 2). You cannot validate the audit chain (Rule 6) before you know the warrant is valid (Rule 5). The ordering reflects a causal dependency chain, not an arbitrary priority ranking.
This also means that a failure at Rule 1 (Rightful Authority) costs nothing — you never reach Rule 7. Governance overhead is front-loaded at the cheapest checks, with the more computationally intensive checks only reached when the earlier rules are satisfied.
When all seven rules pass, IRONLAW writes a signed audit record to the intent ledger, capturing the evidence each rule evaluated: the authorizing principal, the intent hash, the replay context, the expected and actual outcome class, any warrants, the record's position in the hash chain, and the model's role configuration.
This record answers the auditor's questions directly, without requiring post-hoc reconstruction from disparate logs. The answer to "what ran, under whose authority, and can you prove it" is in the ledger entry.
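As a sketch only (the actual ledger schema is not given in this text), the entry might carry one field per rule:

```python
import json
from dataclasses import asdict, dataclass
from typing import Optional

@dataclass(frozen=True)
class LedgerEntry:
    principal: str            # Rule 1: who directed the action
    intent_hash: str          # Rule 2: tamper-evident intent
    replay_context: str       # Rule 3: inputs captured for replay
    expected_outcome: str     # Rule 4: outcome class fixed before execution
    warrant_id: Optional[str] # Rule 5: delegated grant, if one was required
    prev_hash: str            # Rule 6: position in the hash chain
    model_role: str           # Rule 7: the LLM configuration in scope

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)
```

One ledger entry, read alone, answers the auditor's question without cross-referencing other logs.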
For regulated industries, this is what governance looks like in practice — not a policy document, but an enforced technical gate with an evidence path.