How to Make AI Accountable Without Burning Down Your Stack
If you're someone who builds things—especially things that run on other things—this might interest you.
I've been quietly thinking about a problem that doesn’t fit neatly into a Jira ticket or a product roadmap. It sits somewhere beneath all that: in the architecture. Not just code architecture, but information architecture. The quiet rules of who gets to see what, who gets to decide, and what counts as "working as intended."
Most organizations today are built on legacy information architectures that were optimized for efficiency, not accountability. They're designed to move data fast, not ask who should see it or why. These systems often lack the ability to trace actions, enforce ethical constraints, or adapt as the logic inside them evolves.
As Smaldino et al. (in press) argue, today's information architectures have become brittle: fine-tuned for throughput, not reflection. And that brittleness becomes a liability when you introduce AI agents into the system.
So why not decentralize everything?
Because full decentralization, while appealing in theory, doesn't work for most real-world organizations. The governance complexity, the latency overhead, and the poor fit with institutional control make it impractical at scale.
That’s where hybrid blockchain architectures offer a more grounded approach. Marar & Marar (2020) propose decentralization as something modular and contextual—you decentralize what needs verifiability and auditability, and nothing more. Keep the control. Add the trust.
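If it helps to see that pattern in code, here's a rough sketch of the hybrid idea: full records stay in your private store, and only their digests get anchored to a shared ledger (simulated below with a plain list). Every name here, PrivateStore, anchor, verify, is hypothetical, not anything from the papers above.

```python
# A minimal sketch of selective decentralization: records stay private,
# only digests are anchored. The "ledger" is a plain list standing in for
# a public or consortium chain; all names here are hypothetical.

import hashlib
import json
import time

class PrivateStore:
    """Centralized storage: full records never leave the organization."""
    def __init__(self):
        self._records = {}

    def put(self, record_id: str, record: dict) -> str:
        self._records[record_id] = record
        # Canonical JSON so any verifier can reproduce the digest.
        payload = json.dumps(record, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

shared_ledger = []  # stand-in for the decentralized, append-only part

def anchor(record_id: str, digest: str) -> None:
    """Publish only the digest: verifiable, but reveals nothing sensitive."""
    shared_ledger.append({"id": record_id, "sha256": digest, "ts": time.time()})

def verify(record_id: str, record: dict) -> bool:
    """Anyone holding the record can check it against the anchored digest."""
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return any(e["id"] == record_id and e["sha256"] == digest for e in shared_ledger)

store = PrivateStore()
decision = {"agent": "pricing-bot", "action": "discount", "value": 0.1}
anchor("decision-42", store.put("decision-42", decision))
print(verify("decision-42", decision))  # True
```

The design choice is the point: anyone holding a record can verify it against the public digest, but the ledger itself reveals nothing about the record's contents. You keep the control; you add the trust.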
Enter VISTA.
It’s a framework I’ve been working on, designed to retrofit verifiability into agent-based systems without asking you to throw everything out and start over. Not a protocol. A scaffold. For those of us working with AI in environments where you can’t afford to be vague about trust.
And no, it’s not a peer-reviewed thesis. It’s a working draft—for practitioners across UX, cryptography, governance, and systems. If you’ve ever asked yourself “What keeps this accountable?”—this is for you.
Where it stands in the landscape.
Modern information architectures underpin how data flows, how access is controlled, and how institutional intent is encoded at scale. As Smaldino et al. (in press) argue, these architectures have become increasingly brittle—optimized for efficiency rather than accountability—making them poorly suited for AI agents that act autonomously and adaptively.
Significant prior work has emerged in related domains: trusted computing (Costan & Devadas, 2016), decentralized identity standards (Sporny et al., 2021), and post-quantum cryptography (Chen et al., 2016). However, these components often exist in isolation or are tightly scoped to narrow verticals. VISTA integrates and generalizes them into a modular framework designed to retrofit verifiability into legacy IAs—without requiring a full protocol overhaul.
Research on compliance agents (Mavrouli & Anastasopoulou, 2022) also signals growing urgency around ethical traceability in automated decision systems. Similarly, Shapiro & Varian (1998) highlight how control over infrastructural layers can be exploited for platform dominance, reinforcing the need for verifiable checkpoints at both technical and organizational levels.
The five layers:
📎 Identity – Make sure the agent is who it says it is
🛠 Execution – Ensure its logic runs in a trusted, tamper-evident space
🕵️ Audit – Log what happened (so we’re not guessing later)
🗓 Escalation – Let messy cases route back to humans
🧠 Ethics – Align agent behavior with real, evolving policies
VISTA Agent Lifecycle (Visual Overview)
This flow illustrates how a request is processed through VISTA’s five-layer framework—from identity verification to secure execution, audit logging, and ethical escalation. Each decision point is backed by cryptographic and policy-aligned safeguards, ensuring traceability without sacrificing control.
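For readers who think in code, here's a minimal sketch of that lifecycle, with each layer as a pluggable stage in a pipeline. Every class and method name is a hypothetical illustration, not VISTA's actual API; a real deployment would back these stubs with verifiable credentials, attested runtimes, and a live policy engine.

```python
# A minimal sketch of the lifecycle above, with each VISTA layer as a
# pluggable stage. Every name here is hypothetical, not the framework's
# actual API.

from dataclasses import dataclass, field

@dataclass
class Request:
    agent_id: str
    action: str
    trace: list = field(default_factory=list)  # running trail of decisions

class IdentityLayer:
    """Identity: make sure the agent is who it says it is."""
    def __init__(self, known_agents: set):
        self.known_agents = known_agents

    def handle(self, req: Request) -> Request:
        if req.agent_id not in self.known_agents:
            raise PermissionError(f"unverified agent: {req.agent_id}")
        req.trace.append("identity: verified")
        return req

class ExecutionLayer:
    """Execution: run the logic in a trusted, tamper-evident space (stubbed)."""
    def handle(self, req: Request) -> Request:
        req.trace.append(f"execution: ran '{req.action}'")
        return req

class AuditLayer:
    """Audit: log what happened so we're not guessing later."""
    def __init__(self, log: list):
        self.log = log

    def handle(self, req: Request) -> Request:
        self.log.append((req.agent_id, req.action))
        req.trace.append("audit: logged")
        return req

class EscalationLayer:
    """Escalation: route messy cases back to humans."""
    def __init__(self, needs_human):
        self.needs_human = needs_human  # predicate deciding what's "messy"

    def handle(self, req: Request) -> Request:
        if self.needs_human(req):
            req.trace.append("escalation: routed to human review")
        return req

class EthicsLayer:
    """Ethics: check the action against current, evolving policy."""
    def __init__(self, policy):
        self.policy = policy  # callable returning True if the action is allowed

    def handle(self, req: Request) -> Request:
        if not self.policy(req):
            raise ValueError(f"policy violation: {req.action}")
        req.trace.append("ethics: policy ok")
        return req

class Pipeline:
    """Chains whichever layers you've adopted; the ordering is configurable."""
    def __init__(self, layers: list):
        self.layers = layers

    def process(self, req: Request) -> Request:
        for layer in self.layers:
            req = layer.handle(req)
        return req
```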
VISTA doesn’t demand a full rebuild. It’s composable. Start with identity or audit. Add ethics and escalation later. Plug it into your system without dismantling your whole house.
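Continuing the hypothetical sketch above, composability might look like this: wire up identity and audit first, and slot in the other layers when you're ready.

```python
# Composability, continuing the hypothetical sketch above: start with just
# identity and audit, then add the remaining layers without a rewrite.

audit_log = []
pipeline = Pipeline([
    IdentityLayer(known_agents={"pricing-bot"}),
    AuditLayer(log=audit_log),
])

req = pipeline.process(Request(agent_id="pricing-bot", action="apply-discount"))
print(req.trace)  # ['identity: verified', 'audit: logged']

# Later, ethics and escalation plug in without touching the rest:
pipeline.layers.insert(1, EthicsLayer(policy=lambda r: r.action != "wipe-ledger"))
pipeline.layers.append(EscalationLayer(needs_human=lambda r: "refund" in r.action))
```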
If you're building for the long term, and not just the quarterly release, this might be worth a look.
📄 Here’s the preprint:
👉 VISTA: Verifiable Infrastructure for Secure & Transparent Agents
🔧 P.S. I’ve already started working on the next draft.
It explores how this framework applies to e-commerce information architectures—especially in systems where loyalty, pricing, and personalization are driven by black-box incentives. If the first draft was about building trust into infrastructure, the next is about what happens when platform incentives and social trust drift apart—and how to design for that tension.
More soon.