AI Agents Become Governance Infrastructure, Raising Concerns Over Control

As AI systems take on more decision-making roles, experts warn that AI safety and governance must start at the root level.

Mar. 29, 2026 at 3:48am

AI agents are no longer just clever software at the edge of work; they are now embedded in core decision-making processes across industries, from search to coding to operations. This shift means AI agents have become a form of 'governance infrastructure', shaping memory, planning, and judgment in ways that can be difficult to inspect or control. Experts argue that true AI safety and governance must start at this foundational level rather than focusing only on the final outputs or behaviors of AI systems. The article warns that whoever controls the underlying AI substrate controls the hierarchy that follows, raising concerns about centralized power and the case for more local, open-source AI development.

Why it matters

As AI agents become more deeply integrated into critical decision-making across sectors, concern is growing that the real battle is not over the final outputs of these systems but over who controls the underlying 'substrate' that shapes memory, planning, and judgment. On this view, AI is no longer just a tool but a form of governance infrastructure, one that concentrates power in the hands of those who control the AI rather than the operators who deploy it. Ensuring AI safety and responsible governance therefore requires rethinking the problem at a more fundamental level.

The details

The article argues that AI agents now mediate memory, planning, software action, and judgment in ways that go beyond being helpful tools. They compress intent, redirect it, and often replace it, shaping the environment in which humans make decisions. Whoever controls the AI substrate (the training data, algorithms, and deployment policies) effectively controls the hierarchy that follows, regardless of surface-level outputs or behaviors. The author warns that this fracturing of control, in which the deploying institution, the runtime stack, and the learned substrate each hold partial authority, makes true alignment and safety extremely challenging. Competent AI systems can actually harden this hierarchy by earning deeper trust and dependence.
The players

Matthew James Curreri

The author of the article, who argues that AI agents have become a form of 'governance infrastructure' that shapes decision-making in critical ways.


What they’re saying

“Governance means command over the conditions under which synthetic judgment enters the world. Governance means command over memory, updates, thresholds, tools, logs, escalation paths, and kill authority. Governance means command over which corrections stick.”

— Matthew James Curreri, Author

“A system that mediates judgment without operator root does not become safe because it behaves well in a demo. It remains governable by whoever can still rewrite it.”

— Matthew James Curreri, Author

What’s next

The article does not lay out specific next steps; it focuses on the broader conceptual shift in how AI systems are being deployed and the implications for control and governance.

The takeaway

The article's central claim is that AI systems are no longer just tools but have become a form of 'governance infrastructure' that shapes decision-making in critical ways. Responsible AI development and deployment therefore requires asking who controls the underlying AI substrate, not merely evaluating final outputs or behaviors.