According to VentureBeat, more than half of organizations have already deployed AI agents to some extent, with more expected to follow in the next two years. However, a significant 40% of tech leaders now regret not establishing a stronger governance foundation from the start. João Freitas, GM and VP of engineering for AI and automation at PagerDuty, outlines three principal risk areas: shadow AI, gaps in ownership and accountability, and a lack of explainability for agent actions. He argues that while these risks shouldn’t delay adoption, they necessitate clear guardrails. The immediate impact is a push for responsible adoption that balances the speed AI offers with necessary security controls.
The Autonomy Trap
Here’s the thing about AI agents: their superpower is also their biggest liability. They’re built for autonomy, to make decisions and take actions without a human clicking “go” every five seconds. That’s incredibly powerful for complex tasks. But that same autonomy is what makes “shadow AI” so dangerous now. It’s not just someone using ChatGPT on the side anymore. It’s an entire unsanctioned workflow, operating outside IT’s view, that can now act on its own. The risk profile just got a lot bigger.
And the accountability question is a real headache. If a traditional software script breaks, you know which team to call. But if an AI agent goes off-script in an unexpected way, who owns that? The team that deployed it? The model’s creators? The person who gave it the goal? Freitas is right to flag this. Without clear ownership baked in from day one, you’re setting up a perfect game of hot potato for when—not if—something goes wrong.
Human in the Loop Is Not Optional
So what’s the fix? The first guideline is the most important: make human oversight the default. This seems obvious, but in the rush to automate everything, it’s the first principle that gets eroded. Start conservatively. Treat AI agents like new employees who need supervision before they get the keys to the server room. For critical systems, a human must be in the loop to approve high-impact actions. It’s about controlled empowerment.
This is where the mindset needs to shift from pure automation to assisted decision-making. Traditional automation is for repetitive, predictable tasks. AI agents are for the messy, complex stuff. But that messiness demands oversight. You need approval paths and the ability for any human to flag or override an agent’s behavior. Otherwise, you’re just building a faster, smarter way to create incidents.
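To make that concrete, here’s a minimal sketch of what an approval gate might look like. Everything in it is hypothetical: the `Action` class, the risk levels, and the `request_human_approval` hook aren’t from any particular framework. The point is the shape of the control.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"    # e.g., read-only queries
    HIGH = "high"  # e.g., restarts, deletes, config changes


@dataclass
class Action:
    name: str
    risk: Risk
    params: dict


def request_human_approval(action: Action) -> bool:
    """Hypothetical hook: in production this would page the owning team
    and block until someone decides; here it just prompts on stdin."""
    print(f"Approval needed for: {action.name} {action.params}")
    return input("Approve? [y/N] ").strip().lower() == "y"


def execute(action: Action) -> None:
    # Low-risk actions run autonomously; high-impact ones need a human.
    if action.risk is Risk.HIGH and not request_human_approval(action):
        print(f"Rejected: {action.name}")
        return
    print(f"Executing: {action.name}")  # real work would happen here


execute(Action("restart_payment_service", Risk.HIGH, {"region": "us-east-1"}))
```

In a real deployment the gate would be an async page or a ticket with a reject-on-timeout default, not a terminal prompt, but the control flow is the same: anything high-impact routes through a person, and rejection is the safe fallback.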
Explainability and Security as a Foundation
The other two guidelines are non-negotiable foundations. Baking in security means not letting agents roam free across your network. Their permissions should be tightly scoped, just like a human user’s. And you absolutely need platforms with real enterprise-grade certifications: SOC 2, FedRAMP, or equivalent. This isn’t the time for cowboy tools.
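As a rough illustration of what “tightly scoped” can mean in code, here’s a deny-by-default allowlist for agent tool calls. The agent names, tools, and `PermissionDenied` error are all invented for this sketch.

```python
# Deny-by-default permission scoping for agent tool calls.
# Agent and tool names are hypothetical, for illustration only.
AGENT_SCOPES = {
    "triage-agent": {"read_logs", "query_metrics"},         # read-only
    "remediation-agent": {"read_logs", "restart_service"},  # narrowly scoped
}


class PermissionDenied(Exception):
    pass


def call_tool(agent: str, tool: str, **kwargs):
    allowed = AGENT_SCOPES.get(agent, set())  # unknown agents get nothing
    if tool not in allowed:
        raise PermissionDenied(f"{agent} is not allowed to call {tool}")
    print(f"{agent} -> {tool}({kwargs})")  # dispatch to the real tool here


call_tool("triage-agent", "read_logs", service="checkout")
# call_tool("triage-agent", "restart_service")  # raises PermissionDenied
```

Deny by default is the important design choice here: an agent nobody has explicitly scoped can’t do anything at all, which is exactly the opposite of how shadow AI behaves.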
But maybe the toughest challenge is explainability. AI use can’t be a black box. You need complete, accessible logs of every action, input, and output. Why? Because when an incident happens at 2 a.m., the on-call engineer needs to trace the logic in minutes, not days. They need to understand *why* the agent took a specific action in order to roll it back effectively. Without that, you’re flying blind.
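Here’s what “complete, accessible logs” might look like in practice: one structured, append-only record per agent step, capturing the input, the chosen action, the stated rationale, and the outcome. This is a sketch with assumed field names, not a prescribed schema.

```python
import json
import time
import uuid


def log_agent_step(log_path: str, agent: str, step: dict) -> None:
    """Append one structured record per agent action (JSON Lines),
    so an on-call engineer can grep the trail at 2 a.m."""
    record = {
        "trace_id": step.get("trace_id", str(uuid.uuid4())),
        "timestamp": time.time(),
        "agent": agent,
        "input": step["input"],          # what the agent was asked
        "action": step["action"],        # what it decided to do
        "rationale": step["rationale"],  # the model's stated reasoning
        "output": step["output"],        # what actually happened
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")


log_agent_step("agent_audit.jsonl", "remediation-agent", {
    "input": "checkout latency alert",
    "action": "restart_service(service='checkout')",
    "rationale": "Latency spike correlates with a stuck worker pool.",
    "output": "service restarted; latency recovered",
})
```

JSON Lines keeps the trail greppable under pressure, and the `trace_id` lets you stitch a multi-step run back together after the fact.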
The Governance Reckoning Is Here
The stats tell the story. Widespread deployment is happening *now*, and the regret from early adopters is already setting in. This is the governance reckoning for AI. We raced to adopt the capabilities, and now we have to build the guardrails around them. It’s not about stifling innovation. It’s about making sure that innovation doesn’t break your systems or expose you to massive risk.
The trajectory is clear. AI agents will become more common and more capable. The organizations that succeed won’t be the ones that deploy the fastest. They’ll be the ones that figure out how to measure performance, trace actions, and intervene seamlessly. They’ll balance that awesome autonomy with the boring, essential work of oversight and security. Because in the end, an SRE’s nightmare is just an ungoverned agent away.
