Permission Creep

Permission creep happens when an AI agent gradually accumulates access to more systems and data than it was originally granted. It starts small: a new integration here, an expanded scope there. Before long, the agent operates well beyond its intended boundaries.

This is dangerous because it happens quietly. Without explicit tracking of what each agent can access, nobody notices the expansion until something breaks or sensitive data surfaces where it should not. Traditional access reviews rarely cover AI agents.

Nestr prevents permission creep by assigning agents to roles with explicit domains and policies, just like human team members. Any change to an agent's scope goes through a governance proposal with a full audit trail. You can see exactly what changed, when, and why.
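To make the mechanism concrete, here is a minimal sketch in TypeScript of role-scoped access with governed changes. The names (AgentRole, ScopeChange, applyProposal, and so on) are hypothetical illustrations, not Nestr's actual API: the point is that an agent's scope is explicit data, every widening goes through an approved proposal, and each change lands in an audit trail.

// Hypothetical sketch: role-scoped agent access with governed scope changes.
// Type and function names are illustrative, not Nestr's real API.

interface AgentRole {
  agentId: string;
  domains: string[];   // systems the agent may touch, e.g. "helpdesk"
  policies: string[];  // allowed actions, e.g. "read", "write"
}

interface ScopeChange {
  proposedBy: string;
  reason: string;
  addDomains?: string[];
  addPolicies?: string[];
}

interface AuditEntry {
  timestamp: Date;
  proposedBy: string;
  reason: string;
  before: AgentRole;
  after: AgentRole;
}

const auditTrail: AuditEntry[] = [];

// Every scope change goes through a proposal; nothing mutates a role directly.
function applyProposal(
  role: AgentRole,
  change: ScopeChange,
  approved: boolean
): AgentRole {
  if (!approved) {
    throw new Error(`Scope change for ${role.agentId} rejected: ${change.reason}`);
  }
  const updated: AgentRole = {
    ...role,
    domains: [...new Set([...role.domains, ...(change.addDomains ?? [])])],
    policies: [...new Set([...role.policies, ...(change.addPolicies ?? [])])],
  };
  // Record what changed, when, who asked, and why.
  auditTrail.push({
    timestamp: new Date(),
    proposedBy: change.proposedBy,
    reason: change.reason,
    before: role,
    after: updated,
  });
  return updated;
}

// Example: a support agent gains write access only via an approved proposal.
let support: AgentRole = {
  agentId: "support-bot",
  domains: ["helpdesk"],
  policies: ["read"],
};
support = applyProposal(
  support,
  { proposedBy: "ops-lead", reason: "Agent now drafts ticket replies", addPolicies: ["write"] },
  true
);

Because the role is plain data and changes append to an audit trail, an access review can answer "what can this agent touch, and who approved it?" directly from the records rather than by inspecting live integrations.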