AI Agents and Data Handling
- Colin Levy
- 6 days ago
- 1 min read
In this guide, I explain why AI agents are not ordinary software: autonomous agents decide at runtime which data to access, which tools to invoke, and which actions to take across enterprise systems, creating unpredictable scope and cascading risks that traditional SaaS contracts and security models cannot handle. I also highlight how agents chain tools and APIs, cross system boundaries, and maintain persistent memory, all of which complicates attribution when something goes wrong and raises new questions about who is responsible for agent-initiated actions.
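To make the runtime-decision point concrete, here is a minimal sketch of an agent loop in Python. Every name in it (the `TOOLS` registry, the `plan_next_action` stub, the tool identifiers) is a hypothetical stand-in for whatever framework an enterprise actually deploys; the point it illustrates is that the agent, not the developer, picks the tool and the data scope at runtime.

```python
# Minimal, hypothetical sketch of an agent loop: the sequence of actions
# is chosen at runtime by the model, not fixed in advance by the developer.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    tool: str   # which tool the agent chose to invoke
    args: dict  # arguments the agent chose, e.g. which records to read

# Hypothetical tool registry spanning several enterprise systems.
TOOLS: dict[str, Callable[..., str]] = {
    "crm.read_contacts": lambda **kw: "…contact records…",
    "email.send":        lambda **kw: "sent",
    "files.search":      lambda **kw: "…documents…",
}

def plan_next_action(goal: str, history: list[str]) -> Action | None:
    """Stand-in for an LLM call. In a real agent, the model's output here
    determines scope, so it cannot be fully enumerated at contract time."""
    if not history:
        return Action("files.search", {"query": goal})
    return None  # the model decides it is done

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (action := plan_next_action(goal, history)) is not None:
        result = TOOLS[action.tool](**action.args)    # crosses a system boundary
        history.append(f"{action.tool} -> {result}")  # persistent trace / memory
    return history

print(run_agent("summarize open deals"))
```

Because the branch taken depends on model output, the set of systems touched can differ on every run, which is exactly why the scope clauses in a traditional SaaS contract fail to pin it down.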
I analyze cybersecurity threats unique to AI agents, including prompt injection and indirect prompt injection, data exfiltration through tool use, privilege and semantic privilege escalation, memory poisoning that persists across sessions, MCP-based supply-chain risks, and “excessive agency,” where agents receive more permissions and autonomy than their tasks require. I connect these risks to contractual gaps such as misaligned liability caps, indemnification limits for agent outputs and actions, outdated Data Processing Agreements, and shrinking insurance coverage as carriers add broad AI exclusions.
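As a hedged illustration of the “excessive agency” point, one common mitigation is to gate every tool call through a per-task allow-list rather than granting the agent the full registry. The `TaskGrant` and `guarded_call` names below are inventions for this sketch, not terms from the guide.

```python
# Sketch of a least-privilege guard against "excessive agency": each task
# carries an explicit grant, and any tool call outside that grant is
# refused and logged, which also builds an attribution trail.

import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.guard")

@dataclass
class TaskGrant:
    task_id: str
    allowed_tools: frozenset[str]                   # purpose-locked tool set
    audit: list[str] = field(default_factory=list)  # who did what, and when

class ExcessiveAgencyError(PermissionError):
    pass

def guarded_call(grant: TaskGrant, tool: str, fn, **kwargs):
    if tool not in grant.allowed_tools:
        log.warning("task %s: blocked %s(%s)", grant.task_id, tool, kwargs)
        raise ExcessiveAgencyError(f"{tool} not granted for {grant.task_id}")
    grant.audit.append(f"{tool}({kwargs})")  # record for later attribution
    return fn(**kwargs)

grant = TaskGrant("summarize-deals-42", frozenset({"files.search"}))
guarded_call(grant, "files.search", lambda **kw: "…docs…", query="open deals")
try:
    guarded_call(grant, "email.send", lambda **kw: "sent", to="all@corp.com")
except ExcessiveAgencyError as e:
    print("blocked:", e)
```

A denied call here is also the evidentiary hook the contractual discussion needs: the audit trail shows whether the agent acted within its delegated authority.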
The guide then maps how regulation is converging on AI agents, from the EU AI Act and the NIST AI RMF to OWASP’s LLM and agentic Top 10 lists, state laws like the Colorado AI Act, and GDPR expectations for autonomous data processing. I close with “agent-aware” contracting strategies, including explicit delegation of authority to agents, dynamic data governance and purpose-locked access, cascading liability frameworks, mandatory kill switches and human oversight, agent-specific incident response plans, and risk-transfer provisions tailored to AI agents in enterprise environments.
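To ground the “kill switch and human oversight” provision, here is one possible shape such controls could take in code. The risk threshold and the `requires_human` predicate are illustrative assumptions, not terms from the guide.

```python
# Illustrative sketch of two contract-mandated controls: a global kill
# switch that halts all agent activity, and a human-approval gate for
# actions above an (assumed) risk threshold.

import threading

KILL_SWITCH = threading.Event()  # set by an operator to halt every agent

HIGH_RISK_TOOLS = {"email.send", "payments.transfer"}  # assumed policy

def requires_human(tool: str) -> bool:
    return tool in HIGH_RISK_TOOLS

def execute(tool: str, fn, approve=input, **kwargs):
    if KILL_SWITCH.is_set():
        raise RuntimeError("agent halted by kill switch")
    if requires_human(tool):
        answer = approve(f"Approve {tool}({kwargs})? [y/N] ")
        if answer.strip().lower() != "y":
            return "rejected by human reviewer"
    return fn(**kwargs)

# A low-risk call runs straight through; a high-risk call waits on a person.
print(execute("files.search", lambda **kw: "…docs…", query="open deals"))
print(execute("email.send", lambda **kw: "sent",
              approve=lambda prompt: "n",  # simulated reviewer for the demo
              to="all@corp.com"))
```

Writing the control as code makes the contractual term testable: an auditor can verify that setting the kill switch actually stops execution and that high-risk actions cannot bypass the human gate.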