The emergence of agentic artificial intelligence (AI) systems capable of autonomous planning, execution, and interaction creates unprecedented regulatory challenges. Unlike traditional AI applications that respond to specific prompts, agentic AI systems operate independently, making decisions, pursuing goals, and executing complex tasks without constant human guidance or intervention. For organizations using or developing these advanced AI systems, understanding the evolving legal and regulatory landscape is critical for mitigating significant operational, financial, and reputational risks.
Key Technological Developments
Agentic AI systems possess capabilities that distinguish them from conventional AI applications, including:
- Autonomous Planning: Ability to define actions needed to achieve specified goals.
- Tool Integration: Direct interaction with external systems, tools, and application programming interfaces (APIs).
- Independent Execution: Multi-step task completion without continuous human intervention.
These capabilities represent a qualitative (not merely quantitative) shift in AI functionality. Real-world applications include autonomous financial trading systems that can adjust strategies based on market conditions, supply chain management platforms that independently negotiate with vendors and optimize logistics, and sophisticated customer service agents that can resolve complex issues across multiple systems without human intervention. Each of these applications creates distinct liability profiles that existing legal frameworks can struggle to address.
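To make these capabilities concrete, the following minimal sketch illustrates the plan-act loop common to agentic architectures. The tool names (`check_inventory`, `place_order`), the hard-coded planner, and the inventory goal are hypothetical illustrations, not any vendor's actual implementation; in deployed systems, the planning step is typically produced by a large language model.

```python
# Minimal sketch of an agentic plan-act loop (hypothetical tools and logic).
# Unlike prompt-response AI, the system decomposes a goal into steps and
# executes them against external systems without per-step human approval.

def check_inventory(item: str) -> int:
    """Hypothetical external tool: query a warehouse system."""
    return {"widgets": 12}.get(item, 0)

def place_order(item: str, qty: int) -> str:
    """Hypothetical external tool: call a vendor ordering API."""
    return f"ordered {qty} x {item}"

TOOLS = {"check_inventory": check_inventory, "place_order": place_order}

def plan(goal: str) -> list[tuple[str, dict]]:
    """Hypothetical planner: decompose a goal into tool calls.
    In a real agent, this step is usually generated by a language model."""
    return [
        ("check_inventory", {"item": "widgets"}),
        ("place_order", {"item": "widgets", "qty": 100}),
    ]

def run_agent(goal: str) -> list[str]:
    results = []
    for tool_name, args in plan(goal):      # autonomous planning
        tool = TOOLS[tool_name]             # tool integration
        results.append(str(tool(**args)))   # independent execution
    return results

print(run_agent("maintain widget stock above 100 units"))
```

Even in this toy form, the loop shows why the liability questions discussed below arise: the order is placed pursuant to the system's own plan, not a human's approval of each step.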
Enhanced Opacity Challenges
The “black box” problem already makes traditional AI systems difficult to explain, and agentic systems can significantly amplify these concerns. The NIST AI Risk Management Framework distinguishes between explainability (understanding how an AI system works) and interpretability (understanding what an AI system's output means in context); both support oversight of AI, and the framework explains how their absence can directly contribute to negative risk perceptions.
Agentic systems present particular opacity challenges:
- Complex, multi-step reasoning processes can obscure decision pathways.
- External system interactions introduce variables that can go beyond original design parameters.
- Autonomous planning capabilities can produce outcomes deviating from those initial parameters.
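One partial mitigation, offered here only as an illustrative sketch rather than a regulatory requirement, is step-level audit logging: recording each planned action, its inputs, the system's stated rationale, and the outcome, so that multi-step decision pathways can be reconstructed after the fact. The field names below are assumptions for illustration.

```python
import json
import time

def audit_log(record: dict, path: str = "agent_audit.jsonl") -> None:
    """Append one structured record per agent step, so multi-step
    decision pathways can be reconstructed later. Field names are
    illustrative, not a prescribed schema."""
    record["timestamp"] = time.time()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example record for one step of a hypothetical agent run:
audit_log({
    "run_id": "run-001",
    "step": 1,
    "tool": "place_order",
    "args": {"item": "widgets", "qty": 100},
    "rationale": "inventory (12) below target (100)",  # planner-supplied
    "result": "ordered 100 x widgets",
})
```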
Liability Framework Implications
In July 2024, the US District Court for the Northern District of California held that Workday, a provider of AI-driven applicant screening tools, could be considered an “agent” of its clients (the ultimate employers of successful applicants). The decision underscores the evolving legal landscape surrounding AI and the responsibilities of AI service providers whose tools directly influence hiring decisions. It also directly relates to agentic AI in several ways: (1) employers delegated traditional hiring functions to Workday's AI tools; (2) the AI tools played an active role in hiring decisions rather than merely implementing employer-defined criteria; and (3) by deeming Workday an “agent,” the court created the potential for direct liability for AI vendors.
The Workday decision, while specific to employment screening, serves as a crucial precedent highlighting how existing legal principles such as agency can be applied to AI systems. It underscores the additional liability concerns associated with AI systems, beginning with the potential for direct liability for AI vendors. Given the even broader capabilities of agentic AI, the liability considerations become more complex and multifaceted, presenting challenges in areas such as product liability, vicarious liability, and proximate causation.
Cross-jurisdictional deployment of agentic AI systems further complicates liability determination. When an autonomous system operating from servers in one jurisdiction makes decisions affecting parties in multiple other jurisdictions, questions of which legal framework applies become particularly relevant. This is especially problematic for agentic financial trading systems or global supply chain management platforms that operate across multiple regulatory regimes simultaneously.
Current Regulatory Environment
While the United States lacks comprehensive federal legislation specifically addressing AI (not to mention agentic AI), several frameworks are relevant:
- State-Level Initiatives: Colorado's AI Act, enacted in May 2024, applies to developers and deployers of “high-risk AI systems,” focusing on automated decision-making in employment, housing, healthcare, and other critical areas. The current political environment, however, creates additional regulatory uncertainty: the US House of Representatives has passed legislation containing a 10-year moratorium on state-level AI regulation, which could eliminate state-level innovation in AI governance during the most critical period of agentic AI development. This uncertainty underscores the urgency for organizations to implement proactive governance frameworks rather than waiting for clear regulatory guidance.
- International Frameworks: The EU AI Act does not specifically address AI agents, but system architecture and task breadth may increase risk profiles. Key provisions, including prohibitions on certain AI practices deemed unacceptable (due to their potential for harm and infringement of fundamental rights) and AI literacy requirements, became applicable in February 2025.
- Federal Guidance: NIST released its “Generative AI Profile” in July 2024 and has identified explainability and interpretability guidance as priorities for connecting AI transparency to risk management.
Human Oversight Considerations
The requirement for human oversight may be inherently incompatible with agentic AI systems, which by definition are designed to act on their own to achieve specified goals. For agentic systems, meaningful human control might require pre-defined boundaries and kill switches rather than real-time oversight, but this approach may fundamentally limit the autonomous capabilities that make these systems valuable. The result is tension between regulatory requirements for meaningful human control and the operational value of system autonomy.
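A minimal sketch of what “pre-defined boundaries plus a kill switch” might look like in practice appears below; the limits, names, and mechanism are illustrative assumptions, not a prescribed compliance standard.

```python
# Sketch of "pre-defined boundaries plus a kill switch" in place of
# real-time human oversight. All limits and names are illustrative.

import threading

KILL_SWITCH = threading.Event()   # operators can set this at any time
MAX_ORDER_QTY = 500               # pre-defined boundary on one action type

class BoundaryViolation(Exception):
    pass

def guarded_place_order(item: str, qty: int) -> str:
    if KILL_SWITCH.is_set():
        raise BoundaryViolation("kill switch engaged; agent halted")
    if qty > MAX_ORDER_QTY:
        raise BoundaryViolation(f"order of {qty} exceeds limit of {MAX_ORDER_QTY}")
    return f"ordered {qty} x {item}"

# Within bounds: the action executes autonomously.
print(guarded_place_order("widgets", 100))

# Out of bounds: the action is halted, with no human in the loop at decision time.
try:
    guarded_place_order("widgets", 10_000)
except BoundaryViolation as e:
    print(f"blocked: {e}")
```

The trade-off described above is visible even in this toy version: the tighter the boundary, the less the agent can accomplish autonomously.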
Strategic Implementation Recommendations
Organizations considering agentic AI deployment should address several key areas:
- Contractual Risk Management: Implement clear provisions addressing AI vendor indemnification for autonomous decisions, particularly those causing harm or violating laws and regulations.
- Insurance Considerations: Explore specialized cyber and technology insurance products given the nascent state of insurance markets for agentic AI risks. Coverage gaps are likely to persist, however, until the market matures (e.g., traditional cyber policies may not cover autonomous decisions that cause financial harm to third parties).
- Governance Infrastructure: Establish oversight mechanisms balancing system autonomy with accountability, including real-time monitoring, defined intervention points, and documented decision authorities (a sketch of one such intervention point follows this list).
- Compliance Preparation: Monitor California's proposed Automated Decision-Making Technology (ADMT) regulations, which would require cybersecurity audits and risk assessments and suggest that similar requirements may emerge for agentic systems.
- Cross-Border Risk Assessment: Develop frameworks for managing liability and compliance when agentic systems operate across multiple jurisdictions, including clear protocols for determining applicable law and regulatory authority.
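As referenced in the governance item above, a defined intervention point can be expressed as a simple threshold gate: actions below a documented risk limit execute autonomously, while actions above it pause for human approval. The threshold value, field names, and approval mechanism below are illustrative assumptions, not a recommended standard.

```python
# Sketch of a "defined intervention point": low-exposure actions run
# autonomously; high-exposure actions escalate for human approval.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    estimated_exposure_usd: float

APPROVAL_THRESHOLD_USD = 10_000.0  # documented decision authority (illustrative)

def requires_human_approval(action: ProposedAction) -> bool:
    return action.estimated_exposure_usd >= APPROVAL_THRESHOLD_USD

def dispatch(action: ProposedAction, approve) -> str:
    if requires_human_approval(action):          # defined intervention point
        if not approve(action):
            return f"escalated and rejected: {action.description}"
        return f"executed with approval: {action.description}"
    return f"executed autonomously: {action.description}"

# Usage: a low-exposure action runs on its own; a high-exposure one escalates.
print(dispatch(ProposedAction("renew SaaS license", 1_200.0), approve=lambda a: True))
print(dispatch(ProposedAction("sign vendor contract", 250_000.0), approve=lambda a: False))
```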
Looking Ahead
The intersection of autonomous decision-making and system opacity represents uncharted regulatory territory. Organizations that proactively implement robust governance frameworks, appropriate risk allocation, and careful system design will be better positioned as regulatory frameworks evolve.
The unique challenges posed by agentic AI systems represent a fundamental shift that will likely expose critical limitations in existing governance frameworks. Unlike previous AI developments that could be managed through incremental regulatory adjustments, agentic AI's autonomous capabilities may require entirely new legal and regulatory paradigms. Organizations should engage legal counsel early in agentic AI planning to navigate these emerging risks effectively while maintaining compliance with evolving regulatory requirements.