Kilpatrick’s Starling Underwood recently presented “The Governance of Innovation: Managing New AI, IP, and Regulatory Frameworks” at the firm’s annual “Ethics, Professional Well-Being, and Technology Seminar.” He was joined by fellow thought leader Keith Robinson, Associate Dean of Research and Professor of Law at Wake Forest University.
Starling’s key takeaways from the discussion include:
1. AI Demands Systemic Management, Not Routine Control
AI systems are fundamentally different from traditional legal tools: they are probabilistic and dynamic, and their behavior evolves over time. Legal professionals must therefore shift from simply “controlling” tools to managing complex, changing systems that require ongoing oversight and adjustment.
2. AI Introduces Diverse and Significant Risks
AI introduces a range of risks: legal (such as regulatory non-compliance and discrimination), operational (such as system failures and model drift), and risks to privacy, security, the business, and even the environment. Organizations must systematically map, measure, and manage these risks.
3. Human Oversight and Accountability Are Non-Negotiable
AI should never operate without meaningful human oversight and clear lines of responsibility. Humans must remain accountable for critical decisions to ensure both ethical practice and legal defensibility.
4. Culture Is the Ultimate Safeguard for Responsible AI
A culture of responsible AI use is essential. Organizations should provide ongoing training, recognize and reward responsible behavior, and build trust with stakeholders and customers.