
Legal & Regulatory Compliance Specialist · Independent Reviewer
Judith C McKee is a legal and regulatory compliance specialist with more than ten years of experience advising technology companies on data protection law, AI-specific regulation, and corporate governance frameworks. With deep expertise across GDPR, the EU AI Act, and emerging national AI legislation, Judith provides independent expert review of AI Policy Desk content to ensure accuracy, regulatory currency, and practical applicability. Her reviews verify that every template, checklist, and guide reflects current legal standards and is appropriate for the jurisdictions and team sizes it targets. Judith brings a practitioner's eye to compliance content — cutting through regulatory complexity to confirm what small teams actually need to know and act on.
13 articles reviewed by Judith C McKee
Federal agencies are using existing statutes to police AI conduct while state AGs and private plaintiffs fill the gaps. A new Morgan Lewis analysis identifies four high-exposure areas for 2026. Here is what small teams need to know.
A landmark 2026 case established that AI training on copyrighted books is fair use — but training on pirated copies is not. The practical implication for small teams is that data provenance is now a legal risk control, not just a best practice.
Colorado SB 24-205 takes effect June 30, 2026. With under 90 days left, here is exactly what developers and deployers of high-risk AI must have in place — and what the AG's enforcement posture means for small teams.
The EU Digital Omnibus proposes pushing the high-risk AI compliance deadline to late 2027, but trilogue is live and the August 2026 deadline still stands until a deal is signed. Here is what small teams must do right now.
The White House and Senate are pushing to override state AI laws with a single federal framework. Until that happens, Colorado, California, Texas and a dozen other states remain live obligations. Here is how to stay compliant through the transition.
The SEC's FY2026 examination priorities embed AI oversight into every exam category, not just tech reviews. Financial services teams using AI in investment, compliance, or operations need documented governance today — before an examiner asks for it.
Choosing an AI developer tool without vendor security questions invites supply-chain surprises. Use these ten prompts — build hygiene, disclosure culture, telemetry, SBOM — before you standardise an AI developer tool across your team.
An AI vendor security incident on a tool you depend on is still your governance problem. This practical guide helps small teams assess scope, rotate credentials, update threat models, and document decisions — without waiting for confirmed harm.
Hidden AI features often ship behind feature flags — without clear notice. Here is what that means for policy, vendor contracts, and small-team governance, and how to close the documentation gap.
Anthropic's Claude Code source code was accidentally exposed via a .map file in their npm registry. Here's what the leak reveals and what small teams should learn about AI vendor security and governance.
A practical guide for small teams on when you become a “modifier” under the EU AI Act, how to classify AI systems, and what controls to add without slowing delivery.
Learn how small teams in staffing businesses can navigate the EU AI Act's requirements with right-sized AI governance, ensuring compliance and responsible AI adoption.