AI and cybersecurity have displaced cryptocurrency as the dominant risk topic in the SEC Division of Examinations' FY2026 priorities. For the first time, AI governance scrutiny is embedded across every examination category rather than siloed into technology reviews, meaning your firm's AI use in investment recommendations, AML screening, fraud detection, and back-office automation is all in scope.
This is not a future obligation. SEC examinations using the 2026 priorities are active now.
Key Takeaways
- The SEC's FY2026 examination priorities embed AI oversight into every exam category — investment advisers, broker-dealers, transfer agents, and market infrastructure firms all face AI-specific scrutiny.
- Examiners will ask whether AI-driven investment recommendations align with fiduciary and suitability standards. If you cannot show how you supervise AI outputs, you have a finding.
- Written supervisory procedures (WSPs) must now explicitly address AI tools used in compliance, trading, AML, fraud detection, and client-facing functions.
- Representations about AI capabilities — in marketing, ADV filings, or client agreements — must be accurate. Overstating AI involvement or performance is a specific examination target.
- Small firms are not exempt. There is no de minimis threshold in the exam priorities.
Summary
The SEC's approach in 2026 reflects a maturation of regulatory thinking: AI is no longer a novel technology risk to be handled in a tech-specific section of an exam. It is a pervasive operational and compliance risk that examiners will probe across every function. For financial services teams that have deployed AI tools without building corresponding governance documentation, this is a clear signal to act.
The good news: the governance practices the SEC expects — documented supervision, ongoing monitoring, clear override authority — are the same practices that make AI tools safer and more reliable in practice. Getting exam-ready and running better AI operations are the same exercise. The broader US regulatory picture, including federal AI preemption proposals and active state laws, compounds the compliance burden for firms operating across multiple jurisdictions.
What the FY2026 Examination Priorities Say About AI
The SEC Division of Examinations published its FY2026 priorities in late 2025, effective for all examinations conducted during the fiscal year. The key AI-related language:
Investment advisers:
- AI tools used to generate investment recommendations are subject to the same fiduciary analysis as human recommendations — advisers must demonstrate that AI recommendations are in clients' best interests
- Firms must have written supervisory procedures that describe how AI recommendations are reviewed before implementation
- Conflicts of interest in AI tool selection (e.g., using a tool that generates higher-fee product recommendations) are an examination focus
- Performance claims about AI tools in marketing or client materials will be tested for accuracy
Broker-dealers:
- AI use in order routing, trade execution, and market-making must be described in firm supervisory procedures
- Suitability analysis for AI-assisted product recommendations must be documented
- Best execution obligations extend to AI-driven order flow decisions
Compliance and operations:
- AML and KYC screening tools using AI require documented oversight, including false positive/negative rate monitoring
- Fraud detection AI models require ongoing performance validation
- Back-office AI automation (reconciliation, reporting, settlement) is in scope if failures could affect customer assets or regulatory reporting
Representations about AI:
- Form ADV disclosures about AI use must be accurate and complete
- Marketing materials claiming AI-driven alpha, AI-powered analysis, or AI-enhanced services will be tested against actual operational practices
- Firms that use AI in client-facing workflows without disclosing it face additional scrutiny
Why This Matters for Small Teams
Three patterns make small financial services firms particularly vulnerable to AI examination findings:
Shadow AI adoption. A portfolio manager starts using an AI tool to screen securities. The tool is not on the firm's approved list, there is no supervisory procedure for it, and its outputs feed into client recommendations. This is a textbook examination finding — not because the AI gave bad advice, but because the firm cannot demonstrate it was supervising the advice. The fix is to detect and document shadow AI use, including hidden AI features inside already-approved tools, before an examiner does.
Vendor AI without oversight. A compliance team uses a third-party AML screening platform that has integrated AI into its transaction monitoring. The firm's WSPs describe the platform but not the AI component — its training data, known error rates, or override procedures. Examiners will ask.
Capability misrepresentation. A small RIA markets its "AI-driven portfolio construction" in client materials. The firm uses a rules-based screening tool with a basic recommendation engine. The gap between the marketing language and the actual tool sophistication is an examination risk.
Governance Goals
For a financial services team getting exam-ready under the FY2026 priorities, governance should produce these outcomes:
- Complete AI inventory: every AI tool used in any client-facing, compliance, trading, or operational context is documented with ownership, use case, and examination relevance
- Written supervisory procedures for each tool: describing oversight, monitoring, error handling, and override authority
- Fiduciary and suitability review process for AI recommendations: documented procedure showing how AI outputs are reviewed before implementation
- Accurate AI representations: marketing materials, ADV filings, and client agreements reviewed and updated to match actual AI practices
- Ongoing monitoring: performance metrics defined, reviewed on a schedule, and documented for each AI tool in scope
Risks to Watch
The "we just use a vendor" defense does not work. The SEC expects you to supervise what you deploy, regardless of who built it. Using a third-party AI tool does not transfer supervisory responsibility to the vendor. Before selecting an AI tool for any regulated function, work through an AI vendor security and governance checklist to document what oversight your vendor enables.
Suitability chain breaks. If an AI tool recommends a product that generates higher fees for the firm, and the firm does not document how it manages that conflict, examiners will treat it as an undisclosed conflict of interest — the same way they treat a human adviser's conflict.
AML model decay. AI-based AML screening models trained on historical data can degrade as patterns change. Firms that deployed a model and have not reviewed its performance since deployment face both examination risk and real AML effectiveness risk.
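A minimal decay check can make this concrete, assuming the firm logs false positive rates over time. All numbers and thresholds below are illustrative, not regulatory guidance:

```python
# Sketch: flag AML model decay by comparing the recent false-positive rate
# against the rate measured at deployment.
def false_positive_rate(false_positives: int, total_negatives: int) -> float:
    return false_positives / total_negatives if total_negatives else 0.0

def decay_alert(baseline_fpr: float, recent_fpr: float,
                tolerance: float = 0.25) -> bool:
    """True if the recent rate drifted more than `tolerance` (relative)
    from the deployment baseline. The 25% default is an assumption."""
    if baseline_fpr == 0:
        return recent_fpr > 0
    return abs(recent_fpr - baseline_fpr) / baseline_fpr > tolerance

baseline = false_positive_rate(40, 1000)   # 4.0% at deployment
recent = false_positive_rate(72, 1000)     # 7.2% this month
print(decay_alert(baseline, recent))       # True: trigger revalidation
```

A drift alert like this does not diagnose the cause; it tells you when the "reviewed its performance since deployment" obligation has come due.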
Documentation exists only on paper. WSPs describing AI oversight that does not actually happen in practice are worse than no procedures at all: they document a knowing failure. Only write procedures you actually follow.
Controls: What to Actually Do
This week:
- Complete an AI tool inventory covering every tool used in client advisory, trading, compliance, and operations. Include third-party platforms with AI components, not just standalone AI tools.
- Identify which tools directly touch the five highest-risk examination categories: investment recommendations, AML/KYC, fraud detection, order routing, and client disclosures.
This month:
- Review existing written supervisory procedures. For every AI tool identified above, verify there is a WSP that describes: who reviews outputs, what monitoring occurs, what triggers escalation or override, and who owns the tool.
- Pull ADV Part 2A and marketing materials. Review all AI-related claims for accuracy. Engage compliance counsel if any representations appear overstated.
- Define monitoring metrics for each high-risk AI tool: false positive rate for AML, recommendation acceptance rate for advisory AI, execution quality metrics for trading AI.
This quarter:
- Conduct a supervisory review of each AI tool against its WSP. Document findings and any corrective actions.
- Test your override and escalation procedures — if the AML AI flags a transaction incorrectly, can your team actually override it and document the decision? Run a tabletop.
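One concrete artifact the tabletop should exercise is the override record itself. A sketch of an auditable override log entry follows; the function and field names are hypothetical and should map to your firm's case-management system:

```python
import json
from datetime import datetime, timezone

def log_override(tool: str, flagged_item: str, decision: str,
                 reviewer: str, rationale: str) -> str:
    """Record a human override of an AI flag as one auditable JSON line.
    Field names are illustrative, not a mandated schema."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "flagged_item": flagged_item,
        "decision": decision,        # e.g. "override_cleared" or "escalated"
        "reviewer": reviewer,
        "rationale": rationale,
    }
    return json.dumps(entry)

# Tabletop scenario: the AML AI flags a wire incorrectly; the analyst
# clears it and documents why.
line = log_override("AML-Watch", "wire-20260214-0031", "override_cleared",
                    "A. Smith", "Counterparty verified; recurring payroll transfer")
print(line)
```

The tabletop passes only if a record like this is actually produced and retrievable, with the reviewer and rationale filled in, not just if someone says "we would override it."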
- Prepare an AI governance summary document suitable for presenting to examiners: tool inventory, WSP status, monitoring results, and any known issues and remediation.
Checklist (Copy/Paste)
- Complete AI tool inventory — client-facing, compliance, trading, and operations
- Map each tool to SEC examination categories (investment recommendations, AML, fraud, order routing, disclosures)
- Verify WSP exists for every in-scope AI tool
- Review WSPs: do they describe real oversight that actually happens?
- Audit ADV Part 2A and marketing materials for accurate AI representation
- Define and document performance monitoring metrics for each high-risk AI tool
- Conduct supervisory review of each AI tool against its WSP; document findings
- Test override and escalation procedures with a tabletop exercise
- Prepare examiner-ready AI governance summary document
- Schedule quarterly AI governance review cadence
Implementation Steps
- Day 1: Pull a list of all software tools used by the investment team, compliance team, operations, and trading. Flag anything with AI, ML, or "intelligent" in its description. Add tools you know have AI but do not advertise it (Bloomberg PORT, Orion, Riskalyze/Nitrogen, etc.).
- Week 1: For each flagged tool, document: what it does, where its outputs go, who reviews those outputs, and whether a WSP covers it.
- Week 2: Write or update WSPs for the top 3-5 highest-risk gaps. Focus on investment recommendation AI first, then AML/KYC.
- Week 3: Review ADV and marketing materials. Fix any capability claims that are not operationally accurate.
- Month 2: Establish a monitoring cadence: monthly review of performance metrics for high-risk AI tools; quarterly supervisory review of each tool against its WSP.
- Before any exam: Prepare a one-page AI governance summary: tool count, WSP status, last supervisory review date, and known issues. Examiners will find this useful and it signals a mature governance posture.
Frequently Asked Questions
Q: We use AI through Bloomberg or a similar platform. Does that count as AI use for SEC purposes? A: Yes. If you use AI-assisted analytics, screening tools, or recommendations from any platform in your investment process, that use is in scope for examination. You do not need to build the AI yourself to be responsible for supervising it.
Q: What should our WSP for an AI tool actually say? A: At minimum: (1) the name and function of the tool, (2) the name of the person responsible for oversight, (3) how outputs are reviewed before action is taken, (4) what monitoring metrics are tracked and how often, (5) what triggers escalation or override, and (6) how decisions and overrides are documented.
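The six minimum elements above can be enforced mechanically when WSPs are kept as structured records. A sketch, with illustrative key names rather than any SEC-mandated schema:

```python
# Sketch: check a WSP record against the six minimum elements listed above.
REQUIRED_WSP_FIELDS = {
    "tool_name_and_function",
    "oversight_owner",
    "output_review_process",
    "monitoring_metrics_and_frequency",
    "escalation_and_override_triggers",
    "documentation_of_decisions",
}

def wsp_gaps(wsp: dict) -> set[str]:
    """Return any minimum elements missing from a WSP record."""
    return REQUIRED_WSP_FIELDS - set(wsp)

draft = {
    "tool_name_and_function": "ScreenerX: securities screening",
    "oversight_owner": "J. Doe, CCO",
    "output_review_process": "PM reviews all recommendations before trading",
    "monitoring_metrics_and_frequency": "acceptance rate, reviewed monthly",
}
print(sorted(wsp_gaps(draft)))
# → ['documentation_of_decisions', 'escalation_and_override_triggers']
```

Whether the WSP lives in a document management system or a spreadsheet, a completeness check like this is cheap to run before every supervisory review.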
Q: Our ADV was filed before we started using AI tools. Do we need to amend it? A: If your AI use materially affects how you provide advisory services, construct portfolios, or generate recommendations, and that is not disclosed in your ADV, you likely need an amendment. Engage compliance counsel to assess whether a material change disclosure is required.
Q: We are a two-person RIA. Do these priorities really apply to us? A: Yes. There is no firm-size threshold in the examination priorities. A small firm using an AI portfolio construction tool in client recommendations has the same supervisory obligation as a large firm. The practical difference is scale, not obligation.
Q: What happens if we have a finding about AI governance in an exam? A: Findings range from observations (flagged but not formally cited) to deficiency letters requiring written remediation plans. Serious or repeated violations can escalate to enforcement referrals. The best outcome from a finding is demonstrating a credible remediation plan — which requires the same documentation practices you should already have in place.
References
- SEC FY2026 Division of Examinations Priorities (Harvard Law School Forum on Corporate Governance): https://corpgov.law.harvard.edu/2026/01/04/2026-sec-division-of-examinations-priorities/
- SEC Releases 2026 Examination Priorities — Consumer Finance and Fintech Blog: https://www.consumerfinanceandfintechblog.com/2025/12/sec-releases-2026-examination-priorities-highlighting-compliance-information-security-and-emerging-technology/
- NIST AI Risk Management Framework — Govern and Measure functions: https://www.nist.gov/system/files/documents/2023/01/26/AI%20RMF%201.0.pdf
- OECD AI Principles — Accountability and transparency: https://oecd.ai/en/ai-principles
- SEC Form ADV — General Instructions Part 2A: https://www.sec.gov/form/form-adv