AI Policy Desk · Guides

Governing Embedded AI in Third-Party Tools

When your SaaS tools ship with AI features built in — Notion AI, Copilot, HubSpot AI, Zoom AI — your team is using AI whether you approved it or not.

The Invisible AI Problem

When your team adopted ChatGPT, someone made a decision. When Notion added an AI writing assistant to your existing workspace, it appeared one morning and was available by default to everyone with a login.

This is the embedded AI problem. The tools your team already uses are quietly becoming AI systems, and most small teams' governance processes — which focus on standalone AI tool requests and approvals — are not designed to catch it.

Embedded AI is any AI capability added to a software product you already use. It includes:

- A writing assistant appearing in your docs or wiki tool (Notion AI)
- Copilot features across your office suite (Microsoft 365, Google Workspace)
- Meeting summaries and transcripts in your video platform (Zoom AI Companion)
- Suggested replies and ticket routing in your helpdesk (Intercom, Zendesk)
- Content drafting and lead scoring in your CRM (HubSpot AI)

Every one of these represents an AI processing activity that may touch personal data, customer information, or confidential IP — and it is happening whether or not you have approved it.


Why Existing Governance Processes Miss It

Most AI governance processes are approval-driven: someone requests a new tool, it goes through a procurement or security review, and it either gets approved or not.

Embedded AI bypasses this entirely. The tool is already approved. The AI feature was added by the vendor. The employee enabling it is not requesting a new tool — they are clicking a button in software they already use every day.

The governance signals that catch standalone AI tool adoption — new vendor charges, new SSO connections, IT discovering a new domain — do not fire for embedded AI features.


An Inventory Problem First

Before you can govern embedded AI, you need to know where it exists. Go through your AI tool register and for every SaaS tool on the list, add a column: "AI features present?"

Then do a sweep. For each tool your team uses:

  1. Log in as an admin
  2. Check settings for any "AI," "Copilot," "Assistant," or "Summarize" features
  3. Check whether those features are on or off by default
  4. Check what data they process and where it goes

You will find AI features in places you did not expect. This exercise regularly surfaces 5–10 AI processing activities that teams had no visibility into.
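The sweep above is easy to track alongside your register. Here is a minimal sketch, assuming the register is kept as a list of records; the shape and field names (`ai_features`, `enabled_by_default`) are illustrative, not a schema from any real tool.

```python
# Minimal AI-feature sweep over a SaaS tool register.
# The register shape, field names, and entries are illustrative assumptions.
tools = [
    {"name": "Notion", "ai_features": ["Notion AI"], "enabled_by_default": True},
    {"name": "Zoom", "ai_features": ["AI Companion"], "enabled_by_default": False},
    {"name": "Figma", "ai_features": [], "enabled_by_default": False},
]

# Flag every tool with any embedded AI feature, default-on features first,
# since the team may already be using those.
review_queue = sorted(
    (t for t in tools if t["ai_features"]),
    key=lambda t: not t["enabled_by_default"],
)
for t in review_queue:
    state = "default ON" if t["enabled_by_default"] else "default off"
    print(f'{t["name"]}: {t["ai_features"]} ({state})')
```

Sorting default-on features to the top mirrors the point below: a feature the team is already using needs a different response than one nobody has touched.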


The Governance Questions for Each Embedded AI Feature

Once you have identified the features, evaluate each one:

Is it enabled by default?

If yes, your team may already be using it. Determine current usage before disabling — a feature used heavily by 20 people needs a different response than one nobody has touched.

What data does it process?

Meeting AI processes conversation audio and transcripts. CRM AI processes customer and deal data. Writing AI processes whatever document is open. Map each feature to the data types in your classification scheme.

Does it introduce new subprocessors?

Vendors often power their AI features with a foundation model from a third party (OpenAI, Anthropic, AWS Bedrock, etc.). That third party is a new subprocessor that may not have appeared in the original DPA. Under GDPR Article 28, the vendor must inform you of new subprocessors and give you the opportunity to object.

Action: When an embedded AI feature is identified, ask the vendor: "What subprocessors power this feature, and are they covered by our existing DPA?"

Is it covered by your existing DPA?

Many DPAs pre-date the AI features they are now supposed to cover. Check whether your DPA has been updated or whether the vendor has issued an AI-specific addendum.

Can you disable it at the admin level?

If the feature processes data you are not comfortable with, can you turn it off? On which plans? Check this before assuming you can control it.


Common Embedded AI Scenarios

Microsoft 365 Copilot

Microsoft 365 Copilot is sold as a paid add-on to Microsoft 365 Business and Enterprise plans and works across Word, Excel, Outlook, Teams, and SharePoint. It processes the content of documents, emails, and meetings.

Governance actions:

- Confirm which users hold Copilot licenses and treat that list as the scope of exposure
- Review sharing permissions first: Copilot surfaces whatever a user's existing permissions allow, so over-shared SharePoint sites become AI-accessible content
- Confirm your Microsoft agreement and DPA cover Copilot's processing before a broad rollout

Google Workspace AI Features

Google has added AI features across Gmail (Smart Compose, Gemini drafting), Docs, Slides, and Meet. These are being expanded continuously.

Governance actions:

- Review the AI feature controls available in the Google Admin console and decide which features stay on
- Check which features are on by default for your plan and record the decision in your tool register
- Recheck on a regular cadence, since Google expands these features continuously

Notion AI

Notion AI can summarize pages, draft content, and analyze data within your workspace. It is powered by third-party models and processes whatever content it is invoked on.

Governance actions:

- Ask Notion which third-party models power the feature and whether they appear in your DPA's subprocessor list
- Map the workspace content it can reach against your data classification scheme
- Check whether your plan lets admins disable Notion AI workspace-wide before assuming you can control it

Zoom AI Companion

Zoom AI Companion provides meeting summaries, next-action extraction, and conversation analysis. It processes audio and transcript data from calls.

Governance actions:

- Decide whether meeting audio and transcripts are data you are comfortable having processed, especially on calls with customers or external parties
- Check the account-level admin controls for enabling or disabling AI Companion
- Make sure participants know when summaries are being generated; consent rules for recording and transcription may apply

Helpdesk AI (Intercom, Zendesk)

Helpdesk AI processes customer messages to suggest replies, draft responses, or route tickets. It touches customer personal data directly.

Governance actions:

- Because these features process customer personal data directly, confirm DPA and subprocessor coverage before enabling them
- Decide whether AI-drafted replies require human review before they reach customers
- Check which features (reply suggestions, drafting, routing) can be toggled individually at the admin level


A Decision Framework

For each embedded AI feature you identify, make one of three decisions:

Approve with controls: The feature aligns with your data policy, the DPA covers it, and you are comfortable with the processing. Enable it with any available configuration (opt-outs, data boundaries) applied. Document it in the tool register.

Approve pending review: You want to allow it but need to confirm DPA coverage or configure data settings first. Put a 30-day deadline on the review. Do not allow use until it is complete.

Disable: The feature processes data at a classification level you are not comfortable sending to this vendor, or DPA coverage cannot be confirmed. Use the admin panel to disable it. Communicate the decision to the team.
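The three-way call above can be sketched as a small helper. This reduces the review to two inputs, which is an assumption for illustration: whether the data classification is acceptable for this vendor, and DPA status as covered, confirmed gap, or not yet confirmed. Real reviews weigh more factors.

```python
def decide(data_ok: bool, dpa_status) -> str:
    """Map review inputs to one of the three framework decisions.

    data_ok:    data classification is acceptable to send to this vendor
    dpa_status: True = DPA covers the feature, False = confirmed gap,
                None = coverage not yet confirmed
    """
    if not data_ok or dpa_status is False:
        return "Disable"
    if dpa_status is None:
        return "Approve pending review"  # set a 30-day deadline on the review
    return "Approve with controls"

print(decide(True, True))   # covered and data acceptable
print(decide(True, None))   # allow only after confirming coverage
print(decide(False, True))  # data classification rules it out
```

Note the distinction encoded here: an unconfirmed DPA maps to "Approve pending review", while a confirmed coverage gap maps straight to "Disable".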


Adding Embedded AI to Your Governance Cadence

Embedded AI governance is not a one-time audit. Vendors add AI features on their own release schedules — often quarterly.

Add these to your AI governance operating rhythm:

- A quarterly sweep of every tool in your register for new AI features, repeating the inventory exercise above
- A review of vendor release notes and DPA or addendum updates for AI announcements
- A standing agenda item in your governance review: which existing tools added AI features since last time?

The teams most exposed to embedded AI risk are the ones that reviewed their AI tools once, documented their findings, and never looked again. Your SaaS vendors are shipping new AI features every quarter. Your governance needs to keep pace.