A mathematical framework for adding carefully calibrated statistical noise to data or query results, so that no individual record can be reliably inferred from the output. A system satisfies differential privacy if its outputs are nearly identical whether or not any single person's data is included — providing a formal, quantifiable privacy guarantee. Differential privacy is used in AI to allow training on sensitive datasets (health, financial, location) while limiting what can be learned about any individual contributor to that dataset.
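The classic way to realize this guarantee for a numeric query is the Laplace mechanism: add noise drawn from a Laplace distribution whose scale is the query's sensitivity divided by the privacy parameter epsilon. A minimal sketch for a counting query, assuming plain Python (the function names here are illustrative, not from any particular library):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sample from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing any single
    # record changes the true count by at most 1, so Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy for this query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means stronger privacy but noisier answers; production systems (and DP training methods such as DP-SGD) build on this same calibrated-noise idea.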
Why this matters for your team
Full differential privacy implementation is technically demanding — it requires expertise in setting the right privacy budget (epsilon) and often degrades model accuracy. Look for vendors that offer differential privacy as a built-in feature rather than building it yourself. It matters most when training on health, financial, or employee data.
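The privacy-accuracy trade-off above can be made concrete: for a sensitivity-1 query, the Laplace mechanism's expected absolute error is exactly 1/epsilon, so a tenfold tighter privacy budget means roughly tenfold noisier answers. A small empirical sketch in plain Python (function names are illustrative):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sample from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def mean_abs_error(epsilon: float, trials: int = 2000) -> float:
    # For a sensitivity-1 query the Laplace mechanism uses scale
    # 1/epsilon, so the expected absolute error is 1/epsilon.
    return sum(abs(laplace_noise(1.0 / epsilon)) for _ in range(trials)) / trials

for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: mean |error| ~ {mean_abs_error(eps):.2f}")
```

Running this shows error near 10 at epsilon=0.1 but near 0.1 at epsilon=10 — which is why choosing the budget is the hard part.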
A company trains a salary prediction model using differential privacy, so that no individual employee's salary can be reverse-engineered from the model's outputs, even by someone with access to most of the training data.