A Guide to AI DPIAs: When to Run Them and How to Get Them Right

AI is already embedded in how modern businesses operate, from product features to internal tooling to decision-making at scale. And for some, it’s not just part of the product, it is the product.

But as adoption grows, so does scrutiny.

Regulators (especially in the UK and EU) are paying close attention to how AI systems use personal data, and DPIAs are quickly becoming one of the clearest signals of whether you’re taking that responsibility seriously.

The challenge?

Most teams still aren’t sure when an AI DPIA is actually required, or what “good” looks like when they do one.

But don’t fret! This guide breaks down exactly that.


What we’ll cover:

  • What is an AI DPIA (and why does it matter)?
  • When do you need to run an AI DPIA?
  • When do you not need an AI DPIA?
  • How to get an AI DPIA right (without overcomplicating it)
  • Common mistakes when carrying out a DPIA (and how to avoid them)
  • How to make AI DPIAs easier (and actually useful)
 

What is an AI DPIA (and why does it matter)?

A Data Protection Impact Assessment (DPIA) is a structured way to identify and reduce privacy risk when processing personal data.

Under UK GDPR and EU GDPR, DPIAs are required where processing is “likely to result in a high risk to the rights and freedoms of individuals.”

AI often pushes you into that category, because it introduces:

  • Automation at scale
  • Complex or opaque decision-making
  • Increased potential for bias or harm
  • New and evolving use cases

AI doesn’t automatically require a DPIA, but it significantly raises the likelihood that you’ll need one.


 

When do you need to run an AI DPIA?

If your AI use case involves one or more of the scenarios below, you should strongly consider running a DPIA.

1. Automated decision-making with real impact

If your AI influences outcomes for individuals, especially without human involvement, this is a clear trigger: both UK and EU GDPR single out automated decision-making that produces legal or similarly significant effects as processing that requires a DPIA.

Examples include:

  • Hiring or screening candidates
  • Credit scoring or lending decisions
  • Insurance pricing
  • Health or wellbeing recommendations

 

2. Profiling individuals at scale

Even if decisions aren’t fully automated, profiling can still create risk.

This includes:

  • Behaviour tracking
  • Predictive analytics
  • User segmentation
  • Personalisation engines

 

3. Processing sensitive (special category) data

AI systems handling:

  • Health data
  • Biometric identifiers
  • Ethnicity or religion
  • Criminal offence data (regulated separately from special category data, but treated as similarly high risk)

…are almost always considered high risk.

 

4. Large-scale or systematic monitoring

This includes:

  • Tracking users across platforms
  • Monitoring employee activity
  • Analysing large datasets continuously

 

5. Using new or emerging AI technologies

This is where most teams underestimate risk.

If you’re using:

  • Generative AI (e.g. LLMs)
  • Third-party AI APIs
  • Custom machine learning models

…you’re operating in an evolving space where risks aren’t fully understood.

Regulators (including the UK ICO) explicitly flag innovative technology use as a DPIA trigger.

 

6. Low transparency or explainability

If you can’t clearly answer:

  • How does the model work?
  • What data is it using?
  • Why did it produce this output?

That’s a problem, and it’s exactly the kind of scenario a DPIA is designed to address.
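If you do go ahead, one practical control is to log enough about each decision to reconstruct it later. A minimal sketch in Python (the file name and record fields are illustrative, not any standard schema):

    import json
    import time
    import uuid

    def log_decision(model_version: str, inputs: dict, output: str,
                     path: str = "decision_log.jsonl") -> str:
        """Append one model decision to an audit log (JSON Lines).

        Captures what the model saw and produced, so "why did it
        produce this output?" can at least be investigated later.
        """
        record = {
            "decision_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,   # consider minimising or redacting these first
            "output": output,
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record["decision_id"]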



 

When do you not need an AI DPIA?

Not every AI use case crosses the threshold.

You may not need a DPIA if:

  • No personal data is involved
  • Data is truly anonymised, not just pseudonymised (see the sketch below)
  • The use case is low-risk and internal
  • There’s no meaningful impact on individuals

But, and this is key, you still need to document your reasoning.

Regulators care just as much about how you made the decision as the decision itself.
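On the anonymisation point, the distinction matters because pseudonymised data is still personal data. A minimal sketch of the difference (the key handling and bucket sizes are illustrative assumptions):

    import hashlib
    import hmac

    SECRET_KEY = b"example-key"  # illustrative; a real key belongs in a secrets manager

    def pseudonymise(email: str) -> str:
        """Keyed hash of an identifier: this is still personal data.

        Anyone holding SECRET_KEY can re-link the token to the person,
        so GDPR still applies in full.
        """
        return hmac.new(SECRET_KEY, email.encode(), hashlib.sha256).hexdigest()

    def anonymise_ages(ages: list[int], bucket: int = 10) -> dict[str, int]:
        """Aggregate individual ages into coarse counts.

        If the buckets stay large enough that no one is identifiable,
        no individual-level record survives in the output.
        """
        counts: dict[str, int] = {}
        for age in ages:
            low = (age // bucket) * bucket
            label = f"{low}-{low + bucket - 1}"
            counts[label] = counts.get(label, 0) + 1
        return counts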


 

How to get an AI DPIA right (without overcomplicating it)

A good DPIA isn’t about producing a long, formal document. It’s about understanding risk in context, making informed decisions, and being able to clearly show your reasoning if you’re ever asked.

Here’s what “good” looks like in practice:

Start with a clear, plain-English understanding of the use case

You should be able to clearly explain what the AI is doing and why.

That means stripping away technical language and describing the use case in simple terms: what problem you’re solving, what data is being used, and what outcome it drives. If this isn’t clear upfront, the rest of the DPIA won’t hold up.

A good test is whether someone outside of product or engineering can understand it. If they can’t, the risks likely aren’t fully understood either.

Map how data flows through the system

AI systems often add complexity, especially when third parties are involved.

You need a clear view of where data comes from, how it’s processed, where it’s stored, and who it’s shared with. This is often where hidden risks surface, particularly when assumptions have been made about tools or vendors.
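A lightweight way to start is to record each flow as structured data rather than prose, so gaps and third-party hops stand out. A sketch with hypothetical fields and example flows:

    from dataclasses import dataclass, field

    @dataclass
    class DataFlow:
        """One hop in the system's data flow (fields are illustrative)."""
        source: str       # where the data originates
        processing: str   # what happens to it
        storage: str      # where it ends up
        recipients: list[str] = field(default_factory=list)  # who it's shared with
        personal_data: bool = True

    flows = [
        DataFlow("signup form", "summarisation via third-party LLM API",
                 "vendor-hosted, region unconfirmed", ["ExampleAI Inc."]),
        DataFlow("CRM export", "internal model training", "own cloud storage"),
    ]

    # Surface the classic hidden risk: personal data leaving your control
    for f in flows:
        if f.personal_data and f.recipients:
            print(f"Check vendor terms and transfers: {f.processing} -> {f.recipients}")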

Focus on real, use-case-specific risks

Avoid generic or template-driven risks.

Instead, focus on how your specific AI use case could impact individuals. That might include biased outcomes, lack of transparency, over-collection of data, or unintended uses.

Be clear on necessity and proportionality

This is a core GDPR principle, but often overlooked.

Ask whether AI is genuinely needed, and whether the level of data use is justified. In some cases, there may be less intrusive ways to achieve the same outcome.

Define how you’ll reduce and manage risk

Identifying risk isn’t enough, you need to show what you’re doing about it.

This could include adding human oversight, minimising data, improving transparency, or monitoring outputs over time.

The goal isn’t to remove all risk, but to show it’s understood and controlled.
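To make that concrete, a risk register can be as simple as a list of entries where every risk carries a mitigation and an owner. A minimal sketch (the example risks are illustrative):

    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        likelihood: str   # e.g. "low" / "medium" / "high"
        severity: str
        mitigation: str   # what you're doing about it
        owner: str        # who follows through

    register = [
        Risk("Biased screening outcomes", "medium", "high",
             "Human review of all rejections; quarterly bias audit", "Head of Product"),
        Risk("Over-collection of applicant data", "high", "medium",
             "Strip free-text fields before they reach the model", "Engineering"),
    ]

    # A risk with no mitigation or owner is a finding, not a plan
    assert all(r.mitigation and r.owner for r in register)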

Involve the right people

DPIAs shouldn’t sit with legal alone. You’ll need input from product, engineering, security, and privacy teams to reflect how the system actually works.

A collaborative approach leads to better assessments, and makes follow-through more likely.

Treat it as an ongoing process

AI systems evolve, and your DPIA should too.

As models, data, or use cases change, your assessment needs to be updated. A one-off document quickly becomes outdated.

The most effective DPIAs are treated as part of an ongoing process, not a single task.
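One lightweight way to keep it live is to record what the assessment was written against and flag drift. A sketch, assuming you track model versions and data sources somewhere:

    from datetime import date, timedelta

    def dpia_needs_review(assessed_on: date,
                          assessed_model: str, current_model: str,
                          assessed_sources: set[str], current_sources: set[str],
                          max_age_days: int = 365) -> list[str]:
        """Return the reasons a DPIA should be revisited (empty = still current)."""
        reasons = []
        if current_model != assessed_model:
            reasons.append(f"model changed: {assessed_model} -> {current_model}")
        if current_sources != assessed_sources:
            reasons.append("data sources changed since assessment")
        if date.today() - assessed_on > timedelta(days=max_age_days):
            reasons.append("assessment older than the review window")
        return reasons

    # Example: any non-empty result means the DPIA needs another pass
    print(dpia_needs_review(date(2024, 1, 10), "model-v1", "model-v2",
                            {"crm"}, {"crm", "support tickets"}))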



 

Common mistakes when carrying out a DPIA (and how to avoid them)

We see the same patterns again and again, and they’re usually what make DPIAs feel like paperwork rather than something useful.

  • Running DPIAs too late
    Risks are identified after the system is live, when fixing them is slower, more complex, and often deprioritised.

  • Relying on generic templates
    A generic template rarely matches your specific use case, so the DPIA ends up inaccurate or incomplete.

  • Not keeping them up to date
    AI evolves quickly. An outdated DPIA gives a false sense of compliance.

  • Overcomplicating the process
    If no one understands it, no one uses it, and risk doesn’t get managed.

 

How to make AI DPIAs easier (and actually useful)

DPIAs work best when they’re a standard step in how your team rolls out new tools, products, or changes, not a scramble to complete after something’s already been implemented.

The challenge is making that happen consistently, especially when you’re juggling everything else.

And that’s where Trust Keith comes in.

Trust Keith gives you a dedicated privacy expert who works as an extension of your team, helping you run DPIAs properly, sense-check decisions, and stay ahead of risk as your use of AI evolves. All backed by a platform that keeps your processes, policies, and evidence in one place.

So privacy doesn’t sit on your to-do list, it just gets done.
