In 2025, a few privacy themes came up again and again for scaling businesses.
AI moved quickly from isolated use into day-to-day operations. Regulators increased their focus on how personal data is actually being used in practice. Breaches continued to be driven by familiar causes like human error and phishing. And for many scale-ups, growth - new products, new tools, new markets - made privacy harder to manage consistently.
So, what can we expect to see in 2026?
Our privacy experts have broken down the key trends for 2026, and what they mean for scaling businesses.
For most scale-ups, AI didn’t arrive as a single, well-planned initiative; it crept in through productivity tools, customer support platforms, analytics, marketing software, and teams experimenting to move faster. By the end of 2025, many businesses found that AI was already part of day-to-day operations, even if no one had formally signed it off.
The challenge going into 2026 isn’t whether AI is being used; it’s whether businesses actually have visibility and control over how personal data flows through AI tools.
From a privacy perspective, the same questions keep coming up:
Regulators are paying closer attention to these issues, but they’re not the only ones. Customers and investors are increasingly asking direct questions about AI use as part of security reviews, procurement processes, and due diligence.
For most scale-ups, the priority isn’t introducing heavy-handed controls. It’s getting a better grip on how AI is actually being used across the business.
Here are a few practical steps worth focusing on:
You can download our AI Policy template for free here >>
For most businesses, the causes of data breaches haven’t changed much.
Incidents are still largely driven by human error, phishing, weak access controls, and misconfigurations. What has changed is how quickly those incidents escalate, and how many people are paying attention when they do.
Over the past year, breaches have increasingly triggered more than just internal incident response. Customers are asking detailed follow-up questions, procurement teams are getting involved, investors want reassurance, and in some cases, boards expect clear explanations about what happened and what’s been done to prevent a repeat.
From a privacy perspective, breaches now tend to bring up the same issues:
Managing breach risk is about getting two things right: reducing the likelihood of a breach, and being ready to respond if one does happen. Even with strong prevention in place, breaches can and do still happen.
A few things to focus on:
For many scale-ups, children’s data still feels like a niche issue - something that only applies if you’re explicitly building products for kids. In reality, that boundary has become much less clear.
Regulators have continued to focus on how children’s data is handled, particularly where products or services are likely to be accessed by under-18s, even if they aren’t the intended audience. The shift isn’t about new rules so much as how expectations are being applied. There’s less weight placed on stated intent, and more attention on product design, default settings, and whether risks to children have been properly considered.
From a privacy perspective, the same questions tend to come up:
Age assurance adds another layer of complexity. Verifying age can itself require collecting additional data, sometimes sensitive data, which creates a tension between protecting children and minimising data collection. Getting that balance wrong can introduce new risks rather than reduce them.
For most scale-ups, the issue isn’t that they’re handling children’s data badly; it’s that they haven’t clearly decided whether children fall into scope at all.
Here are a few practical steps worth focusing on:
Privacy decisions are made constantly across a business. They happen when tools are approved, data is reused, risks are assessed, or trade-offs are made. In most cases, those decisions are reasonable, but the problem is that they’re not always easy to evidence later.
What’s changing is how those decisions are assessed under scrutiny.
Across regulatory engagement, customer audits, and incident response, organisations are expected to show how decisions were made, what risks were considered, and what actions followed.
Where scale-ups struggle, it’s rarely because the decision itself was unreasonable; it’s because the rationale isn’t clearly documented or easy to pull together later.
Evidencing privacy decisions doesn’t mean documenting everything. It means being clear about what does need to be recorded, and doing it consistently.
A few things to focus on:
In 2025, the EU’s Digital Omnibus proposals signalled a shift in how digital regulation may be applied going forward.
The focus isn’t on introducing entirely new privacy obligations, but on simplifying and clarifying how existing rules work in practice, particularly where GDPR, the AI Act, and other digital laws overlap. This is about reducing ambiguity, not reducing accountability.
What isn’t changing is the expectation that businesses understand their risks, make proportionate decisions, and can explain how privacy is managed day to day. Clearer rules leave less room to rely on broad interpretations or generic compliance statements.
The Digital Omnibus isn’t a reason to pause privacy work. It’s a prompt to focus on the fundamentals.
The trends we expect to see in 2026 aren’t new, but they’re becoming harder to manage without the right foundations in place. For scaling businesses, the priority is putting those foundations in place early, so privacy risk can be managed consistently as the business grows.
To help you get there, we’ve put together a free Privacy Essentials Pack with the core policy templates you need to kickstart a practical, proportionate privacy programme - without starting from scratch.