It’s been slightly more than seven years since GDPR came into force – a key moment that reshaped how organisations approach data protection. We’ve seen UK businesses of all sizes refine their data strategies, audit their processes, and think more carefully about the impact of every bit and byte they collect.

Just as we were getting more comfortable with the post-GDPR landscape, AI has entered the frame. Much like GDPR did in 2018, AI is pushing businesses to reconsider how they manage data. The stakes are arguably higher now – not just because of the greater volume of data businesses handle, but because the risks and opportunities tied to AI are moving at breakneck speed.

GDPR in practice

In the years since GDPR’s implementation, the shift from reactive compliance to proactive data governance has been noticeable. Data protection has evolved from a legal formality into a strategic imperative — a topic discussed not just in legal departments but in boardrooms. High-profile fines against tech giants have reinforced the idea that data privacy isn’t optional, and compliance isn’t just a checkbox.

That progress should be acknowledged — and even celebrated — but we also need to be honest about where gaps remain. Too often, GDPR is still treated as a one-off exercise or a hurdle to clear, rather than a continuous, embedded business process. This short-sighted view not only exposes organisations to compliance risks but causes them to miss the real opportunity: regulation as an enabler.

When understood and applied correctly, GDPR offers more than a legal framework — it provides a clear, structured way to manage data responsibly, improve operational hygiene, and build trust with customers and partners. In other words, strong data governance isn’t a drag on innovation — it’s what makes innovation sustainable.

AI: Innovation, but with new questions regarding risk

Enter AI. Businesses clearly recognise the enormous benefits and potential of AI. According to Cisco’s 2024 AI Readiness Index, 95% of businesses say they have a highly defined AI strategy in place or in development, with 50% allocating as much as 10-30% of their IT budget to AI.

However, 65% of IT teams say they don’t fully understand the privacy implications of using AI. And only 11% trust it enough to handle mission-critical workloads (according to Splunk’s 2025 State of Security report). That tells us something important: AI may be moving forward apace, but perhaps governance around it is still catching up.

We’re seeing new risks emerge, and the transparency issue feels particularly urgent. Think of the black-box models used in areas like fraud detection or credit scoring: if you can’t audit how a decision was made, you can’t explain it to a regulator or justify it to a customer.

The compliance questions you should be asking

As organisations embed AI deeper into their operations, it’s time to ask the tough questions: What kind of data are we feeding into AI? Who has access to AI outputs? And if there’s a breach, what processes do we have in place to respond quickly and meet GDPR’s reporting timelines?

Despite the urgency, a glaring number of organisations still have no formal AI policy in place, exposing them to privacy and compliance risks that could have serious consequences – especially when data loss prevention is a top priority for businesses.

The good news? We don’t have to start from scratch. GDPR already gives us a framework for assessing AI tools – think data minimisation, purpose limitation, and privacy-by-design principles. This means only collecting the data you need, using it for specific, legitimate purposes, and embedding privacy into every stage of AI development. Apply those principles to AI, and you’ve got a strong foundation to build on. These aren’t just legal safeguards – they’re the building blocks of ethical AI.

GDPR has laid the groundwork

As we reflect on GDPR’s seven-year journey, one thing is clear: its relevance hasn’t diminished. In fact, the arrival of AI has made its principles more essential than ever. The regulatory frameworks of yesterday are still fit for purpose — but only if they’re applied with the same agility and ambition that new technologies demand.

The challenge ahead isn’t simply about following rules — it’s about demonstrating leadership. The future of data governance won’t be dictated by regulators alone. It will be defined by businesses bold enough to align innovation with integrity, and fast enough to turn compliance into competitive advantage.

GDPR laid the groundwork. Now, in the age of AI, the opportunity is to build something even stronger on top of it.

James Hodge is the GVP & Chief Strategy Advisor – EMEA at Splunk


