UK tightens laws on AI-generated sexual deepfakes — what organisations need to know
This is not a “big tech only” problem. Any UK organisation that develops, deploys, hosts, integrates or permits the use of AI tools — internal systems, customer‑facing products, workplace software or third‑party AI embedded in business processes — must act now.
What changed and why it matters
- The new offence targets creation, not just distribution. That closes a legal gap that previously left some AI‑generated intimate images in a grey area if they were not shared.
- The government is also moving to criminalise “nudification” apps through the Crime and Policing Bill, targeting tools designed to produce non‑consensual intimate images at source.
- Regulators have strong enforcement powers under the Online Safety Act, including fines of up to £18 million or 10% of qualifying worldwide revenue (whichever is greater) and, in extreme cases, court orders blocking access to a service in the UK. Ofcom is already investigating reported incidents involving AI tools.
How this affects UK organisations
These changes intersect with several regulatory regimes you already know:
- Online Safety Act: platforms and services must prevent illegal content proactively.
- UK GDPR / DUAA: using personal data to train or prompt AI engages controller obligations and now carries criminal risk where non‑consensual intimate images are created.
- Criminal law: individuals and organisations can be prosecuted where AI tools facilitate illegal conduct.
- Corporate governance: failure to assess foreseeable misuse creates regulatory, reputational and litigation risk.
Practical steps for organisations — start today
1. Map every AI tool in use across the organisation
- Include informal tools: plugins, browser extensions, chatbots, image‑generation services and third‑party integrations.
- Record where image, video or biometric data is processed; a minimal register sketch follows below.
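In practice, the inventory can start as a simple structured register. The sketch below is a minimal, hypothetical example in Python: the field names (`tool_name`, `processes_imagery` and so on) are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI tool register (illustrative fields only)."""
    tool_name: str
    vendor: str
    category: str              # e.g. "chatbot", "image generation", "browser extension"
    formally_approved: bool    # informal/shadow tools should still be recorded
    processes_imagery: bool    # does it handle image, video or biometric data?
    personal_data_types: list[str] = field(default_factory=list)
    owner: str = "unassigned"  # accountable business owner

register = [
    AIToolRecord("ExampleImageGen", "Acme AI", "image generation",
                 formally_approved=False, processes_imagery=True,
                 personal_data_types=["photographs of staff"], owner="Marketing"),
    AIToolRecord("HelpdeskBot", "Acme AI", "chatbot",
                 formally_approved=True, processes_imagery=False),
]

# Flag the entries needing priority review: anything touching imagery or
# biometric data, or anything in use without formal approval.
priority = [r for r in register if r.processes_imagery or not r.formally_approved]
print(json.dumps([asdict(r) for r in priority], indent=2))
```

Even a register this basic answers the first question a regulator or investigator will ask: what AI tools are in use, and which of them touch identifiable images.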
2. Assess misuse risk, not only intended use
- Identify foreseeable abuse cases: deepfakes, nudification, image manipulation and targeted harassment.
- Prioritise controls where personal data or identifiable images are involved; a simple scoring sketch follows this step.
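One lightweight way to make that prioritisation repeatable is a scoring rule over the register entries. The function below is a hypothetical sketch, not an established methodology; the weights and factor names are assumptions chosen to illustrate the idea.

```python
def misuse_risk_score(processes_imagery: bool,
                      handles_personal_data: bool,
                      user_can_upload_images: bool,
                      has_content_moderation: bool) -> int:
    """Toy scoring rule: higher scores mean review sooner.

    The weights are illustrative assumptions; tune them to your own
    risk appetite and legal advice.
    """
    score = 0
    if processes_imagery:
        score += 3   # image/video tools are where deepfake risk concentrates
    if handles_personal_data:
        score += 2   # identifiable individuals raise UK GDPR / DUAA stakes
    if user_can_upload_images:
        score += 2   # uploads enable targeted manipulation of real people
    if not has_content_moderation:
        score += 3   # no guardrails means foreseeable abuse goes undetected
    return score

# Example: an image generator that accepts uploads and has no moderation
print(misuse_risk_score(True, True, True, False))  # 10 -> top of the review queue
```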
3. Update governance, policies and technical controls
- Set clear acceptable‑use rules and communicate legal consequences to staff.
- Block or restrict high‑risk functionality where possible.
- Implement moderation, logging and audit trails for image generation and requests; see the logging sketch after this list.
- Ensure retention and deletion policies cover generated content and related prompts.
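Audit trails do not need heavy infrastructure to start with. The sketch below shows one hypothetical shape for logging image‑generation requests in Python; the event fields and the `audit.log` destination are illustrative assumptions, and the prompt is hashed rather than stored verbatim so the log itself does not become a retention problem.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# A dedicated audit logger writing structured JSON lines to a file.
audit = logging.getLogger("imagegen.audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("audit.log"))

def log_image_request(user_id: str, tool: str, prompt: str, blocked: bool) -> None:
    """Record who asked which tool to generate what, and whether it was blocked.

    The prompt is stored as a SHA-256 hash: enough to match a known prompt
    during an investigation, without retaining the raw text indefinitely.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "blocked": blocked,
    }
    audit.info(json.dumps(event))

# Example: a request that a moderation filter rejected.
log_image_request("u-1042", "ExampleImageGen", "a prompt the filter rejected", blocked=True)
```

Hashing prompts rather than storing them keeps the trail useful for investigations while staying consistent with the retention and deletion policies in the bullet above.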
4. Re‑examine supplier and integration risks — but own compliance
- Perform due diligence and require contractual safeguards.
- Maintain independent controls and documentation to demonstrate your own compliance position.
5. Train and test frontline teams
- Ensure helpdesk, HR, legal and product teams understand reporting routes and escalation processes.
- Run scenario tests for misuse and response readiness.
This move shows the UK will act quickly where AI causes real‑world harm — especially to women and children. AI governance is no longer a narrow technical issue or a future project. Boards, compliance teams and senior leaders must treat it as a live legal and reputational risk.
Further reading and help
For practical compliance guidance on data use and AI risk, see the DUAA checklist and related resources created by our partners Vinci Works: When Data Thinks — DUAA compliance checklist.