Even where laws are supposedly in place to protect survivors, awareness and reporting remain dangerously low – and when intimate image abuse crimes are reported, they are rarely prosecuted. The gap between the legislation and survivors’ lived experiences is yet another example of policy failing to keep pace with technology.
A recent UK-wide poll commissioned by Refuge found that fewer than one in three people would report certain forms of digital coercion, such as location tracking or a partner demanding access to their phone, if they happened to them or someone they knew. Just 58% said they would report the non-consensual sharing of intimate images, falling to 44% among 18- to 24-year-olds. This exposes a worrying lack of awareness among Gen Z; abuse thrives when it is minimised or dismissed, and this lack of recognition only compounds survivors’ trauma.
Ofcom’s draft guidance on how online platforms should respond to VAWG should be celebrated, particularly its inclusion of online domestic abuse as a ‘key harm’ that tech companies must address. Refuge also strongly supports the watchdog’s recommendation that platforms scan for duplicates of all non-consensual explicit material (and ensure such content is delisted from search results), which would be a welcome step towards tackling the spread of intimate images shared or made without consent.
At the same time, we are deeply concerned that, once the consultation has concluded and the guidance is rolled out, it will be hamstrung by the fact that tech companies will not be legally bound to comply.
Time and time again, we see companies prioritise profits over women’s safety, and it would not be surprising to see that behaviour continue. This is why Refuge and others in the sector are calling for the guidance to be elevated to a legally binding Code of Practice, backed by a commitment across government departments to tackle tech abuse in the forthcoming VAWG strategy.
A piecemeal approach is not enough; only a whole-system response can confront the systemic weaponisation of technology against women and girls. That response must include policy frameworks that effectively ensure companies take action to prevent VAWG and are held to account where they fail to do so.
More generally, we need a fundamental policy shift towards ‘safety by design’, rather than bolting protections on as an afterthought. In internal research carried out by Refuge in March and April 2025, we found that certain AI chatbot models gave inappropriate responses to prompts about survivors seeking help, including advising them to stand up to their abuser, a suggestion that could put women at serious risk.
This is just one example of the potentially fatal consequences of treating safety as optional. All AI systems and social media platforms must be safety-tested from the outset, with survivor input and consultation with VAWG experts like Refuge.
Meaningful change will come only when survivor voices are embedded in both technology development and regulation, and when tech companies are held fully accountable for the harm they enable.