Hate speech and hate crime have become urgent societal challenges, exacerbated by the rise of social media and the rapid spread of misinformation.
Top global threats
The proliferation of digital platforms and the increasing ubiquity of social media have fundamentally altered how hate speech, misinformation and online harassment circulate.
Today, hate can travel further and faster than ever before, often fuelled by disinformation and amplified by algorithms.
As noted in the World Economic Forum’s Global Risks Report 2025, ‘misinformation and disinformation’ and ‘societal polarization’ are identified as two of the top global threats.
Hate crimes increased by 252%
According to Home Office research, police-recorded hate crimes in England and Wales rose by a staggering 252% between 2012 and 2023, and more than half of 12 to 15-year-olds have seen hateful content online.
Such exposure not only poses significant risks to mental wellbeing; hateful narratives can also trigger and escalate real-world hate crime, as seen so alarmingly in the 2024 Southport riots.
The scale and complexity of the problem have outpaced traditional forms of moderation and policing. But a tech start-up led by two Cardiff University academics, Professor Matt Williams (criminology) and Professor Pete Burnap (computer science), is using AI to address the challenges posed by online harms.
Funding boost for AI spin-out
Established in 2023, Nisien.ai is a spin-out of Cardiff University’s Economic and Social Research Council (ESRC)-funded HateLab, a global data hub that uses social science and computer science expertise to identify and tackle hate speech and hate crime.
In March 2025, Nisien.ai received significant seed investment from the British Business Bank’s Investment Fund for Wales, managed by Foresight, and the Development Bank of Wales. The investment aimed to build on Hero Detect, an AI-powered tool developed within HateLab to accurately detect and classify online harms across platforms in real time.
The investment is accelerating development of the Hero platform, integrating the detection of online harms with additional capabilities to defend against and resolve harmful content, fostering safer online communities without excessive censorship.
Transforming research into action
HateLab has worked with a wide range of private and public sector organisations.
Its research has contributed directly to the Welsh Government’s national Framework for Action on Tackling Hate Crime. Its technology was embedded into the UK National Cyber Hate Crime Hub, which produces intelligence reports for police, senior civil servants and MPs.
Nisien.ai has gone on to provide consultancy to social media platforms, including TikTok, to improve the safety of online communities. Nisien.ai has also worked with a number of global brands to shape anti-hate campaigns for international events, including:
- the Eurovision Song Contest
- the Women’s Euros
- the Men’s World Cup
Creating a hub for hate speech research
The foundations of HateLab came from the All Wales Hate Crime Project for the Welsh Government in 2012, led by criminologist Professor Williams. The research, which remains the largest and most comprehensive study of hate crime in the UK, revealed increasing numbers of people reporting victimisation on social media. These findings prompted Professor Williams to take a closer look at online hate, a relatively new phenomenon at the time.
With ESRC funding, Professor Williams, together with computer scientist Professor Burnap, established HateLab. The research hub brought together machine learning and criminological methods and concepts to:
- provide insights into the dynamics of online hate speech
- measure the nature and scale of the problem
- explore whether social media data could be used to predict crime rates on the streets
Professor Williams says:
For the first time, we could trace hate online in real time because every instance was recorded on social media sites. We could follow breadcrumb trails of where something was posted, what time, who was sharing and how far it spread.

From left to right: Professor Pete Burnap, Sefa Ozalp (research student), Professor Matthew Williams. Credit: Professor Matthew Williams
Link between online hate and real-world harm
HateLab research established a link between online hate and real-world harm with a study that explored hate crime in Greater London. It revealed that spikes in online hate, for example, anti-Black or anti-Muslim rhetoric, were closely followed by hate crimes on the street in the same area. The team believed harnessing such data could provide valuable insights to many organisations.
In 2017, with further funding from ESRC, the team developed a digital dashboard that used AI algorithms to detect online hate speech in real time and at scale. Pilot schemes were established with a diverse group of partners to co-creatively build and test the dashboard, which is now known as the Hero platform. Partners included:
- the Welsh Government
- the National Online Hate Crime Hub (run by the National Police Chiefs’ Council)
- LGBTQIA+ anti-violence charity Galop
Professor Williams explains:
Based on HateLab’s foundational technology, Nisien.ai’s Hero platform equips clients to monitor online conversations at scale, flag emerging threats, and act quickly to protect their brand, audiences or communities. The Hero Detect capability surfaces areas of concern and highlights patterns that may indicate rising tensions or harmful narratives.
A watershed moment
As misinformation and hate speech continue to evolve, so must the responses. Professor Williams believes regulatory changes and a growing awareness of the threats posed by online harms are driving investment in solutions:
The riots in Southport were a watershed moment in that respect: we could see how online hate was manifesting on the streets. We consulted with Ofcom regarding the monitoring of social media leading up to and following the Southport riots in 2024, informing part of its inquiry.
With the Online Safety Act in the UK, and similar regulations across the world, the landscape is changing. Governments, businesses and organisations are recognising the need to invest in technology that promotes community and social good and minimises polarisation and hate.
Financial backing from the Investment Fund for Wales and the Development Bank of Wales is enabling Nisien.ai to continue to innovate and scale. It has made key hires, growing the company to 19 employees and accelerating research and development.
A key focus of the investment is on enhancing the Hero platform’s ability to detect online harm. Internet trolls are increasingly sophisticated, using coded language and constantly evolving tactics to spread hatred across platforms. Nisien.ai has honed AI models that continually learn evolving language patterns, context and intent, taking detection beyond simple keyword matching.
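The Hero platform’s actual models are proprietary, but the limitation of keyword matching described above can be illustrated with a toy sketch. The example below (all terms, training data and the coded word “tidal” are hypothetical) contrasts a fixed blocklist, which misses coded language, with a tiny Naive Bayes classifier that learns from labelled examples instead:

```python
# Illustrative sketch only: Nisien.ai's real models are not public.
# A fixed keyword blocklist misses coded terms; a learned model can
# pick them up from labelled data. All data here is hypothetical.
from collections import Counter
import math

BLOCKLIST = {"hateword"}  # hypothetical fixed keyword list

def keyword_detect(text: str) -> bool:
    """Flag text only if it contains a blocklisted word."""
    return any(tok in BLOCKLIST for tok in text.lower().split())

# Tiny labelled corpus; "tidal" stands in for a coded slur (1 = harmful)
TRAIN = [
    ("send the tidal wave after them", 1),
    ("tidal people do not belong here", 1),
    ("great goal in the match today", 0),
    ("lovely weather for the seaside today", 0),
]

def train_nb(rows):
    """Count word frequencies per class for a Naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in rows:
        for tok in text.lower().split():
            counts[label][tok] += 1
            totals[label] += 1
    vocab = set(counts[0]) | set(counts[1])
    return counts, totals, vocab

def nb_detect(text, counts, totals, vocab):
    """Return True if the harmful class scores higher under Naive Bayes."""
    scores = {}
    for label in (0, 1):
        score = math.log(0.5)  # uniform class prior
        for tok in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            score += math.log((counts[label][tok] + 1) / (totals[label] + len(vocab)))
        scores[label] = score
    return scores[1] > scores[0]

counts, totals, vocab = train_nb(TRAIN)
msg = "the tidal crowd ruined everything"
print(keyword_detect(msg))                      # False: coded term not on the blocklist
print(nb_detect(msg, counts, totals, vocab))    # True: "tidal" was learned from labels
```

Production systems would of course use far richer models that weigh context and intent, but the same principle applies: learning from labelled behaviour generalises where static word lists cannot.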
Informing the defence sector
Misinformation and disinformation are increasingly used as tools of war. Nisien.ai is working with Airbus to develop technology that can detect if images have been modified by AI. Professor Williams explains:
Warring nations often produce visual evidence of infrastructure strikes, but these can be manipulated by AI. It can be expensive or impractical to send a satellite to verify, so we are modifying our technology to detect AI generation or manipulation in these images.
Using AI to resolve conflict
The team is also developing two new AI-powered capabilities based on emerging scientific evidence on ‘what works’ in building cohesive integrated online spaces. Professor Williams explains:
Once the Hero platform has detected toxic content, users can defend against it by hiding or removing the offending posts. In a pioneering next step, the Hero platform can be used to respond to online harm using generative AI to suggest responses. This capability to resolve or de-escalate online conflict is based on decades of research into effective behaviour change, such as counter-narratives that induce empathy.
This approach, says Professor Williams, is particularly important given issues surrounding perceived censorship:
The best way to fight hate and authoritarianism is via debate, so it’s essential to maintain free speech. Through interacting with individuals using the Hero platform, the resolve capability aims to persuade them to change hateful behaviour. Independent research suggests that it is effective at reducing these negative interactions by up to 65% in testing environments, without resorting to heavy-handed censorship.
Smarter policies for social media
Nisien.ai’s technology has been used by numerous organisations to tackle online harms. Social media platform TikTok utilised the team’s expertise in the behavioural dynamics of hate speech to inform its community guidelines on online abuse that targets creators’ minority characteristics.
By addressing online hate at both individual and systemic levels, Nisien.ai’s technology contributes to safer, healthier and more inclusive digital spaces and communities.
Monitoring social cohesion
The Welsh Government’s Community Cohesion teams monitor community tensions in their respective regions, working with partners to mitigate issues as they arise. Hero Detect improved monitoring of anti-migrant content related to the settlement of Ukrainian refugees and provided data that fed into national threat assessments on extremist activity.
A spokesperson from the Welsh Government’s Inclusion and Cohesion Team says:
Social media users have become more savvy in the way they direct abuse, and often do not use openly hateful language, instead choosing to use more coded words… The search function on HateLab provides a way of homing in on these terms… and provides us with a method of widening our searches and be more dynamic to developing terms or words.
Reducing hate around sporting and cultural events
Eurovision’s online engagement has increased dramatically in the past few years, resulting in safeguarding issues for contestants and staff.
Professor Williams says:
The organisers were concerned about their accounts and the accounts of contestants being targeted, particularly with pro-Israeli or pro-Palestinian narratives in the wake of the war in Gaza. We provided tailored training to participants and their managers, safeguarding against the full range of online harms. In addition, the Hero platform was deployed to conduct an online threat analysis.
Nisien.ai partnered with the Football Association of Wales (FAW) to deploy the Hero platform during the UEFA Women’s Euro 2025, powering the FAW’s efforts to protect players from online abuse and harmful content.
During the Women’s Euros and Men’s World Cup in 2022, Nisien.ai worked with EE, providing data to inform their Hope United campaigns. The Women’s Euros campaign featured Hope United football shirts that displayed online abuse directed at players’ social media accounts, as detected by the Hero platform.
Sophie deGraft-Johnson, Business Leader One EE, Saatchi & Saatchi, says:
[Nisien.ai’s] tracking of hate across the tournament allowed us to confidently talk about the levels of misogynistic hate our Hope United players received during the Euros and reflect that in reactive press, digital out of home and social, which ran over the finals weekend.