Research

We work at the intersection of computer security, artificial intelligence, and computational social science — building tools and methods to study, measure, and reduce online harms at scale.

Online Harms & Platform Accountability

Measuring harmful content, recommender system effects, and platform interventions at scale. We build methods to hold platforms accountable to the public.

AI-Driven Cybersecurity & Cybersafety

Applying artificial intelligence to detect, model, and mitigate emerging threats in online environments.

Hate Speech & Content Moderation

Auditing moderation systems and developing fairer, more transparent approaches to detecting and removing harmful speech.

Generative AI Harms

Characterising new risks introduced by generative AI systems — including synthetic media, automated disinformation, and misuse at scale.

Online Safety for Minors

Studying how at-risk populations — especially minors — are exposed to harmful content and dark design patterns on social platforms.

Platform Measurements & Data Access

Building public datasets, data-donation methods, and infrastructure for independent platform research and regulatory accountability.

Funded By

European Commission · NWO · Google Trust & Safety