The Online Hate Index (OHI) research seeks to improve society's understanding of hate speech on YouTube, Reddit, Twitter, and beyond: its prevalence over time, its variation across regions and demographics, our ability to measure it through crowdsourcing and algorithms, and how to influence it through historical or future interventions. We are developing a nuanced measurement methodology that decomposes hate speech into constituent components that are easier for humans to rate than a single omnibus question (i.e., "Is this comment hate speech?"). We anchor our labeling instrument to a comment reference set and construct map, linking our measurements to our hypothesized hate speech scale. The labels are then transformed via Item Response Theory into a continuous score. We use a crowdsourced worker recruitment procedure for labeling, and a comment allocation procedure that allows estimation of, and correction for, the "severity" of reviewers, i.e., the harshness or leniency of their ratings. We will then train Transformer-based deep learning models (BERT, GPT-2) to predict the ratings for each comment, an easier modeling task than directly estimating hatefulness, and transform those predictions onto the continuous hate speech scale.
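To make the measurement idea concrete, the following is a minimal sketch of Rasch-style Item Response Theory scoring with a rater-severity term. The function names, the grid-search estimator, and the specific parameterization are illustrative assumptions for exposition, not the project's actual model (which uses a richer instrument and estimation procedure).

```python
import math

def rasch_prob(theta, difficulty, severity=0.0):
    """Probability a rater endorses one binary component of hate speech,
    given the comment's latent score (theta), the component's difficulty,
    and the rater's severity (harsher raters endorse less often).
    Illustrative Rasch-style parameterization, not the project's model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty - severity)))

def estimate_theta(labels, difficulties, severities):
    """Grid-search maximum likelihood estimate of a comment's continuous
    score from binary component labels collected from multiple raters."""
    def log_likelihood(theta):
        ll = 0.0
        for y, b, s in zip(labels, difficulties, severities):
            p = rasch_prob(theta, b, s)
            ll += math.log(p) if y else math.log(1.0 - p)
        return ll
    # Coarse grid over a plausible latent-score range.
    grid = [x / 10.0 for x in range(-40, 41)]
    return max(grid, key=log_likelihood)

# Example: three raters label one component each; rater 2 is harsh (+1.0),
# so their negative label is partially discounted when scoring the comment.
score = estimate_theta(labels=[1, 1, 0],
                       difficulties=[0.0, 0.5, 0.0],
                       severities=[0.0, 0.0, 1.0])
```

The key design point the project exploits is that severity enters the model additively, so a harsh rater's "not hateful" label carries less evidential weight than the same label from a lenient rater.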

In partnership with Google Jigsaw, D-Lab sets a new standard for the data science of hate speech: it 1) establishes a theoretically grounded definition of hate speech inclusive of research, policy, and practice; 2) develops and applies a multi-component labeling instrument; 3) creates a new crowdsourcing tool to scalably label comments; 4) curates an open, reliable, multi-platform labeled hate speech corpus; 5) grows existing data and tool repositories within principles of replicable and reproducible research, enabling greater transparency and collaboration; 6) creates new knowledge through ethical online experimentation (and citizen science); and 7) refines AI models. Ultimately, we seek to understand the causal mechanisms for intervention and evaluation. All of these innovations are guided by an advisory group, the Consortium for Research on Abusive Language (CORAL). A new open-source platform will make these resources available, along with policy recommendations, while the ADL and other advocacy organizations educate and grow the larger community.

Claudia von Vacano, Executive Director of the D-Lab, conceptualized and is the principal investigator of the hate speech research and of an introduction to data science curriculum for SAGE Publications, and serves as an advisor to the Data Science Education Program's Data Scholars. She is on the boards of the Berkeley Center for New Media and Social Science Matrix.

Nora Broege is a postdoctoral fellow at the Joseph C. Cornwall Center for Metropolitan Studies at Rutgers University-Newark. Her research focuses on racial and ethnic inequality and quantitative methods. While completing her doctoral studies in Sociology at UC Berkeley, she was a graduate research assistant at the D-Lab.

Chris Kennedy is a biostatistics PhD student, data science consultant, NIH biomedical big data trainee, fellow at the Berkeley Institute for Data Science, chair of the Text Analysis Across Domains conference, and Kaiser Permanente researcher. He is a co-author of the SuperLearner ensemble machine learning framework and holds M.P.Aff. and B.A. degrees from the University of Texas.

Alexander Sahn is a PhD Candidate in Political Science at the University of California, Berkeley. He studies how Americans communicate their political preferences and representation, with a focus on cities and housing policy. He is affiliated with D-Lab, the Institute of Governmental Studies, and the Citrin Center for Public Opinion Research at Berkeley.

Two additional members of the research team chose not to be named. They include a political scientist and a linguist.

CORAL (Partial List):

Claudia von Vacano (Facilitator, Social Sciences D-Lab, UC Berkeley); Lucas Dixon and Rachel Rosen (Jigsaw); Michael I. Jordan (EECS at UC Berkeley); Mark R. Wilson and Karen Draney (Berkeley Evaluation and Assessment Research); Judith Butler (International Consortium of Critical Theory); Zeynep Tufekci (Berkman Klein Center for Internet and Society at Harvard); Susan Benesch (Dangerous Speech); Brandie Nonnecke (Center for Information Technology Research in the Interest of Society); Joshua Tucker (NYU Social Media and Political Participation); Lisa Garcia Bedolla (Data for Social Good); Jean-Philippe Cointet (MédiaLab SciencesPo, France); Teresa Caldeira (UC Berkeley and Public Policy Center, FGV, Brazil); Red en Defensa de los Derechos Digitales (Mexico).
