Measuring What Matters in Traffic Stop Analysis

Posted By: Kevin Townsend | Member Voices


Each year, the Racial and Identity Profiling Act (RIPA) Board releases a report claiming that the racial disparities identified in its analysis of stops demonstrate that bias is pervasive in California law enforcement. A major driver of that conclusion, however, is the Board's continued reliance on a deeply flawed method: comparing stop demographics to residential census data. This approach fails to account for changes in driving populations, crime concentrations, modern enforcement strategies, and other real-world factors, and those omissions skew and undermine its conclusions. As leaders committed to both fairness and accuracy, we must push back against the use of population benchmarking as the primary method for evaluating police bias. It is not scientifically sound, and when its limitations go unexplained, it can distort the public's understanding of our work.

The Problem with Census Benchmarks

At first glance, it seems logical to assume that the racial breakdown of those stopped by police should mirror the makeup of the jurisdiction. But that assumption is not supported by research or reality. In truth, the population at risk of being stopped is defined not by who lives in a city, but by who is driving in it, when and where officers are deployed, and the nature of police efforts. Census-based demographics ignore critical dynamics such as:

  • Commuter & Freeway Impacts: Many jurisdictions experience significant pass-through, inbound, and outbound commuter traffic because many Californians do not work where they live. For example, Riverside Police found that only 56% of the persons issued a citation in 2024 gave a city of Riverside address. Smaller and rural jurisdictions can be similarly impacted by major highways that go through them.
  • Public Transportation & Rideshare Usage: Residents who don't drive, especially in dense urban areas, face little to no risk of a traffic stop, yet they are still counted in census figures.
  • Tourism & Seasonal Shifts: Locations with beaches, mountains, and deserts can experience vastly different driving populations at various times of the year. Convention centers, sports arenas, major shopping centers, amusement parks, and other tourism draws can also bring in many non-resident visitors to a jurisdiction.
  • Strategic Deployment: Many agencies employ Hot Spots policing, or some data-informed variation of it, to direct officers to areas where crime and/or traffic collisions are most prevalent, and officers often self-deploy to these same locations based on their experience. Research has shown that these crime concentrations often overlap with communities that have a higher percentage of minorities. This matters because racial disparities in stops can result not from officer racism but from real-world deployment needs that place officers where they disproportionately interact with (and stop) minorities while working to reduce crime and/or traffic collisions.

Research Consistently Discredits Census-Based Benchmarks

Reliable statistical tests are necessary to properly investigate police racial profiling. However, the inherent difficulty in doing so lies in identifying the appropriate baseline against which to compare stop data. Twenty years ago, the Police Executive Research Forum (PERF) and the U.S. Department of Justice (DOJ) warned that “no one interpreting results based on benchmarking with adjusted census data can legitimately draw conclusions regarding the existence or lack of racially biased policing.” Leading scholars and policing experts have repeatedly noted in the decades since that census benchmarks are flawed and should not be used to estimate the prevalence of police bias.

The research community’s failure to resolve the benchmarking issue is a fundamental challenge to quantifying racial profiling through stop data. For example, one study comparing stops to census data found an 800% disparity between Black and White drivers; when the benchmark was shifted to homicide victim demographics, the disparity dropped to 58%. Researchers in another study found significant racial disparities in stops when compared to census data, yet when they compared the same stops to the demographics of those arrested and those described as suspects by victims in police reports, they found little to no disparity. In both cases, the choice of benchmark produced dramatically different interpretations of the same stop data.

The research is clear: the benchmark you choose greatly affects the outcome. And when you start with the wrong point of comparison, such as census data, you’re destined to reach misleading conclusions.
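To make the point concrete, the short sketch below, a minimal illustration in Python with entirely hypothetical group names and counts (not figures from the studies cited above), computes the same stop data against two different baselines and shows how the headline disparity swings with the benchmark.

```python
# Illustration only: how the same stop counts produce very different
# "disparity" figures depending on the benchmark chosen.
# All numbers are hypothetical and not drawn from any cited study.

stops = {"Group A": 400, "Group B": 600}            # observed stops
census_share = {"Group A": 0.20, "Group B": 0.80}   # residential population benchmark
driving_share = {"Group A": 0.35, "Group B": 0.65}  # hypothetical driving-population benchmark

def disparity_ratio(stops, benchmark_share):
    """Each group's share of stops divided by its share of the benchmark."""
    total = sum(stops.values())
    return {g: round((stops[g] / total) / benchmark_share[g], 2) for g in stops}

print("vs. census benchmark: ", disparity_ratio(stops, census_share))
print("vs. driving benchmark:", disparity_ratio(stops, driving_share))
# Group A appears 2.0x over-represented against the census benchmark,
# but only about 1.14x against the driving-population benchmark.
```

The arithmetic is unchanged in both comparisons; only the denominator moves, which is precisely why the choice of baseline, not the stop data itself, can drive the headline number.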

Why This Matters To You

The consequences of flawed analysis are not confined to theoretical debate; they are real and growing. The results can fuel unjust media narratives, support new state and local legislation aimed at restricting police, and, under California's Racial Justice Act, affect court cases, because statistical disparities can now trigger legal remedies, including reduced or dismissed criminal charges.

Poor analysis also contributes to growing pressure on agencies to reduce stops, avoid searches, and scale back proactive enforcement. However, the academic consensus on crime and traffic collision reduction is clear: Hot Spots policing works. Discouraging it, especially based on flawed analysis, risks community safety.

Better Science

Racism has been an ever-present topic of debate in the United States. However, highly charged issues such as public safety and police bias can lead people to set aside science and be driven instead by deeply held beliefs or ideology. This is therefore not a call to ignore the issue. Instead, it is a call to study it responsibly. We must distinguish between valid evidence of bias and statistical noise caused by flawed methodology and confirmation bias.

What Can We Do?

California’s police leaders should lead the conversation on rigorous analysis. That starts with rejecting census-based benchmarks in your own department's evaluations and in the analyses you share with community groups, the city council, and others you work with. Or, at a minimum, explain the limitations of population-centered analysis so stakeholders can make their own judgments about the value of a report's findings.

Further, we cannot dictate how the RIPA Board conducts its analysis, but we can lead by example in how we approach our own. As such, avoid simplistic comparisons such as census-based benchmarking. Instead, assess the pros and cons of other credible approaches, use multiple methods to examine relevant factors, and triangulate or corroborate findings. Researchers in one case examined seven potential points of reference and concluded, “We have yet to identify an appropriate benchmark.”

Even so, if you choose to analyze your own RIPA data, consider the following:

  • Be Context-Sensitive: Compare stops to crime, call-for-service, or traffic collision patterns and to local, neighborhood-level demographics, and account for any directives or strategies your department employs that could affect stop data (see the sketch after this list).
  • Report Transparently: If your agency publishes stop data analysis, clearly explain which methodology was chosen, why, and what its limitations are. Doing so demonstrates objectivity and trustworthiness.
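For agencies that want to operationalize the context-sensitive comparison above, the sketch below shows one minimal way to line up stop shares against several local benchmarks at once. The file names, column names, and benchmark sources are assumptions for illustration only; substitute your agency's actual data.

```python
# A minimal sketch of a multi-benchmark comparison. File names, column
# names, and benchmark choices below are illustrative assumptions.
import pandas as pd

stops = pd.read_csv("ripa_stops.csv")               # one row per stop, with a 'race' column
benchmarks = {
    "census": pd.read_csv("acs_population.csv"),            # columns: race, share
    "collisions": pd.read_csv("collision_drivers.csv"),     # columns: race, share
    "calls_for_service": pd.read_csv("cfs_descriptions.csv"),  # columns: race, share
}

# Share of stops by group
stop_share = stops["race"].value_counts(normalize=True)

# Ratio of stop share to each benchmark's share, one row per benchmark
rows = []
for name, bench in benchmarks.items():
    bench_share = bench.set_index("race")["share"]
    rows.append((stop_share / bench_share).rename(name))

comparison = pd.DataFrame(rows)
print(comparison.round(2))
```

Reading the output as a single table, rather than a lone census comparison, is itself a form of the triangulation recommended earlier: ratios near 1.0 across baselines suggest stops track the local at-risk population, while a ratio that is elevated against every benchmark is the kind of finding that warrants a deeper, peer-reviewed look.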

California has many public and private universities with reputable statisticians who can provide technical guidance and/or participate in a peer-review process before any findings are published (essentially, a panel of experts validates your analysis). You may also consider contracting the analysis out to a professional research group.

A fair investigation of local stop data can better inform organizational leaders, jurisdictional policy groups (such as city councils), and the public, and it can facilitate difficult conversations about the realities of crime, unique community features, and law enforcement that are grounded in objective evidence weighing a variety of complex factors rather than partisan politics.

About the Author:

Kevin Townsend has been a police officer for almost 27 years and is a captain with the Riverside Police Department. He earned a Doctor of Public Administration (DPA) degree and is an adjunct professor at California Baptist University. Kevin is a National Institute of Justice (NIJ) Law Enforcement Advancing Data and Science (LEADS) Scholar and studies various law enforcement, public sector, organizational, and leadership issues.