It’s Getting Better All The Time

In the past few years, two prominent scholars have written books claiming that human rights practices around the globe have stagnated or slipped into decline. In The Endtimes of Human Rights, Stephen Hopgood argues that the “civilizing mission” that led to significant gains in human rights in the 1980s and 1990s has stalled. As he puts it, “what seemed like a dawn is in fact a sunset.” Similarly, Eric Posner argues in The Twilight of Human Rights Law that increases in rhetoric and international agreements devoted to protecting human rights have failed to produce meaningful reductions in actual violations.

On their face, global data on human rights practices appear to support the empirical claim at the heart of those books. For example, the widely used CIRI Human Rights Dataset includes a Physical Integrity Rights Index, a nine-point scale summarizing the extent to which governments use methods like extrajudicial killing, disappearance, and torture against their own citizens. At the global level, the average score on this measure has declined slightly since 1981, the first year the data set covers.

As it happens, though, those data may not accurately represent long-term trends in human rights practices. The problem lies in their origins. The data sets most often used by scholars and advocates who track human rights trends are derived from annual reports issued by the U.S. State Department, Amnesty International, and other watchdog organizations. A few academic projects have developed procedures to summarize the texts of those reports as numeric scales that represent the degree to which various human rights are respected. The CIRI data set, for example, includes a measure called “Torture” that is scored 0 when reports indicate that torture occurred frequently in a particular year, 1 when torture was practiced occasionally, and 2 when reports indicate that it did not occur.

The problem is that the rules used to do this summarizing have remained more or less consistent over time, but the reports themselves have not. Over the past four decades, the human rights reporting process and the international legal context in which that reporting occurs have both changed significantly.

The most obvious change in how human rights issues are reported is that the organizations that monitor these issues now have more and better information about relevant violations than ever before. In the age of nearly universal smartphones, social networking, and widespread Internet access, violations are more likely to be seen, documented, and communicated to the broader world. As a result, monitors are more likely to notice and document failures to respect or protect human rights.

Human rights monitors have also gotten better at what they do. In a 2013 paper, Anne Marie Clark and Kathryn Sikkink note that both the State Department and Amnesty International have greatly increased their capacity to track violations. According to Clark and Sikkink, the State Department had just one human rights staffer in the early 1970s; by the end of the 1990s, however, the part of the agency that prepares the annual Country Reports on Human Rights Practices had over 100 staff. Between 1975 and 1985, Amnesty International doubled its staff to 205 employees. These organizations have also learned to collaborate more closely with each other and with local civic groups, further improving their monitoring power.

Less obviously but no less importantly, the standards against which human rights violations are measured have also changed over time. As gains have been made on the major concerns of the 1960s and 1970s, issues that monitors used to ignore or overlook now draw more attention. For example, Clark and Sikkink argue that, in human rights reports, the concept of “political killings” has expanded from large-scale, government-sponsored murder of political opponents to include problems like excessive police violence and governments’ failure to prevent political killings by other groups. On other issues, such as torture, the international legal standards for what counts as a violation have grown steadily stricter as practices have improved and activists and lawyers have pressed for further gains.

Because of these two trends — richer information about abuses and changing standards for what constitutes an abuse — human rights reports released today may sound as dire as those from earlier decades, even in cases where the underlying practices have actually improved. So, when scholars convert those reports into numeric scales, countries that have made significant gains may appear to have stalled or even regressed.
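
A toy simulation makes that mechanism concrete. The sketch below is not Fariss’s model and uses no real data; every number in it is invented. It simply assumes a country whose underlying abuse level falls by half over three decades while the threshold monitors apply for a “frequent violations” score tightens at a similar pace, and it shows that the coded score never budges.

```python
import numpy as np

# Toy simulation, not Fariss's model and not real data: every number here is
# invented to illustrate how a moving standard can hide real improvement.

years = np.arange(1981, 2014)

# Latent abuse level: starts high and falls by half over the period.
true_abuse = np.linspace(1.0, 0.5, len(years))

# Reporting standard: the cutoff for coding violations as "frequent" tightens
# at a similar pace, as monitors gather more information and define abuses
# more broadly.
frequent_cutoff = np.linspace(0.9, 0.45, len(years))
occasional_cutoff = frequent_cutoff / 2

def coded_score(abuse, freq_cut, occ_cut):
    """Mimic a report-based ordinal score: 0 = frequent, 1 = occasional, 2 = none."""
    if abuse >= freq_cut:
        return 0
    if abuse >= occ_cut:
        return 1
    return 2

scores = [coded_score(a, f, o)
          for a, f, o in zip(true_abuse, frequent_cutoff, occasional_cutoff)]

print(f"True abuse fell by {100 * (1 - true_abuse[-1] / true_abuse[0]):.0f} percent")
print("Coded scores, 1981-2013:", scores)
# The coded score stays at 0 in every year, even though the underlying
# practice improved substantially; the apparent stagnation is an artifact
# of the moving standard.
```

In this toy example, the coded series is flat for the entire period even though the underlying practice improved by half, which is exactly the pattern that can make real gains invisible in report-based data.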

In their 2013 paper, Clark and Sikkink take a close look at the watchdog reports and resulting data for Brazil and Guatemala. In the case of Brazil, they surmise that increased reporting on newer concerns like police brutality has produced data showing a downward trend in human rights practices, when in reality those practices have probably improved or, at worst, stagnated. In Guatemala, Clark and Sikkink argue that state repression was so severe in the early 1980s that outside observers had a hard time seeing just how bad it was, and thus underreported the violations. That distortion, combined with political bias at the U.S. State Department in favor of the Guatemalan government, produced a baseline that understated the severity of abuses, so the gains made since then look smaller than they really are.

All of these changes in reporting on human rights violations around the world are spotlighted by political scientist Christopher Fariss in a study published in 2014 in the American Political Science Review. Using a statistical technique called a latent variable model, Fariss re-analyzed the human rights data while allowing for the possibility that the standards to which they were anchored had evolved over time.

When he applied this model, Fariss found that the conventional view — that human rights practices have stagnated or even worsened in recent decades — was mostly wrong. According to Fariss’s best estimates, once we account for these underlying changes in the information available and standards applied, we see that practices on many of the human rights tracked by existing data sets have improved significantly since the early 1980s. On some issues, such as political imprisonment, Fariss finds that there hasn’t been much change. On other core concerns, however, including torture and political killing, the adjusted data show substantial gains over the past 30 years. So, the trajectory varies across issues and countries, but in most cases the arc has continued to bend toward a better world.

Some human rights scholars see Fariss’s statistical adjustments as a step in the right direction. Others, however, do not believe that existing data are systematically biased. David Cingranelli, one of the founders of the CIRI project, disputes the claim that changes in reporting practices have led CIRI to overstate human rights violations in recent years. He worries that statistical estimates like Fariss’s will be harder for researchers to understand and reproduce.

The claim that human rights practices have improved significantly in recent decades might seem to undercut the work of advocacy groups, which often make dire statements about the current state of affairs and the risk of further backsliding. In fact, advocates and activists ought to take these findings as a compliment. They are largely responsible for the closer scrutiny of abusers and the stricter definitions of abuse that have transformed human rights reporting over the past 40 years. These transformations make it harder for social scientists to compare human rights practices over time, but they also indicate that, for many people, the world is becoming a better and more humane place.

In the photo, spectators hold a banner reading “peace” during the Brazilian Grand Prix at the Interlagos racetrack in Sao Paulo, on November 15, 2015.

Photo credit: Miguel Schincariol/AFP/Getty Images