Framing the Picture: UPR Info Database Action Category

Author: Edward R. McMahon Ed.D, Adjunct Associate Professor, Department of Community Development and Applied Economics, Department of Political Science, University of Vermont.

In the dozen years since the Universal Periodic Review (UPR) became operational, it has gained a reputation as a helpful tool in promoting State adherence to human rights norms and practices. The heart of the mechanism, of course, is the recommendations produced by member States and then either accepted or noted by States under Review (SuRs).

Over the first 33 UPR sessions almost 80,000 recommendations have been made. Recent sessions, in which 14 countries are reviewed, have produced on average slightly over 3,000 recommendations, up from an average of just under 1,800 in the UPR’s first 12 sessions. Interestingly, the overall split between accepted and noted/other responses has remained remarkably consistent, with about 75% of recommendations accepted.

The sheer number of recommendations makes developing a comprehensive picture of the mechanism akin to drinking from a firehose – one risks being overwhelmed by the magnitude of the numbers involved. That is why the UPR Info database can be very helpful: it breaks down recommendations by 12 different fields, e.g. State under Review, Recommending State, Region of the SuR or RS, Issue addressed, State Response, and others. This provides analytical tools for both disaggregated and aggregated analysis, and for comparative analysis of recommendations. As such it is helpful to Geneva-based HRC missions, governments, human rights defenders and other civil society organizations, journalists, academics and others seeking to understand and make use of the UPR.

One part of the UPR Info database addresses a challenge inherent in assessing UPR recommendations: how to ascertain their relative importance. Is it possible to differentiate recommendations of greater value, or import, from those of lesser? This is a difficult problem, because every recommendation is made in a particular and unique context, making it very hard to assess objectively which recommendations matter more than others. And yet such an assessment is important in determining whether the UPR process is having a substantive impact or, as some cynics maintain, is largely a charade.

One way to answer this question is to conduct a detailed analysis of the actions states have taken to address accepted recommendations. UPR Info has done some work in this area, for example by soliciting quantitative and qualitative information from stakeholders on the ground for its 2014 “Beyond Promises” study (1). Such an approach, however, is extremely complicated, time-consuming and expensive. Since the UPR’s inception, UPR Info has therefore also taken an alternative approach. Through experience, we have identified patterns that have emerged with remarkable consistency over the life of the UPR and that give us a collective sense of different categories of recommendations. The way in which a recommendation is phrased can be extremely revealing of the intent of the recommending state. Is it phrased in a “soft” way, which can make it easy for the SuR to accept the recommendation and later claim compliance? Or is it posed in more rigorous language, which requires specificity of action and accountability? Depending on the issue these dynamics may play out somewhat differently, but given the large-n nature of the data generated by the UPR, basic trends can be identified (2).

In order to provide an empirical basis for analyzing these questions, UPR Info has developed an action category scale, housed in the database, which groups recommendations based on the verbs used in the recommendation language. We have focused on the key action verbs in each recommendation and disaggregated them into five types. Generally speaking, the categories reflect increasing levels of effort, including political and financial resource allocation, required of the state to implement.

  • A rating of 1 is for recommendations directed at non-SuR states, or calling upon the SuR to request technical assistance or share information, all of which generally require little or no allocation of SuR resources. They total less than 1% of all recommendations.
  • A rating of 2 is for recommendations to continue or maintain existing efforts (16%).
  • A rating of 3 is for recommendations to consider change (8%).
  • A rating of 4 is for recommendations of general action (i.e. to address, promote, strengthen, etc.) (39%).
  • A rating of 5 denotes recommendations calling for specific, tangible and verifiable actions (36%) (3).
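The scale above amounts to a keyword-driven coding rule: identify the key action verb in a recommendation and map it to a category. The following is a minimal sketch in Python of how such a rule could work; the verb lists here are illustrative examples chosen for this sketch, not UPR Info's actual coding rules, which are more detailed and context-sensitive.

```python
# Illustrative sketch of verb-based action-category coding.
# The verb lists are examples only, not UPR Info's actual rules.

CATEGORY_VERBS = {
    1: {"request", "share", "seek"},                 # minimal SuR action
    2: {"continue", "maintain", "pursue"},           # continuing action
    3: {"consider", "explore", "study"},             # considering action
    4: {"address", "promote", "strengthen"},         # general action
    5: {"ratify", "abolish", "establish", "amend"},  # specific action
}

def code_recommendation(text):
    """Return the category of the first recognized action verb, or None."""
    for word in text.lower().split():
        word = word.strip(",.;:")
        for category, verbs in sorted(CATEGORY_VERBS.items()):
            if word in verbs:
                return category
    return None
```

For example, under these illustrative verb lists, "Ratify the Convention against Torture" would code as Category 5, while "Continue efforts to promote gender equality" would code as Category 2, since the coding keys on the first action verb rather than later ones.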

Key Action Category Findings
What can we learn from analysis using the action category approach? First, that the data produced across the three UPR cycles have been remarkably similar overall, which reinforces the validity of this instrument and the reliability of the trends noted below. These include:

  • Acceptance rates vary between categories. Recommendations associated with the most specific actions (Categories 3 and 5) received the lowest acceptance rates (55%), while Categories 1, 2 and 4, which emphasize continuity of action or actions of a general nature and thus make it easier for SuRs to define and assert compliance, had higher acceptance levels (88%). The former tend to be associated with higher costs (financial and/or political) than the latter.
  • Differences between Categories. The “softer” types of recommendations (Categories 1, 2 and 4) account for about 56 percent of recommendations, versus 44 percent for the “harder” Categories 3 and 5. These proportions have remained remarkably stable throughout the life of the UPR. This suggests that there has been no trend toward making recommendations more targeted and specific, despite suggestions from a number of observers of the process that this would be helpful (4). Conversely, neither has there been a trend toward making recommendations “softer”, or easier for states to claim compliance with.
  • Regional Distribution of Recommendations. Throughout the life of the UPR, all regions except Africa have directed a plurality of their recommendations to Asia, followed by Africa. Asian and African states make most of their recommendations to each other and within their own regions.
  • Distribution of Action Categories by Recommending State. The Western European and Other group has consistently been most active in making recommendations, the plurality of which have been in the “harder” Categories 3 or 5. By contrast, a large majority of the recommendations made by African and Asian states have been in Categories 2 and 4.
  • Issue distribution. The three most frequent topics addressed were International Instruments, Women's Rights, and Rights of the Child.

The information provided above is simply the tip of the analytical iceberg; this coding system can be used for a range of other analyses. We have noted elsewhere, for example, that more democratic states, even those in the global south, tend to make more action-oriented recommendations (5). Useful research can also be conducted on which types of issues tend to be associated with which action categories; this can be especially useful for states or NGOs focused on promoting progress on those issues. Additionally, it would be helpful to identify which types of issues in the more action-oriented recommendations tend to have higher levels of acceptance by SuRs.

We suggest that, all else being equal, more action-oriented recommendations are preferable due to their greater specificity. We recognize, however, that due to particular contextual factors relating to the SuR and the issue being addressed, this may not always be the case. Another challenge relates to the composition of the recommendations: some are easier to code than others, given the choice of verbs and the context in which they are used. UPR Info has continually engaged in internal quality control through cross-checking of recommendation codings and improvements to coding definitions. In addition, recent analysis undertaken by the independent analytics organization Huridocs indicates greater than 90% consistency in coding over the life of the UPR. UPR Info has also recently developed a sophisticated algorithmic methodology which can code recommendations by issue type and action category even more effectively and efficiently.

The Action Category methodology provides a lens through which UPR recommendations can be analyzed and assessed. It can be used in many different ways and for various specific purposes by the end-user. We hope it will continue to be utilized by individuals and organizations interested in the UPR to promote human rights in general and their work in particular. It will continue to provide an overall picture of the UPR as the mechanism develops further and, we hope, becomes a stronger and more important tool in promoting human rights worldwide.

Notes:

(1) UPR Info, “Beyond Promises”, 2014, https://www.upr-info.org/sites/default/files/general-document/pdf/2014_b...
(2) The database, and a full description of its functioning, including the action category coding process, can be found here.
(3) More information on the action category scale is available in Edward McMahon (2012), The Universal Periodic Review: A Work in Progress, Friedrich-Ebert-Stiftung, http://library.fes.de/pdf-files/bueros/genf/09297.pdf, and at www.upr-info.org/IMG/pdf/Database_Action_Category.pdf
(4) See, for example, UPR Info https://www.upr-info.org/sites/default/files/general-document/pdf/2016_t....
(5) Friedrich Ebert Stiftung, “Evolution Not Revolution”, 2016, https://www.upr-info.org/sites/default/files/general-document/pdf/mcmaho....