Transnational Evaluation Systems As Relations of Ruling: Exploring Knowledge Production through the Everyday Actions of International Development Professionals

Monday, 16 July 2018: 10:30
Oral Presentation
Emily SPRINGER, University of Minnesota, USA
In an era of tenacious interest in ‘evidence-based decision making,’ international development organizations use data from evaluation systems to publicize success, demonstrate project efficacy, and claim impact to donors. Transnational evaluation systems measure the ‘success’ of development projects through a process of downward-moving policies from funders and upward-moving data from project sites, aggregated to demonstrate a return on investment. This bureaucratic system is made possible only through the joint effort of diverse development workers — from data collectors in rural areas abroad to evaluation directors in donor countries. How do evaluation systems, and the bureaucratic processes of which they are a part, coordinate the behavior of people across temporal, spatial, and cultural divides? And what possibilities do individuals have to resist, negotiate, and recreate these systems?

This paper is based on 57 interviews with managers, evaluation advisers, and consultants in the East African field offices and headquarters of a bilateral donor who, through labor at multiple localities, animate the evaluation system of an agricultural development initiative. An institutional ethnographic approach sharpens analysis of the governing power exerted through evaluation systems while remaining attentive to the constitutive power of development workers' everyday actions as they animate evaluation and make it ‘happen.’ This paper argues that institutional ethnography provides rich insight into the agency/structure debate by focusing on metrics as a form of global governance within transnational bureaucracies. Evaluation systems induce diverse professionals to work together to turn the particularities of social life in varied communities into fact-like knowledge digestible to funders and policy-makers. I suggest that, in doing so, evaluation systems shore up quantitative knowledge, deprioritize transformative development agendas that are not easily measured, and narrow the space for learning, despite officially stated goals to the contrary.