Community Science Principal Associate and CEO David Chavis was an invited presenter at the International Conference on the Empirical Study of Evaluation Utilization sponsored by CLEAR-LA (www.clear-la.org) on May 23–24, 2016, in Mexico City. CLEAR-LA is part of the global partnership of regional Centers for Learning on Evaluation and Results (CLEAR, www.theclearinitiative.org). The conference was attended by over 100 policymakers and evaluators from across Latin America, and by hundreds more via live streaming. It was the kickoff event for National Evaluation Week in Mexico, as declared by the Mexican President, Enrique Peña Nieto. Over 40 evaluation events were held throughout Mexico that week.

Dr. Chavis presented on how the tensions between rigorous and useful evaluation are not inherent but most often result from a lack of understanding of the appropriate methods for evaluation, especially those suited to complex situations, and from evaluators’ reluctance to see themselves as part of a collaborative learning process with other stakeholders. The key points of his presentation included:

What Makes Evaluation Useful?

  • Is conducted rigorously (based on scientific principles—systematic, reflects implementation across changing contexts, responsive to culture and capacity, evaluator is part of the solution);
  • Engages all stakeholders, including those most affected by the issue, to guide the evaluation and to build understanding of the context of the people, organizations, and communities;
  • Is adaptive—promotes collaborative learning and decision making;
  • Builds capacity for using data and generating knowledge;
  • Communicates according to the audience; and
  • Promotes equity and justice.

Ways to Make Evaluation Useful

  • Methods that are appropriate and ensure that findings are as defensible as possible;
  • Facilitation of stakeholder engagement processes that also pay attention to power inequities;
  • Logic modeling as a planning tool;
  • Continuous reflections that inform and help institutionalize strategy improvement procedures and practices;
  • Cultivation of a learning culture; and
  • Attention to disparities and injustice.

Discussion following the presentation largely addressed the misconception that evaluators can be objective (i.e., free of bias), as exemplified by past research that was considered objective yet promoted the racist and sexist conceptions of the times, as well as those of the researchers themselves. Evaluation integrity was presented as a more useful and honest approach, and one endorsed by the American Evaluation Association, whereby evaluators’ biases or advocacy are presented clearly, research methods and rules of evidence are transparent, and data are appropriately made available.

More information on the conference can be found at http://www.clear-la.org/home/2016/04/sieesue/