As funders, communities, and evaluators become more knowledgeable about the root causes of racial and ethnic disparities in health, education, income, and other conditions of well-being, we begin to realize that community and systems change interventions are necessary to address those root causes. Consequently, we evaluators find ourselves re-examining our roles, training, and competencies. Evaluations of these types of interventions do more than generate knowledge or inform investments; at their best, they also help strengthen communities and promote equity. In our evaluation of place-based work at Community Science, we see evaluators playing the following roles in addition to carrying out their technical responsibilities.

  1. Change agents—in order to promote and monitor progress toward social justice and equity;
  2. Negotiators—as part of having to facilitate and navigate relationships in the community;
  3. Capacity builders—when we train and coach organizations to use research, evaluation, and data; and
  4. Cleaners—when we are engaged to “fix” an evaluation that went awry for a variety of reasons.

Evaluators as change agents. Information generated through evaluation is used to inform policies, strategies, and programs intended to end racial and ethnic disparities. When we actively promote an improved or alternative course of action based on the results, we have the potential to advance social justice and equity, and in doing so we assume the role of change agent. This means evaluators have to be vigilant about how the inquiry process is designed and implemented, especially in orienting the intervention within a broader context and through a systems lens so as to lift up structurally racist policies and practices that might have contributed to the inequity. This role, however, can be challenging in the following ways.

  • Some funders, policymakers, and evaluators may perceive the role of change agent as a conflict of interest because the evaluator is no longer objective, but rather has a vested interest in ensuring that the evaluation serves to advance equity—regardless of whether the effort fails or succeeds (McClintock, 2003).
  • Evaluators are typically not trained as change agents.
  • Evaluators are expected to be competent in evaluation but not always in the subject matter the evaluation addresses, such as obesity prevention, economic development, and especially structural racism, even when the effort is designed to end racial and ethnic disparities.
  • Evaluators of color can become frustrated and emotionally drained as they carry the weight of caring for their health and careers and advocating for their communities, while also taking on the task of educating others about racism, oppression, and how these issues show up in evaluation.

Evaluators as negotiators. Funders tend to invest in communities without sufficient attention or resources to address the turf issues or value differences that frequently exist among organizations in a community or place. Competition for resources and power, as well as where an organization is situated in a system, can affect how it engages with the evaluation and the evaluator. Evaluators may find themselves caught in a web of complicated relationships: among organizations that compete for grants; between established and emerging organizations, with the former having more power than the latter; and among organizations with different values or that serve different constituencies. The evaluator faces some formidable questions: Who represents or speaks for the community? To whom is the evaluator accountable? How can the various participants reach consensus? (Leviton, 2003). It can be difficult, if not impossible, for evaluators to remain separate from the dynamics that can affect the evaluation’s implementation, outcomes, and impact. Consequently, evaluators find themselves playing the roles of facilitator, broker, negotiator, and conflict manager on top of their role as evaluator. This role is essential, yet challenging, because:

  • Community is complex. People belong to multiple communities, one community can be nested in another one, communities are organized differently depending on a multitude of factors, and communities are made up of formal and informal institutions that can be difficult for an outsider to discern (Chavis & Lee, 2015). Evaluators are not trained in understanding or navigating this complexity.
  • Conflict is inevitable and unpleasant for everyone involved, from the funder to the community stakeholder to the evaluator.
  • The amount of time necessary for genuine community engagement in evaluation is often not accounted for in the timeline or resources put forward by the funder and evaluator.

Evaluators as capacity builders. To solve complex social problems, organizations have to implement strategies directed at community and systems change. For organizations used to delivering services, this is a shift in thinking; organizations already engaged in systems thinking, meanwhile, struggle to identify and collect data on the right outcomes. As a result, evaluators have to work with organizations to build their capacity to link their strategies to the desired outcomes (e.g., through logic modeling, a process that program implementers usually dislike); to design and implement a monitoring and learning system that can bridge program, evaluation, and social change; and to engage in a community of practice that promotes peer exchange about evaluation within what is usually perceived as a competitive environment for resources. The field of evaluation capacity building has grown tremendously in the past several years (Wandersman, 2014), yet we continue to struggle with the following in building the capacity of public, private, and nonprofit organizations to use research, evaluation, and data:

  • Limited staff in nonprofit organizations whose attention is focused on designing and implementing interventions and delivering services—evaluation responsibilities are viewed as taking away from these tasks;
  • Lack of role modeling and example setting by public and private funders who expect their grantees to use evaluation to learn and improve their work but don’t usually practice the same discipline;
  • Inadequate measures of evaluation capacity, which make it difficult to justify the capacity building effort or demonstrate its outcomes; and
  • Lack of training of evaluators as technical assistance providers, coaches, and trainers.

Evaluators as critical friends to communities and one another. The field of evaluation has been exploding in the last few decades, as advances in information and technology contribute to new evaluation models, proprietary tools, and marketing and branding strategies. Evaluators compete with one another for contracts and for influence with funders, and they may end up paying more attention to promoting their models and tools than to helping stakeholders embrace and use evaluation. As a result, evaluation consumers find themselves weighing choices about which method is best for their intervention, how much it will cost, and how it will benefit them in the long run, much like buying a car, and usually in order to comply with their funders’ requirements. These consumers are not equipped with the knowledge to make informed choices and, consequently, may end up making the wrong decision. Fast-forward a couple of months, and another evaluator enters as the cleaner, with the challenging task of fixing what went wrong. This happens more often than people would like to think. Chavis (2003) discussed how, as evaluators, we can be our own worst enemies in perpetuating this situation, because we tend to:

  • Brand our approaches without explaining in plain English what the consumer actually gets and, sometimes, even put our own needs before the communities’ needs.
  • Insist that objectivity and rigor mean we should separate ourselves from the messiness of the real world and, thus, limit our involvement to problems of measurement and analysis.
  • Work alone rather than collaboratively with one another, even though collaboration is more effective for generating ideas and solutions to the complex problems we are supposed to help solve; the “fad” culture of funders also encourages this isolation.
  • Forget that we are part of the change process and play a role as change agents.

So, how do we avoid the cauldron?

We cannot, because, as Saul Alinsky, a major thought leader in community organizing, asserted, “Change means movement. Movement means friction. Only in the frictionless vacuum of a nonexistent abstract world can movement or change occur without that abrasive friction of conflict.”

Evaluators need to speak truth to power. In fact, we need to understand and address power and our role in promoting change or the status quo. As change agents, capacity builders, facilitators and negotiators, and critical friends—roles that are not mutually exclusive—and as long as we work in communities and attempt to evaluate interventions designed to ameliorate social problems, we will always find ourselves in a cauldron where conflicts are inevitable. Rather than avoid or dismiss them, we need to learn how to deal with them.

Community Science has learned a lot about our role in evaluation, sometimes the hard way. Nevertheless, we remain optimistic that by continuously raising the hard questions—at the risk of sounding like a broken record—and challenging ourselves and our colleagues, we can develop a fireproof suit that will reduce the heat.

The American Evaluation Association’s Guiding Principles exist to help us remember that our job as evaluators is to make the world better, and a just and equitable world is a better world. Reflecting on the quote by Alinsky, we do not exist or operate in a vacuum, and we are not value-free. A commitment to address the root causes of inequity and injustice does not make us any less truthful or precise in our analysis.

So, what can we do? Perhaps we can start by:

  • Requiring evaluators to attend professional development activities that teach about communities, change management, conflict management, cultural competency, structural racism, advocacy, adult education, and coaching techniques.
  • Requiring higher education and training institutions to incorporate this knowledge and skill set into their evaluation curricula.
  • Inviting the American Evaluation Association to design and support a community of practice that shares knowledge on how the Guiding Principles for Evaluators can be operationalized and followed in real-world settings.
  • Acknowledging and making explicit, rather than shying away from or keeping implicit, the power that evaluators possess.
  • Holding funders, community leaders, stakeholders, and ourselves as evaluators accountable to the higher standard of advancing equity and not being afraid to engage in conflict, to collaborate with others, and/or to walk away from an idea or contract that could potentially do more harm than good.

References

Chavis, D. (2003). Looking the enemy in the eye: Gazing into the mirror of evaluation practice. Harvard Family Research Project, IX(4).

Chavis, D., & Lee, K. (2015). What is community anyway? In P. Tamber, B. Kelly, L. Carroll, & J. Morgan (Eds.), Communities creating health. London: Pritpal S. Tamber Ltd. in association with Stanford Social Innovation Review.

Leviton, L. (2003). Commentary: Engaging the community in evaluation: Bumpy, time consuming, and important. American Journal of Evaluation, 24(1), 85-90.

McClintock, C. (2003). The evaluator as scholar/practitioner/change agent. American Journal of Evaluation, 24(1), 91-96.

Wandersman, A. (2014). Moving forward with the science and practice of evaluation capacity building: The why, how, what, and outcomes of ECB. American Journal of Evaluation, 35(1), 87-89.