Tom Kelly, M.P.H. As we continue to celebrate Community Science's 20th anniversary, we are commemorating those who have been major contributors to the success of Community Science's mission by conducting interviews detailing their contributions. The second contributor in the series is Tom Kelly, Vice President-Knowledge, Evaluation & Learning at Hawaii Community Foundation and formerly the Associate Director for Evaluation at the Annie E. Casey Foundation. Tom was interviewed by Nour Elshabassi, Research Assistant.
NE: In order to set the stage for our readers, can you tell us a little about yourself and how you came to be involved in the evaluation profession?
TK: I won’t give you the long story, but what’s interesting is that the news this morning was talking about the 25th anniversary of the Rodney King riots. Exactly 25 years ago, my plane landed at Los Angeles International Airport (LAX) at about 8:00 p.m., just as the riots started, and I was arriving for my first solo evaluation project. I did an evaluation of legal representation of abused and neglected children in California and was having my first client meeting in Los Angeles just as the riots began. This anniversary reminded me of the first time I implemented an evaluation by myself.
I had entered evaluation a few years earlier as a research assistant at a small, women-owned business consulting firm in D.C. I learned the business in a practical way and then went to graduate school at night. Before that job, I did not know about evaluation; I came in with child welfare experience and began with an evaluation project for the United States Department of Health and Human Services (US DHHS). That was my entry into the business 27 years ago. And like I said, my first solo project was in California, holed up in my LAX hotel room for several days during the riots.
I’ve been practicing evaluation nonstop since. I studied for my master’s part-time at night from 1992 to 1995 at George Washington University in Washington, D.C., while I continued working as a consultant. I worked for two 8(a) small business firms: one women-owned and the other African-American-owned. I worked mostly on evaluation capacity building projects, evaluation design, management information systems design, and many process and implementation evaluations. These were predominantly for child and family programs in a variety of areas—public health, substance abuse prevention, youth development, child welfare, and Head Start. In the late ’90s I did state evaluation projects, mostly Aid to Families with Dependent Children (AFDC) and Medicaid waiver studies, before entering philanthropy when I started at the Annie E. Casey Foundation in 1999 as an evaluation officer. I have been managing evaluations inside foundations since ’99; it has been 17 years.
NE: What insights or experiences led you to become involved in the way that you did?
TK: A few things. First, there is an interesting intellectual part of evaluation that intrigued and challenged me: the theory and the design, the theory of change, logic model building, the testing of hypotheses, and the use and interpretation of data to actually get answers. All that came first.
The second was that my work in community change evaluation really opened up the bigger public good and benefit that evaluation could bring if it were practiced ethically and with intention, focused on a greater purpose of positive change. Evaluation could actually contribute to a community’s own learning and change agenda, and could contribute to community change in a more positive way than a purely academic exercise would. Over time, I’ve seen that more and more—even if it’s within single agencies and agency programs—helping people improve their service practices, and helping people think about, adapt, and frame changes in policies and systems that are helping communities.
It was seeing the practical application of both evaluation and evaluative thinking, and also the use of evaluative data in actually shaping practice, shaping how people both consider problems and implement solutions. I think it was that orientation of evaluation in a very community-oriented, problem-solving way that is inspiring enough to keep you in the field and keep you doing the work, on top of the satisfaction I gain from the intellectual side of the profession.
NE: What changes have you seen in the field of evaluation and in philanthropy’s use of it over the past 20 years?
TK: One, there has been tremendous growth. It was already on a path before I entered the sector—we’re talking 1999, when I started in philanthropy. I did have some early foundation clients while I was consulting; healthcare conversion foundations were just starting, and I conducted needs assessments for some early planning. The number of evaluation units inside foundations has grown, and certainly community and family foundations have now started to include evaluation in their staffing. Foundation evaluation capacity has tackled some of its technical and utilization problems. I say some, but it really has made strides in evaluation capacity, use, and evaluative thinking—including its integration with philanthropic strategy—and also in foundations’ ability to better communicate findings and think harder about how findings are used to influence. All of those things have come a long way in the field of evaluation generally, but I think philanthropy, as an active client and funder of evaluation, has been able to influence some of that.

I definitely think the field has become more diverse, not just racially and ethnically but also in the types of skillsets that have come into evaluation. The field still has a long way to go but has certainly made strides since I have been involved. People are struggling to solve problems, consider new solutions, weigh options, and consider alternative views, and evaluators have that skillset. There’s more demand, and I think that’s important.

That said, and not to be negative, I think we still have a long way to go on broader capacity within government, especially local government, and a longer way to go with smaller nonprofits and agencies that have not had the time or resources to build their own capacity—and especially on how we design, implement, and use evaluation in ways that address and reduce inequity, not just document it. We’re not in a perfect world yet. I am still introduced to nonprofits or local government departments that have not had positive or enriching evaluation experiences [and], therefore, are either still slightly burned by the experience or just don’t find it that useful.
NE: What do you think have been your contributions to evaluation and its use in philanthropy?
TK: The first is that I have had extremely positive collaborative experiences in evaluation; all of my evaluation work, for the most part, I feel has been collaborative. So, I share these contributions and successes with colleagues, consultants, and especially nonprofit and community partners, but there are a few I am most proud of. The first is the experience we had building the local learning partnerships in the Annie E. Casey Foundation’s community change initiative, Making Connections, which Casey funded for 10-plus years. I did that with a great set of colleagues and partners, including local evaluators and community members on evaluation teams. That building of community-generated local learning and evaluation capacity—capacity intended to contribute to real change in the community—was an extraordinary experience. It was challenging and we had many missteps, but we learned a lot together, and we also had many amazing successes. I recently reconnected with several local learning partners, and we all shared how we are still using and processing those experiences and really want to write more about them, because I think the field could learn a lot more. I was at the GEO Learning Meeting this week, which focused on equitable evaluation, and I felt proud that we were trying to do this 15 years ago without the benefit of our current knowledge and tools.
I also think of the collaborative work on advocacy evaluation that I was able to be a part of with colleagues from Atlantic Philanthropies, The California Endowment, Innovation Network, Blueprint Research and Design, ORS Impact, and Julia Coffman, now at the Center for Evaluation Innovation. We wanted—actually, we needed this in our foundations’ work—to push for and develop more ways to frame, implement, and use evaluation of policy and advocacy. It’s amazing to see it exist as a field ten years later: the first book on advocacy evaluation was just published, and there is now a topical interest group at the American Evaluation Association (AEA) focused on advocacy evaluation with 100-plus members. When we started 10 years ago, there wasn’t much at all outside of academia. So, I do look at that as a wonderful collaborative success in advancing the field.
The third is in the practice of how evaluators think about the way foundation staff use evaluation. Again, this has been collaborative—with my evaluation colleagues in other foundations and with the consultants I’ve gotten to work with. How foundations approach and use evaluation has really been what I’ve concentrated on all these years, not just individual evaluation projects. I’ve written and rewritten with Jane Reisman and Anne Gienapp at ORS Impact about how foundations can think about defining and measuring all their results, including their influence, leverage, and learning. We’re always happy to see that work get used and picked up. It is always a work in progress, but I am glad that people find it useful.
Another of my first projects as an evaluation manager was as the editor of the US DHHS Administration for Children, Youth and Families’ Program Manager’s Guide to Evaluation. Twenty years later, it’s still in print and appears in the Federal Register. It always amazes me when I see it referenced, because it was one of my early efforts, and it was intended as a step toward making evaluation practical and understandable to program directors.
This has been a theme in my work—being practical—having started in consulting rather than in academic training. My career has been guided by extremely pragmatic choices. You have to do evaluations that achieve a particular goal and satisfy a particular client, and there are lots of choices to make along the way. When you don’t have the academic freedom to take on multiple interesting questions, you’re really forced to ask, “How will I maintain the evaluation’s integrity within constraints of time, budget, or client interest?” That has shaped how I’ve gone about evaluation. Sometimes I regret it and wish I had spent more time in the academic study of evaluation. I appreciate AEA conferences and journals for helping me fill that gap, because there is a lot I gain from theory that I didn’t learn early enough in my career and that continues to help me in my work. But again, I have lived in the practice world of evaluation for a long time.
NE: How has Community Science contributed to the advancement of evaluation, particularly in philanthropy?
TK: I think two things in particular. The first is that Community Science’s orientation and experience in community—especially in community development, community organizing, and community change—brought with it a point of view and a learned perspective on how evaluation can and should contribute to community change. They were more familiar with different models of change, both in their work with philanthropy and with government, particularly community change and community capacity building. I also worked with them on the challenge of defining the community capacities needed to implement and carry forth change, and their experience and perspective there contributed a lot. The second contribution is their long-standing work and orientation to how race and equity are not only examined and evaluated, but embedded in the evaluation itself. Community Science was doing that work well before many people were talking about it in foundation evaluations. So, I think those are two very specific but significant contributions they’ve made.
NE: Can you think of some examples of how Community Science’s work with you on community change initiatives made a difference and what that difference was?
TK: I’ll point to two different things at two different times. While we were doing the evaluation of Making Connections, we struggled with the proper, or rather the helpful, way to frame what community capacity is and how one assesses or measures it. I worked with Community Science to distill not only what we had already learned and collected in Making Connections, but also what other organizations and initiatives had attempted in the past. It was an excellent distillation of real definitions of how people were experiencing community capacity. And I do think that helped sharpen how and where we looked, and how we might put the building of capacity into an evaluation that looked at the whole initiative and not just the outcomes. So that was a pretty helpful step, though always a work in progress (see Scope, Scale, and Sustainability: What it Takes to Create Lasting Community Change [2009]). We were constantly learning new things as we went back and adapted, but at that point in time, it really helped us sharpen what we were trying to do in the evaluation.
A second thing happened after Making Connections ended, when the Casey Foundation needed to think about its next iteration of community change. Community Science helped with both the debriefing of the lessons and experiences and a review of the multiple theories of change that were in play, not only in past work but also in other foundation and community change work. That helped propel the foundation to its next iteration of what it wanted to do to fund effective community change. There were lessons about things not to do and things it wanted to strengthen, and that process of distilling and analyzing the successes, the challenges, and how the foundation’s role might shift going forward is what helped the foundation absorb and use the lessons of the initiative and the evaluation, and translate them into a new and more effective strategy (see Emerging Action Principles for Designing and Planning Community Change [March 2015]).
NE: In what ways have you seen Community Science’s approach to evaluation help advance efforts to build healthy, just, and equitable communities?
TK: As I stated earlier, I do think that Community Science’s orientation to the purpose of community change, and the ethical demand of making sure that the change is wanted, needed, appreciated, embraced, and driven by the community, have helped advance efforts to build healthy, just, and equitable communities. It’s simply their orientation in how they go about their work. The second contribution to the evaluation field is their early point of view—not just adoption—on how to integrate race and equity deeply in their work as consultants and advisors, particularly in philanthropy and, I assume, across their clients. Having that experience over multiple years has been critical in helping the philanthropic sector.