Recently, my colleague asked, “What is all this with ‘lived experience’ lately?” I was unsure how to answer because this term had already become a buzzword in my mind. I thought I knew what it meant, but then I couldn’t quite define it when asked so directly.

The definitions I found when I went looking started to give the term back its meaning for me: knowledge gained from direct, first-hand experience, not simply from what someone has heard about or studied.

Over the last few years, I have seen an increase in funders asking for “lived experience” as part of project designs. For example, the federal Substance Abuse and Mental Health Services Administration (SAMHSA) states, “Persons with lived experience and/or their family members should be closely involved in designing and carrying out all data and program evaluation activities.”

Rightfully so. For too long, research and public health programs have been designed by people (those in the so-called ivory tower) who have no direct experience of living with the specific health conditions or needing the services those programs intend to deliver.

As an evaluator moving my practice toward evaluation in service of racial equity, I have had to acknowledge that I should not be the one driving the questions that determine a program’s progress or success; that direction should come from those affected by the issue, program, or service. After all, progress and success matter most to them.

Nevertheless, my colleague’s comment helped me hit pause on my understanding of “lived experience” and explore what I know and don’t know. The overarching questions that arose are:

  • How do we know when we have effectively involved people with lived experience in our evaluations?
  • How do we know we are meeting the intended outcomes of engaging people with lived experience?

Incorporating people with lived experience has implications that we do not always think about in practice, but there are resources we can consult and put into practice. Those who study phenomenology, as well as evaluators who follow Utilization-Focused Evaluation, Community-Based Participatory Evaluation, and other frameworks, may be able to apply tenets and principles from those frameworks to the specific concerns and deeper questions I pose here.

It Needs to Be Meaningful

Although my colleague had heard the term “lived experience” many times, its meaning and relevance were not easy for her to pin down, even though people in her field were asking her to incorporate people with lived experience into her work. That may be because the term has the characteristics of a buzzword: a word or phrase, often an item of jargon, that is fashionable at a particular time or in a particular context.

Incorporating the expertise that comes from a person’s lived experience into evaluation questions and data collection methods seems progressive in that it moves power from evaluators to the people who have more direct experience with the subject of study. Undoubtedly, some people have a grounded, intelligent, and earnest sense of what it means to include “lived experience” in their evaluations.

On the other hand, the term is often used with little explanation, to the point where name-dropping “lived experience” can elevate the description of a research or evaluation study because there is a sense that doing so is positive and desirable.

If the term becomes overused with little understanding among those who adopt it, it runs the risk of becoming a fashionable buzzword. Soon we could find ourselves incorporating lived experience without really thinking about or understanding how to do so meaningfully, because we lack a substantive, operational framework to guide us. It could become another checkbox among the many other checkboxes in evaluation. Beyond the loss of meaningful application, checkboxes make things sound and look easier than they are. For example, engaging people with lived experience, a form of community engagement, would be undermined if people in power did not understand that it is not simply done; it requires giving up some of their own power.

Buzzwords come and go. People follow them with little attachment and drop them when the next trendy word appears. Is “lived experience” in vogue or here to stay? If incorporating people’s lived experience is important to evaluation, let’s be intentional about centering its use in a meaningful way and not let it become a buzzword.

It Must Be Specific

Evaluators need definitions. We inherently measure the extent to which things are happening as intended, and thus need defining criteria.

Defining lived experience by saying it is about people with direct experience is not enough to know how to use it in practice; it does not indicate who, what, when, or how much. For meaningful use of and buy-in for lived experience, we need a clearer definition that we can operationalize and measure.

This definition is appealing: people who have gained knowledge through direct, first-hand involvement in everyday events, rather than through assumptions and constructs from other people, research, or media (Chandler & Munday, 2011).

But it could also essentially be applied to anyone who has had any experience, which is to say, everyone.

There is work to do. To start, it would be beneficial to identify the type of lived experience. For example:

  • What specifically does the experience need to be about?
  • If it is related to a study of substance use treatment, is the lived experience we are looking for that of people seeking treatment or undergoing treatment? Among that group, is it the specific lived experience of people of color who face discrimination from healthcare providers, have no access to care, and experience ongoing trauma due to racism?
  • Would the lived experience include family members of people who have helped individuals try to access care and navigate treatment?
  • Would it include treatment providers?
  • Does the experience need to have occurred in the past, or can it be an everyday experience?
  • Could it be a one-time experience, or does it need to have occurred multiple times?
  • Is it an experience that would have to occur for some time, and if so, what length of time counts?

Once the defining factors are clearer, we need to ask to what extent there is a collective understanding of the working definition. A collective understanding will ensure greater transparency and consistency in how lived experience is incorporated into evaluation activities and how its successes are measured.

It Must Consider Gain and Harm

When asking people with direct experience to be informants to those who are designing evaluations, we need to ask:

  • Who stands to gain?
  • Who is at risk of being harmed?

It is noble of evaluators to acknowledge that they should not be the sole determiners of evaluation design and that people with more direct experience with the topic have expertise.

However, enthusiastically inviting people with lived experience to participate on the evaluation team or in an advisory capacity may not play out as well as intended. Examining the structures in which evaluations are conducted is a necessary first step. Evaluation teams are often made up of people employed to conduct evaluations. People with relevant lived experience who are asked to participate, by contrast, are not necessarily paid to do this work, and it may well come in addition to, or in conflict with, their jobs and daily activities. However important their involvement is to them, it may also mean lost income and/or lost time for other planned activities.

Furthermore, the context and structures in which the participation takes place must be considered. For example, if individuals with lived experience were invited to participate, we must ask:

  • Would it be feasible for them to get to the setting where the employed evaluators are?
  • Would they be able to take time off work or find childcare during that time?
  • Would they be comfortable and feel safe?
  • Would they risk any harm or being retraumatized—physical, psychological, or legal—given the location, space, or others involved? What safeguards have been put into practice to minimize these risks?

Evaluators need to think about what is said and the language used.

  • Is the language full of jargon that would exclude, and perhaps tax, those unfamiliar with it, limiting the expression of their lived experience expertise and their full participation?
  • Does the language that relates to the area of lived experience convey respect, i.e., have the evaluators done their work to ensure they are using appropriate and person-centered language that will not harm the individuals?
  • Has the evaluation team checked their own biases and mental models around this particular type of lived experience?
  • Have those who are invited to share their lived experience been made aware of potential discomfort, and have they had a chance to set boundaries on the topics discussed?
  • Does the evaluation plan for translation and interpretation services for participants who are more comfortable expressing themselves in another language?

With the positive intention of gaining diverse perspectives, evaluators may overlook the systems that people with lived experience have interacted with and how those play a role in their well-being. For example, if an evaluator puts together an evaluation advisory committee made up of people from various parts of the system related to mental illness, they also need to consider the potential experiences these people may have had with each other. A person with experience of untreated mental illness may have had challenging and harmful experiences with treatment providers and law enforcement. Evaluators ought to be intentional and informed when they bring people together in service of their evaluations so that they are not the creators of undue discomfort and power differentials for their own gain.

SAMHSA has created “Participation Guidelines for Individuals with Lived Experience and Family,” which describes the need for informed consent to be provided to people with lived experience prior to their participation.

Evaluators can incorporate people with lived experience into their evaluations. However, if it is the evaluator who gains while those with lived experience risk harm, the effort tokenizes them. We can do better by considering reciprocity.

It Is More Than a Number

In evaluation, we look at amounts in both process and outcome measurement. Incorporating people with lived experience into evaluation activities can be measured in terms of how it is implemented (process measures) and what it leads to (outcome measures).

Process measures might include how many people with lived experience were involved. However, it is unclear whether criteria have been developed to determine what number would be considered successful. It might be a particular proportion of the evaluation group, or it might be a minimum number regardless of the group’s size. Evaluators can ask:

  • How much representation from people with lived experience is needed for their input to be considered meaningful?
  • What number of people would be sufficient for the rest of the evaluation team to accept the validity of their expertise?
  • Given the variation in how people experience things and the range of perspectives among those with the lived experience, what factors need to be taken into account to determine a number that best represents that variation?

While determining this number, we also need to acknowledge that a power imbalance may exist within the evaluation team when most members do not have the particular lived experience. Those who bring lived experience expertise may be questioned, judged, and even discredited by traditional evaluators or other stakeholders. When interacting with historically disenfranchised and excluded populations, it is important to establish trust and put checks and balances in place to address any power differentials. People who traditionally have not had power often do not know what to do when it is finally passed to them, which may call for building capacity among the community of people with the lived experience.

Outcome measures, similarly, need careful consideration beyond the numbers. The experience of participation in the evaluation activities as well as the outcome identified by the persons with the lived experience ought to be part of the measurement strategy. Quantitative and qualitative methods may contribute to understanding this experience.

It Is Not a Commodity

Ideally, evaluators are intentional and thoughtful about which people with lived experience would be appropriate to engage in their current evaluation and approach those individuals accordingly. In reality, however, convenience and existing relationships with people with lived experience often dictate participation. For example, evaluation teams under time pressure to get a project started might reuse structures from past projects to describe the activities, recruit, and set up meetings. This can limit the pool of people who might participate, leading us to ask whether the people with lived experience who can and do participate differ from those who cannot. Those who do participate may share characteristics that make it easier for them to engage: they may find it easier to take time out of the day when evaluation planning meetings or data collection events occur, and they may more easily access the location or web platform used for the meeting. Furthermore, they may be known, for some common reason, to the evaluation team and/or its network of contacts. How might these individuals with shared characteristics influence the evaluation compared to individuals who do not share them?

Rather than treating the engagement of lived experience as a generic commodity that fills slots, where individual differences do not matter, evaluators can be thoughtful and intentional when planning their evaluations so that a variety of people (i.e., a diverse sample) with the particular lived experience are able to participate, leading to richer and more nuanced input. For example, evaluators can create budgets and timelines that allow a wider outreach period rather than relying on known contacts. They can also make attendance more accessible and feasible by varying the time and location of activities and by considering meeting set-up factors such as the technology required, the language in which it is conducted, and the accessibility supports provided.

It Is Not a One-Time Thing

If there is an impetus to create structures in which people with lived experience have a continued and consistent voice in evaluation designs, evaluators need to think about longer-term, sustained involvement and relationships. Sustained involvement is needed to make possible changes in theories of change, measures of success, and data collection methods. There are examples of evaluators setting up a committee or advisory panel that includes people with lived experience for the purpose of meeting their particular evaluation goals, then dissolving the group once the evaluation is completed. This leaves people with lived experience not far from where they began in terms of their power to provide ongoing input. It also damages trust and can make re-engagement difficult. In these cases, the information they provide is used for one purpose. It is transactional. It is not transformative for them or others like them beyond the life of the evaluation project.

Given that many evaluators work under the constraints and timelines of grant-funded or contracted work, they may feel limited in their ability to influence how people with lived experience can have continued involvement. However, evaluators can use their role in developing work plans and budgets to dedicate time and resources to setting up initial infrastructures for input that can be sustained beyond the project’s time frame, while educating funders about what it takes to engage people with lived experience in a meaningful way.

Initial Recommendations

  • Spend time determining why you are incorporating people with lived experience into your evaluations. What is the goal, and what are the benefits to the evaluation and to those people? If lived experience truly has value, let’s not let it become another checkbox.
  • Develop an operational definition of lived experience. Your definition may not be perfect, but start somewhere and refine it through a continual learning process.
  • Do your homework to better understand the people with lived experience of the topic you are exploring. This will help you create a supportive, appropriate, and effective environment for their participation.
  • Plan for and conduct intentional recruitment and outreach to people with lived experience relevant to the work you are doing. Often the same individuals are engaged over and over, and there is a need to prevent burnout and exploitation among those people.
  • Establish agreements for engaging people with lived experience that define processes, practices, and language to be used.
  • Apply your evaluation skills to understand what works well and what does not. Interview the individuals who have been serving as “people with lived experience” to discover what they identify as challenges, facilitators, and opportunities related to the issues you are evaluating and related to their engagement in the design and implementation of the evaluation.

Moving away from evaluation dominated by people who have no direct (and sometimes no indirect) experience of the topic they are studying, and toward acknowledging the expertise of people who have great insight and knowledge because of their experience with the topic, is undeniably a step in the right direction.

However, let’s not be shy about looking at ourselves and reflecting on our own mental models, experiences, and personal and professional goals as we seek to make this shift.

References

Chandler, D. and Munday, R. (2011). A Dictionary of Media and Communication. Oxford University Press. https://doi.org/10.1093/acref/9780199568758.001.0001

Community Science and W.K. Kellogg Foundation (2021). Doing Evaluation in the Service of Racial Equity Practice Guides: A Three-Part Series. https://communityscience.com/webinar/organizational-effectiveness/evaluation-in-service-of-equity/

Farmer, J. (2021). Lived experience in evaluation: Power. Jo Farmer Consulting. https://jofarmer.com/lived-experience-in-evaluation-power/

Farmer, J. (2021). Lived experience in evaluation: We’re not lab rats. Jo Farmer Consulting. https://jofarmer.com/lived-experience-in-evaluation-were-not-lab-rats/

Feige, S. and Choubak, M. (2019). Best Practices for Engaging People with Lived Experience. Guelph, ON: Community Engaged Scholarship Institute. https://atrium.lib.uoguelph.ca/xmlui/handle/10214/17653

Masseau, T. (2020). Nothing about us without us: The time is now for a peer support movement in Arkansas. Disability Rights Arkansas. https://disabilityrightsar.org/nothing-about-us-without-us/

Shaw, I., Greene, J., and Mark, M. (2006). The SAGE Handbook of Evaluation. SAGE Publications. https://doi.org/10.4135/9781848608078

Skelton-Wilson, S., Sandoval-Lunn, Zhang, X., Stern, and Kendall, J. (2022). Methods and Emerging Strategies to Engage People with Lived Experience. Office of the Assistant Secretary for Planning and Evaluation (ASPE). https://aspe.hhs.gov/reports/lived-experience-brief

Yang, E. and Stover, A. (2022). Putting Lived Experience at the Center of Data Science. MDRC. https://www.mdrc.org/publication/putting-lived-experience-center-data-science

About the Authors

Annapurna Ghosh, M.P.H., Managing Associate, is a public health researcher, evaluator, and strategic planner for a range of programs with a primary focus on substance use disorder, HIV, and chronic diseases. She has conducted evaluations with state-level systems and individual organizations; developed evaluation plans and tools; analyzed and interpreted data (quantitative, qualitative, and social network data); conducted cost-effectiveness analyses; provided technical assistance to increase coordinated care for behavioral health, and facilitated strategic planning with community coalitions.

Danielle Gilmore, Ph.D., Senior Analyst, has expertise in critical theories and phenomenology, with a specialization in conducting culturally responsive and racial equity informed research and evaluation. She has extensive experience in community-based participatory research and addressing diversity, equity, and inclusion issues within the K-12 education policy space. She is a highly trained mixed-methods researcher experienced in both qualitative and quantitative analyses including thematic and grounded theory analysis and logistic regression.