Youth programs create meaningful change every day, yet translating that impact into clear, credible evidence remains a persistent challenge across the field. Funders, policymakers, and communities increasingly ask for proof of outcomes, while youth-serving organizations often describe transformation that unfolds in ways not easily captured by traditional metrics. This is a familiar tension: impact is happening, but it is difficult to measure, communicate, and sustain in ways that both satisfy accountability and funding requirements and support learning.

This challenge is not confined to a single initiative or funder. It reflects broader structural conditions shaping how youth programs operate, how evaluation is designed, and how success is defined. Drawing from evaluation and learning partnerships with foundations, nonprofits, and government agencies, we see this pattern consistently across youth engagement, leadership, and out-of-school time initiatives. The issue is not whether youth programs matter or whether capturing their outcomes matters; we know they do. The issue is how systems of measurement and storytelling can better reflect the depth, complexity, and long-term value of youth development work.

Where the Disconnect Emerges
In practice, youth-serving organizations describe impact in rich, relational terms. Programs report strong youth engagement, increased confidence and belonging, social-emotional growth, and trusted relationships between staff and participants. These outcomes are visible to educators, families, and young people themselves, yet they are difficult to translate into standardized indicators or short-term performance measures.

Many organizations rely on attendance tracking and anecdotal evidence as their primary data sources, not because they lack interest in evaluation, but because they lack the capacity, tools, or support to do more. Organizations frequently express interest in surveys, behavior tracking, and mixed-methods approaches that combine quantitative and qualitative data, but struggle to move from intention to implementation in a culturally and contextually responsive manner.

At the same time, youth programs operate within funding environments that emphasize reporting and accountability, often across multiple funders with differing expectations and requirements. Limited staff time and expertise are stretched across service delivery, administration, and data collection. As a result, organizations may spend significant energy collecting information that doesn’t fully reflect the outcomes funders seek, while also lacking the space to translate what they are learning into a compelling, coherent narrative.

Why Measuring Youth Impact Is Structurally Difficult
Across our experience supporting youth-serving initiatives, several recurring barriers help explain why youth impact remains hard to demonstrate:

  • Limited capacity for evaluation, learning, and continuous improvement. Many youth organizations do not have dedicated staff or resources to design tools, manage data systems, analyze findings, track outcomes over time, and translate insights into knowledge to support the continuous improvement of their organization and programs.
  • Data access challenges. Privacy rules, data-sharing agreements, and administrative hurdles often limit access to school or system-level data, leaving programs with an incomplete picture of youth outcomes.
  • Administrative burden. Multiple reporting requirements and disconnected data systems pull staff away from direct service, reflection, and program improvement.
  • Compliance-driven evaluation cultures. When evaluation is framed primarily as accountability and compliance, organizations may feel pressure to report only positive results, rather than surface challenges, adaptations, or emerging insights that support learning.

These barriers are not failures of individual programs. They are structural features of the systems in which youth programs operate. Addressing them requires shifts both in measurement tools and in how funders and grantees understand evaluation as a shared responsibility.

What Youth Programs Can Do to Strengthen Measurement and Storytelling
When supported by funders and partners, youth programs can take concrete steps to bridge the gap between their work and how impact is communicated. Across initiatives, organizations have adapted their practices in ways that offer broader lessons for the field:

  • Define what success looks like. Focusing on a small set of mission-aligned outcomes helps avoid data overload and keeps measurement purposeful.
  • Measure meaningfully. Selecting tools that reflect desired change and collecting data from the start and at appropriate timepoints allows programs to capture outcomes and growth more accurately. Qualitative evidence such as youth reflections, quotes, and photos adds important context.
  • Connect data to lived experience. Pairing numbers with real examples from youth and families makes impact more relatable and compelling.
  • Involve youth in evaluation. Engaging young people in defining outcomes, shaping questions, or interpreting findings strengthens relevance and insight.
  • Embrace a learning culture. Programs that feel supported are more willing to surface challenges, adjust mid-stream, and improve practice.
  • Leverage partnerships. Peer learning and collaboration help organizations share tools, align quality standards, and amplify collective voice.

Taken together, these practices help build a stronger foundation for demonstrating impact over time, while ensuring that evaluation remains aligned with the day-to-day work of supporting young people.

Toward a More Collaborative Learning Model for Impact
When evaluation is treated as a collaborative and learning process, it shifts from being a reporting obligation to a shared tool for learning and improvement. Stronger measurement and communication emerge when funders and youth programs align early on what success looks like and how progress will be understood. Co-designing metrics and evaluation plans helps ensure that data serves both learning and accountability, while building shared ownership from the outset.

Ongoing, open communication further supports this work. Regular, informal check-ins create space for organizations to share early insights, surface challenges, and receive feedback before issues escalate or expectations drift. These conversations help normalize revision and pivoting and keep evaluation grounded in real-time program conditions rather than reporting that only looks backward.

Partnership also matters in how evaluation is carried out. When funders, youth-serving organizations, and evaluators work together, evaluation questions are more likely to reflect program context and the lived experiences of the youth and families involved. This collaborative approach strengthens relevance and credibility while reducing the sense that evaluation is something done to programs rather than with them, in service of their own impact, evolution, and sustainability.

Emphasize Shared Storytelling and Advocacy
Finally, shared storytelling and advocacy extend the value of evaluation beyond individual grants. When funders and youth organizations work together to share their stories supported by evaluation findings, the potential impact increases. Funders can use stories to educate and influence their peers’ grantmaking strategies in a region. Grantees can use the evaluation findings to support collective action and align around common advocacy agendas. Shared storytelling can also encourage and support collaboration among grantees and other nonprofits in the region.

A Case Study: How One Funder Began to Shift the Conditions
Our work with Dogwood Health Trust’s After 3PM initiative in Western North Carolina (WNC) offers one example of how these challenges can be addressed when funders intentionally invest in the conditions that support learning and impact. Rather than treating evaluation as a standalone requirement, Dogwood focused on strengthening the broader ecosystem in which out-of-school time (OST) programs operate.

To support grantees’ ability to demonstrate and sustain impact, Dogwood prioritized several interconnected strategies:

  • Provide multi-year funding. Longer-term support offered stability, allowing organizations to focus on program quality and longer-term outcomes rather than short-term survival.
  • Be flexible with funding. Flexible resources enabled grantees to address real-time needs such as transportation, meals, and staff development, strengthening responsiveness and resilience.
  • Invest in data infrastructure and technical assistance. Support was intentionally built directly into the initiative through technical assistance and working groups facilitated by the evaluation and capacity building partners.
  • Strengthen workforce supports. Investments in staff development and well-being helped reduce turnover and improve consistency, benefiting both program quality and data continuity.
  • Encourage peer learning and collaboration. Regional convenings, communities of practice, and working groups created space for shared problem solving and collaboration across organizations.
  • Promote a learning mindset. By normalizing challenges and honest reflection, Dogwood fostered transparency and continuous improvement, resulting in more grounded insights into program outcomes and funder impact.

Dogwood’s approach illustrates that demonstrating youth impact is not simply a technical exercise. It depends on the stability, trust, and learning conditions a funder creates in the communities it serves.

In Dogwood’s After 3PM initiative, the shift from compliance-driven evaluation to a shared learning strategy that strengthens the regional OST ecosystem was evident after the initiative’s first year, due in part to the foundation’s intentional use of evaluation. By pairing estimates of impact with qualitative insights, Dogwood elevated field-level lessons about what it takes to deliver equitable access to youth programming, particularly in rural and disaster-impacted communities, while reinforcing trust and transparency between itself and its grantees.

Through joint communications, convenings, and coordinated advocacy, Dogwood also created platforms for shared storytelling that elevated the lessons and strengthened the regional case for youth programs. This approach reduces pressure to overstate outcomes, supports honest dialogue about challenges, and strengthens advocacy by combining measurable results with compelling stories of opportunity, youth leadership, and well-being. As a result, evaluation functions as accountability, learning, and infrastructure for ecosystem growth, making impact clearer, more credible, and more sustainable over time.

Looking Ahead
Strengthening how youth programs measure and communicate impact is complex work, but it is essential. The goal isn’t simply to produce data or compelling stories, but to generate credible and diverse evidence that reflects the full scope of what young people gain through participation in these programs.

Experiences like Dogwood’s After 3PM initiative demonstrate what is possible when funders intentionally invest in organizational capacity, evaluation, shared learning spaces, flexible funding, and collaborative infrastructure. Over time, such investments help position youth programs not merely as service providers but as essential community infrastructure supporting young people, families, and broader systems of well-being. When impact measurement is approached as a shared journey rather than a compliance task, youth programs are better equipped to adapt, learn, and thrive, and their contributions become easier to see, understand, and sustain.

About The Author

Carlos Anguiano, Ph.D., Director, is an educational psychologist whose professional interests and commitments are rooted in his passion for ensuring that every child has fair access to high-quality education, from their formative years through adolescence and young adulthood. His research training, combined with his cultural experiences, enables him to work effectively with parents, youth, educators, and community leaders from different backgrounds.