Program Assessment 101—Interview with Dr. Samantha Matlin, Scattergood Foundation
Photo courtesy of ArtWell.
This fall, the Bartol Foundation will be piloting a new series of trauma-informed training workshops for teaching artists. We’ve partnered with our co-working neighbors at the Thomas Scattergood Behavioral Health Foundation to plan assessment tools for this program. Read our Q&A with Dr. Samantha Matlin, Director of Evaluation and Community Impact at the Scattergood Foundation, to learn about the considerations involved in planning a program assessment.
In case you missed it, be sure to also check out this interview with Mindy A. Early, lead facilitator of our upcoming trauma training pilot series.
Tell us a little bit about the Scattergood Foundation. How did you come to work with the Bartol Foundation to plan assessment of the pilot on trauma-informed practice for teaching artists?
The Thomas Scattergood Behavioral Health Foundation has been a health conversion foundation since 2005. We’ve had a long-term commitment to behavioral health and the moral treatment of individuals. Really since the beginning of the Foundation, we’ve had a focus on trauma, and later more specifically on Adverse Childhood Experiences (ACEs). Over the last few years, we’ve been doing more of this work as more people have become aware of the significant prevalence and impact of ACEs and trauma. We have also been part of the Philadelphia Adverse Childhood Experience Task Force and other community coalitions.
A few years ago, we worked with Bartol on some smaller trauma-informed training workshops for artists. Since that time, we have been an advisor and partner in these conversations. This specific pilot project is really an extension of that history of working together in this area. Given my role in evaluation and program planning, the leaders of Bartol and Scattergood talked through how we could support this pilot.
When thinking about how to assess a pilot program, what are the first questions you ask?
Pilot programs should be thought of as learning opportunities, first and foremost. The focus should include the initial impact you hope the program will achieve, but should be more about understanding what you are working to implement, and how that can be documented and understood so the program can be improved in the future.
It’s also very important to consider who your participants are, especially when you’re implementing a program for the first time. You consider who the participants are, what the contact will look like, and what methods you will use to track information, focusing initially more on the implementation side than on the outcomes. Some people call this a formative evaluation: really asking questions and figuring out what kind of information you need to understand how to improve the program and then track outcomes.
What are some ways to incorporate assessment into a pilot program that are not too cumbersome for a small organization?
I think assessment has to do with both the size of the organization and the size of the program. Sometimes pilots are smaller in scale even at large organizations, so even then it may not make sense to do something too cumbersome.
Understanding who your program participants are is critical, but it doesn’t have to mean doing large-scale surveys. Assessment could take the form of asking questions in the beginning to understand participants’ baseline knowledge: How familiar are they with the topic? What kind of training have they experienced in the past? Or, in the case of teaching artists, how many students do they potentially reach through their teaching, or do they already have exposure to these kinds of concepts? This type of information can really shape what a program can look like and help you understand what kinds of changes to anticipate, that then can be measured.
In determining what type of data will be most useful, it’s important to consider how you plan to use the information. It’s often good to have some measurements and scales on quantitative surveys so that you can look at averages and even change over time, but I think that has to depend on the culture of the program participants and the organization. Qualitative data and narrative are really important; they may be sufficient for a smaller group and can even help inform learning in a larger program.
When can an organization do its own assessment of a pilot program, and when/for what purposes should they hire outside assistance?
That’s a great question, and I don’t think there is a black-and-white answer. A lot of it depends on why an organization is doing an assessment, and what kind of capacity they have to do it themselves. If an organization has staff who are able to do an assessment, then that could make a lot of sense. It is important for an organization to be able to assess its programs as part of its work. But if the goal of the assessment involves more rigorous evaluation research, that could be a reason to hire externally. There are still benefits to having someone who is part of the organization, and closer to the program, do the assessment, because they can really understand that program in a different way.
Anything else you’d like to add?
We’re really excited to be involved. It’s fantastic that Bartol is using the available information and training around trauma and ACEs to think about the benefit that can bring to teaching artists and students. It’s a pleasure to play a role in supporting that learning.
Interview responses have been edited for length and clarity.