While evidence-based practice is a decision-making process that incorporates the best research evidence, the best clinical experience, and family and client values, evidence-based programs are programs that have been standardised, systematised and rigorously evaluated.
According to Williams-Taylor [1], evidence-based practice is an
Approach, framework, collection of ideas or concepts, adopted principles and strategies supported by research. (p.4)
Evidence-based programs, on the other hand, are:
Programs comprised of a set of coordinated services/activities that demonstrate effectiveness based on research. Criteria for rating as such depend upon organization or agency doing the rankings. [Evidence-based programs] may incorporate a number of evidence-based practices in the delivery of services. [1, p.4]
A common feature of evidence-based programs is that they are identified on a list or registry of evidence-based programs. Cooney, Huser, Small and O’Connor [2] suggest that:
A program is judged to be evidence-based if (a) evaluation research shows that the program produces the expected positive results; (b) the results can be attributed to the program itself, rather than to other extraneous factors or events; (c) the evaluation is peer-reviewed by experts in the field; and (d) the program is “endorsed” by a federal agency or respected research organization and included in their list of effective programs. (p.2)
There are a variety of registries of evidence-based programs, each of which has different criteria for inclusion. The EPISCenter requires evidence-based programs to have:
- Effectiveness demonstrated in rigorous scientific evaluations
- Effectiveness demonstrated in large studies with diverse populations or through multiple replications
- Significant and sustained effects
To be classified as “Well-supported by research evidence” on the California Evidence-Based Clearinghouse for Child Welfare, the research needs to meet the following standards:
1. Multiple Site Replication and Follow-up:
– At least two rigorous randomized controlled trials (RCTs) in different usual care or practice settings have found the practice to be superior to an appropriate comparison practice.
– In at least one of these RCTs, the practice has shown to have a sustained effect at least one year beyond the end of treatment, when compared to a control group.
– The RCTs have been reported in published, peer-reviewed literature.
2. Outcome measures must be reliable and valid, and administered consistently and accurately across all subjects.
3. If multiple outcome studies have been published, the overall weight of the evidence supports the benefit of the practice.
4. There is no case data suggesting a risk of harm that: a) was probably caused by the treatment and b) the harm was severe or frequent.
5. There is no legal or empirical basis suggesting that, compared to its likely benefits, the practice constitutes a risk of harm to those receiving it.
6. The practice has a book, manual, and/or other available writings that specify components of the service and describe how to administer it.
It is recognised, however, that many effective social and community-based interventions are unlikely to feature in such databases, as testing by methods such as randomised controlled trials is often ethically and practically unsuitable. Evidence-informed practice frameworks and less scientific means of evaluation may be more suitable for assessing social and community interventions, particularly those taking place in complex social environments.
The criteria used for inclusion in their profiles are:
- A pre- and post-test methodology (or higher) with at least 20 participants (in both the intervention and control groups) or high quality qualitative research that includes at least 20 participants or a combination of these.
- Documented information about the program is readily available including its aims, objectives and theoretical basis; a program logic or similar; a clearly articulated target group and its elements or activities and why they are important.
- A workbook or documentation that allows replication.
- The evaluation shows positive outcomes (with no significant negative effects reported).
- The program has been replicated or has potential to be replicated.
Once a program has been tested and shown to be effective, it is also important to consider whether it can be successful in real-life conditions, and to consider the impact of changes to the original program. In particular, we need to consider fidelity and adaptation. Fidelity is the “extent to which an enacted program is consistent with the intended program model” [3, p. 202], while adaptation involves modifying or enhancing a program without compromising its core components.
It is important to recognise that just because a program has been shown to work in one context (e.g., an urban, middle class area) or with a particular group (e.g., mothers), it does not necessarily mean that it will work in another context (e.g., a rural farming community) or with another group (e.g., fathers) [5-8]. While there is debate about the extent to which programs can be adapted without impacting their effectiveness, some research suggests that “sensitivity and flexibility in administering therapeutic interventions produces better outcomes than rigid application of manuals or principles” [7, p. 14].
In some ways, registries of evidence-based programs are like a chemist or drug store. Just because the products they list or sell are evidence-based, it doesn’t mean that they are appropriate for all contexts. Blase, Kiser, and Van Dyke [9] suggest a number of broad factors that need to be considered before selecting an evidence-based program:
- Needs of individuals; how well the program or practice might meet identified needs.
- Fit with current initiatives, priorities, structures and supports, and parent/community values.
- Resource Availability for training, staffing, technology supports, data systems and administration.
- Evidence indicating the outcomes that might be expected if the program or practices are implemented well.
- Readiness for Replication of the program, including expert assistance available, number of replications accomplished, exemplars available for observation, and how well the program is operationalised.
- Capacity to Implement as intended and to sustain and improve implementation over time.
Once a program is selected, it is also important to consider program fidelity (staying true to the original program design) and adaptation (ensuring the program is appropriate for the context).
Evidence-based programs can play an important role in providing support to families and communities, but there is still a need for practitioner wisdom and a need to listen to the insights, experiences and wishes of the people we work with.
If you liked this post please follow my blog, and you might like to look at:
- More posts in the “What is…?” series
- Program fidelity and baking a cake
- What is evidence-based practice?
- Research evidence for family (and community) workers
- What are program logic models?
1. Williams-Taylor, L. (2007). Research review – Evidence-based programs and practices: What does it all mean? Boynton Beach, FL: Children’s Services Council of Palm Beach County. Available from http://www.evidencebasedassociates.com/reports/research_review.pdf
2. Cooney, S. M., Huser, M., Small, S., & O’Connor, C. (2007). Evidence-based programs: An overview. What Works, Wisconsin – Research to Practice Series (6). Available from http://www.human.cornell.edu/outreach/upload/Evidence-based-Programs-Overview.pdf
3. Century, J., Rudnick, M., & Freeman, C. (2010). A framework for measuring fidelity of implementation: A foundation for shared language and accumulation of knowledge. American Journal of Evaluation, 31(2), 199-218. doi: 10.1177/1098214010366173 Available from http://aje.sagepub.com/content/31/2/199.abstract
4. Aarons, G., & Palinkas, L. (2007). Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 34(4), 411-419. doi: 10.1007/s10488-007-0121-3 Available from http://dx.doi.org/10.1007/s10488-007-0121-3
5. Bernal, G., Jimenez-Chafey, M. I., & Domenech Rodriguez, M. M. (2009). Cultural adaptation of treatments: A resource for considering culture in evidence-based practice. Professional Psychology – Research & Practice, 40(4), 361-368.
6. Lau, A. S. (2006). Making the case for selective and directed cultural adaptations of evidence-based treatments: Examples from parent training. Clinical Psychology: Science and Practice, 13, 295-310. doi: 10.1111/j.1468-2850.2006.00042.x Available from http://onlinelibrary.wiley.com/doi/10.1111/j.1468-2850.2006.00042.x/abstract
7. Levant, R. F. (2005). Report of the 2005 Presidential Task Force on Evidence-Based Practice. Washington, DC: American Psychological Association. Available from https://www.apa.org/practice/resources/evidence/evidence-based-report.pdf
8. Walsh, C., Rolls Reutz, J., & Williams, R. (2015). Selecting and implementing evidence-based practices: A guide for child and family serving systems (2nd ed.). San Diego, CA: California Evidence-Based Clearinghouse for Child Welfare. Available from http://www.cebc4cw.org/files/ImplementationGuide-Apr2015-onlinelinked.pdf
9. Blase, K., Kiser, L., & Van Dyke, M. (2013). The Hexagon Tool: Exploring context. Chapel Hill, NC: National Implementation Research Network, FPG Child Development Institute, University of North Carolina. Available from http://implementation.fpg.unc.edu/sites/implementation.fpg.unc.edu/files/resources/NIRN-TheHexagonTool_0.pdf