Evidence-informed practice, evidence-based programs and measuring outcomes

This post is based on a workshop on evidence-informed practice, evidence-based programs and measuring outcomes that Alan Hayes, Jamin Day and I facilitated for the Combined Upper Hunter Interagencies. The slides from the workshop are above or you can download them here. It was quite a long workshop so this is a long post. It includes:

  1. An introduction to evidence-based practice
  2. How the nature of complex problems suggests a slightly different approach is appropriate
  3. An introduction to evidence-informed practice
  4. What we mean by “evidence”
  5. Measuring outcomes
  6. An introduction to evidence-based programs (including a brief discussion of adaptation)
  7. Conclusion
  8. References

Introduction

There is an increasing emphasis on evidence-informed practice and evidence-based programs in family and community work. In Australia and elsewhere, government and other funders increasingly require family services to adopt evidence-based programs. For example, Communities for Children [1], a federally funded program in 52 disadvantaged communities across Australia with a focus on improving early childhood development and the wellbeing of children from birth to 12 years, now requires that 50% of the funds for direct service delivery be used to “purchase high quality evidence-based programs” (p. 11). Another example is the NSW Targeted Earlier Intervention Program Reform [2], where the first service reform principle is:

Services are evidence informed and targeted to need – Commissioned services are clearly targeted to meet the needs of individual children, young people and families based on a sound understanding of what works best (p. 7).

Although there is no universally accepted definition of evidence-based practice in social work and family work [3, 4], it is generally described as a decision-making process that incorporates:

  1. The best research evidence
  2. The best clinical experience
  3. And is consistent with family and client values [4-7].

Evidence-based practice (Source: Walsh, Rolls Reutz & Williams)[4]

Complex problems

This model of evidence-based practice largely came from medicine, which is quite a different context to family and community work. The types of problems and issues impacting on the families and communities we work with are often multifaceted, confusing and hard to define—they are complex problems. Glouberman and Zimmerman [8] highlight the difference between simple problems, complicated problems and complex problems by comparing following a recipe (a simple problem), sending a rocket to the moon (a complicated problem) and raising a child (a complex problem). While I hesitate to call raising a child a “problem”, it is a useful comparison.

| A simple problem: Following a recipe | A complicated problem: Sending a rocket to the moon | A complex problem: Raising a child |
| --- | --- | --- |
| The recipe is essential | Formulae are critical and necessary | Formulae have a limited application |
| Recipes are tested to assure easy replication | Sending one rocket increases assurance that the next will be OK | Raising one child provides experience but no assurance of success with the next |
| No particular expertise is required, but cooking expertise increases the success rate | High levels of expertise in a variety of fields are necessary for success | Expertise can contribute but is neither necessary nor sufficient to assure success |
| Recipes produce standardized products | Rockets are similar in critical ways | Every child is unique and must be understood as an individual |
| The best recipes give good results every time | There is a high degree of certainty of outcome | Uncertainty of outcome remains |
| Optimistic approach to problem possible | Optimistic approach to problem possible | Optimistic approach to problem possible |

Evidence-informed practice

The nature of complex problems suggests that we can’t give definitive answers about the best way to approach many issues facing families and communities, and so we prefer the term evidence-informed practice to evidence-based practice to emphasise this different approach. Because we’re also committed to strengths-based and family-centred practice, we believe that practitioners need to do more than ensure that any interventions or approaches are consistent with family and client values: they need to actively incorporate the experience and insights of the families or communities they work with.

We also changed best clinical experience to practitioner wisdom to be more consistent with the language of the family and community sector.

We thus suggest that evidence-informed practice is a decision-making process that incorporates:

  1. Research evidence
  2. Practitioner wisdom and experience
  3. Family experience and insights.

Evidence-informed practice

Although evidence-based practice and evidence-informed practice are often presented as linear five-step processes, generally undertaken by individual workers, Debbie Plath [3, 9] argues that they are better understood as a cyclical process embedded in organisational processes. Her five phases are:

  1. Define and redefine practice questions
  2. Gather evidence from a range of sources
  3. Critically appraise the evidence for its relevance and reliability
  4. Make practice decisions regarding principles, interventions, programs and practices
  5. And evaluate the evidence-based practice process and client outcomes, again using a variety of sources of evidence.

She also recognises that organisational processes play an important role in engaging staff and moving through phases of evidence-informed practice.

Evidence-informed practice cycles (Source: Debbie Plath[3])

This cyclical process has similarities with action research cycles of Observe, Reflect, Plan, and Act that many family and community practitioners are familiar with.

Action research cycles

What do we mean by “evidence”?

“Evidence” is a contested term and has varying connotations. The traditional approach to evidence in evidence-based programs and practice involves a hierarchy of evidence which places greater value on systematic reviews, randomised controlled trials and quasi-experimental designs [5, 10].

Hierarchy of evidence

Some authors, however, argue for a more inclusive approach to evidence. Webber & Carr[11] suggest that evidence can be conceptualised in a “more inclusive and non-hierarchical” manner that:

Equally values practice wisdom, tacit knowledge and all forms of knowing. It is thereby viewed as integrative, viewing practice and research less in opposition but more in support of one another. In particular, evidence-informed practice respects the role of practice research. (p. 19)

Rather than the hierarchy of evidence, Epstein [12] proposes a Wheel of Evidence in which “all forms of research and information-gathering and interpretations would be critically assessed but equally valued” (p. 225).

Wheel of evidence (Source: Epstein [12])

Humphreys and her colleagues [13] propose a knowledge diamond that also suggests a less hierarchical approach to knowledge by giving equal prominence to research evidence, practitioner wisdom, lived experience and policy.

The knowledge diamond (Adapted from Humphreys et al.[13])

Measuring outcomes

A crucial aspect of evidence-informed practice is using data and evidence from a range of sources to inform decisions, and ensuring we measure the impact of our work. Results-Based Accountability (RBA) [14, 15] offers some helpful ideas and approaches. Here I will just focus on measuring outcomes, but RBA is also helpful for using data in planning.

RBA highlights the difference between population accountability and performance accountability. (Note that RBA uses the term accountability to emphasise that services need to be accountable for what they do.) Population accountability is about the wellbeing of whole populations (e.g., all families in NSW or all children in Muswellbrook). Performance accountability is about the wellbeing of the people a service or program works with (e.g., families being supported by a particular family support service, or parents attending a parenting education program). While a service can be held responsible for the outcomes of its own program (performance accountability), population accountability is not the responsibility of any one agency or program. The only way we can have an impact at a population level is through many services, agencies and the community working together (e.g., through collective impact processes).

We thus need to be clear about the difference between measures that relate to the whole population (e.g., the rate of domestic and family violence in NSW or Muswellbrook, or the total number of Aboriginal children in care who are placed with Aboriginal families), and performance level measures that relate to the people we work with (e.g., the rate of domestic violence in the families our service works with or the number of Aboriginal children in care that our service places with Aboriginal families).

When we are measuring the outcomes of services, the focus is often on performance level measures, while collective impact approaches generally focus on population level measures and seek to develop partnerships that can have an impact at the population level.
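
To make this distinction concrete, here is a minimal sketch (in Python) of how the same indicator can be calculated at the two levels. All of the numbers and names are hypothetical and purely illustrative; they are not drawn from the workshop or from any real dataset.

```python
# A minimal sketch of population-level vs performance-level measures.
# All figures below are hypothetical and for illustration only.

def rate_per_1000(cases: int, people: int) -> float:
    """Return a simple rate per 1,000 people."""
    return 1000 * cases / people

# Population accountability: everyone in the area, whether or not
# they use any service (hypothetical town-wide figures).
community_population = 16_000
community_dv_reports = 240

# Performance accountability: only the families our service works with
# (hypothetical caseload figures).
service_families = 120
service_families_reporting_dv = 18

print(f"Population-level rate:  "
      f"{rate_per_1000(community_dv_reports, community_population):.1f} per 1,000")
print(f"Performance-level rate: "
      f"{rate_per_1000(service_families_reporting_dv, service_families):.1f} per 1,000")
```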

The other thing I really like about RBA is that it introduces some simple questions that help us think about developing outcome measures. In thinking about measures, we can measure Quantity (How much did we do?) and we can measure Quality (How well did we do it?).

We can also think about our Effort (How hard did we try?) and the Effect (Is anyone better off?).

If we put these together, we get four quadrants:

  1. How much did we do?
  2. How well did we do it?
  3. How much change did we produce?
  4. What quality of change did we produce?

This can be simplified into three basic questions:

  1. How much did we do?
  2. How well did we do it?
  3. Is anyone better off?

Many of the measures that services traditionally use to measure their work are based on the first two questions. While these are important, they don’t tell us much about the impact of our work. We really need to think about the third, and harder, question: Is anyone better off?
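
To show how these three questions can translate into concrete numbers, here is a minimal sketch for a hypothetical parenting program. The variable names and figures are invented for illustration; they are not RBA’s official terminology or real program data.

```python
# A minimal sketch of the three RBA questions for a hypothetical parenting program.
# All figures are made up for illustration.

sessions_delivered = 24             # hypothetical
parents_enrolled = 30               # hypothetical
parents_completed = 24              # hypothetical
parents_reporting_improvement = 17  # hypothetical, e.g. from a pre/post measure

# 1. How much did we do? (quantity of effort)
how_much = {"sessions delivered": sessions_delivered,
            "parents enrolled": parents_enrolled}

# 2. How well did we do it? (quality of effort)
completion_rate = parents_completed / parents_enrolled

# 3. Is anyone better off? (effect)
better_off_rate = parents_reporting_improvement / parents_completed

print("How much did we do?", how_much)
print(f"How well did we do it? {completion_rate:.0%} of enrolled parents completed the program")
print(f"Is anyone better off? {better_off_rate:.0%} of completers reported improvement")
```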

The other thing I want to emphasise is the importance of strengths-based measurement. In the slides (Slides 25–35) I give some examples of measures. The first two (the Achenbach System of Empirically Based Assessment (ASEBA) and the Franklin County/North Quabbin Youth Health Survey (FCNQ-YH)) focus very much on deficits. The ASEBA includes 113 questions about what could be wrong with your child. There are also two open-ended questions about “the best things about your child” and “your child’s favorite hobbies, activities, and games, other than sports”. Notice how the response options in the ASEBA are:

  • Very True or Often True
  • Somewhat or Sometimes True
  • Not True (as far as you know).

The best you can say is that your child does not display that negative characteristic, but you might be wrong. (The inference is that you don’t know how bad your child is!) It’s important to acknowledge that the ASEBA plays a valuable role in helping to identify mental health problems, but it clearly isn’t strengths-based.

Most of the questions in the Youth Health Survey (FCNQ-YH) also focus on negatives. There are some positive questions (e.g., “Yesterday, how many times did you eat vegetables?”) and some fairly neutral ones (e.g., “Where do you usually sleep?”), but the large majority focus on problem behaviour. My concern is that it sends a poor message to the young people completing the survey by creating a very negative picture of how youth are seen and promoting a sense of fear or negative expectations (by implying that all this negative behaviour is commonplace).

The third example (the Communities That Care Youth Survey) is more strengths-based but still addresses some of the issues explored in the other youth health survey. It includes many more positive behaviours and suggests that young people might have taken a stand against violence, and against alcohol and other drugs. It creates a much more positive portrayal of young people while still asking about some of the social issues faced by youth.

If we are to be strengths-based, it’s important that we use measures that are consistent with our approach. Unfortunately, there are very few freely available measurement tools that are brief, strengths-based and user-friendly. Researchers need to develop more tools that practitioners can easily use as part of their practice. One such tool that many practitioners find useful is the Parent Empowerment and Efficacy Measure (PEEM).

Evidence-based programs

As discussed at the start, as well as an increasing emphasis on evidence-informed practice, there is also an increasing emphasis on evidence-based programs. It’s important to differentiate between evidence-based practices and evidence-based programs. Evidence-based practices are built on theory and research but are not a complete, systematised program of intervention. Evidence-based programs are a collection of practices or activities that have been standardised so that they can be replicated, have been rigorously evaluated, and are usually endorsed by a respected independent department or research organisation [16, 17].

Once a program is selected, it is important to consider program fidelity (staying true to the original program design) and adaptation (ensuring the program is appropriate for the context). Some practitioners are told, or are under the impression, that they should not make any changes to an evidence-based program. If we do not implement an evidence-based program as it was designed and tested, it is no longer evidence-based. At the same time, it can be appropriate to make adaptations so that the program better suits the context.

O’Connor, Small and Cooney [18] discuss acceptable and unacceptable (or risky) adaptations to evidence-based programs (a simple way of checking proposed adaptations against these categories is sketched after the lists below). Acceptable adaptations include:

  • Changing language – translating and/or modifying vocabulary
  • Replacing images to show children and families that look like the target audience
  • Replacing cultural references
  • Modifying some aspects of activities such as physical contact
  • Adding relevant, evidence-based content to make the program more appealing to participants.

Unacceptable or risky adaptations include:

  • Reducing the number or length of sessions or how long participants are involved
  • Lowering the level of participant engagement
  • Eliminating key messages or skills learned
  • Removing topics
  • Changing the theoretical approach
  • Using staff or volunteers who are not adequately trained or qualified
  • Using fewer staff members than recommended.
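
As a rough illustration of how a team might record proposed changes against these two categories before going ahead, here is a small sketch. The category labels and example adaptations are my own paraphrase for illustration, not an official tool from O’Connor, Small and Cooney.

```python
# A rough sketch of checking proposed adaptations against acceptable vs risky
# categories (paraphrased from the lists above; not an official tool).

ACCEPTABLE = {
    "language", "images", "cultural references",
    "activity details", "added evidence-based content",
}
RISKY = {
    "dose (number/length of sessions)", "participant engagement",
    "key messages or skills", "topics removed", "theoretical approach",
    "staff training/qualifications", "staffing levels",
}

# Hypothetical proposals for a parenting program.
proposed_adaptations = [
    ("Translate handouts into Arabic", "language"),
    ("Cut the program from 8 sessions to 5", "dose (number/length of sessions)"),
    ("Replace photos with images of local families", "images"),
]

for description, category in proposed_adaptations:
    if category in ACCEPTABLE:
        verdict = "likely acceptable"
    elif category in RISKY:
        verdict = "risky: discuss with the program developer first"
    else:
        verdict = "unclassified: seek advice"
    print(f"{description}: {verdict}")
```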

For more on fidelity and adaptation, see my post on Program fidelity and baking a cake.

Just because a program has been shown to work in one context, there is no guarantee that it will work in another. We wouldn’t walk into a chemist or drug store and pick a medicine simply because it is evidence-based. We would need to make sure that it is appropriate for our particular circumstance. Likewise, we need to ensure that the programs we select are appropriate for the context we are working in.

We need to make sure that programs can be successful in real life conditions. There is a difference between efficacy and effectiveness. Efficacy involves demonstrating that a program can work under controlled (often ideal) conditions, whereas effectiveness involves demonstrating that it works under the conditions typically encountered in the field.

In thinking about evidence-based programs it’s important to remember that:

  • Evidence-based programs may not be effective in some contexts (e.g., a program tested in a US city may not be effective in a rural Aboriginal community in South Australia).
  • Not all effective initiatives have a strong research evidence base or appear on lists of evidence-based programs. There are many effective programs (often developed by small services) that have not been evaluated at a level that would allow them to be called “evidence-based programs”.
  • Regardless of the programs we are offering, it is wise to obtain feedback from participants and to measure their impact.

Conclusion

For practitioners there are a number of positives about a focus on evidence-informed practice.

  1. We want to be as effective as we can, and using evidence from a range of sources helps to ensure that the work we do is appropriate and effective.
  2. We need to be sure that the families and communities we support are better off because of our work. There have been popular programs that have been shown to be ineffective or even harmful (e.g., Scared Straight programs).
  3. Evidence-informed practice encourages critical reflection that leads to better practice and innovation.

While there is clearly a need for practitioners to develop skills in evidence-informed practice, measuring outcomes, and in selecting and implementing evidence-based programs, there is also a need for researchers and academics to become better at supporting practitioners and providing easily accessible summaries of research evidence that identify implications for practice.

If you liked this post please follow my blog, and you might like to look at:

  1. Rethinking the roles of families and clients in evidence-based practice
  2. What are program logic models?
  3. Strengths-based measurement
  4. Playgroups as a foundation for working with hard to reach families
  5. What is asset-based community-driven development (ABCD)?
  6. Engaging Aboriginal fathers

If you find any problems with the blog (e.g., broken links or typos), I’d love to hear about them. You can either add a comment below or contact me via the Contact page.

References

  1. Department of Social Services. (2012). Communities for Children facilitating partner operational guidelines. Australian Government.  Retrieved from https://www.dss.gov.au/sites/default/files/documents/09_2014/cfc_fp_operational_guidelines_-_v_1_1_5_september_2014.pdf
  2. Family and Community Services, NSW. (2016). Targeted Earlier Intervention programs: Reform directions – local and client centred.  Sydney:  Retrieved from https://www.fams.asn.au/sb_cache/associationnews/id/42/f/TEI%20Program%20Reform%20Directions%20-%20local%20and%20client%20centred%20%28002%29.pdf
  3. Plath, D. (2017). Engaging human services with evidence-informed practice. Washington, DC: NASW Press.
  4. Walsh, C., Rolls Reutz, J., & Williams, R. (2015). Selecting and implementing evidence-based practices: A guide for child and family serving systems (2nd ed.). San Diego, CA: California Evidence-Based Clearinghouse for Child Welfare. Available from http://www.cebc4cw.org/files/ImplementationGuide-Apr2015-onlinelinked.pdf
  5. Centre for Community Child Health. (2011). Evidence-based practice and practice-based evidence: What does it all mean? Policy Brief: Translating early childhood research evidence to inform policy and practice(21). Available from http://ww2.rch.org.au/emplibrary/ecconnections/Policy_Brief_21_-_Evidence_based_practice_final_web.pdf
  6. Gray, M., Plath, D., & Webb, S. A. (2009). Evidence-based social work a critical stance. Hoboken: Taylor & Francis.
  7. Shlonsky, A., & Ballan, M. (2011). Evidence-informed practice in child welfare: Definitions, challenges and strategies. Developing Practice: The Child, Youth and Family Work Journal(29), 25-42.
  8. Glouberman, S., & Zimmerman, B. (2002). Complicated and complex systems: What would successful reform of Medicare look like? Ottawa: Commission on the Future of Health Care in Canada. Available from http://publications.gc.ca/collections/Collection/CP32-79-8-2002E.pdf
  9. Plath, D. (2014). Implementing evidence-based practice: An organisational perspective. British Journal of Social Work, 44, 905-923. doi: 10.1093/bjsw/bcs169
  10. Hall, J. C. (2008). A practitioner’s application and deconstruction of evidence-based practice. Families in Society: The Journal of Contemporary Social Services, 89(3), 385-393. doi: 10.1606/1044-3894.3764 Available from http://familiesinsocietyjournal.org/doi/abs/10.1606/1044-3894.3764
  11. Webber, M., & Carr, S. (2015). Applying research evidence in social work practice: Seeing beyond paradigms. In M. Webber (Ed.), Applying research evidence in social work practice. London: Palgrave.
  12. Epstein, I. (2009). Promoting harmony where there is commonly conflict: Evidence-informed practice as an integrative strategy. Social Work in Health Care, 48(3), 216-231. doi: 10.1080/00981380802589845
  13. Humphreys, C., Marcus, G., Sandon, A., Rae, K., Wise, S., Webster, M., & Waters, S. (2011). Informing policy with evidence: Successes, failures and surprises. In K. Dill & W. Shera (Eds.), Implementing evidence-informed practice: International perspectives. Toronto: Canadian Scholars’ Press.
  14. Friedman, M. (2005). Trying hard is not good enough: How to produce measurable improvements for customers and communities. Victoria, Canada: Trafford Publishing.
  15. Friedman, M., Smith, B., & Handley, S. (2008). Neighbourhood centres and results accountability: A conversation with Mark Friedman. LOCAL: The newsletter for community development in NSW, Autumn, 4-28. Available from https://www.lcsansw.org.au/documents/item/136
  16. EPISCenter. (2015). Defining evidence based programs.   Retrieved 5 September, 2016, from http://www.episcenter.psu.edu/ebp/definition
  17. Cooney, S. M., Huser, M., Small, S., & O’Connor, C. (2007). Evidence-based programs: An overview. What Works, Wisconsin – Research to Practice Series(6). Available from https://fyi.uwex.edu/whatworkswisconsin/files/2014/04/whatworks_06.pdf
  18. O’Connor, C., Small, S. A., & Cooney, S. M. (2007). Program fidelity and adaptation: Meeting local needs without compromising program effectiveness. What Works, Wisconsin – Research to Practice Series(4). Available from http://fyi.uwex.edu/whatworkswisconsin/files/2014/04/whatworks_04.pdf
