Research evidence for family (and community) workers


[Updated 2 June 2017]

There is a range of reasons we might want to use research evidence as family workers or community workers. A quite inadequate reason, but potentially a motivating one, is that funding bodies increasingly expect the organisations they fund to provide evidence-based programs and practice, and more and more community-based organisations want to incorporate evidence-based practice into their day-to-day work.

But really the main reason evidence is important for practitioners is that it can help us achieve the best possible outcomes for the individuals, families and communities we work with, and it can help us critically reflect on our practice. Research evidence can also help us ensure that when we design a new program or project, we have a solid foundation for our planning.

In this post I will be discussing research evidence in the context of evidence-based programs and practice when working with families. While the focus is on families, it also applies to work with communities.

Essentially, research evidence tells us what has worked (or not worked) for other people, which is useful information for us as practitioners. As a profession, we need to move beyond simply saying “we just know it works”: we need to think critically about what we do and how we do it. We can no longer rely just on satisfaction surveys, anecdotal stories or intuition. This isn’t to say that these aren’t important or don’t have their place, but they are not enough on their own.

The Knowledge Diamond reminds us that there are a variety of sources of knowledge, including:

  1. Research evidence
  2. The lived experience of the people you work with
  3. Practitioner wisdom
  4. The perspective of policy.

The knowledge diamond (Humphreys et al., 2011)

There are many types of evidence, for example:

  1. Research evidence found in peer-reviewed journals or books
  2. Program data (e.g., characteristics of the families you work with) and feedback from families
  3. Demographic data for the communities you are working with
  4. Relevant theory
  5. Practitioner wisdom and experience (including your own)
  6. Expert opinion (what people with lots of experience say)
  7. Dr Google (there is some great material out there – we just need to be selective)
  8. Personal experience

In this post I’m focusing on research evidence, but in our work it’s important that we also consider other sources of knowledge as well.

Evidence in evidence-based programs

Evidence-based programs have demonstrated that they work, and there is generally an expectation that evaluations of these programs have included:

  • Rigorous, systematic, and objective procedures which obtain reliable and valid knowledge about the impact of the program
  • The collection and analysis of adequate data to justify the research conclusions
  • Appropriate measures that substantiate claims about improved outcomes as the result of participation in the intervention (Williams et al., 2015).

Evidence-based programs are usually also expected to have repeatedly and consistently demonstrated positive outcomes without negative effects, and to have been standardised and systematised (e.g., through clear documentation or training) so that they can be replicated.

When talking about evidence in evidence-based programs, people sometimes talk about the evidence hierarchy. The idea is that the higher up the pyramid the research is, the stronger the evidence.

Hierarchy of evidence

At the top of the pyramid are systematic reviews, which look at a number of studies on a topic and provide a summary of the available evidence. Inclusion in a systematic review is usually based on the quality of the studies, and often only studies using research methodologies higher up the evidence hierarchy are included.

Randomised controlled trials (or RCTs) test for differences between people who receive an intervention (e.g., people who participated in a parenting program) and those who do not. Participants are randomly placed in either the experimental group (and thus receive the intervention) or the control group (and do not receive the intervention) to reduce any systematic bias that could affect the results. Having a control group increases our confidence that any changes in the experimental group that are not reflected in the control group are due to the intervention.
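
If it helps to see that logic laid out step by step, here is a minimal sketch in Python. All the numbers are made up for illustration (the 50-point baseline, the 5-point effect and the group size are hypothetical, not from any real trial):

```python
import random
import statistics

# A made-up outcome measure: natural variation between families, plus a
# hypothetical 5-point effect for those who receive the intervention.
def outcome(received_intervention):
    baseline = random.gauss(50, 10)
    effect = 5 if received_intervention else 0
    return baseline + effect

random.seed(42)

# Random assignment: each participant has an equal chance of either group,
# which protects against systematic bias in who gets the intervention.
scores = {"intervention": [], "control": []}
for _ in range(200):
    group = "intervention" if random.random() < 0.5 else "control"
    scores[group].append(outcome(group == "intervention"))

# The difference in group means estimates the effect of the intervention.
diff = statistics.mean(scores["intervention"]) - statistics.mean(scores["control"])
print(f"Estimated intervention effect: {diff:.1f} points")
```

Because assignment is random, the two groups should be similar in everything except the intervention, so the gap between their average scores is a fair estimate of its effect.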

The real world of working with families does not always lend itself to RCTs, and quasi-experimental studies offer an alternative. Rather than randomly assigning people to a control group, the people receiving the intervention are compared to a naturally occurring comparison group (e.g., people on a waiting list for the program being studied), or two groups can be offered similar interventions, with some significant differences, so that the outcomes can be compared.

Single-case designs involve repeatedly measuring a particular characteristic or characteristics before and after an intervention to test its impact. Repeated measurement means that the case can be its own control (e.g., an intervention might be introduced, then withdrawn, then reintroduced; if improvements stop when the intervention is withdrawn and resume when it is reintroduced, there is strong evidence that the improvement was due to the intervention). More details are available from a discussion paper prepared by the What Works Clearinghouse.
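
Here is the same withdrawal-and-reintroduction pattern as a minimal sketch, again with invented scores:

```python
import statistics

# An A-B-A-B withdrawal design with made-up scores: the same case is measured
# repeatedly while the intervention is introduced, withdrawn and reintroduced.
phases = {
    "baseline (A)":       [4, 5, 4, 5, 4],
    "intervention (B)":   [7, 8, 8, 9, 8],
    "withdrawal (A)":     [5, 4, 5, 4, 5],
    "reintroduction (B)": [8, 9, 8, 9, 9],
}

for phase, scores in phases.items():
    print(f"{phase:<20} mean = {statistics.mean(scores):.1f}")

# Improvement that appears in both B phases and disappears during the second
# A phase is strong evidence that the change is due to the intervention.
```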

Pre- and post-test studies do not involve a control or comparison group; instead, a relevant measure is taken before and after an intervention to see what change there has been. Of course, any changes could be due to other factors (e.g., children maturing) and are not necessarily the result of the intervention. The advantage of a control group is that if these types of changes did not occur in the control group, we can reasonably assume that the changes are probably due to the intervention.
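
A purely hypothetical sketch shows why this matters. Here every child improves by about 3 points regardless of the program (maturation), and the program adds about 2 more; both numbers are invented for illustration:

```python
import random
import statistics

# Pre/post change for each child: maturation happens regardless of the
# program, and the (hypothetical) program adds a further 2 points.
def pre_post_change(received_intervention):
    maturation = random.gauss(3, 1)
    effect = 2 if received_intervention else 0
    return maturation + effect

random.seed(1)
program_group = [pre_post_change(True) for _ in range(40)]
control_group = [pre_post_change(False) for _ in range(40)]

# A pre/post study alone sees the whole change and may credit it all to the program.
print(f"Raw pre/post change: {statistics.mean(program_group):.1f} points")

# Subtracting the control group's change isolates the program's contribution.
adjusted = statistics.mean(program_group) - statistics.mean(control_group)
print(f"Change beyond the control group: {adjusted:.1f} points")
```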

And finally there is research that only involves qualitative methods (e.g., in-depth interviews). Qualitative methods can often add real value to the other forms of research, and provide very useful information (e.g., what aspects of an intervention parents find most helpful), but they are generally not considered as effective in demonstrating the overall impact on participants.

For more discussion of the above types of studies, see the Australian Institute of Family Studies and Williams et al. (2015). It is also important to recognise that there are criticisms of the hierarchy of evidence (see, for example, Gray, Plath and Webb, 2009) and calls for a broader understanding of what should be considered evidence in working with families and communities.

When creating registries or lists of evidence-based programs, a high standard of evidence is usually required.

In the California Evidence-Based Clearinghouse for Child Welfare (CEBC) register of evidence-based programs, there are five levels of programs depending on how strong the research evidence is.

  1. Well-Supported by Research Evidence
  2. Supported by Research Evidence
  3. Promising Research Evidence
  4. Evidence Fails to Demonstrate Effect
  5. Concerning Practice

To be classified as “Well-supported by research evidence” the research needs to meet the following standards:

1. Multiple Site Replication and Follow-up:
– At least two rigorous randomized controlled trials (RCTs) in different usual care or practice settings have found the practice to be superior to an appropriate comparison practice.
– In at least one of these RCTs, the practice has been shown to have a sustained effect at least one year beyond the end of treatment, when compared to a control group.
– The RCTs have been reported in published, peer-reviewed literature.
2. Outcome measures must be reliable and valid, and administered consistently and accurately across all subjects.
3. If multiple outcome studies have been published, the overall weight of the evidence supports the benefit of the practice.
4. There is no case data suggesting a risk of harm that: a) was probably caused by the treatment and b) the harm was severe or frequent.
5. There is no legal or empirical basis suggesting that, compared to its likely benefits, the practice constitutes a risk of harm to those receiving it.
6. The practice has a book, manual, and/or other available writings that specify components of the service and describe how to administer it. (Source)

You can see there are high expectations in terms of the research evidence.

The Australian Institute of Family Studies (AIFS) also maintains a list of evidence-based programs, which was initially established to help Communities for Children providers meet the requirement that 30% (soon to be 50%) of the programs they offered were evidence-based. While it was set up for Communities for Children, it is relevant in other contexts as well. For their register, AIFS is willing to consider a broader range of evidence than the previous example. They require:

  1. A pre- and post-test methodology (or higher) with at least 20 participants (in both the intervention and control groups) or high quality qualitative research that includes at least 20 participants or a combination of these.
  2. A workbook or documentation that allows replication
  3. The evaluation shows positive outcomes (with no significant negative effects reported)
  4. The program has been replicated or has potential to be replicated. (Click here for a more detailed description of the criteria.)

As you can see there are similarities with the CEBC register, but the main difference is in the level of evidence accepted.

The criteria for inclusion as an additional program (one that can contribute towards the requirements for evidence-based programs in Communities for Children but is not on the official register) are also fairly broad. To be included, programs need:

  1. A theoretical and/or research background to the program
  2. A program logic
  3. Evidence that activities in the program generally match good practice in meeting the needs of the target group
  4. An evaluation (with at least 20 participants) which has shown that the program has positive benefits for the target group
  5. Qualified and/or trained staff to run the program.

AIFS recognises there can be challenges in identifying appropriate evidence-based programs, and its approach to listing programs allows a degree of flexibility in order to balance research and practical considerations.

AIFS also provides an example of how the hierarchy of evidence can help in thinking about designing an evaluation.

In this example, they start with a pre- and post-test design and ask “Can we improve on this?” There are three suggestions for improvements:

  1. By collecting data from different locations
  2. By collecting data from several groups over extended periods or
  3. By adding a comparison group.

Adding a comparison group would move the research up the hierarchy and provide a stronger design. The research could be further improved by adding a control group.

The important thing from this discussion is to recognise that there are different levels of evidence and that policy makers and funding bodies are generally going to prefer higher levels of evidence. But as you can see from the approach taken by AIFS there can be some flexibility.

Evidence in evidence-based practice

Evidence-based practice is built on the intersection between research evidence, practitioner wisdom and experience, and the experience and insights of families.

Figure 3: Evidence-based practice. (Adapted from Walsh, Rolls Reutz, & Williams, 2015)

In a recent discussion with family workers about creating program logic models, it was clear that they generally relied on practitioner wisdom and the specific context of the families they worked with, and less on research evidence. I suspect this is often the case for family and community workers in their approach to practice more generally. While practitioner wisdom and the experience of families are really important and need to be part of the process of critically reflecting on practice, there can also be value in considering research evidence.

In this context, it is important to recognise the benefits of different types of research. Each type of research (e.g., in the hierarchy of evidence) has its advantages and disadvantages and provides different types of insights.

The type of evidence which will be useful depends on your purpose and the context in which you are working. How much time and what resources you have for searching for research evidence are also important.

If you are considering what programs to recommend at a national or organisational level, you are likely to want to put in more time and effort than if you are planning a one-off event for parents, and you are likely to want to look at more rigorous research evidence. If you are planning a one-off event, you might look for ideas from a range of sources and not worry as much about rigorous research.

If you are creating a new program for a specific context (where there are no existing programs), as well as looking at some of the evidence higher up the hierarchy, you might want to look at qualitative research that can give you a more in-depth understanding of how a program was implemented, the type of practices that were used and the reaction of participants.

Looking for research evidence can take a lot of time (particularly if you get side-tracked looking at interesting papers), and the previous post has some suggestions for places to look for research evidence.

Assessing research evidence

When you look at evidence, there is a range of things to consider in assessing its usefulness and reliability. There is lots of information about academic approaches to assessing the reliability and trustworthiness of research (see, for example, Assessing quality in qualitative research and Assessing research quality), but here I want to take a more practical approach. You might want to ask:

  1. Is the research relevant to your context? But remember that even if it comes from a different context, it might still provide some useful information.
  2. Can you trust the method, results and conclusions? Make sure the authors don’t make generalisations that are unsupported by the evidence, or skew the results by only selecting easy-to-work-with families. For example, if a parenting program has only been trialled with mothers, and an evaluation claims that the results show it works with parents (which includes fathers), it is making a claim that has not necessarily been supported by the evidence.
  3. Does it use appropriate measures, and does it actually measure what the program is trying to change? (See below.)
  4. How strong is the evidence? It can be worth taking the hierarchy of evidence into account, particularly if you are looking for key programs to adopt organisation-wide.
  5. Is it from a reliable source? If an organisation is promoting its own expensive resource, you might want to check it out more carefully than if it comes from an unbiased source. (Of course, just because an evaluation has been done in-house doesn’t mean you can’t trust it, but it does mean you need to be a bit more careful.)

Measures

In thinking about measures, I find the following material from Results Based Accountability very helpful. We can measure Quantity (How much did we do?) and Quality (How well did we do it?).

We can also measure our effort (How hard did we try?) and our effect (Is anyone better off?).

If we put these together we get four quadrants: quantity of effort, quality of effort, quantity of effect and quality of effect.

This can be simplified into three basic questions:

  1. How much did we do?
  2. How well did we do it?
  3. Is anyone better off?

It’s important to recognise that it’s easiest to measure How much did we do? (e.g., by counting the number of participants or the number of home visits), and then How well did we do it? (e.g., through workshop feedback sheets or family satisfaction surveys). It is harder to measure Is anyone better off? But of course it’s this last question that really matters.
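
As a minimal sketch, here is how the three questions can translate into simple measures from program data. The records below are entirely invented for illustration:

```python
# Each (invented) record: sessions attended, satisfied with the program,
# and whether the family's outcome measure improved.
families = [
    (6, True, True),
    (5, True, False),
    (6, False, True),
    (4, True, True),
    (6, True, False),
]

how_much = sum(sessions for sessions, _, _ in families)
how_well = sum(satisfied for _, satisfied, _ in families) / len(families)
better_off = sum(improved for _, _, improved in families) / len(families)

print(f"How much did we do?    {how_much} sessions delivered")
print(f"How well did we do it? {how_well:.0%} of families satisfied")
print(f"Is anyone better off?  {better_off:.0%} with improved outcomes")
```

The first two measures are easy to collect; the last depends on having an outcome measure for each family, which is why it is the hardest (and the most important) to answer.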

When you look at evaluations of programs, think about what they are actually measuring and think about whether or not they are really measuring what difference they made rather than how much they did or how well they did it.

Conclusion

Working with families involves addressing a range of complex problems and we cannot rely on research to give us all the answers. But nor can we rely exclusively on our experience and intuition. Considering research evidence can add a new dimension to your work practice and associated critical reflection on what you do.

Please add your comments below about how you try to incorporate research evidence into your practice and what you find helps to do so.

This post came from a project I’m working on supporting nine children and parenting support programs in regional and rural NSW to enhance their capacity to implement evidence-based programs and practice. The project was funded by the Department of Social Services through the Children and Families Expert Panel. You can see other posts relating to this work at https://sustainingcommunity.wordpress.com/resources-for-students/expert-panel-caps/.

If you liked this post, please follow my blog (top right-hand corner of the blog), and you might like to look at:

  1. A literature review on supported playgroups
  2. What are program logic models?
  3. 12 principles of a problem solving approach to conflict resolution
  4. “I try and make it feel more like a home” – families living in caravan parks
  5. A community engagement reading list
  6. What’s your parenting style?

References

Gray, M., Plath, D., & Webb, S. A. (2009). Evidence-based social work: A critical stance. Hoboken, NJ: Taylor & Francis.

Humphreys, C., Marcus, G., Sandon, A., Rae, K., Wise, S., Webster, M., & Waters, S. (2011). Informing policy with evidence: successes, failures and surprises. In K. Dill & W. Shera (Eds.), Implementing evidence-informed practice: international perspectives. Toronto: Canadian Scholars’ Press. Available from https://books.google.com.au/books?id=sL2KPUCxzLQC

Parker, R., & Robinson, E. (2013). Planning for evaluation I: Basic principles. Australian Institute of Family Studies. Available from https://aifs.gov.au/cfca/publications/planning-evaluation-i-basic-principles

Walsh, C., Rolls Reutz, J., & Williams, R. (2015). Selecting and implementing evidence-based practices: A guide for child and family serving systems (2nd ed.). San Diego, CA: California Evidence-Based Clearinghouse for Child Welfare. Available from http://www.cebc4cw.org/files/ImplementationGuide-Apr2015-onlinelinked.pdf

Williams, K. E., Berthelsen, D., Nicholson, J. M., & Viviani, M. (2015). Systematic literature review: Research on supported playgroups. Brisbane: Queensland University of Technology. Available from http://eprints.qut.edu.au/91439/

About Graeme Stuart

Lecturer (Family Action Centre, Newcastle Uni), blogger (Sustaining Community), environmentalist, Alternatives to Violence Project facilitator, father. Passionate about families, community development, peace & sustainability.

