Strengths-based measurement and collective impact

(Photo: sign reading “Changed Priorities Ahead,” by Addison Berry)

Data-driven approaches like collective impact often prioritise shared measurement and the collection of data, particularly quantitative measures, without considering the impact of the questions they ask, how they collect data, and who is responsible for interpreting it.

If we adopt a strengths-based approach to collective impact, it is important that we also think about strengths-based approaches to measurement. The questions we ask in collecting data and measuring impact not only give strong messages about how we see the communities and people we work with, but actually shape how we see them.

The questions we ask in trying to measure the impact of our work can send messages about how we see the community and, if we are not careful, they can help reinforce negative perceptions. Here I will consider the difference between two youth surveys used in collective impact initiatives (one which is clearly deficit-based and one which is more strengths-based) before discussing some other issues relating to a strengths-based approach to measuring impact.

A deficit-based survey

The first, a youth health survey, is a 114-question survey used as part of a collective impact initiative in the USA. Only 20 of the questions (18%) were positive, 16 (14%) were neutral and 78 (68%) were negative. Of course, “positive,” “neutral” and “negative” are subjective terms, and how I have classified some of the questions could be debated. For example, is the question, “Have you ever had sexual intercourse?” negative, positive, or neutral? Your answer will partly depend on your values, and it might depend on how old the student is. (I suspect more people would say that asking an 11-year-old would be more negative than asking an 18-year-old.) Likewise, questions about being transgender might be considered positive, neutral or negative depending on your values. (I classified both as neutral.)

The neutral questions asked about things like the student’s age, race (in Australia it probably would have asked about culture, not race), weight and school grades. At times, a neutral question could identify a risk factor depending on the response. For example:

  • How old were you when you had sexual intercourse for the first time? (Q. 18) After the option “I have never had sexual intercourse,” the ages started from 11 years or younger.
  • On an average school day, how many hours do you play video or computer games or use a computer for something that is not school work? (Q. 95) The options went up to “5 or more hours per day.”

Most of the questions were negative in that they asked about illegal, unsafe or socially unacceptable behaviour; or were about risk factors. For example:

  • During the past 30 days, on how many days did you carry a gun? (Q. 21)
  • Has anyone ever had sexual contact with you against your will? (Q. 28)
  • During your life, how many times have you used any form of cocaine, including powder, crack, or freebase? (Q. 66)
  • Do you receive a free or reduced price lunch at school? (Q. 92)

There was even a section (Q. 14–19) introduced as being about safety (“The next 6 questions ask about safety”), but the questions were all about unsafe behaviour in a car (e.g., “During the past 30 days, how many times did you drive a car or other vehicle when you had been drinking alcohol?”). I would have expected questions about safety to ask about things like how safe the students felt.

The positive questions asked about positive actions by, or beliefs of, the young people, or positive aspects of their community. Half of the positive questions (9% overall) were about positive responses to negative issues. For example:

  • My school is prepared to help a student who might be thinking about killing him/herself. (Q. 40)
  • During the past 12 months, how often did you talk with your parents or other adults in your family about sexuality or ways to prevent HIV infection, other sexually transmitted diseases (STDs), or pregnancy? (Q. 103)

Six of the positive questions (5% overall) were about their eating habits or physical activity. For example:

  • Yesterday, how many times did you eat vegetables? (Q. 90)
  • During the past 12 months, on how many sport teams did you play? (Q. 97)

Only three of the questions (3%) implied that the young people lived in a positive context and did not ask about negative issues (although one was still about getting support for a problem):

  • During the past 7 days, on how many days did you eat dinner together with your parents or guardian? (Q. 93)
  • Is there at least one teacher or other adult in your school that you can talk to if you have a problem? (Q. 113)
  • Can you talk with at least one of your parents or other adult family members about things that are important to you? (Q. 114)

The survey clearly focused on the problems in the community and could help demonstrate the extent of some of the problems faced by young people in the community. It would not provide details of the many strengths and protective factors that were also present in the community. When I read the survey, I wondered what impression the students doing the survey had about how the collective impact initiative viewed their community. To me, it gave a clear impression that it was seen as a “problem” community with many social problems.

It is worth noting that the 2018 survey used by this initiative included many more positive questions, largely based on the survey I am about to discuss.

A more strengths-based survey

The second survey, the Communities that Care Youth Survey, is also used in collective impact initiatives and explored many of the same issues, but did so in a more positive way. It is a very long survey (229 questions) and has a higher proportion of positive questions, including many that implied the students could live in a supportive, caring environment. Over a third of the questions (38%, 87 questions) were positive, 11% (25 questions) were neutral and 51% (117 questions) were negative.
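As a quick sketch, the percentage breakdowns quoted for the two surveys can be checked with a few lines of Python. The question counts come from my own (subjective) classification, so the figures are indicative rather than definitive:

```python
# Percentage breakdown of question classifications for the two surveys,
# using the counts reported in this post (rounded to whole percentages).
surveys = {
    "Youth health survey (114 questions)": {"positive": 20, "neutral": 16, "negative": 78},
    "Communities that Care survey (229 questions)": {"positive": 87, "neutral": 25, "negative": 117},
}

for name, counts in surveys.items():
    total = sum(counts.values())
    shares = {label: round(100 * n / total) for label, n in counts.items()}
    print(name, shares)
    # First survey:  positive 18%, neutral 14%, negative 68%
    # Second survey: positive 38%, neutral 11%, negative 51%
```

The contrast is stark even at this crude level: the second survey asks roughly twice the share of positive questions.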

As with the previous survey, the neutral questions were largely about demographic information. Some of the neutral questions asked about their lives in a way which could indicate protective or risk factors. For example:

  • Putting them all together, what were your grades like last year? (Mostly F’s, Mostly D’s, etc.) (Q. 9)
  • How many times have you changed schools (including changing from elementary to middle or middle to high school) since kindergarten? (Q. 106)

Once again, decisions had to be made about which category questions belonged in. For example, there were three questions (Q. 95) which asked “When I am an adult I will:”

  • Smoke cigarettes
  • Drink beer, wine or liquor
  • Smoke marijuana.

I classified the alcohol question as neutral (as alcohol is legal for adults) and the other two as negative (as they are illegal and/or unhealthy).

Most of the negative questions related to alcohol and drug use, but there were also questions about violence, problems at school and crime.

Some of the questions asked about their attitudes towards drug and alcohol use. For example, there was a series of six questions (Q. 50) that asked “How much do you think people risk harming themselves (physically or in other ways) if they” in relation to a number of behaviours, including:

  • Smoking one or more packs of cigarettes per day?
  • Trying marijuana once or twice?
  • Smoking marijuana regularly (once or twice a week)?

Other questions recognised the ability of young people to take a stand on issues they might face. For example, a series of questions (Q. 28) asked “How wrong do you think it is for someone your age to:” in relation to 10 behaviours, such as:

  • Take a handgun to school?
  • Steal something worth more than $5?
  • Smoke cigarettes?
  • Smoke marijuana?

Some of the questions presented a dilemma and asked the students what they would do. For example:

It’s 8:00 on a weeknight and you are about to go over to a friend’s home when your mother asks you where you are going. You say, “Oh, just going to go hang out with some friends.” She says, “No, you’ll just get into trouble if you go out. Stay home tonight.” What would you do now?

  • Leave the house anyway
  • Explain what you are going to do with your friends, tell her when you’d get home, and ask if you can go out
  • Not say anything and start watching TV
  • Get into an argument with her (Q. 40)

Unlike the deficit survey, most of the positive questions were about significant strengths of the young person, their family, their school or their neighbourhood. As indicated above, some of the questions asked about their beliefs and attitudes. Other questions asked about their friends; how well they did at school; how hard they tried at school; and whether they were involved in activities like religious services, clubs, and volunteering. Throughout the survey there were also questions about their family, school and neighbourhood. For example:

  • My teacher(s) notices when I am doing a good job and lets me know about it. (Q. 14)
  • I feel safe at my school. (Q. 17)
  • There are lots of chances to be part of class discussions or activities. (Q. 21)
  • There are lots of adults in my neighborhood I could talk to about something important. (Q. 99)
  • There are people in my neighborhood who are proud of me when I do something well. (Q. 102)
  • If I had a personal problem, I could ask my mom or dad for help. (Q. 129)
  • My parents give me lots of chances to do fun things with them. (Q. 131)

While this survey addresses many of the same issues as the first survey, it creates a very different impression of how the collective impact initiative sees the community. It recognises the community has strengths and resources that can be drawn on to create change.

Collaborative approaches to measuring impact

As well as the focus of the questions, it is also important to consider the methods used to collect data. If a bottom-up, strengths-based approach to collective impact is adopted, it is helpful to use collaborative approaches to data collection that promote co-design and co-production, such as participatory action research, appreciative inquiry, collecting stories of most significant change, digital storytelling, or photovoice.

In a three-volume guide to evaluating collective impact, Preskill, Parkhurst, and Juster [1, 2, 3] highlight the importance of community engagement:

In addition to these “experts” and individuals in formal leadership positions, it is critical that CI initiatives thoughtfully engage the people whose lives are most directly and deeply affected by the targeted problem. [1, p. 16]

Their approach, however, focuses largely on collective impact partners, and the extent of community engagement largely depends on who is included as a “partner” in the initiative. As discussed in Collective impact and community engagement, some collective impact initiatives are quite top-down and focus on government agencies and professional community services rather than adopting a more bottom-up approach that starts with community members. If the collective impact partners leading the initiative do not actively involve community members, then the research is likely to be quite top-down.

For example, under continuous communication, they recommend collective impact initiatives ask:

To what extent and in what ways does cross-initiative communication help to build trust, assure mutual objectives, and create common motivation? [3, p. 18]

The outcomes and indicators they recommend for this question are:

Structures and processes are in place to engage CI partners, keeping them informed and inspired

  • Working groups (or other collaborative structures) hold regular meetings
  • Members of working groups or other collaborative structures attend and participate actively in meetings
  • Partners communicate and coordinate efforts regularly (with and independently of backbone staff)
  • Partners regularly seek feedback and advice from one another
  • Timely and appropriate information flows throughout the cascading levels of linked collaboration
  • Partners publicly discuss and advocate for the goals of the initiative

Structures and processes are in place to engage the CI initiative’s external stakeholders, keeping them informed and inspired

  • The CI initiative engages external stakeholders in regular meetings and integrates their feedback into the overall strategy
  • The CI initiative regularly communicates key activities and progress with external stakeholders

Based on these outcomes and indicators, communication with community members and those most affected by the initiative is considered only if they are included as CI partners or external stakeholders. There are no questions, outcomes or indicators about listening to the community, the extent and effectiveness of communication with marginalised sections of the community, or promoting communication between community members.

Overall, in volume 3 [3] (which provides sample questions, outcomes and indicators), only one of the questions (out of 17; 6%), one of the outcomes (out of 49; 2%) and five of the indicators (out of 149; 3%) specifically mention the community:

  • To what extent and in what ways does the backbone infrastructure engage community members and other key stakeholders to ensure broad-based support for the initiative? (Question)
  • Formal actors and organizations demonstrate increased responsiveness to community needs (Outcome)
  • Community members are aware of the CI initiative’s goals and activities (Indicator)
  • Partners and the broader community understand and can articulate the problem (Indicator)
  • Community members are engaged in CI-related activities (Indicator)
  • The CI initiative actively solicits and acts on feedback from community members and other external partners (Indicator)
  • CI initiative has supporters who can champion the strategy with the broader community (Indicator)

In addition, six of the indicators (4%) are about the “target population” or “targeted audiences”:

  • Members of the target population help shape the common agenda (Indicator)
  • There is a perceived sense of urgency and a call to action among targeted audiences (Indicator)
  • The population or issue(s) targeted by the CI initiative are viewed as a priority among system actors (Indicator)
  • The population or issue(s) targeted by the CI initiative receive greater attention from system actors (Indicator)
  • Formal actors/organizations serving the target population report increase in staff motivation (Indicator)
  • Policies are implemented equitably for the CI initiative’s target population (Indicator)

The community is largely treated as the object of the initiative (having things done TO them) or as advising the collective impact partners, rather than as active participants in the change process. Community members are engaged to ensure support, they are made aware of the initiative’s goals and activities, they provide feedback, or they are seen as a priority by the “system actors” (which are likely to be professional services rather than community members). There are very few indicators that give the community a more active role. The only ones are:

  • Community members are engaged in [not lead] CI-related activities
  • Members of the target population help shape [not lead] the common agenda
  • Partners and the broader community understand and can articulate the problem [but are not necessarily involved in defining the problem]

We need measures that have a greater focus on the active involvement of the community and those most affected by the initiative (e.g., ones that build on the revised five conditions of collective impact).
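The proportions quoted above (how often the community is specifically mentioned in the sample questions, outcomes and indicators of volume 3) can be tallied as a quick sketch, using the counts from my reading of the guide:

```python
# How often the community is specifically mentioned in Preskill, Parkhurst
# and Juster's sample evaluation questions, outcomes and indicators.
items = {
    "questions":  (1, 17),
    "outcomes":   (1, 49),
    "indicators": (5, 149),  # a further 6 indicators (4%) mention the "target population"
}

for label, (mentions, total) in items.items():
    print(f"{label}: {mentions}/{total} = {round(100 * mentions / total)}%")
    # questions 6%, outcomes 2%, indicators 3%
```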

Final words

In this and the previous three posts (What is collective impact?, Collective impact and community engagement and A strengths-based approach to collective impact), I don’t want to suggest or imply that collective impact is fatally flawed or of little use. I actually think it has a lot to offer but believe initiatives have to be built on a very strong foundation of community engagement, strengths-based practice, equity and social justice. Unless this is the case, collective impact will replicate traditional ways of imposing change on communities rather than working with them to build on their existing strengths and resources to create community-led change.

If you liked this post please follow my blog, and you might like to look at:

  1. What is collective impact?
  2. Collective impact and community engagement
  3. A strengths-based approach to collective impact
  4. Strengths-based measurement
  5. Rethinking the roles of families and clients in evidence-based practice
  6. What are complex problems?

If you find any problems with the blog (e.g., broken links or typos), I’d love to hear about them. You can either add a comment below or contact me via the Contact page.


  1. Preskill, H., Parkhurst, M., & Juster, J. S. (2014a). Guide to evaluating collective impact. Part 1: Learning and evaluation in the collective impact context. Collective Impact Forum.
  2. Preskill, H., Parkhurst, M., & Juster, J. S. (2014b). Guide to evaluating collective impact. Part 2: Assessing progress and impact. Collective Impact Forum.
  3. Preskill, H., Parkhurst, M., & Juster, J. S. (2014c). Guide to evaluating collective impact. Part 3: Sample questions, outcomes, and indicators. Collective Impact Forum.

About Graeme Stuart

Lecturer (Family Action Centre, Newcastle Uni), blogger (Sustaining Community), Alternatives to Violence Project facilitator, environmentalist, father. Passionate about families, community development, peace, sustainability.
