On the morning of Wednesday, 1 November 2017, SIMNA, in partnership with ACTCOSS, presented: ‘I’m doing an impact evaluation, what evidence do I need?’. We were fortunate to hear from Scott Bayley – Principal Specialist in Performance Management and Results at DFAT – who delivered an engaging session on impact evaluation to a sold-out room, candidly sharing his own successes and shortcomings across a range of community measurement and evaluation settings.
Off the bat it was refreshing to depart from the usual ‘method debate’ (quantitative versus qualitative) and instead be introduced to critical multiplism – a context-specific strategy, rather than a fixed method, for investigating causal inferences. Before delving into the nuts and bolts of an impact evaluation, we were also introduced to three key requisites that underpin a successful measurement and evaluation project:
- First, a performance leadership posture that facilitates evidence-driven performance improvement.
- Second, program management skills that go beyond technical know-how to building consensus and communicating achievements to relevant stakeholders.
- Third, a rigorously constructed Theory of Change – in essence, a description and/or illustration of how and why a desired change is expected to occur, which usually feeds into an outcomes framework.
Through a quick mix of datasets showcasing relationships ranging from the comical to the complex, we had the opportunity to learn first-hand that correlation does not imply causation. This brought us to the essential evidentiary criteria for a trustworthy and credible impact evaluation.
In essence, these criteria are: demonstrating an association between participating in the program and the expected outcomes detailed in our Theory of Change, establishing a chronological time order, and ruling out alternative explanations. Perhaps the most interesting part of the session was the opportunity to dissect the various alternative explanations that are often neglected in impact evaluation reports.
A memorable example was identifying evaluation tools that conflate program participant satisfaction with positive outcomes – when the reality may be far more nuanced! Developing a strong evidence base to inform an outcomes-focused model is a priority highlighted in our ACT Community Services Industry Strategy 2016-2026, so it was a valuable opportunity to collectively deepen our understanding in this space.