Good evaluation practice generally requires that clear project and program objectives, baselines, metrics, and data collection and analysis methods be in place from project commencement so that data are captured reliably. Causal links between actions and outcomes, when coupled with relevant data, should be sufficiently direct to allow reliable (preferably quantifiable) deductions about project and program effectiveness and efficiency. However, there are situations where conditions fall far short of this ideal but where it is nevertheless important to evaluate outcome performance objectively and to find ways to improve programs. This article outlines an approach to managing evaluations where baseline data are deficient, cause–effect relationships are complicated, and project objectives are complex. The approach was applied to evaluate a program that provided public funding to support a diverse portfolio of community-based, on-ground invasive animal control projects. The approach used: explicit ex post theorising to distil testable hypotheses about effectiveness and project operation; mixed-methods data gathering and analysis; triangulation of different types of evidence; expert data gatherers; and careful attention to the policy objectives of the evaluation.