Beyond process – evaluating policy outcomes
Hon. Steve Maharey
12 July 2000 Speech Notes
Address to the New Zealand Impact Evaluation Forum. National Library Auditorium, Wellington.
E nga mana
E nga reo
E nga iwi
Tena koutou katoa
Distinguished guests, people of other languages, people from the four corners of the world. Greetings to you all.
It is a pleasure for me to open the New Zealand Impact Evaluation Forum this morning. I want to issue a particularly warm welcome to our overseas colleagues joining us for this very important event.
Programme Evaluation as a threat and as an opportunity
Let me make some general comments about programme evaluation in the NZ public sector, and let me be blunt.
I don't think that programme evaluation is as central to the policy process as it should be.
My sense is that we have a capacity and a capability problem, but I also sense that the professional culture and the political culture have not been conducive to the development and sustenance of an evaluation culture.
Evaluation is threatening to those – within politics and the public service – who advance policies on the basis of great leaps of faith. It is a sad fact for some people that the world sometimes acts in a counter-theoretical fashion. Programme evaluation is sometimes an unwelcome empirical antidote to those who build policy on heroic assumptions.
Policymaking is, at best, a most uncertain science. Even with the most robust and comprehensive systems of policymaking there will always be uncertainties – weak links in the implementation chain, questionable assumptions about cause and effect, the mysteries of human behaviour.
And policy made with the best of intentions sometimes doesn't work. Programme monitoring and programme evaluation should tell us what works, what doesn't, and why.
The politician or public servant who finds that the outcomes he or she has promised are not eventuating may not welcome the evaluation report that brings these bad tidings.
I am also of the view that programme evaluation can pose a threat because it has the potential to democratise the policymaking process.
A sound and robust evaluation strategy and framework should, in my opinion, allow for stakeholder involvement – at the very least to permit the accurate identification and ranking of preferred outcomes (which may differ across different stakeholder groups).
In my assessment some of the best evaluations are guided by reference or steering groups that bring together stakeholders (and provide them an investment in the evaluation process and the results of the evaluation).
Turning specifically to the subject of this forum – impact evaluation – the Labour-Alliance Coalition Government has already indicated its desire to shift the focus of public policy to one with a strong outcomes bias.
Clearly there are implications for the accountability regime within the public sector. It is relatively straightforward to design a contract for the delivery of outputs. It is altogether different when one is contracting for outcomes, particularly given the lags between outputs and outcomes.
It is easy enough to purchase places on a vocational education and training programme for at-risk youth. One training place = one output.
But what is the preferred outcome – further education and training, or a job?
Over what time-period should we be measuring outcomes - 3 weeks or 12 months?
Is the fact that someone in a VET programme has picked up a unit standard on the National Qualifications Framework an outcome in its own right?
Is enhanced personal capacity an outcome, or is clear evidence of matching resulting in enhanced capability the outcome we are after?
I sense that you will spend some time over the next few days asking questions like these, and I very much hope that you will make some progress towards providing some answers.
Whatever the accountability regime we eventually arrive at – and for my part I very much hope that it is one that moves us beyond the contractualism of the present model – a shift to an outcomes focus suggests the need for an enhanced public sector capability. There will be little point in moving to an outcomes focus without a capacity to capture information about those outcomes.
This forum has been jointly sponsored by the Department of Work and Income, the Department of Labour through the Labour Market Policy Group, and the Ministry of Social Policy, and will focus on assessing the impact of government-funded employment and welfare interventions.
The forum was mooted as a capacity building measure within the Employment Evaluation Strategy coordinated by LMPG, DWI and MSP.
Let me say how delighted I am that the two Departments and the Ministry have cooperated on this initiative. I do hope that this silo-busting behaviour is mirrored in all the interactions between the three agencies, and elsewhere in the public service.
Aims of the forum
This forum will improve our collective understanding of the ways impact evaluations can best be conducted in the context of employment and welfare interventions in New Zealand.
The forum will also assist in identifying priority areas for capacity building. I will be inviting the Chief Executives of the three agencies sponsoring this event to furnish me with a report on how we might improve our evaluation capacity and capability.
Increasing interest in New Zealand in determining the outcomes of government programmes has begun to generate considerable debate, both within and between government agencies, about the most appropriate methodologies for assessing the impact of employment and welfare interventions.
However, to date this discussion has occurred on an ad hoc basis around specific projects. It is now timely to develop a more strategic approach that significantly advances this debate within the New Zealand context – hence this forum.
Outcomes of the forum
This forum will focus on the key issues surrounding impact evaluation theories and methodologies, and will provide the initial input for developing impact evaluation guidelines specific to the New Zealand context.
These guidelines will be produced by and for evaluators and policy makers in the New Zealand public sector to provide guidance on the issues that need to be considered when selecting an impact evaluation design.
The guidelines will not impose a rigid framework on agencies when selecting an impact assessment design. Instead, it is intended that the guidelines will remind agencies of the variety of approaches to impact evaluation, each with its own merits and constraints depending on the conditions under which it is to be utilised.
In the longer term I very much hope that this forum will help develop a tradition of New Zealand government agencies consistently producing robust, timely, and cost-effective evaluations.
We need a culture of evaluation underpinned by a high level of capacity and capability.
For my part I welcome this forum, and I am proud of the fact that the initiative to convene it has come from three government agencies within my sphere of portfolio responsibilities.
Programme evaluation is about protecting the integrity of the policy process.
In that sense, while forums such as this will, quite appropriately, focus on the detail of method and methodology, getting it right at that level will serve a higher-level objective.
That is to ensure that the public service is positioned to meet its traditional mission – to provide free, frank, and fearless advice, and to do so with the greatest possible transparency.
I wish you well for this forum – while the initial outcome of this event will be progress towards a set of impact evaluation guidelines, it may be best to see those guidelines, and the increased capacity and capability they will encourage, as intermediate outcomes.
What is the ultimate outcome? – if we get it right the ultimate outcome is as fundamental as ensuring the integrity of public policy, of governance, and indeed of our democratic system.