By Greg Power
Anyone who has had a serious bout of food poisoning will be familiar with the experience of going to see a doctor when you think you are dying, then having to provide a ‘sample’ that is sent off for analysis, only to receive a letter from the hospital two weeks later – by which time you are already back on your bike – telling you exactly how ill you had been, and warning you to be more careful washing your hands in the future.
It’s this ‘after the event’ wisdom that seems to characterise most monitoring and evaluation of international support – it is often highly detailed, based on thorough analysis, and arrives far too late to be of use. Evaluation still seems to be a ritual in which a verdict on whether a project worked arrives only once the project is over.
It seems obvious that for adaptive programming to work, monitoring needs to be integral to the delivery of the project – so that there is a constant process of analysing, assessing and adapting. The problem is that the logframe is not an ally of adaptation, and despite the best efforts of various donor agencies, most programmes still default to traditional project design documents.
It was partly for these reasons that Global Partners Governance (GPG) developed its own KAPE® methodology over the last decade through its work to support governance institutions in some of the world’s most complex and fragile political environments. Standing for knowledge-application-practice-effect, KAPE sets out the stages of a project: it starts with the initial advice, support and guidance to our partners (knowledge), then moves to working with them to apply that knowledge to practical problems (application). The crucial stages, though, in securing and maintaining institutional change are ensuring that those new patterns of behaviour are repeated over time (practice) and replicated across the institution to improve its performance as a whole (effect).
KAPE thus provides a logic of change, but more importantly it also provides a means of capturing progress in a way that fosters adaptation.
KAPE is principally concerned with behaviour as the route to long-term meaningful change, and through the KAPE chain we can tell how far we are progressing at any given moment during the project.
For example, following initial activities to impart ‘knowledge’, it will quickly become clear whether our interlocutors are applying that knowledge at work. If they are not, it suggests a problem – perhaps a misunderstanding on our part about the nature of the problem, or a failure to engage at the right level. Similarly, if partners use the new techniques once but fail to apply those skills or procedures repeatedly (in other words, if new ‘practices’ fail to emerge), it may suggest a lack of understanding, relevance or suitability. All of this means it will be evident, at the time, when the behavioural and cultural change that underlies lasting reform is missing.
This matters because when the expected movement from one stage to the next does not occur, it forces us to question the reasons for that failure, which in turn ensures we adapt what we do next. The point of reflecting is not just to identify the factors limiting impact, but to go back and question the assumptions that underpin the project logic so that it remains relevant. Rather than persevere with something that is patently not working, we can alter, adapt and look for new entry points. On occasion this has meant stopping work entirely with a specific parliamentary committee, ministry or particular politicians, because we know we do not have enough traction to make change happen.
However, this does not mean that we are changing strategic direction. The point is that we simply look for new routes and new entry points to achieve those same strategic objectives, and can still show progress towards them. Critically, it is this understanding and analysis that we use to justify to our funders any change to the project itself.
The virtue of the KAPE chain is that (unlike a logframe) it is less concerned with reporting on the specifics of process than with tracking strategic progress towards the bigger programme objectives of behavioural and political change. As Rachel Kleinfeld has pointed out, in politics it is impossible to know what the most salient indicators will be until after the event. So, rather than start by making educated guesses that try to capture everything in a logframe, we instead look for evidence of strategic movement from knowledge to application to practice to effect.
The point is that political and institutional reform is messy, unpredictable and haphazard – and change takes time. This is the nature of politics itself. Working in this way is inevitably a process of what has been described as ‘muddling through’, and of seizing opportunities when you can. But the point of KAPE is to emphasise that it is perfectly possible to innovate, experiment and respond, while still having a clear sense of strategy, purpose and progress.