Monitoring and learning in politically smart, adaptive programmes


This topic contains 0 replies, has 1 voice, and was last updated by  Taylor Brown 2 years, 1 month ago.

Viewing 2 posts - 1 through 2 (of 2 total)
  • #346

    Taylor Brown
    Participant

    We recently hosted a workshop on Monitoring and learning in politically smart and adaptive programmes.
    A summary of this workshop can be found here:

    This workshop brought together a small group of researchers, consultants and other development professionals to explore the ways in which monitoring, evaluation and learning (ME&L) can enhance rather than constrain politically smart, locally led interventions. In particular we looked at:
    - How to monitor and measure outcomes without over-specifying expected results ex ante;
    - How to apply an adaptive, ‘learning by doing’ approach while simultaneously delivering measurable results and being accountable to funders;
    - How to monitor, evaluate and learn lessons about crucial but less tangible outcomes of interventions; and
    - How the political economy incentives facing development agencies can support or constrain the wider use of these approaches.

    We explored eight case studies of programmes that have developed innovative and effective approaches to ME&L. From these cases, we identified a range of approaches and tools that can improve our ability to track results over time and give us the operational space and evidence required for innovation and learning.

    This is an area that clearly needs more reflection and analysis. We will be looking at these and other case studies in the coming months, so it would be great to get your thoughts on which other projects warrant further analysis and on what other tools and approaches we should explore.

    #359

    Lucia
    Participant

    Thanks for this interesting post. I usually find myself working in “logframed” programs, and have become adept at making useful monitoring and learning happen within that context. Here are some of the things I have learned:

    * it doesn’t have to be in the logframe – if there is a good result, governments and development partners will rarely ask “where does this sit in the logframe?”. So I stopped long ago feeling boxed in or limited by the logframe. I do take an interest, because logframes are a reflection of the thinking at the time a program was designed, but then I move on. As an example, the logframe of the Ethiopia Social Accountability Program 2 (ESAP2) says nothing about the fact that social accountability (SA) needs to produce service improvement results. It is all about the application of SA tools and approaches, and numbers of people trained. Yet one of my main monitoring questions has been: what difference is all this making in terms of better services for vulnerable groups and women? We have inserted such questions into the monitoring protocol, where they serve as a theory of change. The head of M&E recently organised a mini-research exercise to build an evidence base of the service improvement changes that have happened across 5 sectors in 223 districts of Ethiopia.

    * write, shoot and share it widely – it is not good enough to produce reports for development partners. It is important to get stories out in the open: stories of the experiences happening in the program. We do this in many different ways. We have a Facebook page with 4,000+ followers, mostly Ethiopian development professionals. We share questions that make them think, photos from our monitoring visits, a weekly Q&A with one of the stakeholders, etc. Sometimes they share and ask questions back. That is how we developed a relationship with one of the universities in the country, which is now considering starting a Center of Excellence on SA. When we discovered that our partners were poor writers, we trained those interested in the use of participatory video. Within one year we had 100+ short video clips of stakeholders talking about their experience in their own language (Ethiopia has over 50 different ethnic groups). We have opened a YouTube channel to share all this with a wider audience. When we discovered that the quarterly reports from partners lacked specific details, we introduced Most Significant Change stories. We can see interesting patterns emerging from these stories. To stimulate such documentation and the identification of experiences worth sharing, we created awards for best PV, best story and best champions. This helps us capture and share experiences from 116 organisations working across Ethiopia in 223 districts.

    * bring multiple stakeholders together in large-scale learning events – I have learned that capacity emerges in relationships. What really helps is creating the space where people can talk about experiences with other stakeholders, develop a new understanding and take different perspectives into account. I’m not talking about presentations, but about formulating the right questions that trigger sharing and reflection. One approach we use is to benchmark various figures that are collected anyway in our M&E system (the logframe I referred to above), for instance the numbers and types of participants in interface meetings between citizens and their local government. Such graphic comparisons across projects can be great conversation starters. The trick is not to cover too much ground, but to go in depth. We have 3-4 topics in a two-day learning event, and they are all part of one overarching theme. We prepare these topics and the bi-annual theme with our monitoring team, people who go out regularly to visit projects to learn from what is happening and to provide on-the-job support where needed. We regularly take half a day or so to talk about what has struck us in these visits. This drives the formulation of questions and the theme to explore.

    * be very selective with tactical technical inputs – it is important that stakeholders in a program also get new insights or “food for thought” that help them take a step back and develop a reflective stance on the way they are approaching an issue. To break through the initial “toolification” of social accountability in Ethiopia, I tried to bring out the process that appeared to be happening, and facilitated reflection on how different stakeholders were engaged in that process in different ways. (Theme: citizens in the driver’s seat.) The next step was to work on new notions of power, for which we used a power analysis tool. (Theme: the right people on board.) More recently, we started looking into the future. We organised a mindfulness exercise in which various stakeholders were helped to “feel each other, the relationships and the tensions in the room”. This created an atmosphere in which a vulnerable person could sit at the table with a senior government official and have a helpful conversation. (Theme: stimulating the SA movement.) Because these learning events are large scale – 500 to sometimes 750 people from across Ethiopia are brought together every 6 months – the themes of the learning events start to “buzz” around the country.

    (I have also used this in


