
Friday, 8 November 2013

Looking for a tool to analyse and 'compare' policies? Check out our lessons from conducting a QDA

By Elise Wach

As part of our team efforts to maintain a reflective practice and share learning with others, one of our latest ‘Practice Papers in Brief’ provides some insights from conducting a Qualitative Document Analysis (QDA) on policy documents for the rural water sector.

The QDA was undertaken as part of the Triple-S (Sustainable Services at Scale) initiative, for which the Impact and Learning Team (ILT) at IDS facilitates learning. 

Qualitative Document Analysis (QDA) is a research method for systematically analysing the contents of written documents.  The approach is used in political science research to facilitate impartial and consistent analysis of written policies. 

Given that Triple-S is aiming to change policies and practices in the rural water sector, the initiative decided to undertake a QDA on policy documents at the international level in order to understand trends and progress in the sector and also to engage development partners in identifying possible changes to policies and practices to move the sector closer to achieving ‘sustainable services at scale.’  Later, we decided to expand this to ‘practice’ documents as well. 

Consistent with Triple-S’s ‘theory of change’, generating discussion on these issues and catalysing change was just as much a priority as generating reliable evidence about policy trends.

In the paper, we discuss the strengths and weaknesses of the methodology, and provide some pointers that might be helpful if you are considering using this tool.

Overall, we found that the QDA exercise provided useful information about trends and gaps in the rural water sector, helped to refine the Triple-S engagement strategy, and served as a useful platform for engagement with partner organisations.

Some of our lessons related to issues of defining our 'themes' and scoring, inclusion criteria for documents, unclear or zero scoring, and the relationships between the research team and the organisations included in the review.
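To give a flavour of the kind of scoring these lessons relate to, here is a minimal, hypothetical sketch: documents are scored against a set of themes, and unclear scores are flagged for discussion rather than silently treated as zeros. The themes, documents, and 0–2 scale below are invented for illustration and are not the actual Triple-S coding frame.

```python
# Hypothetical sketch of scoring policy documents against themes in a QDA.
# Themes, documents, and the 0-2 scale are invented for illustration only.

THEMES = ["sustainability", "service levels", "financing"]

# None marks an unclear score to revisit, rather than a genuine zero.
scores = {
    "Policy A": {"sustainability": 2, "service levels": 1, "financing": 0},
    "Policy B": {"sustainability": 1, "service levels": None, "financing": 2},
}

for theme in THEMES:
    rated = [doc[theme] for doc in scores.values() if doc[theme] is not None]
    unclear = [name for name, doc in scores.items() if doc[theme] is None]
    average = sum(rated) / len(rated) if rated else float("nan")
    print(f"{theme}: average {average:.1f}, unclear in {unclear or 'none'}")
```

Keeping ‘unclear’ distinct from a genuine zero is the kind of scoring issue referred to above.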

Next week, Triple-S will be kicking off another QDA for the Ghana Workstream, to analyse government rural water policies, and will incorporate many of the lessons that we’ve learned on QDA so far.  We’ll also be conducting another round of QDA at the international level next year to analyse the ways in which the rural water sector policies have shifted over the course of the Triple-S project, and to understand what to focus on moving forward.

Elise Wach is Monitoring, Evaluation and Learning Adviser with the Impact and Learning Team at the Institute of Development Studies


Wednesday, 30 October 2013

The revolution will not be in open data

By Duncan Edwards

I’ve had a lingering feeling of unease that things were not quite right in the world of open development and ICT4D (Information and communication technology for development), so at September’s Open Knowledge Conference in Geneva I took advantage of the presence of some of the world’s top practitioners in these two areas to explore the question:

How does “openness” really effect change within development? 


Inspiration for the session came from a number of conversations I’ve had over the last few years.

My co-conspirator/co-organiser of the OKCon side event “Reality check: Ethics and Risk in Open Development,” Linda Raftree, had also been feeling uncomfortable with the framing of many open development projects, the assumptions being made about how “openness + ICTs = development outcomes,” and the inadequate consideration given to risks and privacy.

We had been wondering whether the claims made by Open Development enthusiasts were substantiated by any demonstrable impact. For some reason, as soon as you introduce the words “open data” and “ICT,” good practice in development gets thrown out the window in the excitement to reach “the solution”.

A common narrative in many “open” development projects goes along the lines of “provide access to data/information –> some magic occurs –> we see positive change.” In essence, because of the newness of this field, we only know what we think happens, we don’t know what really happens because there is a paucity of documentation and evidence.

It’s problematic that we often use the terms data, information, and knowledge interchangeably, because:
  • Data is NOT knowledge 
  • Data is NOT information 
  • Information is NOT knowledge. 
Knowledge is what you know.

It’s the result of information you’ve consumed, your education, your culture, beliefs, religion, experience – it’s intertwined with the society within which you live.

Understanding and thinking through how we get from the “openness” of data to how this affects how and what people think, and consequently how they might act, is critical to whether “open” actually has any additional impact.

Can applying a Theory of Change help us answer this question?


At Wednesday’s session, panellist Matthew Smith from the International Development Research Centre (IDRC) talked about the commonalities across various open initiatives. Matthew argued that a larger Theory of Change (ToC) around how ‘open’ leads to change on a number of levels could allow practitioners to draw out common points. The basic theory we see in open initiatives is “put information out, get a feedback loop going, see change happen.” But open development can be sliced in many ways, and we tend to work in silos when talking about openness. We have open educational resources, open data, open government, open science, etc. We apply ideas and theories of openness in a number of domains but we are not learning across these domains.

We explored the theories of change underpinning two active programmes that incorporate a certain amount of “openness” in their logic.

Simon Colmer from the Knowledge Services department at the Institute of Development Studies outlined their theory of change of how research evidence can help support decision-making in development policy-making and practice. Erik Nijland from HIVOS then presented elements of the theory of change that underpins the Making All Voices Count programme, which looks to increase the links between citizens and governments to improve public services and deepen democracy.

Both of these ToCs assume that because data/information is accessible, people will use it within their decision-making processes. They also both assume that intermediaries play a critical role in analysis, translation, interpretation, and contextualisation of data and information to ensure that decision makers (whether citizens, policy actors, or development practitioners) are able to make use of it. Although access is theoretically open, in practice even mediated access is not equal – so how might this play out in respect to marginalised communities and individuals?

What neither ToC really does is unpack who these intermediaries are. What are their politics? What are their drivers for mediating data and information? What is the effect of this? A common assumption is that intermediaries are somehow neutral and unbiased – does this assumption really hold true? 

What many open data initiatives do not consider is what happens after people are able to access and internalise open data and information. How do people act once they know something?

As Vanessa Herringshaw from the Transparency and Accountability Initiative said in the Raising the Bar for ambition and quality in OGP session, “We know what transparency should look like but things are a lot less clear on the accountability end of things”.

There are a lot of unanswered questions. Do citizens have the agency to take action? Who holds power? What kind of action is appropriate or desirable? Who is listening? And if they are listening, do they care?

Linda finished up the panel by raising some questions around the assumptions that people make decisions based on information rather than on emotion, and that there is a homogeneous “public” or “community” that is waiting for data/information upon which to base their opinions and actions.

So as a final thought, here’s my (perhaps clumsy) 2013 update on Gil Scott-Heron’s 1970 song “The Revolution will not be televised”:

“The revolution will NOT be in Open data,
It will NOT be in hackathons, data dives, and mobile apps,
It will NOT be broadcast on Facebook, Twitter, and YouTube,
It will NOT be live-streamed, podcast, and available on catch-up
The revolution will not be televised”

Scott-Heron’s point, which holds true today, was that “the revolution”, or change, starts in the head. We need to think carefully about how we get far beyond access to data.

This blog was originally published as an Open Knowledge Foundation blog. Duncan Edwards is ICT Innovations Manager at the Institute of Development Studies. Follow Duncan on Twitter. 



Wednesday, 7 August 2013

Open data and increasing the impact of research? It's a piece of cake!

By Duncan Edwards

I talk to a lot of friends and colleagues who work in research, knowledge intermediary, and development organisations about some of the open data work I’ve been doing in relation to research communications. Their usual response is “so it’s about technology?” or “open data is about governance and transparency, right?”. Well no, it’s not just about technology, and it’s broader than governance and transparency.

I believe that there is real potential for open data approaches in increasing the impact of research knowledge for poverty reduction and social justice. In this post I outline how I see Open Data fitting within a theory of change of how research knowledge can influence development.

Every year thousands of datasets, reports and articles are generated about development issues. Yet much of this knowledge is kept in ‘information silos’ and remains unreachable and underused by broader development actors. Material is either not available or difficult to find online. There can be upfront fees, concerns regarding intellectual property rights, fears that institutions/practitioners don’t have the knowhow, means or time to share, or political issues within an organisation that can mean this material is not used.

What is “Open data”? What is “Linked Open Data”? 

The Open Knowledge Foundation says “a piece of content or data is open if anyone is free to use, reuse, and redistribute it — subject only, at most, to the requirement to attribute and/or share-alike.”

The Wikipedia entry for Linked Data describes it as “a method of publishing structured data so that it can be interlinked and become more useful. It builds upon standard Web technologies such as HTTP and URIs, but rather than using them to serve web pages for human readers, it extends them to share information in a way that can be read automatically by computers. This enables data from different sources to be connected and queried…. the idea is very old and is closely related to concepts including database network models, citations between scholarly articles, and controlled headings in library catalogs.”

So Linked Open Data can be described as Open Data which is published in a way that can be interlinked with other datasets. Think about two datasets that both categorise content by country – if you publish these as linked data, you can then connect related content across the two datasets for any given country, as in the sketch below.
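Here is a minimal sketch of that idea, assuming two toy datasets that identify countries by the same URI; the datasets, figures, and example.org URIs are invented purely for illustration.

```python
# Toy illustration of linking two datasets via a shared country identifier (URI).
# The datasets, figures, and example.org URIs below are invented for this sketch.

water_coverage = [
    {"country": "http://example.org/country/GH", "rural_coverage_pct": 64},
    {"country": "http://example.org/country/UG", "rural_coverage_pct": 71},
]

research_outputs = [
    {"country": "http://example.org/country/GH", "title": "Rural water services in Ghana"},
    {"country": "http://example.org/country/UG", "title": "Handpump functionality in Uganda"},
]

# Because both datasets use the same identifier, related content can be joined directly.
coverage_by_country = {row["country"]: row["rural_coverage_pct"] for row in water_coverage}
for output in research_outputs:
    coverage = coverage_by_country.get(output["country"])
    print(f"{output['title']} -> rural coverage: {coverage}%")
```

In a real Linked Open Data setting the identifiers would be dereferenceable URIs published in a standard format such as RDF, but the principle is the same: shared identifiers let independently published datasets be connected and queried together.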

For more definitions and discussion on data, see Tim Davies’ post "Untangling the data debate: definitions and implications".


Why should Open Data be of interest to research producers? 

The way in which the Internet and technology have evolved means that instead of simply producing a website from which people can consume your content, you can open up your content so that others can make use of it, and link it, in new and exciting ways.

There are many theories of change which look to articulate how research evidence can affect development policy and practice. The Knowledge Services department at the Institute of Development Studies (IDS) works with a theory of change which views access to, and demand for, research knowledge, along with the capacity to engage effectively with it, as critical elements to research evidence uptake and use in relation to decision-making within development. Open Data has significant potential in relation to the ‘access to’ element of this theory of change.

Contextualisation and new spaces 

When we think about access to research knowledge – we should go beyond simply having access to a research document. Instead we must look at whether research knowledge is available in a suitable format and language, and whether it has been contextualised in a way which makes sense to an audience within a given environment.



I like to use the data cake metaphor developed by Mark Johnstone to illustrate this – if we consider research outputs to be the data/ingredients for the cake, then we organise, summarise and catalogue this (i.e. add metadata) to ‘bake’ it into our information cake. We then present this information in the way we feel is most useful and “palatable” to our intended audiences, with the intention that they will consume it and be able to make use of new knowledge. It’s in this area that Open Data approaches can really increase the potential uptake of research – if you make your information/content open, it creates the possibility that other intermediaries can easily make use of this content to contextualise and present it to their own users in a way that is more likely to be consumed.
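To make the ‘add metadata, then let others re-present it’ step concrete, here is a small hypothetical sketch of a research output catalogued with basic metadata and exposed as open, machine-readable data; the fields, values, and licence shown are illustrative rather than a prescribed schema.

```python
import json

# Hypothetical catalogue record: the raw "ingredient" (the output itself) plus the
# metadata that lets intermediaries find, filter, and re-present it for their audiences.
record = {
    "title": "Rural water services in Ghana",   # illustrative title
    "authors": ["A. Researcher"],
    "year": 2012,
    "themes": ["rural water", "sustainability"],
    "countries": ["GH"],
    "licence": "CC-BY",                          # an open licence permits reuse and remixing
    "url": "http://example.org/outputs/123",     # illustrative identifier
}

# Publishing records like this in an open, structured format (here, JSON) is what allows
# other intermediaries to contextualise and present the same content in new spaces.
print(json.dumps(record, indent=2))
```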

Essentially, by opening up datasets of research materials you can reduce duplication and allow people to reuse, repurpose, and remix this content in many more spaces, thereby increasing the potential for research findings to be taken up and to influence change in the world.

While I see significant benefits in researchers making their outputs available and accessible in an open manner, we must also redress the dominance of knowledge generated in the global North. We need to continue to invest in strengthening intermediaries at local, national, and international levels to make use of research material and Open Data to influence positive change.

Duncan Edwards is the ICT Innovations Manager at the Institute of Development Studies (IDS) - you can follow him on Twitter: @duncan_ids

NOTE: an admission on Open Access - the original article this post is based on, Davies, T. and Edwards, D. (2012) 'Emerging Implications of Open and Linked Data for Knowledge Sharing in Development', IDS Bulletin 43 (5): 117-127, was published in the IDS Bulletin “New Roles for Communication in Development?”. Ironically, considering its subject matter, the Bulletin is only partially open access (two free articles per issue). But you can access this article as green open access in OpenDocs - http://opendocs.ids.ac.uk/opendocs/handle/123456789/2247

Wednesday, 2 November 2011

Exploring the black box together: evaluating the impact of knowledge brokers

Cartoon by Sidney Harris (2007)
By Catherine Fisher

I love this cartoon! 

It seems to capture the idea of the "black box" that lies between the activities knowledge brokers and intermediaries undertake and the outcomes and impacts they seek to achieve. That’s not to say that they don’t achieve outcomes in the real world, rather that the pathways by which their work brings about change are difficult to unpack and evaluate.

The Knowledge Broker’s Forum (KBF) has started exploring this "black box" of how to evaluate the impact of knowledge brokers and intermediaries in an e-discussion running from 31 October until 9 November. I am (lightly) facilitating this discussion, along with Yaso Kunaratnam from IDS Knowledge Services.

If you would like to participate, you can sign up on the forum's website; it's open to anyone with an interest in this area.

Challenges in evaluating impact

We know there are a lot of challenges to evaluating the impact of knowledge brokering. Some challenges stem from the processes (psychological, social and political) through which knowledge and information bring about change, the contested nature of the relationship between research and better development results, and the difficulty of identifying contribution to any changes in real world contexts. This is particularly challenging for actors that seek to convene, facilitate and connect rather than persuade or influence.

As well as these quite high level challenges, there are the very practical issues around lack of time and resources to dedicate to effectively understanding impact. These challenges are explored in a background paper (PDF) I prepared as food for thought for those taking part in the e-discussion.

With an e-discussion amongst 400+ knowledge brokers from all over the world, I am not yet sure where discussions will go, but I am hoping that it will shed some light on the following areas:

Breadth and depth of impact and outcomes  

How far do people go to identify the ultimate outcomes of knowledge brokering work? I feel we can certainly go beyond immediate impact (e.g. personal learning) to push towards what that resulted in; however, I wonder whether it is meaningful to start looking at human development and wellbeing indicators. It will be interesting to see how far others are going.

Understanding behaviour change

If knowledge brokering is about behaviour changes that ensure greater engagement with research evidence, how are people defining those behaviour changes and how are they measuring them? Are we too easily impressed with stories of information use when these could in fact hide some very poor decision-making behaviours?

Opportunities for standardisation of approaches and data collection

If people have come up with ways of doing this, is there any appetite for standardising approaches to enable greater comparison of data between different knowledge brokering initiatives? This would help us build a greater understanding of the contribution of knowledge brokers beyond the scope of any one broker’s evaluation.

I’ll also be interested to explore and challenge some of my assumptions – in particular that building some kind of theory or map of change is an important starting point for defining and then seeking to evaluate impact. This has been discussed previously on this blog and is a hot topic at the moment.

Our discussion will face challenges – not least that the huge variety of types of knowledge brokering, and of contexts in which it is undertaken, may mean there is not enough common interest. But I am sure that there is a lot of experience in the group that can be brought to bear on these questions and that, in 10 days' time, we will have a better idea of what is known, who is keen to explore this further, and hopefully how we could move forward to develop our understanding in this area.

Wednesday, 14 September 2011

Exploring evaluation approaches: Are there any limits to theories of change?

By Chris Barnett

I’m in Brussels co-facilitating a course on evaluation in conflict-affected countries, with Channel Research. We are exploring new and alternative approaches to evaluation, building on recent experiences of multi-donor evaluations in South Sudan and the Democratic Republic of Congo (DRC). The South Sudan and DRC evaluations are part of a suite of evaluations that sought to test the draft OECD Guidance on Evaluating Conflict Prevention and Peacebuilding Activities.

While the context is very specific, I’m hoping that the discussions will raise some interesting issues around the way we approach evaluation, and particularly how we use theories of change. The term “theory of change” is a much overused phrase at the moment, and one that seems to have different meanings to different people. In this case it is being defined as “the set of beliefs [and assumptions] about how and why an initiative will work to change the conflict” (OECD Guidance, page 35). Duncan Green, in his blog, also helpfully points out the difference between a theory of change (a classic, linear intervention logic, or results chain, used as a basis for logical frameworks) and theories of change (such as a range of theories about the political economy of how and why change occurs).

Photo courtesy of Jon Bennett
Working in conflict-affected states poses many challenges for evaluation, not least the changing context, instability and insecurity. In most cases it is not feasible to set up a controlled experiment and maintain it over a reasonable period of time. Not only are there the cost and ethical issues of distributing benefits randomly, but there is also the sheer technical difficulty of maintaining a robust counterfactual in a context where there is so much change. It is not impossible of course (e.g. IRC’s evaluation of Community Driven Reconstruction in Liberia); it is just often not appropriate or feasible.

Hence, the OECD Guidance focuses on a theory-based approach to evaluation (NB: Henry Lucas and Richard Longhurst, IDS Bulletin 41:6, provide a useful overview of different evaluation approaches). At the heart of the OECD Guidance is the need to identify theories of change, against which to evaluate performance.

But in South Sudan and DRC we found a number of limitations to this approach:

1. Firstly, we found it challenging to apply a theory of change approach to the policy or strategic level. Most donors did not articulate a transparent, evidence-based rationale for intervening – sometimes intentionally so, given the dynamic and sensitive context. This meant that reconstructing theories of change for evaluation purposes became highly interpretive and open to being challenged – particularly when drawing out differences between actual and de facto policies.

2. Secondly, we found that different theories of change existed at different levels. As one moved down from the headquarters level to the capital city, and on to local government and field levels, views differed about the drivers of conflict and the theories of change necessary to address them. This presented the evaluator with a dilemma – and sometimes wrongly placed them as arbiter between different perspectives and realities.

3. Thirdly, while lots of activities contribute to conflict prevention and peacebuilding, many were not explicit about such objectives. Again, the reconstruction of the de facto theories of change against which to assess performance becomes highly interpretive and more open to being challenged.

So what do we hope to do this week? We will be exploring alternatives to such Objective or Goal-Based evaluations, which seek to assess performance against the stated (or reconstructed) theories behind an intervention. Rather, we’ll explore some Goal-Free alternatives – where data is gathered to compare outcomes with the actual needs of the target audience, using reality as the reference point rather than a programme theory. After all, in many walks of life we do not “evaluate” performance against stated objectives: when we assess whether a car is good or not, we do not consider whether the design team fulfilled its objectives! Rather, we are interested in whether it fulfils our needs.