Thursday 29 September 2011

Rethinking development in an age of scarcity and uncertainty


By Emilie Wilson

Last week, I was in York attending the European Association of Development Institutes (EADI) and the Development Studies Association (DSA) joint conference, whose thematic focus is the title of this blog post.

There is something in the word "rethinking" which might suggest an attitude of "it's broken, let's go back to the drawing board and start again". IDS used the term "reimagining", which has a more inspirational and creative resonance, when developing research on exploring and responding to crises; the results have been captured in the latest IDS Bulletin.

From my experience at the conference, the broad assembly of academics, activists and policymakers reflected a spectrum of approaches to "rethinking": from the radicals who want to upend the existing financial, governance and informational infrastructure on which development sits, to the innovators who modify and improve existing systems, the philosophers who want to ask uncomfortable questions about the ethics of giving to poor countries over poor people or the politics of measurement, and the pragmatists who want solutions and answers to the problems they perceive.

The approach I try to bring to “rethinking” is one of learning: reflecting on past and current theories and practices, identifying areas of improvement and opportunities for innovation, and sharing this experience more widely. Hence the blog I wrote for the conference entitled “Let’s not throw out the baby with the bath water”.

As I say in this blog, I found the definition of knowledge offered by Dr Sebastiao Mendonco Ferreira fascinating. He contrasted the practice of managing knowledge as a resource with managing natural resources. This was a useful way to focus our minds on ‘knowledge’, a rather slippery word, like an intellectual game of charades or a riddle: ‘it’s intangible, non-rivalrous, non-erodible, human-made, both tacit and explicit, contained in receptacles such as human minds or embedded in machines, it’s unlimited’… would you have arrived at ‘knowledge’ from this description?

He went on to highlight the role of the internet in knowledge ecosystems: will this increasingly custom-made and intuitive ‘web environment’ help us develop epistemic cultures and communities? Sebastiao suggests we need to address our limited ability to ‘absorb’ knowledge, knowledge which is increasingly complex and sophisticated, and thus difficult to verify. One approach IDS has taken is the development of Eldis Communities, a social networking platform for people working in development.

Read the rest of the original post.

You might also be interested in my colleague Yaso Kunaratnam’s post on: Rethinking the role of intermediaries in bridging policy, research and practice

Wednesday 14 September 2011

Exploring evaluation approaches: Are there any limits to theories of change?

By Chris Barnett

I’m in Brussels co-facilitating a course on evaluation in conflict-affected countries, with Channel Research. We are exploring new and alternative approaches to evaluation, building on recent experiences of multi-donor evaluations in South Sudan and the Democratic Republic of Congo (DRC). The South Sudan and DRC evaluations are part of a suite of evaluations that sought to test the draft OECD Guidance on Evaluating Conflict Prevention and Peacebuilding Activities.

While the context is very specific, I’m hoping that the discussions will raise some interesting issues around the way we approach evaluation and particularly how we use theories of change. The term “theory of change” is a much overused phrase at the moment, and one that seems to mean different things to different people. In this case it is being defined as, “the set of beliefs [and assumptions] about how and why an initiative will work to change the conflict” (OECD Guidance, page 35). Duncan Green, in his blog, also helpfully distinguishes a theory of change (a classic, linear intervention logic, or results chain, used as a basis for logical frameworks) from theories of change (such as a range of theories about the political economy of how and why change occurs).

Photo courtesy of Jon Bennett
Working in conflict-affected states poses many challenges for evaluation, not least the changing context, instability and insecurity. In most cases it is not feasible to set up a controlled experiment and maintain it over a reasonable period of time. Not only are there the cost and ethical issues of distributing benefits randomly, but also the sheer technical difficulty of maintaining a robust counterfactual in a context where there is so much change. It is not impossible of course (e.g. IRC’s evaluation of Community Driven Reconstruction in Liberia); it is just often not appropriate or feasible.

Hence, the OECD Guidance focuses on a theory-based approach to evaluation (NB: Henry Lucas and Richard Longhurst, IDS Bulletin 41:6, provide a useful overview of different evaluation approaches). At the heart of the OECD Guidance is the need to identify theories of change, against which to evaluate performance.

But in South Sudan and DRC we found a number of limitations to this approach:

1. Firstly, we found it challenging to apply a theory of change approach at the policy or strategic level. Most donors did not articulate a transparent, evidence-based rationale for intervening – sometimes intentionally so, given the dynamic and sensitive context. This meant that reconstructing theories of change for evaluation purposes became highly interpretive and open to being challenged – particularly when drawing out differences between stated and de facto policies.

2. Secondly, we found that different theories of change existed at different levels. As one moved down from the headquarters level to the capital city, and on to local government and field levels, views differed about the drivers of conflict and the theories of change necessary to address them. This presented the evaluator with a dilemma – and sometimes wrongly cast the evaluator as arbiter between different perspectives and realities.

3. Thirdly, while lots of activities contribute to conflict prevention and peacebuilding, many were not explicit about such objectives. Again, the reconstruction of the de facto theories of change against which to assess performance becomes highly interpretive and more open to being challenged.

So what do we hope to do this week? We will be exploring alternatives to such Objective or Goal-Based evaluations that seek to assess performance against the stated (or reconstructed) theories behind an intervention. Rather, we’ll explore some Goal-Free alternatives – where data is gathered to compare outcomes with the actual needs of the target audience, using reality as the reference point rather than a programme theory. After all, in many walks of life we do not “evaluate” performance against stated objectives: when we assess whether a car is good or not, we do not consider whether the design team fulfilled its objectives! Rather, we are interested in whether it fulfils our needs.

Thursday 8 September 2011

Consuming knowledge or constructing it: a response

By Catherine Fisher

In my colleague Sunder’s blog earlier this week, entitled Consuming knowledge or constructing it: evidence from the field, he described the resistance that some of the civil servants he had met in India felt towards engaging with research knowledge.

Reading the post, I felt that the reasons given by the civil servants he met for not wanting to engage with research knowledge are extremely valid and justified, based on their understanding of research and of “research uptake” or “evidence based policy” agendas. The reasons given for not engaging with research reveal a set of assumptions about the nature of research knowledge, and knowledge more broadly, that stem from the modernist ideas about knowledge explored in the paper my colleague Emilie discussed in the first of this series of blogs.

Entitled Changing conceptions of intermediaries: challenging the modernist view of knowledge, communication and social change, one section of the paper explores four key modernist assumptions about knowledge which it goes on to critique: that there exists an objective reality, that the scientific method of enquiry is neutral, that knowledge can be stored, managed and transferred, and that communication is a linear process. These seemed to chime with the assumptions that appeared to inform the responses of the civil servants Sunder interviewed. While these ideas have been widely critiqued, many of the assumptions remain, both for people involved in the world of research uptake and, on Sunder’s evidence, in the minds of the people whom they are trying to influence.

I think the civil servants seemed to be rejecting a set of assumptions about research and its role in decision making, which I paraphrase as:

1. Research knowledge is the only knowledge that is important

This idea has been championed since the Enlightenment and contains ideas about how legitimate knowledge is produced: it privileges knowledge produced through certain scientific processes and marginalises other sources of knowledge, e.g. that generated through experience, or from less powerful actors. While the underlying tenets have merit and underpin movements such as “evidence based policy”, few proponents of greater use of evidence in decision making would argue that decisions should be based solely on research-based evidence with no regard for other kinds of knowledge or for social, economic or political contexts. However, maybe this is what is conveyed!

2. Research provides a pre-determined right answer

There are very few policy contexts in which there is a clear-cut right answer, particularly in relation to social, economic and political issues, and so social science research is very rarely going to be able to provide a right answer. As intermediaries, we are concerned with engagement with a range of sources from which an answer can be constructed. And again, I doubt many people involved in research uptake would claim that one piece of research will provide a policy answer.

However, as Carol Weiss observes in the introduction to Fred Carden’s recent book “Knowledge to Policy: Making the Most of Development Research”, for many policy makers in developing countries their experience of research is of the research that accompanies policy prescriptions handed out by funding institutions. The idea that research promotes a correct, one-size-fits-all approach could well have been reinforced by this experience.


3. Knowledge can be communicated in a linear way and consumed passively

Personally, I believe that knowledge can only be constructed in someone’s head: knowledge requires a “knower” and is the sense someone makes of information when they analyse and understand it according to their previous experience, belief systems and even what mood they are in today. Thus I would argue with the title of the post, but it has been pointed out that my position may be a little extreme and not necessarily shared by many in the industry in which I work! However, I think that everyone who works in the knowledge industry needs to dispel the myth that knowledge can be transferred or passively consumed in ways that bear no reference to the consumer – people have knowledge and this shapes how they interpret new knowledge (research knowledge or otherwise) and what they do as a result.

So I would conclude that resistance to the use of evidence in decision making is entirely reasonable if you have a modernist understanding of research knowledge – or indeed if you have been subject to such ideas from others, or believe that the person you are talking to shares those ideas!

For me, two areas emerge:

To what extent are the people who promote research uptake, evidence-based policy and the like propagating ideas about research and its role in change that are not useful? Do those of us involved in promoting greater use of research in decision making need to be clearer with ourselves about how we see the role of research vis-à-vis other knowledge and in the context of political realities, and consider what messages we are sending?

I am increasingly thinking it would be valuable if people generally (including me) were more aware of their own decision-making processes – a kind of meta-cognition that allows them to be more aware of how they process information and ideas, accepting, rejecting, interpreting and applying them in line with their previous experiences, knowledge and beliefs. In keeping with the work of IDS colleagues in the Participation, Power and Social Change Team, I suspect that this can be encouraged through greater reflective practice. Another angle on this issue is the idea of “evidence literacy”: I am working with INASP to explore this through its PERii: Evidence-Informed Policy Making programme and will be posting updates to this blog. Watch this space.

Tuesday 6 September 2011

Consuming knowledge or constructing it? - Evidence from the field

By Sunder Mahendra

I heard an interesting comment in India recently from an academician and a consultant to local government – someone who has influence on policy processes. “Knowledge is not valued in policy making circles. Knowledge is sacrificed at the altar of representative democracy”. In speaking of knowledge, he meant evidence collected through research.

The example he went on to give was this: if a bus stop is demanded by residents of a locality, the local elected representative will direct the authorities concerned to set it up immediately. The authorities follow these instructions as a routine and set up a bus stop, not wanting to take issue with the elected representative. The decisions are reactive. Neither the elected representative nor the local authorities refer to the history and geography of the locality. Little research or evidence is accessed to consider the viability of the location of the bus stop, or the best design and construction parameters. The incentives for responding to the problem quickly are different for the elected representative and the authorities, but nevertheless both have an incentive to respond quickly. While the example may be an extreme case and relates to service delivery, it illustrates some of the core culture and attitudes towards knowledge or evidence that permeate much of decision-making circles.

I interviewed 20 policy actors in India recently during a study of the information ecosystems of policy actors. Among them, three civil servants had this to say when asked what research or evidence they access and use before making decisions:

  • Civil servant 1: “we talk to people affected by the issue, take decisions and recommend policy”
  • Civil servant 2 (an academician deputed to a decision-making role): “we commission research through experts where gaps in existing knowledge (in terms of what they already know) exist and put forward the recommendations for consultations”
  • Civil Servant 3: “commission research on the issue, put the recommendations up for discussion in committees and follow committee’s recommendations”

The civil servants I met indicated that most decision-making takes place at meetings and through elaborate mechanisms of consensus decision-making, within committees. They recognise reaching decisions by consultation and consensus as an accumulative process that assumes knowledge to be something collectively constructed through interaction (rather than something possessed by an individual or by an organisation). They were suggesting what they perceive to be an alternative approach to knowing.

Policy makers, in this case the civil servants, are in reality challenging the primacy of research knowledge or the ‘evidence base’ that is promoted by agencies and organisations. They are in effect reclaiming their right to construct their own knowledge through their own means, and seek recognition of that knowledge. The commissioning of research on issues through NGOs or experts they know, rather than referring to any existing body of research knowledge, is an extension of their own knowledge construction. This is a self-constructed knowledge, one over which they have perceived control, instead of a received knowledge passively gained through external sources.

Their resistance to research knowledge promoted as an evidence base seems to rest on the assumption that there are no predetermined ‘right’ answers to any given problem. Rather, answers and solutions are constructed and context-specific. What might be the ‘right’ solution to a problem in one time and place is unlikely to be the right solution in another time and place. While not denying the existence of right solutions – or of truth – they argue that the question relating to a particular problem in every context may be the same, but the answers may be different.

The source of knowledge – be it local or international – is not so much of an issue for the policy actors. They appear to be less concerned with whether locally generated research knowledge is more relevant to local contexts than international research. What they appear to be uncomfortable with is the idea that research knowledge as an evidence base is somehow superior to their own constructed knowledge – constructed through consultations and discussions. Their belief in their own methods of accumulating knowledge on a specific issue, or “knowing”, even stops them from actively looking for evidence from neighbouring governments on similar issues. Similarly, there is little motivation for them to seek research evidence from other sources.

This poses obvious challenges for knowledge intermediaries in terms of identifying who to reach or how to stimulate demand for research. Whilst policy makers will continue to be a target audience, the question that arises is whether it is a good use of resources to put effort into reaching policy makers directly. Would it be more useful to focus energies and resources on identifying their networks and the people they engage with to set the discourse? It is important for intermediaries to recognise that many societies in Asia and Africa are relationship-based rather than rule-based. Peer influence and societal norms play a far greater role in influencing individual behaviour. An analysis of the context that intermediaries engage with is critical to understanding how people and systems operate and what barriers and opportunities that presents for engaging more effectively.