Featured

Intersectionality for Evaluation Professionals

by Sophie Nicholas

In this blog post, we hope to provide some food for thought on how to incorporate intersectionality into your work as a research or evaluation professional in the non-profit sector. But before we dive into how to do intersectionality better, it’s important to know and understand what intersectionality is – and isn’t.

What is intersectionality?

Some think of intersectionality as a research methodology, a theoretical lens, or a way of thinking about the world.

One explanation of intersectionality comes from Goodley, author of ‘Disentangling Critical Disability Studies’, where intersectionality is described not simply as bringing together identity markers like disability, race, sex, age, and economic position, but as considering how different identities ‘support or unsettle the constitution of the other’.

Let’s take a common social bias: that disabled women are asexual or unfeminine. Arguably, this bias reaffirms rigid and sexist ideas of femininity, because women who aren’t disabled are treated as the archetype of the female body (sexism). It simultaneously denies disabled women their freedom to be feminine and to express themselves sexually in ways they value (ableism). And this is just one of many examples of how intersectional identities experience distinct and multi-layered forms of discrimination and bias.

Crenshaw gives her seminal analogy of intersectionality, stemming from the Black feminist movement, asking readers to imagine traffic at an intersection. She explains how discrimination is like traffic, and gives the example of a Black woman harmed for being Black, for being a woman, or both. Her injuries due to discrimination (or traffic, in Crenshaw’s analogy) could result from discrimination based on sex, on race, or on both at once.

In evaluation and research, what an intersectional approach tends to have in common, no matter how defined, is at least some of the following considerations:

1. Considering bias – within society, as an individual, and as part of a group

(In what ways might I be biased as a researcher with X background/identity/language/opinions?)

2. Questioning how social identities and structures interact and change each other 

(Has person X experienced this programme activity in the same way as person Y? Have they experienced the same barriers?)

3. Challenging dominant forms of thought as natural truths 

(Does X language or X approach actually benefit everyone? Why is this belief held?)

4. Questioning power dynamics and structural inequities

(How might my evaluation processes be reinforcing unfair power dynamics or structural inequities? E.g. my survey is in English in a context where English is not the first language)

5. Embracing complex narratives and being open to multi-faceted experiences

(How can I share my findings in a thematic way whilst making sure I illustrate unique opinions of underrepresented individuals?)

What it’s not

Hopefully you have a clearer understanding of what intersectionality is and what it can mean for people – but it’s also important to be clear on what intersectionality isn’t to avoid harm coming to any stakeholders during your evaluation work.

First, taking an intersectional approach shouldn’t be boiled down to labelling, or to creating an environment of competition over aspects of identity. This kind of discourse is best avoided: it risks a competitive narrative and feelings of shame, guilt, and/or exclusion among whoever it is you’re conducting research or evaluation with.

Second, taking an intersectional lens in your work shouldn’t be a tick-box exercise. If in doubt about whether you are implementing it in a meaningful way, don’t go it alone. Read up on, and even consult with, intersectional feminists, activists, writers, and thinkers, and learn how to sincerely and meaningfully implement this lens in your evaluation process.

What it means to consider intersectionality in evaluation

To be intersectional is to consider contextual intersections deeply, and to actually adapt your research or evaluative processes to make sure your target demographics and communities are represented, heard, and meaningfully involved. So, at the beginning of any evaluative process, making sure a diversity of identities is at the table is crucial to your methodology and planning.

One of the most important steps in evaluation, in this sense, is to conduct a context review of where you’re working: take the time to include ‘intersectionality’, as well as a mix of identity markers (e.g. gender, ethnicity, age), as keywords in your literature search, so that you build a holistic, contextualised, and multi-layered picture of the project or programme you’re evaluating.

Second, ask local communities, those with lived experience of the issue or theme you are exploring, and/or feminist, activist, grassroots and local experts to guide you on gaps you might be missing in your methodology or approach, to question your own assumptions, and, ideally, to lead and/or facilitate aspects of the research.

The best-case scenario for an intersectional evaluator is to learn from diverse groups and those with lived experience of the issue or theme you aim to explore, to share the evaluative process with them, and to make it as accessible as you can. Plus, if your timeframe and budget allow, have members of these groups at the table, leading the design, collection, analysis, and dissemination. In short, take a foundational approach by building on people’s lived experience from the very beginning.

In terms of qualitative research, this might involve focus groups, sessions and meetings with lived-experience stakeholders, or interviews – but with questions that allow for intersectional exploration, which means asking specifically and sensitively about the experiences of people from different identity groups.

However you go about collecting data, one way to be intersectional is to disaggregate your data by relevant identity markers (depending on the context and the stakeholders you are working with). Let’s take surveys. If you send out a survey asking for opinions on and experiences of a programme’s impact without disaggregating the data, you may be unable to identify intersecting discrimination, or the stories that sit at those intersections, because you are focusing on singular aspects of identity. You risk missing, ignoring, or devaluing people’s experiences.
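As a minimal sketch of what this can look like in practice – assuming a hypothetical survey dataset with columns such as gender, disability, and a programme rating (none of which come from the original post) – disaggregation in Python/pandas might look like this:

```python
import pandas as pd

# Hypothetical, illustrative survey responses only.
responses = pd.DataFrame({
    "gender":           ["woman", "woman", "man", "man", "woman", "non-binary"],
    "disability":       ["yes",   "no",    "no",  "yes", "yes",   "no"],
    "programme_helped": [2, 5, 4, 3, 1, 4],  # 1 = not at all, 5 = very much
})

# The headline figure hides variation between groups.
overall = responses["programme_helped"].mean()

# Disaggregating by intersecting identity markers surfaces that variation.
by_intersection = (
    responses
    .groupby(["gender", "disability"])["programme_helped"]
    .agg(["count", "mean"])
)

print(f"Overall average: {overall:.1f}")
print(by_intersection)
```

The overall average can look healthy while the figures at a particular intersection (for example, disabled women) tell a very different story – which is exactly what aggregated reporting can hide.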

After all, intersectionality is a commitment to making sure those who are most marginalised are front and centre.

It’s for everyone!

Intersectionality is for everyone – not only your stakeholders. Evaluators, researchers, NGOs and other organisations also have a lot to gain. Adopting an intersectional lens is like opening doors into lots of worlds; worlds that allow for a more holistic, richer and more nuanced evaluation of impact. Alongside your board members, funders, partners, and staff, it also holds you accountable to your genuine stakeholders.

An intersectional lens can help lived-experience stakeholders and their supporting organisations flourish, providing a breath of fresh air to theoretical and methodological framing. Ultimately, it can improve your evaluation work, making sure the positive outcomes and impacts you hope to achieve are truly beneficial to all.

Want more information?

Check out these resources on intersectionality and research

Intersectional Approaches to Research

Intersectional Approach to Data

Intersectionality: A Tool for Gender and Economic Justice

bell hooks: Feminism is for everybody

Towards gender transformative change

Be sure to read our blog: Top tips for getting the most out of that evaluation report


Rights Evaluation Studio provides a range of services including project design, strategy, monitoring, evaluation and impact assessment. Please get in touch if you would like to discuss how we can help you to review, update or develop monitoring and evaluation systems that work for your organisation.

Top tips for getting the most out of that evaluation report

So, you have just completed an external evaluation process report – what now?

by Patrick Regan

To help ensure an external evaluation is not just another dusty report in a filing cabinet, there are a few steps organisations can take to get the most out of it. You may not need to follow all the suggestions in this blog, but it is worth thinking about which might be of value for your organisational context and learning goals.

An evaluator’s recommendations should not be seen as something the organisation is obliged to implement or agree with – they should serve as a jumping-off point for further reflection and for considering whether the proposed recommendations are feasible or useful.

In this way, the independent perspective and expertise of the evaluator can be paired with the in-depth knowledge and experience of the team implementing the organisation’s work – and hopefully, key learnings are extracted, and recommendations are meaningfully engaged with.


1. Communicate Findings to Stakeholders

Many stakeholders will have given time and energy to participate in the evaluation process, so it’s best practice to make sure findings are communicated back to them.

Not only does sharing findings and learnings help to show that you value their contribution and engagement, it also gives them the chance to object to findings relating to any impact claimed, and to learn from the findings themselves, making the process more of a conversation and exchange than an extraction. Plus, it makes them more likely to participate in future evaluations – a win-win.

Communicating these findings could be in the form of a blog, email, video, or even a meeting where stakeholders are present and can exchange their perspectives. 

2. Reflect and Respond

Review the proposed recommendations and organise them, for example into:

  1. Recommendations you support
  2. Recommendations you do not support
  3. Recommendations you would like to implement but do not currently have the resources/capacity to
  4. Recommendations you support and can realistically implement

This process might also lead to additional ideas or internal recommendations which might respond to the evaluation findings in different ways. 

3. Make an Action Plan

Once you have identified the recommendations you plan to implement, it can be useful to create a brief action plan which clearly identifies who is responsible for implementing each recommendation, and a timeline to implement the recommendation. This action plan (and updates on it) can be used as a key paper for your staff team, senior management or board members to review at meetings to ensure accountability of implementation.  

See this useful template and guide

4. Develop a Management Response

The main purpose of drafting a ‘Management Response’ is to create a formal internal document that helps contextualise the findings and recommendations should someone look back at the report in years to come.

It is common practice to develop a short (1-2 page) management response to an external evaluation summarising the key learnings that are of most interest and significance for your organisation. It should also highlight any evaluation findings which you think are ill-informed or not truly reflective of the programme evaluated.  

Finally, your response should document your intended next steps in response to the evaluation (i.e. which recommendations you will implement, which you would like to implement but cannot, and which you do not deem suitable to implement).

You might also choose to document new evaluation questions which arise as a result of these findings, or areas you would like to know more about, as this will help to frame and inform future evaluation exercises and help future external evaluators understand how they can add the most value.

A management response is particularly useful for evaluations which will be shared with existing or potential donors, so that the organisation’s perspective on the findings is documented and their intention to learn from the findings is shared.  


Remember, the idea is to make sure your report gets shared and meaningfully engaged with – but most of all, to find a balance that works for you between reflection, objection, and action when considering your recommendations.

Be sure to read our Blog: Accountability in Evaluation – Accountable to Who?  



Accountability in evaluation – accountable to who?

Being accountable, and being held responsible for the results of our projects and activities, is one of the key pillars that frame monitoring and evaluation practice. The growing and varied acronyms for evaluation professionals sometimes include an ‘A’ for Accountability, e.g. a MEAL manager. Before you get hungry, MEAL stands for Monitoring, Evaluation, Accountability and Learning.

As a consultant focusing on the evaluation of human rights programmes, I find the concept of accountability harmonious with the lexicon of holding governments and state actors accountable for their human rights violations. Promoting accountability is surely a good thing, and having a dedicated role focused on accountability sounds like a good thing for an organisation to have…right? Although conceptually this makes sense, this brief article highlights some of the risks organisations incur when accountability processes and structures are not carefully thought through.

Organisations should be asking themselves, “Who are we holding ourselves accountable to?” Many organisations will prioritise accountability to their donors, and this is where accountability could end up doing more harm than good – especially if you then structure monitoring, evaluation and learning plans which prioritise donor accountability. With a donor-first approach, you risk ending up with problematic indicators of success and diverting your attention away from what meaningful impact looks like for the people you are trying to support and the systems you are trying to influence.

This approach can then have a knock-on effect on the perceived value of monitoring and evaluation across the organisation – project delivery staff view the process as donor box-ticking instead of an opportunity to learn, improve and maximise the impact of their work. This in turn makes it harder to engage staff and stakeholders in the evaluation process, potentially limiting the quality (and quantity) of the data you have. In the end, a donor-centred approach to accountability and MEL is unlikely to succeed.

This situation can be exacerbated for human rights organisations where the operating environments can be incredibly complex, change can be hard to observe and the impact can, at times, be more abstract, resulting in an approach to learning and accountability made up of vanity metrics and meaningless indicators which do not provide any real opportunity to learn or gain insight.  

We should also be looking at accountability from the perspective of structural power imbalances. Most human rights funding comes from wealthier governments, family trusts and foundations, and individual philanthropists, i.e. groups who already have significant influence in the world. Therefore, is it ethical for us to prioritise accountability to them – does doing so perpetuate further inequalities? And do donors even want organisations to do this?

So who should we be holding ourselves accountable to?   

This question can lead to a lot of (potentially useful) debate within organisations. If we are working in the public interest and seeking to advance the rights, protections and lives of others, should we not be prioritising accountability to those we claim to be acting in the interests of?

I would encourage organisations that are serious about improving their accountability, and that are looking to build a stronger internal culture of monitoring, evaluation and learning, to focus on holding themselves accountable to the groups, communities, individuals and organisations they are seeking to support. Rather than asking yourselves, “Did we deliver the activities and results promised to our donors?” start by asking:

  • What will results and impact look like for those we are claiming to act in the interests of? 
  • What would realistic and meaningful outcomes be for these stakeholders (and how much variation is there between these groups and individuals)? 
  • From the perspectives of these different groups, what would be a meaningful indicator or progress marker that these results are materialising? 
  • Are there ways we can involve these groups, or representatives from these groups, in the collection, analysis or interpretation of evaluation data? 

Developing a monitoring, evaluation and accountability framework from this starting point could make the entire process more meaningful, learning oriented and potentially promote more significant results.  

Organisations should also be motivated to hold themselves accountable to themselves. Organisations might want to consider:   

  • What information do we need as an organisation to know that we are pushing things in the right direction?  
  • How will we know things are going to plan?  
  • Are we satisfied with how effective and efficient our activities are?  
  • How are we responding to challenges and changes? 
  • What is the unique role of our activities and programmes in the wider eco-system and how important is our contribution?  

Holding ourselves accountable to those we are seeking to support, and being accountable to ourselves, are likely to be harmonious. If accountability is happening at these two levels, and your MEL systems are structured accordingly, you should be well-positioned to propose a MEL approach to your donors that holds you accountable to them in the same way – meaning donor accountability does not become the driving force but follows naturally from your own accountability practices. In my personal experience, the majority of donors I have worked with are adaptable, flexible and willing to support your own MEL structures and strategies if they can see that time and thought have gone into doing these things appropriately, and that they are working.

Taking action  

In summary, I advocate for and encourage organisations to prioritise holding themselves accountable to the groups whose interests they claim to represent, closely followed by internal organisational accountability – a simple recommendation, which is more complex to realise than it is to suggest. But putting the time into this process could help you to shift the axis of power and to generate more useful and meaningful insights and learnings, which then flow effortlessly into your donor accountability structures.

Some simple first steps in this process could be: 

  • Conduct a light touch accountability assessment, mapping your organisation’s accountability structures and processes, identifying the strengths and weaknesses of your current approach, and understanding who or what is being prioritised and why.  
  • Engage with individuals and groups that you are supporting and who are the target of your projects and activities – find out what success would mean to them and what results would be meaningful for them. Consider whether your conceptions of “impact” are coherent with theirs. 
  • Consider the extent to which you are involving these individuals and groups in the design of data collection tools, the collection of data, analysis, interpretation and sharing of findings. Is there room for greater engagement and involvement in these processes?    
  • Consider what mechanisms you have in place within your organisation to hold yourselves accountable at a result or outcome level. How are you monitoring your outcomes, and who is involved in using, interpreting, and responding to outcome monitoring information and data? 
  • Consider who is responsible for MEAL in your organisation – if you don’t have a dedicated individual, do these tasks end up with your fundraising team? If so, how is this affecting your organisation’s approach to accountability? 
  • Find out what data, information and results are most interesting and useful to your donors, and how much flexibility there is to explore different approaches or to reframe and reorganise your indicators and milestones to prioritise accountability to target groups – you might be surprised with their flexibility and openness to experiment.   

The accountability debate of course has many layers and complexities, and these are just a few thoughts and reflections based on my personal and professional experiences of accountability and evaluation.  


Looking back and looking forward

It has been over a year and a half since I officially launched the Rights Evaluation Studio – and lo and behold, it is only now (whilst on a plane, with no other distractions) that I have finally found time to bring our blog space to life. I am constantly encouraging the organisations I work with to make time and space to reflect on their work, achievements and challenges, and I felt like it was time I did the same.

The last 18 months have been unpredictable, exciting, and challenging to say the least (thanks for that, pandemic). The pandemic brought both challenges and opportunities, and the experiences, learnings and people I have met during this time have filled me with inspiration. Whilst I have missed the opportunity to meet NGO teams, colleagues, activists, researchers, experts and human rights defenders in person due to the travel and distancing restrictions, I have been able to meet and reflect digitally with some very interesting people over the past year. I have spoken with litigators and activists engaging in cutting-edge human rights investigations, litigation and implementation work; journalists looking to shift the negative narratives and stereotypes affecting migration and minority groups; court officials and diplomats; judges engaging in sentencing reform processes in Uganda; Roma rights activists seeking to document unconscious bias; and, last but not least, dozens of committed and passionate NGO workers tackling today’s human rights challenges on national and global scales – these meetings are a constant source of drive and motivation.

I’ve also been fortunate to be able to explore new ideas, methodologies and processes for tackling that forever unanswered impact measurement question (sorry, no magic bullet yet!); and have worked to bring the perspectives and voices of people with lived experience of human rights violations to the forefront of my evaluation work, which, without exception, continued to yield the most insightful evaluation data and perspectives for every evaluation I was involved in.

Personally, the year has been just as busy. In March 2021, I attended my graduation after securing a distinction in my master’s in Social Research and Law/Human Rights. I spent months scraping data from court websites to build a dataset for a quantitative analysis exploring the relationship between judgment implementation and the rate of human rights violations lodged before the European Court of Human Rights – and a writing retreat in Portugal, taken to give me time to focus on my thesis, resulted in me moving there! (And yes, for some reason it seemed sensible to study for my master’s, start a consultancy business and move countries at the same time – all with the backdrop of a pandemic to make it that little bit more complicated.)


Having had some time to decompress, it is with a fresh breath of life that RES will continue to develop its offering to help support NGOs to do what they do best! Over the next year we want to continue to expand our services, provide resources and blog content, and develop more creative methodologies for evaluation in human rights – so watch this space!

To help bring RES into the next phase, I am also very pleased to announce that RES’s first official staff member (excluding me!) started this month – Hugh Atkinson, our Research and Evaluation Associate, will be helping to increase our capacity, working across a range of our evaluation projects. You can read more about Hugh here.  


Finally, I would like to thank all of the wonderful individuals and organisations that I have gotten to work with over the past 18 months – thank you for your time, openness, expertise and hard work. It is a privilege to get to spend my days learning from each and every one of you!

Do you have an impact measurement challenge you’d like us to help tackle? An evaluation challenge you’ve been trying to overcome? Let us know – we are always interested in understanding these challenges better and helping to find creative solutions to them. You can reach us on admin[at]rightsevaluation.studio

Tackling the impact measurement challenge in strategic litigation

Over the last year or so, I’ve been working with the Digital Freedom Fund to develop and pilot a new framework for impact and outcome assessment for strategic litigation and digital rights.

You can check out my blog on their website which describes the process of developing the framework and our first steps in bringing it to life: https://digitalfreedomfund.org/tackling-the-impact-measurement-challenge/.

The framework is rooted in a methodology called “Outcomes Harvesting”. Using existing tools and methods and adapting them to a human rights context can help us to develop more meaningful approaches that allow the human rights field to better understand and demonstrate its impact. Check out the blog for further information, or feel free to reach out directly if you would like to know more about this framework, or to discuss how the Rights Evaluation Studio can help your organisation to measure results and impact.