Media Monitoring for Monitoring and Evaluation

By Hugh Atkinson

One of the many avenues through which NGOs and human rights organisations seek to have an impact is through direct public communications, or by distributing material to actors who can share their work with a wider audience or strategically drive change.  

This process of external relations and distribution is often long and complicated, and there are many factors to consider, such as balancing the interests of the stakeholders involved in the work, verifying material for public communications, and having legal teams review material prior to publication. So, what happens after a report, story, or video has been released to the public? 

As evaluators, it is essential to consider how publications and communications are being consumed, who is consuming them, and what actions, if any, are taking place as a result of the work, and to identify key learnings that can inform future distribution. 

These recommendations can serve as a useful starting point for any organisation looking to monitor media reports, articles, or publications that they have released, as well as work they have contributed to or published through a third party. A lot of media monitoring can be conducted using free, open-source investigation tools; however, there is also a vast range of advanced social listening and analytics platforms on the market that can be tailored to more specific needs. 

1) Understanding your objectives 

Before starting your media monitoring and analysis, it is essential to establish exactly what you or your organisation is trying to achieve through publication or distribution. Examples could include raising public awareness of censorship, or putting pressure on a government to address human rights violations in its country. Having established the key objectives for distribution, you can then start collecting evidence of whether these objectives have been achieved. There is no catch-all method for media monitoring, and the approach taken will be informed largely by the specific objectives of each instance of distribution or publication. 

2) Develop your indicators 

Once you have established your objectives, it is important to develop clear and measurable indicators to help you understand whether the objectives are being met. These indicators could relate to the reach of a report, for example, in which case you might set out indicators based on views, page visits, or readership. Another example could be audience engagement, in which case you might develop indicators relating to the number of comments on an article, likes, or shares. 
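To illustrate, indicators like these can be tallied from whatever raw engagement data your platforms or tools export. The sketch below is a minimal Python example; the field names (`views`, `comments`, `likes`, `shares`) are assumptions to be adapted to what your chosen platforms actually report.

```python
# A minimal sketch of turning raw engagement data into indicator values.
# Field names are illustrative -- adapt them to what your tools export.

def summarise_engagement(posts):
    """Aggregate simple reach and engagement indicators from a list of posts."""
    totals = {"views": 0, "comments": 0, "likes": 0, "shares": 0}
    for post in posts:
        for key in totals:
            totals[key] += post.get(key, 0)
    # Engagement rate: interactions per view, a common composite indicator.
    interactions = totals["comments"] + totals["likes"] + totals["shares"]
    totals["engagement_rate"] = (
        round(interactions / totals["views"], 3) if totals["views"] else 0.0
    )
    return totals

posts = [
    {"views": 1200, "comments": 14, "likes": 90, "shares": 22},
    {"views": 800, "comments": 6, "likes": 35, "shares": 9},
]
print(summarise_engagement(posts))
```

Even a simple aggregation like this gives you consistent numbers to report against each indicator from one monitoring period to the next.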

3) Establish parameters 

It is vital to establish a structure or set of parameters that your media monitoring will be built on. Media monitoring and evaluation can be an endless task (there are always more articles, more tweets and more shares which could be of interest), and establishing these boundaries helps to make the process manageable. This includes deciding on the types of media platforms that will be monitored, whether you are looking for qualitative or quantitative insights (or both), and how long you wish to monitor for.  

For example, if an organisation is looking to understand how the UK public reacted to a recent documentary that it helped produce, the monitoring parameters might look something like this: 

Monitoring period: 3 weeks following the release date 
Platforms: Twitter, Facebook, Reddit, blogs 
Data types: Quantitative – viewership figures, numbers for different demographics. Qualitative – sentiment analysis of social media posts, prominent themes of discussion emerging on social media, main topics of debate. 
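Recording these parameters in a structured form, even a short script or a shared spreadsheet, keeps everyone on the team working to the same scope. A minimal sketch, with illustrative field names:

```python
# The example parameters above, kept as a simple structure so the whole
# monitoring team works from the same agreed scope. Field names are
# illustrative, not a fixed schema.
monitoring_plan = {
    "monitoring_period_weeks": 3,
    "period_starts": "release date",
    "platforms": ["Twitter", "Facebook", "Reddit", "blogs"],
    "quantitative": ["viewership figures", "numbers for different demographics"],
    "qualitative": [
        "sentiment analysis of social media posts",
        "prominent themes of discussion",
        "main topics of debate",
    ],
}

print(monitoring_plan["platforms"])
```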

4) Build a toolkit 

When you come round to conducting your media monitoring, it is important to have a strong sense of the tools you can use to both collect and analyse key data. This could be a combination of social media tracking tools such as TweetDeck or CrowdTangle, data visualisation platforms like NodeXL, and media alert systems like Talkwalker or Google Alerts. Building a suite of tools will help address the different needs that you or your organisation might have for monitoring media content and help to identify useful learnings that can be implemented in your work. Best of all, these tools are all free (or at least offer free versions) and are openly accessible to any individual or organisation that wishes to use them.  

5) Emphasise learning 

It’s all well and good tracking the reach of and reaction to media articles or publications, but understanding what this means is key. Consider how the insights gathered from media monitoring can be used to improve or adapt your distribution strategy, or to improve future publications of a similar nature or theme.  

Whilst you might only scratch the surface of the online and media discussion, media monitoring data provides an opportunity to reflect on who you are reaching and how.  


Intersectionality for Evaluation Professionals

by Sophie Nicholas

In this blog post, we hope to provide some food for thought on how to incorporate intersectionality more fully as a research or evaluation professional in the non-profit sector. But before we dive into how to do intersectionality better, it’s important to know and understand what intersectionality is – and isn’t.

What is intersectionality?

Some think of intersectionality as a methodology for research; a theoretical lens; or a way of thinking about the world.

One explanation of intersectionality comes from Goodley, author of ‘Disentangling Critical Disability Studies’, where intersectionality is described not simply as bringing together identity markers like disability, race, sex, age and economic position, but as considering how different identities ‘support or unsettle the constitution of the other’.

Let’s take a common social bias: that disabled women are asexual or un-feminine. Arguably, this bias reaffirms the existence of rigid and sexist ideas of femininity and the female body as women who aren’t disabled are seen as an archetype for the female body (sexism). This bias simultaneously denies disabled women their freedom to be feminine, and to express themselves sexually in ways they value (ableism). And, this is just one of many examples of how intersectional identities experience distinct and multi-layered forms of discrimination and bias.

Crenshaw gives her seminal analogy of intersectionality, stemming from the black feminist movement, asking readers to imagine traffic at an intersection. She explains how discrimination is like traffic, giving the example of a black woman harmed for being black, for being a woman, or both. Her injuries due to discrimination (the traffic in Crenshaw’s analogy) could be the result of discrimination based on sex, on race, or on both.

In evaluation and research, what an intersectional approach tends to have in common, no matter how defined, is at least some of the following considerations:

1. Considering bias – within society, as an individual, as part of a group 

(In what ways might I be biased as a researcher with X background/identity/language/opinions?)

2. Questioning how social identities and structures interact and change each other 

(Has X person experienced this programme activity in the same way as Y person? Have they experienced the same barriers?)

3. Challenging dominant forms of thought as natural truths 

(Does X language or X approach actually benefit everyone? Why is this belief held?)

4. Questioning power dynamics and structural inequities

(How might my evaluation processes be reinforcing unfair power dynamics or structural inequities? E.g. my survey is in English in a context where English is not the first language)

5. Embracing complex narratives and being open to multi-faceted experiences

(How can I share my findings in a thematic way whilst making sure I illustrate unique opinions of underrepresented individuals?)

What it’s not

Hopefully you have a clearer understanding of what intersectionality is and what it can mean for people – but it’s also important to be clear on what intersectionality isn’t to avoid harm coming to any stakeholders during your evaluation work.

First, taking an intersectional approach shouldn’t be boiled down to labelling, or to creating an environment of competition over aspects of identity. This kind of discourse is best avoided to prevent a competitive narrative and to prevent feelings of shame, guilt, and/or exclusion among whoever it is you’re conducting research or evaluation with.

Second, taking an intersectional lens in your work shouldn’t be a tick-box exercise. If in doubt about whether you are implementing it in a meaningful way, don’t go it alone. Read up on, and even consult with, intersectional feminists, activists, writers, and thinkers, and learn how to sincerely and meaningfully implement this lens in your evaluation process.

What it means to consider intersectionality in evaluation

To consider contextual intersections deeply, and to actually adapt research or evaluative processes to make sure your target demographics and communities are represented, heard, and meaningfully involved – that is what it means to be intersectional. So, at the beginning of any evaluative process, making sure a diversity of identities is at the table is crucial to your methodology and planning.

One of the most important steps in this sense is to conduct a context review of where you’re working, taking the time to include ‘intersectionality’ as well as a mix of identity markers (e.g. gender, ethnicity, age) as keywords in your reading search, and making sure you have a holistic, contextualised, and multi-layered picture of the project or programme you’re evaluating. 

Second, ask local communities, those with lived experience of the issue or theme you are exploring, and/or feminist, activist, grassroots and local experts to guide you with gaps you might be missing in your methodology or approach, in questioning your own assumptions, and ideally, to lead and/or facilitate aspects of the research.

The best-case scenario for an intersectional evaluator would be to learn from, share with, and make accessible to the best of your ability, the evaluative process with diverse groups and those with lived experience of the issue or theme you aim to explore. Plus, if possible in your timeframe and budget, having members of this group at the table, leading in the process of design, collection, analysis, and dissemination. In short, taking a foundational approach by building on people’s lived experience from the very beginning.

In terms of qualitative research, this might involve focus groups, sessions and meetings with lived-experience stakeholders, or interviews – but with questions that allow for intersectional exploration, which means asking specifically and sensitively about the experiences of those from different identity groups.

However you go about collecting data, to be intersectional, one approach is to disaggregate data by relevant identity markers (depending on the context and stakeholders you are working with). Let’s take surveys. If you send out a survey asking for opinions and experiences of a programme’s impact without disaggregating the data, you may be unable to detect intersecting discrimination, or the stories that sit at these intersections, as a result of focusing on singular aspects of identity. You might risk missing, ignoring, or devaluing people’s experiences. 
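As a minimal sketch of this idea, the snippet below groups hypothetical survey responses by a combination of identity markers rather than one marker at a time. The markers and the `satisfied` field are purely illustrative, not a prescription for what is relevant or safe to collect in your own context.

```python
# A minimal sketch of disaggregating survey responses by intersecting
# identity markers rather than one marker at a time. All fields here are
# hypothetical examples.
from collections import defaultdict

def disaggregate(responses, markers):
    """Group responses by the combination of the given identity markers."""
    groups = defaultdict(list)
    for r in responses:
        key = tuple(r[m] for m in markers)
        groups[key].append(r["satisfied"])
    # Report the share of satisfied respondents per intersecting group.
    return {key: round(sum(vals) / len(vals), 2) for key, vals in groups.items()}

responses = [
    {"gender": "woman", "disability": True, "satisfied": 0},
    {"gender": "woman", "disability": False, "satisfied": 1},
    {"gender": "man", "disability": True, "satisfied": 1},
    {"gender": "woman", "disability": True, "satisfied": 0},
]
# Analysing "gender" or "disability" alone would hide that, in this toy
# dataset, it is specifically disabled women reporting low satisfaction.
print(disaggregate(responses, ["gender", "disability"]))
```

The point of the toy dataset is exactly the one made above: averaged over any single marker the picture looks mixed, but the intersecting group stands out clearly once the data is disaggregated.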

After all, intersectionality is committing to making sure those who are most marginalised are front and centre.

It’s for everyone!

Intersectionality is for everyone – not only your stakeholders. Evaluators, researchers, NGOs and other organisations also have a lot to gain. Adopting an intersectional lens is like opening doors into lots of worlds; worlds that allow for a more holistic, richer and more nuanced evaluation of impact. Alongside your board members, funders, partners, and staff, it also holds you accountable to your genuine stakeholders.

An intersectional lens can help lived-experienced stakeholders and their supporting organisations flourish, providing a breath of fresh air to theoretical and methodological framing. Ultimately, it can better your evaluation work, making sure the positive outcomes and impacts you hope to achieve are truly beneficial to all. 

Want more information?

Check out these resources on intersectionality and research

Intersectional Approaches to Research

Intersectional Approach to Data

Intersectionality: A Tool for Gender and Economic Justice

bell hooks: Feminism is for everybody

Towards gender transformative change

Be sure to read our blog: Top tips for getting the most out of that evaluation report

Rights Evaluation Studio provides a range of services including project design, strategy, monitoring, evaluation and impact assessment. Please get in touch if you would like to discuss how we can help you to review, update or develop monitoring and evaluation systems that work for your organisation.

What a year…

We want to celebrate the year 2022 by marking our achievements, milestones, and partnerships.

We believe in sharing our successes to reflect on what we have achieved, but also to recognise all those who have been involved and take this opportunity to say a big thank you!

Next year, we will continue to help measure, demonstrate and improve the results and impact of human rights projects and programmes, and help organisations make better informed decisions for more meaningful and sustainable impact.

Top tips for getting the most out of that evaluation report

So, you have just completed an external evaluation process report – what now?

by Patrick Regan

To help ensure an external evaluation is not just another dusty report in a filing cabinet, there are a few steps organisations can take to get the most out of their evaluation reports. You may not need to follow all the suggestions in this blog, but it is worth thinking about which might be of value for your organisational context and learning goals.  

An evaluator’s recommendations should not be seen as something the organisation is obliged to implement or agree with – they should serve as a jumping-off point for further reflection and considering if the proposed recommendations are feasible or useful.  

In this way, the independent perspective and expertise of the evaluator can be paired with the in-depth knowledge and experience of the team implementing the organisation’s work – and hopefully, key learnings are extracted, and recommendations are meaningfully engaged with.

1. Communicate Findings to Stakeholders

Many stakeholders will have taken time and energy to participate in the evaluation process, so it’s best practice to make sure findings are communicated back to them.  

Not only does sharing findings and learnings help to show that you value their contribution and engagement, it also gives them the chance to object to findings which relate to any impact claimed, and to learn from the findings themselves, making the process more of a conversation and exchange than an extraction. Plus, it makes them more likely to participate in future evaluations – a win-win.  

Communicating these findings could be in the form of a blog, email, video, or even a meeting where stakeholders are present and can exchange their perspectives. 

2. Reflect and Respond

You should review the recommendations proposed and organise them. For example: 

  1. Recommendations you support  
  2. Recommendations you do not support 
  3. Recommendations you would like to implement but do not currently have the resources or capacity to 
  4. Recommendations you support and can realistically implement 

This process might also lead to additional ideas or internal recommendations which might respond to the evaluation findings in different ways. 

3. Make an Action Plan

Once you have identified the recommendations you plan to implement, it can be useful to create a brief action plan which clearly identifies who is responsible for implementing each recommendation, and a timeline to implement the recommendation. This action plan (and updates on it) can be used as a key paper for your staff team, senior management or board members to review at meetings to ensure accountability of implementation.  

See this useful template and guide
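An action plan like this can also be kept as structured records so that overdue items surface automatically at review meetings. A minimal sketch, where the recommendations, owners, and dates are all hypothetical:

```python
# A minimal sketch of an action plan as structured records, so progress can
# be reviewed at staff, management or board meetings. All entries here are
# hypothetical examples.
from datetime import date

action_plan = [
    {"recommendation": "Translate survey into local languages",
     "owner": "Programmes lead", "due": date(2024, 9, 30), "status": "in progress"},
    {"recommendation": "Share findings summary with participants",
     "owner": "Comms officer", "due": date(2024, 7, 15), "status": "done"},
]

def outstanding(plan, today):
    """List recommendations that are past their deadline and not yet done."""
    return [a["recommendation"] for a in plan
            if a["status"] != "done" and a["due"] < today]

print(outstanding(action_plan, date(2024, 10, 10)))
```

Reviewing the output of a check like this at each meeting is one way to keep accountability for implementation visible over time.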

4. Develop a Management Response

The main purpose of drafting a ‘Management Response’ is to create a formal internal document that helps contextualise the findings and recommendations, should someone look back at the report in years to come.  

It is common practice to develop a short (1-2 page) management response to an external evaluation summarising the key learnings that are of most interest and significance for your organisation. It should also highlight any evaluation findings which you think are ill-informed or not truly reflective of the programme evaluated.  

Finally, your response should document your intended next steps in response to the evaluation (i.e., which recommendations you will implement, which you would like to implement but cannot, and which you do not deem suitable to implement).  

You might also choose to document new evaluation questions which arise as a result of these findings, or areas you would like to know more about, as this will help to frame and inform future evaluation exercises and help future external evaluators understand how they can add the most value.  

A management response is particularly useful for evaluations which will be shared with existing or potential donors, so that the organisation’s perspective on the findings is documented and their intention to learn from the findings is shared.  

Remember, the idea is to make sure your report gets shared and meaningfully engaged with – but most of all, to find a balance that works for you between reflection, objection, and action when considering your recommendations. 

Be sure to read our Blog: Accountability in Evaluation – Accountable to Who?  


Accountability in evaluation – accountable to who?

Being accountable, and being held responsible for the results of our projects and activities, is one of the key pillars that frame monitoring and evaluation practice. The growing and varied acronyms for evaluation professionals sometimes include an ‘A’ for Accountability – e.g. a MEAL manager. Before you get hungry, MEAL stands for Monitoring, Evaluation, Accountability and Learning.  

As a consultant focusing on the evaluation of human rights programmes, I find the concept of accountability harmonious with the lexicon of holding governments and state actors accountable for their human rights violations. Promoting accountability is surely a good thing, and having a dedicated role focused on accountability sounds like a good thing for an organisation to have…right? Although conceptually this makes sense, this brief article highlights some of the risks incurred by organisations when accountability processes and structures are not carefully thought through.  

Organisations should be asking themselves, “Who are we holding ourselves accountable to?” Many organisations will prioritise accountability to their donors, and this is where accountability could end up doing more harm than good – especially if you then structure monitoring, evaluation and learning plans which prioritise donor accountability. With a donor-first approach, you risk ending up with problematic indicators of success and diverting your attention away from what meaningful impact looks like for the people you are trying to support and the systems you are trying to influence. 

This approach can have a knock-on effect on the perceived value of monitoring and evaluation across the organisation – project delivery staff come to view the process as donor box-ticking instead of an opportunity to learn, improve and maximise the impact of their work. This in turn makes it harder to engage staff and stakeholders in the evaluation process, potentially limiting the quality (and quantity) of the data you have. In the end, a donor-centred approach to accountability and MEL is unlikely to succeed.   

This situation can be exacerbated for human rights organisations, where operating environments can be incredibly complex, change can be hard to observe, and impact can at times be more abstract – resulting in an approach to learning and accountability made up of vanity metrics and meaningless indicators which do not provide any real opportunity to learn or gain insight.  

We should also be looking at accountability from the perspective of structural power imbalances. Most human rights funding comes from wealthier governments, family trusts and foundations, and individual philanthropists – i.e. groups who already have significant influence in the world. Therefore, is it ethical for us to prioritise accountability to them – does doing so perpetuate further inequalities? And do donors even want organisations to do this?  

So who should we be holding ourselves accountable to?   

This question can lead to a lot of (potentially useful) debate within organisations. If we are working in the public interest and seeking to advance the rights, protections and lives of others, should we not be prioritising accountability to those we claim to be acting in the interests of?  

I would encourage organisations that are serious about improving their accountability, and looking to build a stronger culture of monitoring, evaluation and learning within their organisations to focus on holding themselves accountable to the groups, communities, individuals and organisations they are seeking to support. Rather than asking yourselves, “Did we deliver the activities and results promised to our donors?” start by asking: 

  • What will results and impact look like for those we are claiming to act in the interests of? 
  • What would realistic and meaningful outcomes be for these stakeholders? (and how much variation is there between these groups and individuals)? 
  • From the perspectives of these different groups, what would be a meaningful indicator or progress marker that these results are materialising? 
  • Are there ways we can involve these groups, or representatives from these groups, in the collection, analysis or interpretation of evaluation data? 

Developing a monitoring, evaluation and accountability framework from this starting point could make the entire process more meaningful, learning oriented and potentially promote more significant results.  

Organisations should also hold themselves accountable to themselves – internal accountability. They might want to consider:   

  • What information do we need as an organisation to know that we are pushing things in the right direction?  
  • How will we know things are going to plan?  
  • Are we satisfied with how effective and efficient our activities are?  
  • How are we responding to challenges and changes? 
  • What is the unique role of our activities and programmes in the wider eco-system and how important is our contribution?  

Holding ourselves accountable to those we are seeking to support, and being accountable to ourselves, are likely to be harmonious. If accountability is happening at these two levels, and your MEL systems are structured accordingly, you should be well-positioned to propose a MEL approach to your donors that holds you accountable to them in the same way – meaning donor accountability does not become a driving force but is second nature to your own accountability practices. In my personal experience, the majority of donors I have worked with are adaptable, flexible and willing to support your own MEL structures and strategies if they can see that time and thought have gone into doing these things appropriately, and that they are working.   

Taking action  

In summary, I advocate for and encourage organisations to prioritise holding themselves accountable to the groups whose interests they claim to represent, closely followed by internal organisational accountability – a simple recommendation, which is more complex to realise than it is to suggest. But putting time into this process could help you shift the axis of power and generate more useful and meaningful insights and learnings, which flow effortlessly into your donor accountability structures.  

Some simple first steps in this process could be: 

  • Conduct a light touch accountability assessment, mapping your organisation’s accountability structures and processes, identifying the strengths and weaknesses of your current approach, and understanding who or what is being prioritised and why.  
  • Engage with individuals and groups that you are supporting and who are the target of your projects and activities – find out what success would mean to them, and what results would be meaningful for them. Consider whether your conceptions of “impact” are coherent with theirs.  
  • Consider the extent to which you are involving these individuals and groups in the design of data collection tools, the collection of data, analysis, interpretation and sharing of findings. Is there room for greater engagement and involvement in these processes?    
  • Consider what mechanisms you have in place within your organisation to hold yourselves accountable at a result or outcome level. How are you monitoring your outcomes, and who is involved in using, interpreting, and responding to outcome monitoring information and data?   
  • Consider who is responsible for MEAL in your organisation – if you don’t have a dedicated individual, do these tasks end up with your fundraising team? If so, how is this affecting your organisation’s approach to accountability? 
  • Find out what data, information and results are most interesting and useful to your donors, and how much flexibility there is to explore different approaches or to reframe and reorganise your indicators and milestones to prioritise accountability to target groups – you might be surprised with their flexibility and openness to experiment.   

The accountability debate of course has many layers and complexities, and these are just a few thoughts and reflections based on my personal and professional experiences of accountability and evaluation.  
