Featured

Media Monitoring for Monitoring and Evaluation

By Hugh Atkinson

One of the many avenues through which NGOs and human rights organisations seek to have an impact is direct public communication, or distributing material to actors who can share their work with a wider audience or strategically drive change. 

This process of external relations and distribution is often long and complicated, and there are many factors to consider, such as balancing the interests of the stakeholders involved in the work, verifying material for public communication, and having legal teams review material prior to publication. So, what happens after a report, story, or video has been released to the public? 

As evaluators, it is essential to consider how publications and communications are being consumed, who is consuming them, and what actions, if any, are taking place as a result of the work, and to identify key learnings that can inform future distribution. 

These recommendations can serve as a useful starting point for any organisation looking to monitor media reports, articles, or publications that they have released, as well as work that they have contributed to or published through a third party. A lot of media monitoring can be conducted using free, open-source investigation tools; however, there is also a vast range of advanced social listening and analytics platforms on the market that can be tailored to more specific needs. 
 

1) Understanding your objectives 

Before starting your media monitoring and analysis, it is essential to establish exactly what you or your organisation is trying to achieve through publication/distribution. Examples could include raising public awareness of censorship, or putting pressure on a government to address human rights violations in their country. Having established what the key objectives are for distribution, you can then start collecting evidence of whether these objectives have been achieved. There is no catch-all method for media monitoring, and the approach taken will be informed largely by the specific objectives for each instance of distribution or publication. 

2) Develop your indicators 

Once you have established your objectives, it is important to develop clear and measurable indicators to help you understand whether the objectives are being met. These indicators could relate to the reach of a report for example, in which case you might set out indicators based on views, number of page visits, or readership. Another example could be audience engagement, in which case you might develop indicators relating to number of comments on an article, likes, or shares. 
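As a simple illustration of how such indicators might be tracked once the raw counts have been collected, here is a short sketch in Python (all figures and the metric names are hypothetical):

```python
# Hypothetical weekly metrics for one article, collected by hand
# or exported from an analytics tool.
metrics = {"page_visits": 4200, "comments": 85, "likes": 310, "shares": 120}

# Engagement rate: interactions (comments, likes, shares)
# as a share of page visits.
interactions = metrics["comments"] + metrics["likes"] + metrics["shares"]
engagement_rate = interactions / metrics["page_visits"]

print(f"Engagement rate: {engagement_rate:.1%}")
```

Tracking the same small set of figures week by week, against the indicators you defined, is usually more informative than any single snapshot.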

3) Establish parameters 

It is vital to establish a structure or set of parameters that your media monitoring will be built on. Media monitoring and evaluation can be an endless task (there are always more articles, more tweets and more shares which could be of interest), and establishing these boundaries helps to make the process manageable. This includes deciding on the types of media platforms that will be monitored, whether you are looking for qualitative or quantitative insights (or both), and how long you wish to monitor for.  

For example, if an organisation is looking to understand how the UK public reacted to a recent documentary that it helped produce, the monitoring parameters might look something like this: 

Monitoring period: 3 weeks following the release date 
Platforms: Twitter, Facebook, Reddit, blogs 
Data types: 
  • Quantitative – viewership figures, breakdowns for different demographics 
  • Qualitative – sentiment analysis of social media posts, prominent themes of discussion emerging on social media, main topics of debate 

4) Build a toolkit 

When you come round to conducting your media monitoring, it is important to have a strong sense of the tools you can use to collect and analyse key data. This could be a combination of social media tracking tools such as TweetDeck or CrowdTangle, data visualisation platforms like NodeXL, and media alert systems like Talkwalker or Google Alerts. Building a suite of tools will help address the different needs that you or your organisation might have for monitoring media content and help to identify useful learnings that can be implemented in your work. Best of all, these tools are all free (or at least have free versions) and are openly accessible to any individual or organisation that wishes to use them. 
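To make this kind of monitoring concrete, here is a minimal sketch in Python, using only the standard library, that counts keyword mentions in the titles of an Atom feed such as the feed export a Google Alerts query can provide. The sample feed and keywords below are made up purely for illustration:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def count_mentions(feed_xml: str, keywords: list[str]) -> Counter:
    """Count how often each keyword appears in the entry titles
    of an Atom feed (e.g. one exported by Google Alerts)."""
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(feed_xml)
    counts = Counter()
    for entry in root.findall("atom:entry", ns):
        title = entry.findtext("atom:title", default="", namespaces=ns).lower()
        for kw in keywords:
            if kw.lower() in title:
                counts[kw] += 1
    return counts

# A tiny, made-up feed standing in for a real alerts export.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>New report on censorship released</title></entry>
  <entry><title>Documentary sparks censorship debate</title></entry>
  <entry><title>Unrelated news item</title></entry>
</feed>"""

print(count_mentions(sample, ["censorship", "documentary"]))
```

In practice you would fetch the feed from the alert URL on a schedule and log the counts over your monitoring period; dedicated platforms automate exactly this kind of tallying at scale.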

5) Emphasise learning 

It’s all well and good tracking the reach of and reaction to media articles or publications, but understanding what this means is key. Consider how the insights gathered from media monitoring can be used to improve or adapt your distribution strategy, or to improve future publications of a similar nature or theme. 

Whilst you might only be able to scratch the surface of the online or media discussion, media monitoring data provides an opportunity for reflection on who you are reaching and how. 


Intersectionality for Evaluation Professionals

by Sophie Nicholas

In this blog post, we hope to provide some food for thought on how to incorporate intersectionality more as a research or evaluation professional in the non-profit sector. But before we dive into how to do intersectionality better, it’s important to understand what intersectionality is – and isn’t.

What is intersectionality?

Some think of intersectionality as a methodology for research; a theoretical lens; or a way of thinking about the world.

One explanation of intersectionality comes from Goodley, author of ‘Disentangling Critical Disability Studies’, who describes intersectionality not simply as bringing together identity markers like disability, race, sex, age, and economic position, but as considering how different identities ‘support or unsettle the constitution of the other’.

Let’s take a common social bias: that disabled women are asexual or un-feminine. Arguably, this bias reaffirms the existence of rigid and sexist ideas of femininity and the female body as women who aren’t disabled are seen as an archetype for the female body (sexism). This bias simultaneously denies disabled women their freedom to be feminine, and to express themselves sexually in ways they value (ableism). And, this is just one of many examples of how intersectional identities experience distinct and multi-layered forms of discrimination and bias.

Crenshaw gives her seminal analogy of intersectionality, stemming from the Black feminist movement, asking readers to imagine traffic at an intersection. She explains how discrimination is like traffic, giving the example of a Black woman harmed both for being Black and for being a woman. Her injuries due to discrimination (the traffic in Crenshaw’s analogy) could result from discrimination based on sex, on race, or both.

In evaluation and research, what an intersectional approach tends to have in common, no matter how defined, is at least some of the following considerations:

1. Considering bias – within society, as an individual, and as part of a group 

(In what ways might I be biased as a researcher with X background/identity/language/opinions?)

2. Questioning how social identities and structures interact and change each other 

(Has X person experienced this programme activity in the same way as Y person? Have they experienced the same barriers?)

3. Challenging dominant forms of thought as natural truths 

(Does X language or X approach actually benefit everyone? Why is this belief held?)

4. Questioning power dynamics and structural inequities

(How might my evaluation processes be reinforcing unfair power dynamics or structural inequities? E.g. my survey is in English in a context where English is not the first language)

5. Embracing complex narratives and being open to multi-faceted experiences

(How can I share my findings in a thematic way whilst making sure I illustrate unique opinions of underrepresented individuals?)

What it’s not

Hopefully you have a clearer understanding of what intersectionality is and what it can mean for people – but it’s also important to be clear on what intersectionality isn’t to avoid harm coming to any stakeholders during your evaluation work.

First, taking an intersectional approach shouldn’t be boiled down to labelling, or creating an environment of competition over aspects of identity. This kind of discourse is best avoided to prevent a competitive narrative and feelings of shame, guilt, and/or exclusion among whoever it is you’re conducting research or evaluation with.

Second, taking an intersectional lens in your work shouldn’t be a tick-box exercise. If in doubt about whether you are implementing it in a meaningful way, don’t go it alone. Read up on, and even consult with, intersectional feminists, activists, writers, and thinkers, and learn how to sincerely and meaningfully implement this lens in your evaluation process.

What it means to consider intersectionality in evaluation

To consider contextual intersections deeply, and to actually adapt research or evaluative processes to make sure your target demographics and communities are represented, heard, and meaningfully involved – that is to be intersectional. So, at the beginning of any evaluative process, making sure a diversity of identities is at the table is crucial to your methodology and planning.

One of the most important steps in this sense is to conduct a context review of where you’re working, taking the time to include ‘intersectionality’ as well as a mix of identity markers (e.g. gender, ethnicity, age) as keywords in your literature search, and making sure you have a holistic, contextualised, and multi-layered picture of the project or programme you’re evaluating. 

Second, ask local communities, those with lived experience of the issue or theme you are exploring, and/or feminist, activist, grassroots, and local experts to point out gaps in your methodology or approach, to question your own assumptions, and ideally, to lead and/or facilitate aspects of the research.

The best-case scenario for an intersectional evaluator would be to learn from, share with, and, to the best of your ability, make the evaluative process accessible to diverse groups and those with lived experience of the issue or theme you aim to explore. Plus, if your timeframe and budget allow, having members of this group at the table, leading the process of design, collection, analysis, and dissemination. In short, taking a foundational approach by building on people’s lived experience from the very beginning.

In terms of qualitative research, this might involve focus groups, sessions and meetings with lived-experience stakeholders, or interviews – but with questions that allow for intersectional exploration, which means asking specifically and sensitively about the experiences of those from different identity groups.

However you go about collecting data, one way to be intersectional is to disaggregate data by relevant identity markers (depending on the context and stakeholders you are working with). Let’s take surveys. If you send out a survey asking for opinions and experiences of a programme’s impact without disaggregating the data, you may be unable to identify intersecting discrimination or the stories that sit at those intersections, as a result of focusing on singular aspects of identity. You risk missing, ignoring, or devaluing people’s experiences. 
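As a sketch of what disaggregation can look like in practice – here using pandas with entirely made-up survey responses, and identity markers chosen purely for illustration:

```python
import pandas as pd

# Hypothetical survey responses: a satisfaction score (1-5) plus two
# identity markers chosen as relevant to the context.
responses = pd.DataFrame({
    "gender":     ["woman", "woman", "man", "man", "woman", "man"],
    "disability": ["yes",   "no",    "no",  "yes", "yes",   "no"],
    "score":      [2, 4, 4, 3, 1, 5],
})

# A single aggregate figure hides differences between groups...
overall = responses["score"].mean()

# ...while disaggregating by intersecting identity markers can surface
# experiences that sit at those intersections (here, disabled women
# report markedly lower satisfaction than the overall mean suggests).
by_group = responses.groupby(["gender", "disability"])["score"].mean()

print(f"Overall mean: {overall:.2f}")
print(by_group)
```

Quantitative disaggregation like this is only a starting point; the stories behind the numbers still need qualitative follow-up with the groups concerned.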

After all, intersectionality means committing to making sure those who are most marginalised are front and centre.

It’s for everyone!

Intersectionality is for everyone – not only your stakeholders. Evaluators, researchers, NGOs and other organisations also have a lot to gain. Adopting an intersectional lens is like opening doors into lots of worlds; worlds that allow for a more holistic, richer and more nuanced evaluation of impact. Alongside your board members, funders, partners, and staff, it also holds you accountable to your genuine stakeholders.

An intersectional lens can help lived-experience stakeholders and their supporting organisations flourish, providing a breath of fresh air to theoretical and methodological framing. Ultimately, it can better your evaluation work, making sure the positive outcomes and impacts you hope to achieve are truly beneficial to all. 

Want more information?

Check out these resources on intersectionality and research

Intersectional Approaches to Research

Intersectional Approach to Data

Intersectionality: A Tool for Gender and Economic Justice

bell hooks: Feminism is for everybody

Towards gender transformative change

Be sure to read our blog: Top tips for getting the most out of that evaluation report


Rights Evaluation Studio provides a range of services including project design, strategy, monitoring, evaluation and impact assessment. Please get in touch if you would like to discuss how we can help you to review, update or develop monitoring and evaluation systems that work for your organisation.

Struggling to know where to start with impact assessment?

Check-out our new guide for impact assessment!

Last year we had the pleasure of working with the Green European Foundation (GEF) to provide an introductory training on impact assessment to the GEF team, their network and the Bosch Alumni Network.

As part of the project, we developed a resource that participants could use to help them think through the different steps and decisions when developing a plan for impact assessment. GEF have kindly agreed for this resource to be made public so that anyone who could benefit from it can have access to it.

The guide covers essential introductory information, such as:

  • basic definitions used in monitoring, evaluation, learning and impact assessment;
  • the role of theory of change in evaluation
  • good practices and standards in evidence collection

The resource then provides a step-by-step guide to developing an impact framework – presenting you with all the key considerations and decisions you need to make along the way, as well as an overview of ethical considerations, alternative approaches to evaluation and signposting to some valuable resources and materials.

So, if you have been struggling to think about how to get started with impact assessment, check out this resource.

Rights Evaluation Studio provides consultancy services in monitoring, evaluation and impact assessment – so if you think we can help you and your organisation to learn from your work and better understand your impacts please contact us at admin@rightsevaluation.studio

What a year…

We want to celebrate the year 2022 by marking our achievements, milestones, and partnerships.

We believe in sharing our successes to reflect on what we have achieved, but also to recognise all those who have been involved and take this opportunity to say a big thank you!

Next year, we will continue to help measure, demonstrate and improve the results and impact of human rights projects and programmes, and help organisations make better informed decisions for more meaningful and sustainable impact.

Top tips for getting the most out of that evaluation report

So, you have just completed an external evaluation process and received the report – what now?

by Patrick Regan

To help ensure an external evaluation is not just another dusty report in a filing cabinet, there are a few steps organisations can take to get the most out of their external evaluation reports. You may not need to follow all the suggestions in this blog, but it is worth thinking about which might be of value for your organisational context and learning goals. 

An evaluator’s recommendations should not be seen as something the organisation is obliged to implement or agree with – they should serve as a jumping-off point for further reflection and for considering whether the proposed recommendations are feasible or useful. 

In this way, the independent perspective and expertise of the evaluator can be paired with the in-depth knowledge and experience of the team implementing the organisation’s work – and hopefully, key learnings are extracted, and recommendations are meaningfully engaged with.


1. Communicate Findings to Stakeholders

Many stakeholders will have taken time and energy to participate in the evaluation process, so it’s best practice to make sure findings are communicated back to them. 

Not only does sharing findings and learnings help to show that you value their contribution and engagement, but it gives them the chance to object to findings which relate to any impact claimed, and to learn from the findings themselves, making the process more of a conversation and exchange than an extraction. Plus, it makes them more likely to participate in future evaluations – a win-win. 

These findings could be communicated in the form of a blog post, email, video, or even a meeting where stakeholders are present and can exchange their perspectives. 

2. Reflect and Respond

Review the recommendations proposed and organise them, for example into: 

  1. Recommendations you support 
  2. Recommendations you do not support 
  3. Recommendations you would like to implement but do not currently have the resources/capacity to 
  4. Recommendations you support and can realistically implement 

This process might also lead to additional ideas or internal recommendations which might respond to the evaluation findings in different ways. 

3. Make an Action Plan

Once you have identified the recommendations you plan to implement, it can be useful to create a brief action plan which clearly identifies who is responsible for implementing each recommendation, and a timeline to implement the recommendation. This action plan (and updates on it) can be used as a key paper for your staff team, senior management or board members to review at meetings to ensure accountability of implementation.  

See this useful template and guide

4. Develop a Management Response

The main purpose of drafting a ‘Management Response’ is to create a formal internal document that helps contextualise the findings and recommendations, should someone be looking back at the report in years to come. 

It is common practice to develop a short (1-2 page) management response to an external evaluation summarising the key learnings that are of most interest and significance for your organisation. It should also highlight any evaluation findings which you think are ill-informed or not truly reflective of the programme evaluated.  

Finally, your response should document your intended next steps in response to the evaluation (i.e., which recommendations you will implement, which you would like to implement but cannot, and which you do not deem suitable to implement). 

You might also choose to document new evaluation questions which arise as a result of these findings, or areas you would like to know more about, as this will help to frame and inform future evaluation exercises and help future external evaluators understand how they can add the most value. 

A management response is particularly useful for evaluations which will be shared with existing or potential donors, so that the organisation’s perspective on the findings is documented and their intention to learn from the findings is shared.  


Remember, the idea is to make sure your report gets shared and meaningfully engaged with – but most of all, to find a balance that works for you between reflection, objection, and action when considering your recommendations. 

Be sure to read our Blog: Accountability in Evaluation – Accountable to Who?  

