1. Analyse your Data
Data analysis is all about making sense of the information you have collected. Analysing will help you to produce evaluation findings about your service. It involves examining your data to find themes and patterns within it and drawing conclusions about what it is telling you.
Through data analysis, you can take the ‘raw data’ you have collected and turn it into clear evaluation findings. Your findings will describe the dataset you have collected and set out how you have interpreted it. Ideally you would only collect useful data and analyse all of the dataset; sometimes, to keep things manageable, you may decide not to analyse everything.
Use the five types of data discussed in the previous guide to review the data you have collected systematically. The focus of your analysis will depend on your particular needs. For example, if your programme model is similar to others, and evidence on the effectiveness of your particular approach already exists, you may choose to focus on your reach and the quality of your service (user, engagement and feedback data).
When looking at your data, ask these questions:
- Who are we reaching and what are their characteristics? Who are we not reaching?
- Are we reaching our intended target audiences?
- How do people reach us? How do they hear about us?
- Who do we get feedback from? Who do we not?
- Do people enjoy the service(s)? Do they find it useful?
- Which aspects do they rate the highest and lowest? What is working well/less well?
- Do different groups of users respond differently?
- Is the service being delivered as we intended?
- How often are people using our service(s)? For how long?
- Do some groups of users engage better than others?
- What is different now? What are the changes in behaviour, attitudes and skills among our users?
- Are we seeing the outcomes we expected to see?
- How has our service(s) helped? Can change be attributed to our work? What other factors contribute?
- Have certain aspects helped certain users, and under what circumstances?
- Are the results consistent? Do we achieve better outcomes for some groups of users?
- Where do we get our best results? Which are our most effective activities? Under what circumstances?
- Are we helping those in greatest need?
- What is the long-term difference our service(s) has made and for whom?
- What other factors have contributed to this change?
2. Quantitative Data
Quantitative data is numerical data collected through responses to multiple choice questions such as those found in a survey. Analysing this type of data can help you understand who has experienced change as a result of your work, and how much change has occurred. Quantitative data is also useful for visually presenting evidence of your impact in charts and tables.
It is usually helpful to combine your quantitative data with qualitative data, which can tell you about the nature of the change, and why it has occurred.
1. Clean your data
Data cleaning is the process of preparing your data for analysis. It involves identifying and correcting inaccurate data such as blank responses, duplication and obvious errors. It also means standardising data so that, for example, all entries are in number format rather than a mixture of words and numbers.
If you find recurring errors with the data being entered incorrectly, you may need to review the tools being used and support in place for the individuals collecting the data.
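As a sketch, the cleaning steps above can be expressed in code. This example uses only the Python standard library; the field names, respondent IDs, and the 0–10 rating scale are illustrative assumptions, not from any real dataset.

```python
def clean_responses(rows):
    """Remove blanks, duplicates, and obvious errors; standardise ratings to numbers."""
    seen = set()
    cleaned = []
    for row in rows:
        # Skip blank responses
        if not row.get("rating"):
            continue
        # Standardise to number format; non-numeric text would be
        # flagged for manual review in a real workflow
        try:
            rating = int(row["rating"])
        except (TypeError, ValueError):
            continue
        # Drop obvious errors (ratings outside the assumed 0-10 scale)
        if not 0 <= rating <= 10:
            continue
        # De-duplicate on respondent ID
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        cleaned.append({"id": row["id"], "rating": rating})
    return cleaned

raw = [
    {"id": 1, "rating": "7"},
    {"id": 1, "rating": "7"},    # duplicate
    {"id": 2, "rating": ""},     # blank
    {"id": 3, "rating": "ten"},  # non-numeric text
    {"id": 4, "rating": "99"},   # obvious error
    {"id": 5, "rating": "9"},
]
print(clean_responses(raw))  # [{'id': 1, 'rating': 7}, {'id': 5, 'rating': 9}]
```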
2. Decide what statistics to use
Now you need to think about how you can use different statistics to answer the questions posed in the previous step.
Percentages give readers a sense of scale and proportion. However, be wary of using percentages when presenting data from small samples. Avoid percentages for samples of fewer than 50, and avoid drawing firm conclusions from small differences in percentages for samples of 50-100. Make sure you refer to the correct number of respondents.
Measuring change: If you have asked the same questions before and after your intervention, you can subtract the pre-intervention score from the post-intervention score to find out how much change has occurred. For example, if 50% of participants stated they felt confident before an activity, and this rose to 75% after the activity, you can cite an increase of 25 percentage points. You can work out the average change for your whole group or for sub-groups, or the percentage of respondents who experienced positive or negative change.
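The arithmetic above can be sketched in a few lines of Python; the pre/post scores here are invented for illustration.

```python
def percentage_point_change(before_pct, after_pct):
    """Change in percentage points between two survey waves."""
    return after_pct - before_pct

# 50% felt confident before the activity, 75% after:
print(percentage_point_change(50, 75))  # 25

# Average (mean) change across matched pre/post scores for a group:
pre = [4, 6, 5, 7]
post = [6, 7, 5, 9]
changes = [after - before for after, before in zip(post, pre)]
print(sum(changes) / len(changes))  # 1.25
```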
Cross-tabulation is a way of comparing results for different types of respondents. For example, if you want to know if your intervention is more effective for people who are unemployed, or those who are in employment, you could use cross-tabulation to compare their experiences.
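A minimal cross-tabulation sketch in Python, assuming each respondent has been labelled with an employment status and a reported outcome (the data is invented):

```python
from collections import Counter

# Hypothetical responses: (employment status, reported outcome)
responses = [
    ("unemployed", "improved"), ("unemployed", "improved"),
    ("unemployed", "no change"), ("employed", "improved"),
    ("employed", "no change"), ("employed", "no change"),
]

# Count each (group, outcome) pair - the cells of a cross-tabulation
table = Counter(responses)
for (group, outcome), n in sorted(table.items()):
    print(group, outcome, n)
```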
Averages are used to summarise a dataset using a number that represents the middle of the distribution. They can be used to report on the average experience of users; for example, the average score for the class was 7.3 out of 10. There are three main types of average:
The mean is the total of all values divided by the number of responses. For example, if the values are 2, 3, 4, and 5, the mean is the total (14) divided by the number of values (4) = 3.5. This is less helpful if your data is skewed (if values cluster towards the top or bottom of the range rather than around the middle) or if your data has outliers (values far above or below the majority of values). For example, when comparing the duration of time spent using a service, one unusually long visit will disproportionately skew the mean.
The median is the value in the middle of your dataset when arranged from smallest to largest; for example, if the values are 1, 2, 3, 4, and 5, the median is 3. This can be helpful if your data is skewed and/or contains outliers. However, the median does not take into account all of the information in the dataset, only the middle value. For example, if the median duration is four minutes, this doesn’t tell you anything about the duration of time spent by other users.
The mode is the value that occurs most frequently in your dataset. There may be more than one mode in a dataset. The mode is the only measure of average that can be used with non-numerical data. For example, if 40% of your users engage with your service online, 30% via phone, and 30% in person, no mean or median can be calculated, but the mode is ‘online’, as this is the most common.
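Python’s standard `statistics` module can compute all three averages; the values below reuse the examples above.

```python
import statistics

# Mean: the total of all values divided by the number of responses
scores = [2, 3, 4, 5]
print(statistics.mean(scores))  # 3.5

# Median: the middle value when arranged from smallest to largest
durations = [1, 2, 3, 4, 5]
print(statistics.median(durations))  # 3

# Mode: the most frequent value - works with non-numerical data too
channels = ["online"] * 4 + ["phone"] * 3 + ["in-person"] * 3
print(statistics.mode(channels))  # online
```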
To understand how much variation there is in your dataset, you can use two calculations:
Range: This is the difference between the largest and smallest value in your dataset.
Standard deviation: This is the average distance between each value and the mean. It shows how well the mean represents your dataset: the higher the standard deviation, the more dispersed the dataset is.
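Both calculations are available in the Python standard library; the visit durations below are invented for illustration. Note that `statistics.pstdev` treats your data as the whole population, while `statistics.stdev` treats it as a sample drawn from a larger one.

```python
import statistics

visit_minutes = [2, 4, 4, 4, 5, 5, 7, 9]

# Range: difference between the largest and smallest value
print(max(visit_minutes) - min(visit_minutes))  # 7

# Standard deviation (population): average distance from the mean
print(statistics.pstdev(visit_minutes))  # 2.0
```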
3. Examine critically
Once you have chosen the most useful statistics, examine your data critically and ask yourself what it is telling you. For example:
- Are there any patterns, themes or trends?
- Are there any deviations from these patterns?
- Are outcomes different for different groups of people?
- Why were some outcomes achieved and others not?
- Has anything surprised you about the data? Has it challenged your initial assumptions?
- Are there any gaps? What do you need to find out more about?
3. Qualitative Data
Qualitative data is descriptive data that is not numerical; for example, feedback collected through open-ended responses to surveys, interviews, or focus groups. While quantitative data can tell you how much something has changed, and for whom, analysing qualitative data can help you understand the nature of that change, and why it has occurred. It can offer insights into which aspects of your service work well, and why, as well as those that don’t.
1. Choose your approach
There are two main approaches to analysing qualitative data.
Code and count
This involves coding your data into categories – teachers, workbooks, or peer support, for example – and counting the number of responses. This is helpful for understanding how many people gave a particular response, particularly if you have a larger sample and the data can be separated into distinct categories. However, this does not enable you to capture the strength of feeling associated with responses, and, if your sample size is small, you may not be able to generalise from the data.
A ‘code’ is made up of three parts:
- The code itself – a number or letter that represents the code
- The category it represents (e.g. peer support)
- What is included or excluded (e.g. “Include references to positive interactions with classmates. Do not include negative interactions or interactions with individuals outside of the classroom”)
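One hypothetical way to keep a coding frame consistent is to record each code as a small data structure, so every coder works from the same definitions. The code label and wording below are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Code:
    """One entry in a hypothetical qualitative coding frame."""
    label: str     # the code itself - a number or letter that represents it
    category: str  # the category it represents
    criteria: str  # what is included or excluded

peer_support = Code(
    label="PS1",
    category="peer support",
    criteria=("Include references to positive interactions with classmates. "
              "Do not include negative interactions or interactions with "
              "individuals outside of the classroom."),
)
print(peer_support.category)  # peer support
```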
Theme and explore
This involves identifying themes from your data and exploring how different people have responded to these. This is good for smaller sample sizes and more complex subjects. It is particularly helpful when your respondents have different understandings of the same issue and you want to compare them. It can also help develop findings around how your work has contributed to changes compared with other factors.
For this approach, a theme is also a category but may not have rigid inclusion and exclusion criteria. Themes are usually decided on after you’ve read most or all of the responses. For example, if you interviewed people about their attitudes to loneliness, you may find the following themes emerge as you read through the transcripts: social isolation, physical health, and socio-emotional wellbeing.
2. Categorise your data
Now that you have your codes or themes, you can use them to sort your data before summarising what it says. You can categorise data in various ways.
By hand: With a small amount of paper-based data and a small number of codes or themes, you can categorise by hand. Make a note of the codes or themes in the margin. You can then cut up the transcripts and paste them onto larger sheets of paper, one for each code or theme.
Using MS Word or Google Docs: You can take a similar approach to paper-based data. Use the comments feature to make notes in the margin, or copy and paste sections of your transcripts into a new document under each code or theme.
Using a spreadsheet: If you are using code and count, create a column for each code and put a ‘1’ in the column if that code is mentioned in the survey response. You can then use the ‘sum’ formula to count how many times the code is mentioned, and the ‘filter’ function to view all the responses for a particular code.
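The same code-and-count logic can be sketched in Python: tallying how often each code appears and filtering responses by code. The codes and responses below are invented.

```python
from collections import Counter

# Hypothetical codes applied to open-ended survey responses
coded_responses = [
    {"peer support", "teachers"},
    {"workbooks"},
    {"peer support"},
    {"peer support", "workbooks"},
]

# Equivalent of the spreadsheet 'sum': how often each code is mentioned
counts = Counter(code for response in coded_responses for code in response)
print(counts["peer support"])  # 3

# Equivalent of the spreadsheet 'filter': responses mentioning a code
mentioning = [r for r in coded_responses if "peer support" in r]
print(len(mentioning))  # 3
```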
Using data analysis software: You can use a software package to analyse qualitative data. Quirkos is an affordable option if you are working with text. Atlas.ti enables you to work with text, images, audio, and video data. MAXQDA and NVivo are the market leaders for working with both qualitative and quantitative data.
These packages allow you to code data more quickly, search for codes or groups of codes, and visualise your data in graphs or charts. If you analyse qualitative data regularly, then you may wish to invest in them.
Tips for categorising data
- Data can be categorised into more than one code or theme, but try not to do this too often.
- If using code and count, you will need to make notes of how often each code appears. You may want to create a table or tally chart to do this.
- You will need a category for ‘don’t know’, ‘no answer’ or ‘other’ responses. If ‘other’ responses make up more than 5% of your total, look at the data again to identify additional codes or themes. This helps make sure you’re not missing any important themes.
- It can be helpful to write notes to yourself as you go through your data, and to highlight interesting quotes.
3. Examine Critically
Once you have categorised your data, questions you might want to ask of your data include:
- Are there any links between codes? Are some things mentioned together frequently?
- Are there any other patterns, themes, or trends? Are there any deviations from these patterns?
- Are outcomes different for different groups of people?
- Why were some outcomes achieved, and others not achieved?
- How do people understand their journey or story? What do they think has caused or affected the outcomes they have experienced?
- What has surprised you about the data? What has challenged your assumptions?
- Are there any gaps? What do you need to find out more about?
Make sure your analysis can be verified and you can justify the claims that you make.
- Keep a paper trail including copies of your notes and your coded data.
- Check your analysis with others. It can be helpful to have two people code some of the data to check whether the coding matches. You may also wish to check your analysis with your evaluation respondents to confirm you are representing them accurately.
- Wherever possible, check data from different sources to see if the results are the same or different.
- Check your own biases. Write down your initial views on the data and deliberately look for evidence to dis-confirm your views.
- Coding your data can result in looking at statements out of context. Check back against the rest of the data provided by a respondent to make sure you haven’t misinterpreted them.
4. Compare your Data
Like any form of storytelling, data needs context. You must understand the circumstances surrounding your numbers to shed light on what they represent, so you can interpret them. Only then will you be able to turn facts into meaningful information that facilitates positive decision-making at your organisation.
You may discover that 20% of people who used your service went on to paid employment. But how do you know whether 20% is good, average, or poor? A simple step is to talk to colleagues, service users and others about what ‘good’ might look like for your organisation, and consider your results against this.
Even better, find or collect data that you can compare yourself to. There are two main types of comparison:
Using a baseline means comparison over time. A common approach is collecting data before someone uses a service and afterwards, to see if there is a change. Baselining can also be used at an institutional level; for example, to show if your results are improving across the whole organisation over time.
Using a benchmark means comparison with similar data from other sources.
- Comparing different groups of users (internal benchmarking): Comparing different user groups within your data can reveal insights about how they respond; for example, different age groups may respond differently. You can follow up with qualitative research to better understand these differences. The range of experiences and outcomes can be illuminating, so you could look at how individual users change as well as how groups of users change.
- Comparing to other sources of data (external benchmarking): The UK Data Service publishes government data on outcomes relevant to the charity sector. This can be used to put your work into context. Remember, your users may not always be comparable to the national average.
When comparing your data, you can make your findings more robust by:
- Being consistent in your data collection methods: This is relatively easy with internal baselines and benchmarks. Consistency is harder to achieve with external benchmarks, unless you are working together on a shared approach to measurement.
- Extending your view: For baselines, look at change over a longer time period, and with more than two data collection points to give greater context. For benchmarks, you could make comparisons with more than one organisation, or, if possible, with an average from your sector.
- Triangulating: Combine your findings with other sources of information, like feedback from service users and qualitative research, to check whether you are getting a consistent message.
5. Learn from Failure
When measuring and assessing your organisation’s impact, it can be easy to simply look for the positive changes you have made. However, it is extremely important to ensure that negative and/or unexpected outcomes are analysed just as carefully as the positive and planned ones.
Learning from failure is part of understanding ‘what works’. The conventional wisdom is that presenting good results will attract continued funding and helps build momentum for your charity, while the opposite puts funding at risk, demotivates staff and erodes trust from the public.
However, innovation in the charitable sector can only be achieved if we collectively explore and share where we’ve failed. Complex social problems cannot be solved without this, and true learning comes from understanding what works and what does not.
To improve your practice further, consider any problems during data collection, issues with missing data, and how bias might have influenced your work; reflecting on these will help you decide which findings are most important.
6. Understanding Attribution
Measuring your impact is not as straightforward as showing your service users are better off due to your interventions. You will also need an understanding of attribution.
Attribution is about understanding that there are other factors at work alongside your services, and the extent to which your service has had an effect.
This is one of the more difficult things to prove as a charity, but one way of making the case that your activities have had an effect is to envisage what would have happened if your project or intervention had not existed. Would your service users have accessed other services? If so, are those services as effective as yours?
In certain areas of work it can be difficult, if not impossible, to single out your activities from others. In this case it can be better to look at contribution rather than attribution – the sum of all the parts that together create outcomes.
In any case, you need to be aware that your organisation will not be the only one contributing to your service users’ outcomes and the best impact reporting acknowledges this.
7. Get in touch
If you need any further help with measuring your organisation’s impact, evaluating projects or would like us to run a training session for your staff, please get in touch with us.