Evaluation

In this section:

  • Seven principles for effective evaluation;
  • An introduction to evaluation design;
  • Choosing evaluation methods.

First steps

Once you have secured funding for your alcohol hidden harm service, you are faced with the responsibility of delivering the best, most effective service you can with the resources available. This means you must evaluate what you are doing – and use the findings of your evaluation to change and develop the service. It is highly likely that you will have been asked about evaluation in your funding bid, and you may already have routine measures that you use to track the progress of individual clients. Take a look at the key principles of evaluation below to review where you and your organisation are, and what you need to do to develop an evaluation strategy for your alcohol hidden harm service.

Now ask yourself: How will you know if the service you are developing is meeting the needs of your clients? How will you know if it could be done better? How will you demonstrate your success to your clients and funders? How will you know that you are not doing more harm than good? Can you afford not to do evaluation?

Evaluation can seem a daunting technical task, but if you have clearly identified your aims and objectives you have made a good start. Before deciding whether to carry out the evaluation yourself (or commission a consultant or academic to help you), take a look at our seven principles. See also ‘The essential guide to working with young people about drugs and alcohol’, Chapter 11: evaluating our work with young people about drugs and alcohol. Published by DrugScope (2008).

Principle 1: Build in evaluation from the earliest stages of planning. Use a logic model to help identify where monitoring and evaluation will help you to improve and understand what you are achieving for your clients.

Clarify your aims and your objectives. Make your objectives SMART so that you know what you hope to achieve at every step of the way and for whom. SMART objectives are:

  • Specific: clearly identify who will be affected by what is done, and how they will be affected
  • Measurable: there are ways of measuring the achievement of the objective
  • Achievable: the objective can be achieved based on evidence and experience
  • Relevant: the objective meets the needs of the families and the community
  • Time-bound: the objective can be achieved within a defined timeframe.
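
If you keep objectives in a spreadsheet or simple database, making each SMART element an explicit field helps you check nothing is missing. Here is a minimal sketch in Python; the objective shown is entirely hypothetical and the field names are just one possible way to structure it:

```python
from dataclasses import dataclass

@dataclass
class SmartObjective:
    """One objective, with each SMART element recorded explicitly."""
    specific: str    # who will be affected, and how
    measurable: str  # how achievement will be measured
    achievable: str  # the evidence/experience that it can be achieved
    relevant: str    # how it meets the needs of families and the community
    time_bound: str  # the timeframe for achieving it

# A hypothetical objective for an alcohol hidden harm service
objective = SmartObjective(
    specific="Children aged 8-12 of parents in treatment can name more supportive adults",
    measurable="Number of non-family adults each child names who make them feel good",
    achievable="A comparable pilot service achieved similar gains",
    relevant="Social capital is a component of resilience, a stated aim of the service",
    time_bound="Within six months of joining the group programme",
)
print(objective.measurable)
```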

Think about a wide range of outcomes: for your clients, but also for your organisation and your community. Bear in mind the outcomes expected by your funders. If these seem unrealistic or unclear, negotiate early on about what the key priorities are and, if necessary, ask for additional support to meet those expectations. Can you, and do you need to, measure all of these? Which are the most important? Carefully match your short-, intermediate- and long-term outcomes to your aims and objectives.

When resources – time, people, money – are scarce, it’s what gets measured that gets done! Is there anything you will find difficult to measure? If so, try to come up with a proxy measure – something which is related to what you want to know. For example, in the CR projects the objective was to increase children’s resilience. This was difficult to measure, especially for young children who could not complete questionnaires. A proxy measure was developed to assess changes in children’s social capital – specifically, whether they could identify adults outside the immediate family who made them feel good about themselves. Social capital is an important component of resilience. If you have not set out to achieve something, do not attempt to measure it, and vice versa.

Principle 2: Allocate sufficient budget resources to evaluation. Evaluation should be viewed as a long-term investment, enabling your organisation to generate income to sustain effective services. Reliable sources suggest that between 5% and 10% of any project funding should be spent on evaluation. See the Road Safety Evaluation website (www.roadsafetyevaluation.com) for more information and guidance on budgeting.

In practice, evaluation budgets are often smaller than this. Whatever the size of your budget, you only get what you pay for, so make sure the resources you have are used as efficiently as possible. When writing funding bids, include a budget for evaluation and show funders how this investment will help you to demonstrate that their money has been spent wisely.

Principle 3: Choose an appropriate evaluation design. Evaluation design relates to the overall purpose of your evaluation. Do you want to know if you are achieving your outcomes (summative evaluation) or how you are achieving your outcomes (process evaluation) or how you need to change your intervention to make it more effective (formative evaluation) – or a combination of these?

An understanding of evaluation design will also help you choose the timing of your measurements. For more help, see the outline of evaluation designs below and the evaluation toolkit.

Evaluation design outlined

Here, seven evaluation design options are described, using the common names evaluators give them. For each option, ‘Description’ explains what the design involves, and ‘Strengths and weaknesses’ points out considerations to help you decide what you need. Options 6 and 7 are the most suitable for large-scale projects with significant expertise and funding for evaluation. Stronger designs mean you can rely more on the findings when making a judgement about the intervention, but the last two (quasi-experimental and experimental designs), which are the strongest, tend to cost more and require greater statistical and other technical expertise.

Option 1: After only

Description: A single measurement, taken after the event.

Strengths and weaknesses: A weak design, subject to several different forms of bias. Clients may be grateful for what you have done and exaggerate the benefits to please you. Similarly, they may not recognise how much things have changed. You have nothing to compare the findings with.

Option 2: After only with a comparison or control group*

Description: A single measurement taken after the event, but from two groups: those who have had access to your service, and a very similar group who have not (e.g. those on a waiting list).

Strengths and weaknesses: You have an opportunity to compare the outcomes for a group of clients with those for people with similar needs who have not (or not yet) had access to the service. However, your comparison or control group may have accessed help elsewhere, or been affected by developments in the wider community, which you cannot account for.

* Control group means clients are randomly assigned to ‘treatment’ or ‘no treatment’.

Option 3: After, then before

Description: A single measurement, taken after the intervention, which asks individuals to record how things are now and to reflect on how things were before the intervention.

Strengths and weaknesses: Can be valuable where the perception of an issue is important to the effectiveness of an intervention, or where trust between worker and client is key to disclosure. E.g. asking adults about their parenting ability before a parenting course may result in inflated scores, because they are unwilling to disclose anything which would raise concerns about their child’s safety. They may also not fully understand how parenting is defined until after they have completed the course.

Option 4: Before and after, with no comparison or control group

Description: A measurement is taken from the clients before and after the intervention.

Strengths and weaknesses: An opportunity for you to compare the findings from the same measure for the same individual(s) over time. In some cases you may want to consider taking more than one ‘post’ measurement, to demonstrate sustainability. However, without a comparison or control group you cannot be sure that these changes would not have happened anyway. For example, children develop rapidly in social skills with age and with support from schools and parents; it may not be possible to show that the increase in the specific skill you were targeting was due to your intervention.
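
As a concrete illustration of this design, the sketch below (Python standard library only; all client codes and scores are invented) pairs each client’s before and after scores and reports the average change:

```python
from statistics import mean

# Hypothetical before/after scores on the same measure, keyed by client code
pre  = {"AAS/SJ": 12, "AAS/BK": 9, "AAS/TM": 15}
post = {"AAS/SJ": 17, "AAS/BK": 14, "AAS/TM": 16}

# Change score for each client with both measurements
changes = {code: post[code] - pre[code] for code in pre if code in post}

print("Individual changes:", changes)
print("Mean change:", round(mean(changes.values()), 2))
# Caveat (as noted above): without a comparison group, a positive mean
# change does not show that the intervention caused it.
```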

Option 5: Case study

Description: A range of measurements and opinions about the same indicator, gathered using different techniques and from a range of informants including clients, partners and other professionals. Can include elements of other designs, such as pre and post measurements for some clients.

Strengths and weaknesses: Can be useful where it is not possible to have a comparison group. ‘Triangulation’ of the findings enables you to be more confident about your findings than if they came from a single source. Provides a rich description, useful in formative evaluation but also in summative evaluation, depending on the measures used. Requires a range of skills, and time to develop relationships.

Option 6: Before and after, with a comparison or control group

Description: Collecting measurements from both your intervention group and a similar non-intervention group, before and after the intervention and over the same period of time. Sometimes known as a quasi-experiment.

Strengths and weaknesses: A strong design which helps to reduce bias, or changes due to other things which have changed for the individual, in the community or in the environment. However, there may be unintended bias: for example, clients who seek out a particular service may be different from those who are referred by another agency – and may respond differently to the same approach as a result.
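
The arithmetic behind this design is often called a difference-in-differences: compare the change in the intervention group with the change in the comparison group over the same period. A minimal sketch with invented scores:

```python
from statistics import mean

# Hypothetical scores, before and after, for each group
intervention_pre  = [10, 12, 9, 11]
intervention_post = [15, 16, 13, 15]
comparison_pre    = [10, 11, 10, 12]
comparison_post   = [11, 12, 11, 13]

change_intervention = mean(intervention_post) - mean(intervention_pre)
change_comparison   = mean(comparison_post) - mean(comparison_pre)

# The estimated effect is the change in the intervention group over and
# above the change the comparison group experienced anyway.
print("Estimated effect:", change_intervention - change_comparison)
```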

Option 7: Randomised controlled trial

Description: Clients are randomly assigned to intervention and non-intervention groups, ideally by a third party. Priority is not given to any particular group or individual client for treatment.

Strengths and weaknesses: A strong design which further reduces the possibility of bias. However, there may be ethical objections from professionals who seek to prioritise those they think most in need of a service.
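
The random assignment itself is simple to do reproducibly, ideally by someone outside the delivery team. A minimal sketch using Python’s standard library, with hypothetical client codes and a fixed seed so the allocation can be re-run and audited:

```python
import random

# Hypothetical codes for clients who have consented to take part
clients = ["AAS/SJ", "AAS/BK", "AAS/TM", "AAS/PL", "AAS/RD", "AAS/WH"]

rng = random.Random(2024)  # fixed seed: the same allocation can be reproduced
allocation = clients[:]
rng.shuffle(allocation)

half = len(allocation) // 2
intervention_group = allocation[:half]
control_group = allocation[half:]

print("Intervention:", intervention_group)
print("Control:", control_group)
```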

Principle 4: Choose the most appropriate evaluation methods and tools to suit your aims and objectives, the age and abilities of your clients, and the resources available. All methods help you to measure something. Quantitative measures tell you more about the amount of something your service has delivered, or how much it has changed. Qualitative measures tell you more about what your service means to an individual or group, rather than how much something has changed for them as a result of your service.

Evaluation methods outlined

How to use these entries: Each of the four options describes a method often used in evaluation. ‘Examples’ briefly describes what the method involves, with some advice about suitability for different age groups and an example. ‘Pros and cons’ describes some advantages and disadvantages of using the method and the evaluation designs it suits, as well as highlighting possible ethical concerns.

Option 1: Observation

Examples: A direct measure of behaviour. Needs to be carefully structured so that you record the specific behaviours sought, e.g. parents kneel down to eye level with their child when speaking to them, or parents use ‘I’ statements when asking their child to do something. Observations can be used with adults and children, and can be direct (the observer is present) or indirect (e.g. using a two-way mirror or a fixed camera).

Pros and cons: Can be laborious to collect and analyse data. Can be subject to insider bias – e.g. the trainer may unconsciously exaggerate the extent to which parents use the specific methods with their children, compared with an independent observer. Most suitable for small studies. If observation is indirect there will be ethical issues to consider. Where children are being observed, the observer should have up-to-date and relevant training in safeguarding and may need a Disclosure and Barring Service (DBS) check (England). Can be used with pre and post evaluation designs.

Option 2: Interview

Examples: Can take many forms, including: structured (more like a verbal questionnaire); semi-structured (with a clear series of open-ended questions which can be answered in a different order depending on how the respondent answers); and unstructured (more free-flowing discussion around a particular topic). All forms of interview can be carried out face to face, by telephone or using teleconferencing facilities; the structured interview is more suited to the telephone, and the unstructured interview is best done face to face. Interviews can be carried out with individuals or with groups, depending on the purpose of the interview and the sensitivity of the topic. They can be used with adults or children, and may be most suitable for those who cannot, or prefer not to, use written forms of response.

Pros and cons: All forms of interview require careful management of data for analysis to avoid bias. A structured interview does not usually enable the interviewer to clarify what the respondent means by a particular answer, and does not allow the respondent to generate new insights, but can be used to collect a relatively large amount of data. A semi-structured interview is most suitable for small and medium-sized studies; it allows for clarification, and for the respondent to reveal new insights which the interviewer may not have anticipated. An unstructured interview requires the respondent to be comfortable answering openly what may be searching questions about personal issues. Focus group interviews should be managed in such a way that all members of the group feel equally able to contribute; if the participants do not know one another, this can take time and skill on the part of the interviewer. Anyone interviewing children or vulnerable adults may need a DBS check (England) and up-to-date, relevant training in safeguarding. Most often used with after-only and ‘after, then before’ designs.

Option 3: Questionnaire

Examples: Questionnaires can take a number of different formats, including open-ended questions, closed-ended questions or a combination of both. They can be used at the time and place of the intervention (e.g. a training event) or be delivered and returned by post. They can also be completed online using formats such as Survey Monkey.

Pros and cons: Questionnaires usually require the respondent to be able to read and write, unless a scribe is provided (e.g. for a child, an adult with learning difficulties or a person whose first language is not English). They can be quick and easy for some people to complete. Response rates vary according to where and how they are completed, but questionnaires can generate a large quantity of data. Can be used with before-and-after and ‘after, then before’ designs.
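
Closed-ended questions are quick to analyse because the answers can simply be counted. A sketch (standard library only; the question and responses are invented) tallying the answers to one question:

```python
from collections import Counter

# Hypothetical responses to one closed-ended question, e.g.
# "How confident do you feel talking to your child about alcohol?"
responses = ["very", "quite", "quite", "not at all", "very", "quite"]

tally = Counter(responses)
total = sum(tally.values())
for answer, count in tally.most_common():
    print(f"{answer}: {count} ({100 * count / total:.0f}%)")
```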

Option 4: Draw and write and other child friendly methods

Examples: The child is invited to draw or add to a picture about a familiar topic in their lives, e.g. what makes them feel healthy/happy/good, or to draw a picture to illustrate a story about a child similar to them. They are then asked to write or talk about their drawings; the child can draw and write for themselves, or use a scribe to record what they want to say about their picture. The child’s words are analysed, not the drawing.

Pros and cons: Can be used with individuals or with groups, depending on the topic. The themes and content arising from the children’s responses can be described and, with a large enough sample, counted. Can be used with before-and-after and ‘after, then before’ designs.

Evaluation tools are simply the specific interview schedules or questionnaires you decide to use. For example, the Strengths and Difficulties Questionnaire is a standardised tool which could be used as part of an evaluation (http://www.sdqinfo.com/).

Principle 5: Ethical issues. All evaluation studies need to balance the practical, ethical and research issues raised by what you are trying to evaluate. Do you have the resources to carry out the most rigorous and unbiased form of evaluation possible? Are your methods reliable, valid and appropriate for the client group? If so, is what you are planning to do ethical? Could anyone be harmed by taking part in the evaluation? Might anyone fear that taking part (or refusing to take part) would lead to a service being withheld? Are participants (including children) able to give informed consent to take part? See the Gillick competence/Fraser guidelines. Who is responsible if there are concerns or complaints about the evaluation?

Ultimately, ethical issues should override all other concerns. However, this is not to say that evaluation should not be carried out because of ethical concerns; there are usually adjustments which can be made to the design or methods to make the evaluation ethically acceptable. It is equally unethical to continue to deliver an intervention which is ineffective, or which may be doing harm, simply on the grounds that it is too difficult to overcome concerns about evaluation.

Anonymity: Whatever method you use for your evaluation you should aim to promise your participants that they will not be identifiable in any form of report. This may not always be possible in small studies where individual families or services have particular key characteristics. Where this is the case you should include the minimum amount of description possible to enable the reader to understand and interpret your data, while protecting participants’ identities.

Confidentiality: It is not possible (or ethical) to promise complete confidentiality, especially where participants are children or vulnerable adults, as you will be required to act where clients or their dependents may be at risk of significant harm. However, it is important that clients understand what confidentiality they can expect and what would happen if you were concerned about the safety of children or vulnerable adults. If your evaluation setting is a school or other institution, you should always follow the policy and practice for safeguarding children which are already in place.

Informed consent: You should always ensure that participants in evaluation research know: the purpose of the evaluation; what is expected of them; how any data will be recorded, stored and reported; the risks of taking part; the benefits of taking part; the safeguards for them as participants; and who they should contact if they have a question or complaint. This information should be available in print for participants to consider before agreeing to take part.

Data protection issues: If you store personal information you should be aware of the requirements of the Data Protection Act (1998), which created rights for those who have their data stored, and responsibilities for those who store, process or transmit such data. The person who has their data processed has the right to:

    • View the data an organisation holds on them.
    • Request that incorrect information be corrected. If you ignore the request, a court can order the data to be corrected or destroyed, and in some cases [compensation](http://en.wikipedia.org/wiki/Damages) can be awarded.
    • Require that data is not used in any way that may potentially cause damage or distress.
    • Require that their data is not used for [direct marketing](http://en.wikipedia.org/wiki/Direct_marketing).

Sometimes you may want to use information collected by another organisation – such as school attendance, achievement or medical records – as part of an evaluation. These kinds of data are also covered by the Data Protection Act.

The Act is not a barrier to evaluation but you should be aware of the steps needed to comply. The Information Commissioner’s website has helpful advice on the implementation of the Data Protection Act.

Safeguarding: Children’s safety is a priority which overrides all other concerns.

Principle 6: Be systematic. Any form of evaluation means collecting, storing and analysing data. To preserve the anonymity of a client who has, for example, completed a questionnaire for an external evaluator, consider whether you need to identify each client with a code (usually a sequence of letters and numbers) which an administrator can trace back to the client but which is anonymous for the purposes of analysis and reporting to funders. The following tips will help you to keep track of who contributed what and when (a small illustrative sketch of such a coding scheme follows the list):

  • Use the same code for each client for all means of data collection (e.g. questionnaire and interview) and if a client returns to the service for a subsequent cycle of therapy or treatment. For example, John Smith attending Any Alcohol Service could be AAS/SJ.
  • If you want to cross-reference members of the same family, create a family code followed by an individual client code for each family member. For the whole family: AAS/SJ; /SA; /SN etc.
  • If you are evaluating different aspects of the same service, or similar services delivered in different locations, make sure the code includes this information. For example, services in Birmingham and Norwich delivered by the same provider could be coded AASB and AASN.
  • Coincidences do occur, so make sure each code is unique to the client!

  • Record the date when the information was collected and the data collection point (e.g. pre or post; Time 1, Time 2, Time 3).
  • Record the age and gender of the respondent, and where appropriate the relationship to the focal client (i.e. the client with whom you have the main contact, either parent or child).
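
As promised above, here is a small illustrative sketch of the coding scheme (all names are hypothetical; the exact format is your choice, provided each code is unique):

```python
def client_code(service: str, surname: str, forename: str, location: str = "") -> str:
    """Build a pseudonymous client code such as AAS/SJ or AASB/SJ.

    The code carries no directly identifying information; the administrator
    keeps a separate, secure list linking codes back to clients.
    """
    prefix = service + location  # e.g. "AAS" + "B" for the Birmingham service
    return f"{prefix}/{surname[0].upper()}{forename[0].upper()}"

# Hypothetical examples
print(client_code("AAS", "Smith", "John"))                # AAS/SJ
print(client_code("AAS", "Smith", "Anne"))                # AAS/SA
print(client_code("AAS", "Smith", "John", location="B"))  # AASB/SJ
```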

Principle 7: Support and train staff carrying out evaluation. It really helps if staff who have been asked to implement evaluation tools such as questionnaires understand the design of the study, the consent procedures and the methods they will be using with clients. Training can be brief, and new staff should be able to access training to bring them up to speed with their co-workers. Feedback on data collected, and modifications to procedures, can be built into management supervision for teams or individuals. This helps to motivate practitioners to collect the information.

Overview

Evaluation can help you to make clear decisions about the service you are developing and about how effective it is for your clients. This can help you to improve your service as well as demonstrate to partners and funders that you are doing the best you can to improve outcomes for your clients.

Tools

There are many texts which can help you with evaluation. Start with:

  • The essential guide to working with young people about drugs and alcohol Chapter 11: evaluating our work with young people about drugs and alcohol. Published by DrugScope (2008)
  • The Evaluation Toolkit at www.volunteeringfund.com
  • A surprisingly useful website is www.roadsafetyevaluation.com which aims to help road safety practitioners to do small scale evaluation projects. There is an excellent glossary of terms and lots of useful information about planning and doing evaluation. The online toolkit can also be useful although it focuses on road safety. A generic set of questions which can be used for other kinds of intervention is available.