Our May breakfast focused on how you can effectively evaluate the impact and outcomes of engagement activities. There was a good turnout and discussion of what people had tried and found more or less effective. Methods ranged from traditional surveys through more creative stickers and tiddlywinks to longitudinal interviews (check out animalmechanicalandme.com). The overarching theme was to start with the ‘so what?’
We had ‘2-minute wonders’ from:
Dee Davison – Evaluating a community science festival
- Dunbar SciFest http://dunbarscifest.org.uk/ measures success in a number of ways, including the number of people involved, where they are from, economic impact and visitor comments.
- Consultants are brought in to produce an annual Event Evaluation and Economic Impact Assessment report.
- In future, evaluation criteria will be developed to track the target audience’s uptake of science subjects in secondary school and their subsequent exam and study decisions, in order to measure the festival’s success in achieving its vision of “enthusing children and young people about science and to encourage them to take up scientific careers”.
Dawn Smith – Using Tiddlywinks for evaluation
- Dawn talked about her experience of using evaluation methods that suit your event and audience. The evaluation method needs to work for participants as well as the funder.*
- Fun, visual things like voting with tiddlywinks can enable you to get responses without asking people to fill in a form. A 9-year-old with a clipboard is charming; a 29-year-old is intimidating.*
Sarah Fleming – Development of an evaluation tool
- Sarah is an MSc Science Communication and Public Engagement student working on a project, informed by practitioners and funders, to develop a resource for evaluating public engagement events across the College of Medicine and Veterinary Medicine, University of Edinburgh.
- She is interested in the current challenges and priorities, what information should be collected, what funders expect from evaluation and whether evaluation can be made consistent across institutions.
Liz Scanlon – Evaluation training from Methods for Change https://www.methodsforchange.org/
- Eric Jensen from the University of Warwick runs practitioner-focused workshops on evaluation and research techniques. A series of these ran in Glasgow and Stirling in March, focusing on the analysis of qualitative and quantitative data, survey design and using social media for evaluation.
- Key messages from the quantitative analysis workshop were: (i) focus on the outcomes you are interested in and determine whether these are actually measurable; (ii) questions need to be consistently interpreted for the data to be meaningful; (iii) a meticulous step-by-step approach is needed, and by the time you get to statistical analysis everything has to be a number!
Sarah Keer-Keer from the Wellcome Trust Centre for Cell Biology and the Midlothian Science Festival talked about practical evaluation. She suggested that in determining what to evaluate, the first thing to do is consult your public engagement strategy (if you have one). Key questions are:
- What were you trying to do? Did you do it?
- Who were you trying to reach? Did you reach them? Did they enjoy it?
- What did you want your audience to understand? Did they understand it?
- What was the take-home message? Did it go home?
Kat Przybycien from Heriot-Watt Engage talked about what she’s learned from being involved in the evaluation of various projects and events.
- Evaluating people at the exit is not always as ideal as it seems: when they decide to go, they want to go*
- Tensions between ‘just’ evaluation and a full-blown research project*
- Difficult to change attitudes if you meet someone only once. Need to build relationships*
- Monitoring = is the project on track? Evaluation = did people enjoy it? Did they get what they wanted? The ‘so what?’*
*Thanks to @TheVintageDoc (Sarah from the Beltane) for tweets about the event under the hashtag #BelBrek