Introducing the Cultural Learning Evidence Champion's Handbook
Partnerships Director, Holly Donagh, introduces a new handbook designed to help evaluate projects, conduct research & think about how to measure the impact of work with children and young people
11 March 2019
I’m often asked about how you can prove the impact of the arts in education – what evidence is there and how can it be applied? I’ve become a bit of a pedant on the subject. Now my first reaction is usually: why assume the arts have an impact on education if you don’t have access to evidence? And what do you mean by 'impact', by 'evidence', and, for that matter, by 'education'?
I do of course feel passionate about the value of the arts within our education system and the necessity for society to provide young people and children with all manner of ways to be creative. I just think we need to be precise when talking about evidence, because it is a technical area which requires rigour in order to be credible. In the era of ‘fake news’, we need to make sure we know the difference between high-quality research, which can go some way to proving causal effects, and case study findings, which help tell a story. Both are useful in context, but they are not the same.
This is why we and the other Bridges are supporting the RSA’s Learning About Culture programme and, in particular, the Evidence Champions initiative. The idea of Evidence Champions is that we build a network of professionals in every region who can grapple with the challenges of how to assess impact and share that knowledge with the wider world.
One of the first pieces of work coming from this programme is the new Evidence Champions Handbook, which is designed to bring the tools and techniques being used by the Champions to a wider audience. If you are involved in evaluating projects, conducting research or thinking about how to measure the impact of your work with children and young people, then this handbook is for you.
The handbook is a guide to the terrain of evidence: the terminology, the basic principles and some of the ‘how-to’. It is designed to make the process of evaluation, in particular, easier for people who don’t have a professional research background. I would say, however, that it is not without challenge, and one of the things it is trying to do is raise the general level of skill in working with quite complex ideas and methods. This is all part of how we improve the production and sharing of solid evidence.
The route to a more evidence-informed sector is likely to be bumpy. Building an evidence base with high standards at the core may mean we need to get used to lower effect sizes, i.e. our work might not be very ‘transformational’; it might move the dial only slightly; our work might be costly in comparison with other forms of intervention; and it might be hard to know if it is as good as other work because there is no real benchmark. These types of findings can be hard to process, but they are in some ways inevitable and essential to raising credibility overall. The alternative is to say either that we are happy to rely essentially on anecdotal evidence, or that we only view value through the paradigm of artistic or aesthetic outcomes, which is tricky to quantify in itself and is surely a cop-out.
We should also remember that even with the best evidence base in the world, political opinions don’t always change, and we can never stop being creative in how we tell the rich story of the value of our work.