I enjoyed speaking on a lively panel yesterday about regional development and innovation as part of the UIIN conference, relocated successfully from Budapest to Zoom. Together with Matthew Guest from GuildHE we discussed how to better understand the local role of small and specialist providers.
The work builds on experimental alternatives to traditional economic impact studies. I first explored the idea of institutional heatmaps in a post here in 2018, and then expanded on it at a workshop in South Africa later that year. Over the past 12 months I have been working with GuildHE to ‘map’ the impact of some of their members. In yesterday’s presentation I set out why I think the traditional ‘big number’ approach to measuring economic impact is out of step with what places need from their universities. Below I go further and list why I feel these studies are, mostly, flawed endeavours. (I should add that these are my personal views, not those of GuildHE!)
You don’t have to look far to see economic impact studies. My former employer had a flagship biennial report with a steadily increasing figure for the impact of UK universities – £21.5 billion to UK gross domestic product at last count – which it has used successfully for lobbying and campaigning. As long as this figure keeps increasing, everybody is happy. Many institutions have their own studies – £650 million of impact here, £400 million there – often with LEP-level or regional disaggregation. Of course, such studies are not limited to higher education. We’re informed that shooting contributes £2 billion to the UK economy and supports the equivalent of 74,000 full-time jobs. Ornamental horticulture and landscaping contributed £24.2 billion to national GDP in 2017.
Why we need change
There are helpful academic papers which deconstruct the methodologies for calculating economic impact, and their common pitfalls, so I won’t rehearse those arguments here. Instead, I want to challenge the preoccupation we seem to have with ‘one big number’ impact studies, and what we lose in the process.
There are two shifts taking place which render the traditional impact study less effective:
- A single large number fails to capture what is increasingly important. The shift towards universities being ‘for’ a place, rather than simply ‘in’ or ‘from’ a place, means this data needs to be far more nuanced. We need to know specifically who is benefitting, and how, and who is left out. We need to know the businesses and the communities behind these numbers. As disillusionment grows with traditional measures of economic success – GDP, GVA – and attention to ‘inclusive’ and social development begins to be translated into policy change, economic impact analysis needs to keep up.
  Traditional impact studies simply don’t do justice to the range of university activities. They measure spending, output and employment, but do not capture the full impact of engaging with communities in a marginalised neighbourhood, or working with small businesses to strengthen their supply chains, for example – activities that may have huge impact but make little difference to a £400 million impact figure. (Accounting for social value can help here.)
- As we grapple with recovery from Covid-19, it is both tone-deaf and ineffective for universities to be shouting about how good they are, whilst also asking for assistance from government. Rather than communicating about the size of their value-added, university messaging needs to focus on solutions and partnerships. Policymakers need a more sophisticated understanding of impact which moves beyond broad figures to specific information on which communities, businesses and industries have benefited from the university, and who stands to benefit from future support.
What else is wrong with traditional impact studies?
I should note that economic impact studies are not all bad. It is helpful to see returns on investment, and to raise awareness that universities have economic clout and should be seen alongside other major industries. But they risk being a blunt instrument, obscuring what is often highly patchy and inconsistent local impact behind impressively large numbers. Economic impact studies need to be married to a rich understanding of local impact – perhaps through something like an institutional heat map combined with a survey of perceptions or social impact assessments.
Four further shortcomings that come to mind:
- Uniformity. Despite huge variation in local contexts across the UK, and the individual histories and missions of universities, impact studies all end up looking pretty much the same. As with my engagement strategies test, if you line up five university impact studies and remove the university names, can you tell who (or where) they are talking about? Because institutions work towards what is measured and counted, and success is judged against numerical benchmarks, impact reporting converges into a standardised set of headline numbers and the local context is lost.
- Impact. Slightly tongue-in-cheek, I would like to see an impact study of impact studies. Do they lead to positive change? Or boost perceptions of universities? Quite possibly. But next time you are in a taxi to a university, ask the driver about the impact of the university. You’re unlikely to be quoted an economic impact figure of £450 million a year to the LEP’s economy. You’ll probably be told about the business that decided to open a new site near the university, or the impact of students volunteering with communities (and how the university is good business for the taxi company – at least before lockdown). You might argue that economic impact analysis is aimed instead at funders and policymakers. But should it not also reach residents and businesses?
- Fatigue. Somewhat cynically, does anyone really care whether the economic impact is £600 million or £900 million? Beyond a certain point, big number fatigue sets in. Figures between institutions are not always directly comparable, and the process of reaching them is not always transparent (or easily replicable).
- Unintended consequences. We are not at this point yet, but I can imagine a league table of economic impact rankings. Universities should be well aware of the limitations of league tables, and the uncanny ability of rankings to shape and warp policies away from what is important – both for the institution and for the place.
Above all, my concern is that economic impact analysis can mask inequalities and ‘cold spots’ in university engagement. Of course, heatmapping as an experimental alternative brings its own challenges: consistency between institutions, subjective judgements over the importance and intensity of shading, and the complexity of trying to map such a wide range of activity all need to be resolved. But heatmaps may also expose quite starkly where a university is not working, and not having an impact – things that are hidden in the ‘one big number’ approach.