
Guest article: Measures of success in Higher Education

Natalie Naik

The following article has been written by Andy Youell – writer, speaker and strategic data advisor. 

This post was created exclusively for the Talis Informer, a quarterly newsletter from Talis aimed at those leading and influencing Higher Education libraries. If you’d like to receive the newsletter, please get in touch at info@talis.com.

Measures of success

In a world transformed by technology we often look to data to give us information and insight. Universities use data to understand their students, their market and their performance; analysing and interpreting data has become a key capability. But increasingly, those who measure and judge universities are turning to data to drive qualitative assessments.

Counting the quality

For many years media organisations have published university league tables. These league tables are usually based on metrics derived from published data sources and from organisations like HESA. Universities have long had a complex, often Janus-like relationship with them. The sector is swift to criticise league tables for their over-simplistic metrics, their comparisons between disparate institutions and their tendency to exclude “non-standard” students and provision. Yet universities rarely fail to promote their league table successes on their websites and in their promotional material.
In recent years the use of data to assess universities has become a more prominent feature of government policy. Although the government does not produce league tables, it works with the funding and regulatory bodies to create mechanisms like the REF, TEF and – from 2020 – the KEF to assess performance and to drive behaviours.

The 22nd of March 2006 was a key moment in this shift to data-driven assessment. The then Chancellor, Gordon Brown, announced in the budget that the government would launch a consultation on its preferred option for a metrics-based system for assessing research quality that would replace the Research Assessment Exercise (RAE). This announcement was based on a number of assumptions that have consolidated in the policy psyche and which have subsequently underpinned the development of the TEF and the KEF. There are two key elements to this thinking. First is the idea that modern data technologies and analysis techniques are so sophisticated that they can generate accurate and meaningful assessments of quality. Second is the assumption that there is a wealth of data in university systems that can be “handed over” with minimal effort, thus reducing the burden of preparing a bespoke submission for the assessment exercise.

The reality is that both of these assertions are less clear cut than that original 2006 announcement would suggest. In the world of research assessment there has been a long and thoughtful consideration of the use of metrics and the Forum for Responsible Research Metrics provides advice, advocacy and leadership in this debate. The current Research Excellence Framework (REF) is primarily a process of expert review which uses metrics to support, but not drive, the assessment process.

The TEF and, it seems, the KEF are taking a different approach by building an assessment process that is primarily driven by metrics and then supplemented by a narrative from the university. In the TEF universities are presented with an initial hypothesis, generated entirely from data, and then have an opportunity to submit a case to support a final assessment at a level equal to, or higher than, the initial hypothesis.

Navigating this terrain

Although these assessment regimes take different approaches to the use of metrics, they all offer some role for data based on attributes that can be meaningfully and consistently counted. In many cases the metrics are derived from data submissions that universities make to HESA. The starting point for any university that wishes to fully engage with and understand these assessment regimes should therefore be a deep understanding of how its own, often complex and dynamic, reality is mapped to the data structures used in the various HESA returns. Although the HESA data structures are designed to minimise the amount of legitimate flexibility that universities have in making their data returns, the categorisation of activities can affect the metrics that are subsequently generated. This can be especially significant where a university has (and is therefore reporting on) activity that deviates from the standard modes and structures which predominate in the data specifications.

Once the data submissions are made, the analysts who work on these assessment regimes will calculate the metrics they require. In some cases these algorithms will be published before the data is submitted, but in many cases they will not. From the perspective of the analysts this is a sensible approach, since these algorithms often need to be developed and tested using real data, even if the high-level descriptions of the metrics are specified in advance. For the university this often means that predictive modelling of performance in these metrics is incredibly difficult and often impossible. Vigorous engagement with consultations and briefings around these regimes – where these opportunities exist – can increase understanding of the metrics and therefore reduce the risk of nasty surprises. It can also enhance the ability to interpret these metrics and feed them back into the university in a meaningful way, enabling remedial action to be taken where it is necessary and beneficial to do so.

In cases where metrics are derived from data that universities do not submit – such as research citation metrics from publishers or the graduate salary (LEO) data from HMRC – the work to understand and respond to the metrics is even more challenging. In many cases universities will not be able to access the underlying raw data from which these metrics are produced, and these datasets will often include concepts and definitions that require knowledge and expertise about domains far removed from university life.

Building a rich understanding of how metrics work can consume significant amounts of time and often requires the ability to work with complex data concepts and analysis methods. Universities need to be clear about the level of investment they are prepared to make in order to engage with and understand these regimes. They also need to be clear about what benefits they want to derive from building this knowledge and capability. There can be a lot of value derived from understanding how different types of activity within a university can affect the metrics in these assessment regimes; there is usually very little, if any, scope to optimise performance in these regimes through the legitimate adjustment of data submissions.

The bigger picture

Those who measure and assess higher education see data as fast, powerful and able to deliver high-value outcomes at relatively low cost. This mindset is well embedded in policy thinking, and the increasing use of metrics in assessment regimes aligns with the greater prominence of data-led policy and regulation. The higher education sector needs to engage in an informed and intelligent way on the bridge between data and policy. Individual universities need strong data capabilities in order to support accurate data reporting and to engage with the metrics that others derive from their data submissions and from other sources.


Thank you to Andy Youell for contributing to this post. Formerly the Director of Data Policy and Governance at HESA, Andy has been at the leading edge of data issues across higher education for over 25 years. His work has covered all aspects of the data and systems lifecycle and in recent years has focussed on improving the HE sector’s relationship with data.

If you’d like to hear more from Andy, he will be speaking about data at Talis Insight Europe 2020; find out more here.
