#AUABlog: Subject Level TEF – is it really measuring the right things?

By Stephen McAuliffe MAUA

Academic Registrar 

University of Birmingham

Nothing is more exciting than sitting on an overcrowded and dirty train reflecting on how some organisations with poor approval ratings seem to continue unabashed, whilst the university sector, a jewel in the economic and cultural crown of the country, gets landed with subject-level TEF (the Teaching Excellence Framework).  This is the pilot of a new approach to – so the government theory goes – recognise and drive up teaching quality by providing information that helps students differentiate between universities.

Now, to my mind, seeking to drive up quality is a good idea and one that is widely supported in the sector. Everyone recognises that students invest substantial sums of money, and substantial time, in their degrees.  Providing students with better information, and advice on how to use it, can only be positive; they can learn more about their intended or current institutions and make decisions accordingly. It has the genuine potential to put them at the heart of the system (if they aren’t already).

That said, the sector already has a substantial track record of developing quality regimes that measure excellence and ensure quality outcomes for students. Our universities didn’t become world-beating by accident, so is subject-level TEF really necessary? Undeniably, we could do more. More to help students understand feedback, more consideration of how students are supported by professional and academic colleagues, and of course more focus on value for money.

But there is a risk here that headline-drunk politicians are dumbing down the very notion of teaching excellence, creating frameworks built on metrics too crude to capture the quality of a student’s experience at university. Moreover, if the metrics are flawed, can decisions based upon them be anything other than flawed?

The pilot subject-level TEF considers assessment, continuation (progressing to the next year of study), graduate employment outcomes and a variety of other factors. It also introduces two new ones: teaching intensity (which measures contact hours) and grade inflation.

Let’s take two of them: contact hours and graduate employment.  Which is more valuable to a student’s learning: three hours in a room with a junior lecturer, or one hour with a Nobel prize-winning professor?  The answer depends on which one is the better teacher. One might assume the Nobel prize-winner would be better, but that person will not have won the prize for their teaching. The number of contact hours matters far less than the quality of that contact.  Remember when students were said to be ‘reading for a degree’?

With that in mind, consider a student reading English: should they spend 20 hours reading and critiquing a novel, with two hours to discuss their views in a seminar, or be spoon-fed the novel in three hours of lectures and a two-hour seminar? Despite students learning far less from being walked through the novel, the framework would reward this didactic approach – an approach that denies the development of critical and independent thought, which is supposed to be the primary purpose of a university education.

When I think about the ‘fake news’ that is churned out relentlessly across the internet, isn’t it a positive that students are educated to challenge what they see and critically consider its validity? That is more important than the number of hours they sit in a lecture theatre taking in information. University isn’t ‘big school’.   

Similarly, the use of the graduate-level employment or further study metric raises the question of whether students can only be considered successful if they get a graduate-level job (broadly, one that requires a degree to be considered for it).  What about those who graduate but want to pursue a path of their own choosing rather than one with a ‘graduate’ flag? The entrepreneurs, or the aspiring writers or artists working a day job to support themselves while putting their hard-won degree skills into their creative endeavours? And even within the traditional jobs market, did you know that x isn’t classed as a graduate job, nor is y? Does that make them poor career choices? Does that mean students’ time at university was wasted?  What defines success in your own career? Is it that you needed a degree, or is it fulfilment in the role?

There are similar arguments about all the metrics: what they measure, and therefore what conclusions can safely be drawn from them. By measuring at the subject level we risk students deciding that one degree is inherently better than another because of hours spent in a room, or because someone three years ago got a job on a graduate scheme. Of course we should make sure students get the contact needed for their degrees; similarly, we must make sure they can make informed choices about career paths. However, driving institutions to measure these in the way the TEF outlines isn’t putting students at the heart of the system, but enabling politicians to gain some (not so) cheap popularity.