User research – have we had too much of a good thing?

With rising costs and diminishing value returns, we explain why user research in Central Government needs to be approached differently.
May 31st 2019

There was a time when government policies and services were created with little input from members of the public who would be using them – not to mention paying for them! That all changed when GDS put user needs as their number one principle back in 2011. The idea was a refreshing one: it would improve the quality of our work, reduce our error rate, keep us honest, and break down the barrier between customer and creator that the private sector had long since demolished.

Yet in many parts of government, work to identify ‘user need’ through the method of ‘user research’ has ceased to be refreshing: it can be gratuitous, stifling, over-complicated and over-priced. In this article, we discuss the current problems with user research, how they arose, and call for an end to ‘discovery paralysis’.

Escalating costs

Historically, market research and qualitative research have not been particularly lucrative professions. Even now, the typical day rate for a qualitative researcher on PeoplePerHour is well below £200.[1] However, a user researcher has a median day rate of £475.[2] Moreover, this rate has increased by 12% since 2017, well above the UK inflation rate of around 4%.

User researchers do have highly valuable skills, but they are not so different from the skills of professionals who describe themselves as plain researchers. Moreover, the lack of any specific qualifications or well-recognised training courses within the user research field makes it difficult to assess what good value user research looks like. It’s also a relatively new area, which makes it harder to benchmark. It’s not surprising, therefore, that a civil servant with little experience of procuring user research is persuaded by an authoritative agency pitch – and perhaps even reassured by the hefty price tag that comes with it.

Whilst government is working to improve in-house capability in research, for example through creating Design Labs, the ambiguity around the specialist value of user research leads us to believe that the market will continue to perpetuate – and escalate – its own value.

Diminishing returns

The average length of a commissioned Discovery project on the Digital Marketplace is ten weeks.[3] Doesn’t that seem a long time to you?

Of course, the most complex research topics will warrant a longer research period. But well-scoped qualitative research is renowned for being subject to the Law of Diminishing Returns. In other words, it can take surprisingly few conversations with users to get the insight you need, and to start hearing the same thing over and over again.

An academic study by Guest et al[4] showed that the research team discovered 94% of their core insight within the first six interviews conducted. They had 97% after twelve interviews. Similarly, Steve Krug, a pioneer of user research and testing, has also suggested that a handful of participants can often get you most of the insight you require.[5]

In this spirit, we argue that for the sake of both timely project delivery and responsible use of tax-payer money, user research should aim to be ‘good enough’ rather than perfect. The next time you consider commissioning or authorising a ten-week discovery, ask yourself: how much could we achieve in five weeks?


Central government departments have commissioned almost 500 discovery-style projects in the past two years.[6] The discovery typically lasts ten weeks, with a £475 individual day rate and a typical team size of three. This adds up to almost £36 million spent on insight since 2017. Where is all that insight, and is it being used to full effect?
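The back-of-envelope arithmetic behind that figure can be sketched as follows. The project count, discovery length, day rate and team size are the figures cited above; the five-day working week is our assumption:

```python
# Back-of-envelope estimate of central government discovery spend.
projects = 500           # discovery-style projects commissioned since 2017
weeks_per_project = 10   # typical discovery length
days_per_week = 5        # assumed working days per week
day_rate = 475           # median user researcher day rate (GBP)
team_size = 3            # typical team size

total_spend = projects * weeks_per_project * days_per_week * day_rate * team_size
print(f"£{total_spend:,}")  # £35,625,000 – "almost £36 million"
```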

Ultimately, the civil service works to provide services for UK citizens. As diverse a user group as that may be, there are consistent things that users want from their interactions with government. This is especially true when research is not to determine policy (e.g. what sort of teachers do we need and how can we make them effective?) but rather to define how policy is delivered (e.g. the application form and process for people who want to become teachers).

When user research insights remain hidden within the online folder systems of siloed teams, no-one else can access them. If they could be located and searched, they could provide value to other, similar teams and projects, saving the cost and time of commissioning new research. Currently, we believe user research is under-utilised, and the public purse frequently pays more than once for the answers to the same questions.

How did we get here?

The road to costly and under-utilised user research has been paved with good intentions. It’s common sense to ask service users directly about their experiences. As conscientious professionals, we all want to ensure we are as close to the truth and as representative as possible in our work. However, the enigmatic nature of the user research industry can make it difficult to know when we’ve got an answer that’s good enough.

In addition, uncovering a suite of user needs can be an overwhelming experience when a team is just working on one need or issue. Especially in the absence of accompanying analytics or financials, how can you decide which user needs should take priority?

These factors can contribute to unhelpful patterns of over-discovery and ‘discovery paralysis’ that we’ve seen in even the most well-intentioned and digitally-minded government departments.

Finally, relatively stringent user research requirements are currently embedded into the process of building government services. Before any service with over 100,000 users can go live on GOV.UK, it must pass a GDS assessment, which includes requirements such as having a full-time user researcher on the team. The risk of having an otherwise robust and well-made service rejected due to an arguably arbitrary or inflated threshold for user research can drive many in government to err on the side of caution, and ‘go large’ on their Discovery tenders.

What should we do about it?

We’d love to hear your views on user research – its ambiguities, quality, costs, value and the scope for collaboration. Let us know if our views resonate with your experience.

We’ll be sharing more practical tips to tackle this problem in our forthcoming paper, ‘Ways to do more with less user research’ – it will be published here soon.

Authors: Katie Burns and Antonio Weiss


  1. Analysis of a sample of listings with ‘qualitative researcher’ titles on PeoplePerHour, as of 30/05/19.
  2. IT Jobs Watch, accessed 30/05/19.
  3. Analysis of all projects including a discovery phase on the Digital Marketplace, as of 25/04/19.
  4. Guest et al., ‘How Many Interviews Are Enough?’, Field Methods, Vol. 18, Issue 1, 2006.
  5. Steve Krug, Rocket Surgery Made Easy, 2009.
  6. Analysis of all projects including a discovery phase on the Digital Marketplace, as of 25/04/19.