Interviews with leaders

Oli Mival on using research to de-risk decisions

We caught up with Oli Mival, renowned research leader and Head of Research & Insight at Picsart. In this post Oli tells us about his ‘people first, technology second’ approach, shares his thoughts on what makes an impactful and actionable insight, and discusses the role of ‘confidence thresholds’ when using research to de-risk decisions.

Dan Robins
May 12, 2022

Introduction

With a background in psychology, a PhD in computer science, former Director of User Research at Skyscanner, and currently Head of Research & Insight at Picsart, it’s fair to say that Oli Mival has a pretty impressive CV. We caught up to understand some of the underlying principles behind his research, and its practical application at Picsart. 

This interview is full of actionable advice and unique perspectives inspired by a career at the forefront of academic and applied research. Let’s dive in…


Oli, on your LinkedIn you talk about an approach to research which, put simply, is about 'people first, technology second'. Can you elaborate on this?

To provide a little bit of context, I come from an academic background. A background in psychology means my focus has always been on people: what do they think, why do they think that, what are their motivations, and how do those motivations change? My PhD was in computer science, but it was very much about human-computer interaction.

A lot of my academic work was based on a construct called ‘blended theory’ - how we move between physical and digital spaces. Obviously that is a conceptual idea, but this is what we do when we pick up our phone, when we open our laptop, when we go to the cinema - we move from the physical to the digital space. The tech is ultimately just the enabler.

I’ve always been very excited by emerging technology and the implications it has. But technology is simply a layer on top that enables us to have an experience. At the end of the day you’ve got a person, and at the other end you’ve got another person. You may have multiple digital systems or physical hardware in between, but it’s always about people.

So what I mean by ‘people first, technology second’ is that it’s the experience that really matters; I care less about what the tech is, I’m more interested in what it enables people to do; the repercussions it has on how they feel, how they behave, how they interact.

So then we can start to say ‘this is the behaviour that we see, this is the behaviour that we want to enable, what design and technological decisions do we need to make to get us from one to the other?’. 

Let's talk a little about how you put this in practice. How do you know when you’ve discovered a powerful insight about the people you’re designing for?

It’s an interesting question, and ‘powerful’ is obviously the driving word here. What ‘powerful’ really indicates is that it can drive impact.

The reality is that data is data, and until you put synthesis and analysis on top of it, you don’t have insight. We talk about research internally at Picsart as strategic, tactical or evaluative:

Strategic - what should we do?

Tactical - how should we do it?

Evaluative - how are we executing against this?


Powerful or impactful insights are the ones that carry through to all of those levels.

The interesting thing about discovering a very impactful insight is it may not necessarily be applicable straight away because of other constraints within the business. That can be challenging, but it’s where the positioning of researchers across an organisation can be very powerful.

At the end of the day, as researchers, we are here to help de-risk decisions. Product moves at pace, and it’s about understanding which are the riskiest decisions we’re making, and what the consequences of getting them wrong would be.

So for me, the most powerful and impactful insights are the ones that allow us to course correct. They do this by indicating that the path you are going down is the wrong trajectory. The earlier in the decision-making process you can recognise this, the easier it is to change things. It’s far easier to change things on paper than it is in code.

How do you ensure that insights are validated and backed by sufficient evidence to support them? Can and should you police this?

I think what it really comes down to is a balancing act between rigour and pragmatism.

Surveys are a great example of this - how many people do you need to participate in a survey? The magic number is 385, because that gives you a 95% confidence level with a 5% margin of error - whether your population is a million or 100 million. So it’s a useful number to know… but it doesn’t mean that if you don’t hit 385 your insight isn’t “valid”; it just means that you have an increased margin of error and/or a lower confidence level.
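As a side note (not from the interview itself), the 385 figure falls out of the standard sample-size formula for proportions, n = z² · p(1 − p) / e², using the most conservative proportion p = 0.5. A minimal sketch in Python, with a hypothetical `sample_size` helper:

```python
import math

def sample_size(z: float = 1.96, margin: float = 0.05, p: float = 0.5) -> int:
    """Minimum survey sample size for a given confidence level (expressed
    as a z-score) and margin of error, assuming a large (effectively
    infinite) population and the most conservative proportion p = 0.5.

    Formula: n = z^2 * p * (1 - p) / e^2, rounded up.
    """
    n = (z ** 2) * p * (1 - p) / (margin ** 2)
    return math.ceil(n)

# 95% confidence (z ≈ 1.96) with a 5% margin of error gives the "magic" 385:
print(sample_size())             # 385
# Accepting a wider 10% margin of error needs far fewer respondents:
print(sample_size(margin=0.10))  # 97
```

This is also why the population size barely matters: for large populations the formula is independent of N, which is why 385 works "whether your population is a million or 100 million".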

And so ‘sufficient evidence’ is exactly the right way of saying it, because it’s actually about confidence thresholds. Confidence that we are helping with the decision-making process, and doing so in a way which is robust enough and delivered with an appropriate margin for error.

As you become more experienced, you start to understand what is required as an output to make a product decision, commercial decision, executive decision, or whatever it might be. 

But what it often comes down to is a consideration of the repercussions of being wrong. If you’re designing a critical system for air traffic control, then you’ve got to get that stuff absolutely right. If you’re redesigning a colour picker, the repercussions of getting that wrong are not on the same scale.

So it’s a case of asking ‘what gives us confidence that this is “true”, and what is the required level of confidence for what we’re actually doing?’. If there is no confidence, then don’t do anything! No research is often better than bad or incomplete research, which only gives you a false veneer of validity.

A quick side note on “validation”: it’s a word anyone who’s worked with me knows I jump on when used inappropriately, for example “validating product or design decisions”. My job is not to validate your decision - that’s a vanity process to feel good about the decision (“yes, we did the right thing!”). My job is to help you evaluate that decision, to determine and learn from the efficacy and impact of its outcome. That forces us to be crisp about the objectives and outcomes the decision was made for, as well as nuanced and specific about how we measure and determine success.

And what’s next? How do you go about getting the most value out of this new found knowledge?

Ultimately, the value of research and insight comes when it’s operationalised into the decisions that are being made. If it doesn’t land in the consciousness of decision makers, then it’s meaningless and in danger of being a form of research theatre in many ways. 

So that handover point is crucial and also very difficult to get right. In my experience, nobody does this perfectly, and there are different levels of experience and maturity. Part of it is about not being too precious as to think that everything we do is meaningful for everyone, and ego plays a factor in that.

But there are many different factors; from legacies (‘we were always going to do this, now how certain are we?’), to the level of gravitas the discipline holds within an organisation.

There’s a danger that as researchers we overstep slightly into the solution, as opposed to giving the people who are doing that solutionising the appropriate framing - the “so what” - to enable them to do whatever is required.

I’ve found in the past that finding your ‘champions’, those PMs, Product Marketers or Designers that just get it, really helps. They understand that insights help them do their job better, and that enables us all to build more delightful, useful and usable experiences. So it’s about giving people the framework to understand why and how to use the knowledge and being mindful of how we articulate insights to different audiences - whether split by discipline, experience or seniority - in order to optimise accessibility and comprehensibility. This is how we ensure that it’s actionable.

So ultimately I think so much of product development comes down to needing to go from one state of affairs to another. To achieve this we need to make a series of decisions along the way, and insights are there to help inform, guide and enable us when making those decisions.

You can learn more about the process today’s best product organisations are following to level up their research operations by downloading a free copy of our playbook, User research is broken: A guide on how to level up your research operations, available here.

About Dualo

Dualo is an insights hub used by digital product teams to get more repeatable value from their user research and insights, so that stakeholders can make informed and timely decisions across the organisation. If you're interested in learning more, please request a demo and a member of our team will be in touch.

ABOUT THE AUTHOR
Dan Robins

I’m a design, UX & strategy lead with a passion for storytelling. Proud member of Dualo’s founding product trio. Always seeking new inspiration.
