Confronting Assumptions & Reducing Risks

I have just got back from App Promotion Summit London – one of my favourite events of the year – which the very smart James Cooper (an alumnus of the Academy) has been running for the past 3 years. Over-subscribed and full to the brim – even on tube strike day! James gave me a great opportunity to run a workshop where I got to try out a new framing for my campaign to keep people at the centre of digital innovation. Through my work with Pearson I have recently been introduced to the language of “Experiments” and “Experiment Boards”. It was a brilliant opportunity to try out a new tool – I am pleased to say that it worked well and I shall definitely be including it in my session at the next Academy. Thanks to all the participants of the workshop for helping to validate my own experiment, and to Pearson for the fresh thinking. An experiment within an experiment – how exciting!

The job of an Experiment is to test out things that we believe to be true today so that we can save time and money tomorrow. 

APS London 2015 – some of the workshop participants: @fleurelliott, @mercerjamie (me: @jewl), @wongston, @designrichly, @vestorach

We start in the same place: the challenge is first to list out all the assumptions you have made, then to identify which are the riskiest. Work out where those assumptions can be corroborated – in other words, where you can find evidence to reduce the risk. Some of that can be desk research: market stats and proof points already established by “similars”. The riskiest assumptions, though, are often those about how people feel and how they are going to behave, and for those you need to go out and gather the evidence yourself.
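As a rough illustration of that first step (this sketch and its scoring scheme are my own, not something from the workshop or the Experiment Boards), you could jot down each assumption with a score for how much damage it would do if it turned out to be wrong and how much evidence you already have, then tackle the highest-risk ones first:

```python
# Rank assumptions by risk: high impact if wrong, little supporting evidence.
# The 1-5 scoring scale and the example assumptions are invented for illustration.
assumptions = [
    {"text": "Users will open the app every day",    "impact": 5, "evidence": 1},
    {"text": "The market is big enough in the UK",   "impact": 4, "evidence": 3},
    {"text": "People will pay for the premium tier", "impact": 5, "evidence": 2},
]

def risk_score(assumption):
    # Riskiest = would hurt most if wrong AND we have the least evidence for it.
    return assumption["impact"] * (5 - assumption["evidence"])

# Print the assumptions in order of risk, riskiest first.
for a in sorted(assumptions, key=risk_score, reverse=True):
    print(f'{risk_score(a):>2}  {a["text"]}')
```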

Enter the “Experiment Boards”, listed below in “References”. Having studied these, I have devised this short list of questions to help with the definition of the experiment. Behind each one is a discussion, of course.

1 Goal: What is it that we’re trying to learn / prove? Have we assumed a customer problem or behaviour?
2 People: Who is our customer / user? What are our recruitment criteria? Where will we find them?
3 Logistics: How are we going to conduct our experiment? When? Who will carry out the experiment? Where?
4 Measurement: What will make our test a success?
5 Outcome: What did we learn?
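To show how those five questions hang together, here is a minimal sketch of an experiment written down as a single record. This is purely my own illustration – the field names and the example are invented, not an official Experiment Board format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """One card on the board, mirroring the five questions above."""
    goal: str                      # 1 Goal: what are we trying to learn or prove?
    people: str                    # 2 People: who is the customer, and how do we recruit them?
    logistics: str                 # 3 Logistics: how, when, where and by whom is it run?
    success_criteria: str          # 4 Measurement: what will make the test a success?
    outcome: Optional[str] = None  # 5 Outcome: filled in afterwards - what did we learn?

# A made-up example for illustration only:
street_interviews = Experiment(
    goal="Tourists struggle to decide where to eat near a landmark",
    people="10 tourists, recruited on the spot in a busy square",
    logistics="Face-to-face interviews with a uniform script, run over two afternoons",
    success_criteria="At least 7 of 10 describe the problem unprompted",
)
```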

Here is the presentation I used to facilitate the session; it includes the design of the experiment.

I asked the group to offer up case studies based on the products they were working on, and to talk about the assumptions they had made and how they could test them. They then got the opportunity to work with others in the room to help define their experiments.

Some great examples came from Rich Brown, Co-Founder at I know this great little place in London. They wanted to test their proposition and see whether the appetite was there, so they put up a Facebook Page and a single pre-sign-up landing page with a “free forever” message. They managed to get 40,000 Facebook fans and 110,000 pre-launch sign-ups. Rich also talked about a small Facebook advertising campaign that he ran and the strong results it delivered.
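A landing-page smoke test like that only works as an experiment if the “Measurement” question is answered before it runs. As a hedged sketch (the threshold and numbers here are invented, not Rich’s actual figures), the pass/fail check can be as simple as:

```python
def smoke_test_passed(visitors: int, sign_ups: int, target_rate: float) -> bool:
    """True if the pre-launch sign-up rate meets the target agreed before the test."""
    return visitors > 0 and sign_ups / visitors >= target_rate

# Invented numbers for illustration: agree the target up front so the result
# cannot be reinterpreted after the fact.
print(smoke_test_passed(visitors=2_000, sign_ups=130, target_rate=0.05))  # True (6.5%)
```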

Simon Wong, Marketing Manager at What Now Travel, talked about how he can frequently be found in Leicester Square, talking to tourists about features and showing his App, using well-constructed, uniform research scripts – like it!

The specific assumptions that were offered up to work on included:

  • Have we focussed too closely on one target segment so that we may alienate others through our product positioning, marketing and design?
  • How do we know that our App will be habit-forming (necessary for the business model) rather than just used as a one-off?
  • Have we chosen the right feature to major on? It felt like the decision had been made without data.
  • Will what worked in one geographic market translate to another?

Attendees found it useful to get help from others in the room to work through the design of their experiments. One of the most challenging parts was breaking the “Goal” down into a component small enough to test. The group realised that a number of experiments may need to be defined and carried out, and so they got involved with breaking down assumptions to a more granular level. There were deep discussions about how to design experiments to check assumptions made about people’s sentiment – but that is another session and blog entirely.

Talking to users / customers face to face is a good way to crack many of these experiments – the presentation also includes a checklist for carrying out good customer conversations. The point of this particular session is that here is an approach that makes you stop and question your assumptions. It can be used at any point in the product development process, but it is best to start early on (yes, before anything is even built) and then keep checking yourself as you move forward.

References:

As referenced above, this is an extract from The Mobile Academy, for innovators who #needtoknowmobile. The next course starts 1st October and runs for 10 weeks, on Tuesday and Thursday evenings. Industry experts deliver practical sessions in business, design and how to work with mobile technology. Currently £100 off with the code “Early”.
