If no one reads, oh.
If someone reads and doesn’t care, oh…
If someone reads and hates it, arrrrr…
If someone reads and loves it, hmmmm…
Wolfram launched their Connected Devices Project.
I find this interesting not for what it is – databases are really not interesting – but for what it makes possible. When services can self-configure, products become easier to use because the user doesn’t have to configure them.
Anything from wiring your own plug to fitting the batteries to connecting to the wifi. Every extra step is annoying, but with the internet of things, things are going to get out of hand. Who wants to configure 200 lightbulbs? What happens when one disconnects?
Self-configuring stuff is still a pipe dream, even on the web. Services like Zapier.com and IFTTT help. Standards like Bluetooth and wifi are probably the best examples of services “discovering” and configuring themselves. I hate to think that a standard is needed to coordinate between things, so they can use each other’s services, find each other and do stuff without you having to configure them. I haven’t found such a standard… does any exist?
This would be amazing. You can imagine throwing a load of things in a room and have them be inter-aware. Able to see and talk to each other in a meta way. An analogy would be when first connecting to the internet (which I did in the ‘90s). It was ok, so long as you wanted to read quite a few manuals. You had to learn a little about DNS, routers and how to configure them. I remember configuring a router over telnet.
For the number of devices to become a reality, they need to work out how to work together. At the moment, it’s like a class of screaming children all wanting “my config, my config, feed me config”. It’s too noisy.
Better to have a quiet class of children who help each other find the coat racks. Actually, maybe that’s an analogy worth following.
The IoT of children…but not like that.
When a new child joins a class they usually have a “helper” or “buddy”. The IoT needs the same thing. You need to be able to nominate IoT devices as IoT-buddies. This is simply somewhere to copy config from, and config could cover wifi details, privacy settings and the like. If we use some kind of traceability, you would know where each configuration has come from (i.e. who was the buddy who taught you to do it like that) and where it went to (who did you teach to put your coat on the floor?).
In fact… to bring in another analogy… maybe we should view the configuration of the devices as like Hypertext. The configuration of my scales learnt from the configuration of my laptop, or my fridge, or …. and so on. The privacy settings of my thermostat were learnt from my doorbell. And when I tell my doorbell that my brother is no longer welcome (sorry..), the doorbell tries to broadcast the update to all the devices it taught.
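The buddy-and-provenance idea can be sketched in a few lines. To be clear, this is a made-up illustration, not any real IoT API; every class and method name here is invented:

```python
class Device:
    """Hypothetical sketch: devices copy config from a 'buddy' and
    remember where each setting came from (provenance)."""

    def __init__(self, name):
        self.name = name
        self.config = {}     # setting -> value
        self.taught_by = {}  # setting -> name of the buddy it was copied from
        self.taught = []     # devices that copied their config from us

    def learn_from(self, buddy):
        # Copy the buddy's entire config, recording the source of each setting.
        for key, value in buddy.config.items():
            self.config[key] = value
            self.taught_by[key] = buddy.name
        buddy.taught.append(self)

    def update(self, key, value):
        # Change a setting, then push the change to every device we taught.
        self.config[key] = value
        for pupil in self.taught:
            pupil.update(key, value)


doorbell = Device("doorbell")
doorbell.config = {"wifi": "HomeNet", "blocklist": []}

thermostat = Device("thermostat")
thermostat.learn_from(doorbell)  # privacy settings learnt from the doorbell

doorbell.update("blocklist", ["brother"])  # propagates to everything it taught
```

The interesting part is the `taught_by` map: it is the “hypertext” link back to wherever a setting was learnt, so a bad configuration can be traced to its source.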
There are many flaws to this idea but perhaps some goodness as well. The point is that for something like Wolfram Connected Devices Project to be really useful, we need the devices to use it directly. That’s no mean feat but it’s not an impossible feat since there are lots of analogies and use cases to learn from.
Over this year I’ve been trying to adopt a growing number of lean startup techniques into my projects. I often try to explain how this works to people. Here are the parts which define the method for me.
Your job is to get something built. To turn the vague, wobbly ideas of the client, team or your business partner into something solid and touchable (or clickable), you have to do a heap of work. Loads of work.
The problem is, if that original idea is wrong then you’ll spend a lot of time building the wrong thing. You’ll “achieve failure”. That uncertainty, the difference between what should be built and what everyone thinks should be built, is the uncertainty of doing business these days.
Lean startups are designed to cope with extreme uncertainty. A startup is inherently highly uncertain. You have no customers, no revenue, no history. This means you don’t know what people really want from you, nor what they’ll pay you. The uncertainty falls as you do market research, but market research never turns a hypothetical “I would buy that” into a concrete, revenue-generating “I will buy that now”.
But the certainty that a specific person will buy something isn’t enough to base a business on. I could build the most amazing product, set up the entire company and sell it to the 20 people who said they’d buy it, only to find that no one else will buy it. I got my 20 sales, made a couple of hundred dollars and… that’s it.
But my market research was entirely accurate: 20 people will buy it. My inference from the market research was entirely false: that is not a representative sample.
And then there are products which simply can’t be understood by people. There are many things that people wouldn’t buy or use until they exist such as Facebook, SMS, Television, email.
If there is uncertainty in the project, the lean startup method will help minimise the waste associated with reducing uncertainty and the associated risk. There’s a lot of negative stuff in that sentence: risk, misjudgement and waste.
The risk is that you (or you client or team) have misjudged the market. A huge misjudgement means you build something that people simply don’t want while a smaller misjudgement means you don’t quite hit the spot and get fewer sales than you could have.
Importantly, misjudgement doesn’t mean you shouldn’t bother at all. It means you need to learn a bit more. We’ll come back to this later.
We all want to minimise waste because there’s nothing to be gained from waste but how do you move forward from this crippling position of uncertainty, risk and waste-aversion?
This brings us to the 2 steps:
- You do less. Much less.
- And you do it sooner.
How the Lean Startup method works
A traditional project starts with a clear goal in mind and a series of steps to get there. The goal remains fixed throughout the process: this is what we’re trying to reach, because we feel sure that there’s gold over there in them thar hills.
While some project methodologies will break the project down into short bursts of work (e.g. agile, scrum) and others are monolithic, the goal remains the same. Even if your development is split into short week-long sprints, if it doesn’t get to the customer for 12 months you’ve learnt nothing about the correctness of your idea.
The defining feature of non-lean projects is that they take the goal as correct and best, never questioning it between the point of departure and the point of arrival. The goal can remain the same for years, always thinking that this way of delivering this product to these customers for this price is right.
It might be. But it might not.
(As an aside, you often hear bitching and moaning within companies about the way things are done, the fact that customers don’t like something, or some product that exists purely because the MD loves it. These kinds of theories should be aired, tested and proved or disproved early so we can all move on. More on that later…)
I visualise the traditional process as a line:
Once you’ve arrived at the goal, it might turn out that there isn’t as much gold as you thought. If you’re lucky, maybe there is a little gold. If you’re unlucky, there is none.
The point is that: you didn’t question the goal during the journey.
Lean startups work differently. Rather than having a single plan to get somewhere, you start with a series of hypotheses which can be validated or rejected. This is a stark difference: you have a business hypothesis not a business plan.
Instead of racing off in one direction, putting 100% of your yearly development budget behind a hypothesis, you come up with some tests which get you closer to the best goal. You can visualise this as shorter tests and movements towards a better end:
By testing and validating more often, you reduce the risk of entirely missing the best goal. Another way of viewing this is as finding a global maximum vs finding a local maximum.
Imagine being somewhere hilly and trying to head towards the top of the highest hill. If you choose your route purely on what you can see from where you start, you’ll head for a local maximum – the highest point you can see. But when you’re actually walking around hills trying to find the top, you use continuous feedback: you look around. The lean method works in the same way. Rather than vehemently committing to a specific route to a pre-defined (possibly imagined) top of the hill, we look around at each stage and test what we thought at the outset.
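The local-vs-global point can be seen in a toy hill-climbing sketch. The landscape and all the numbers below are invented purely for illustration:

```python
import random
from math import exp

def hill_climb(f, x, step=0.1, iters=500):
    # Greedy: only ever move uphill from where we are, so we stop at the
    # nearest peak - a local maximum, not necessarily the global one.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if f(candidate) > f(x):
            x = candidate
    return x

# A made-up landscape: a small hill at x=0 and a much bigger one at x=5.
def landscape(x):
    return exp(-x ** 2) + 3 * exp(-(x - 5) ** 2)

random.seed(0)
stuck = hill_climb(landscape, x=0.2)  # climbs the small hill it can "see"
lucky = hill_climb(landscape, x=4.0)  # starts near the big hill, so finds it
```

Starting from `x=0.2`, the climber settles on the small hill near 0 and never discovers the much bigger one at 5, which is exactly the fixed-goal trap: where you end up depends heavily on where you committed to looking at the outset.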
Getting started with the lean method
Every product launch is a hypothesis: people will buy burgers for £2 from this street corner. Write down a fact and what you infer from it. For example:
– Thousands of people walk past this point and people like burgers. Therefore, we should be able to sell burgers.
– We have a free feature which 10% of our customers use but we think it’s why 80% of them renew
– If we build this app, once it’s on the app store thousands of people will (i) find it on the app store and (ii) download it and (iii) use it and (iv) buy the in-app purchases
– If we offer direct debit as a payment method, our sales will increase
There will be a fact in there – something you’re sure of. But there will also be something you’re unsure of which is conditional based on (usually) a lot of work happening first.
if: this huge project is done then: this good thing will happen
It’s very tempting to dive into that huge thing because it’s a clear task. You can be a huge-project-martyr. Woe is me, for I have 6 months of late nights, yada yada. Large projects and long lists of admin and tasks are only good so long as the outcome is worth it.
There is something comforting about starting a large project. It’s an exciting distraction from the unknowns and often (speaking from personal experience) we tend to fill up time with the stuff that we’re good at. And that stuff isn’t necessarily the stuff that matters.
(Incidentally, I would view this as a reason why there’s a gap between what’s technologically possible and what exists as products. It’s too easy, as an engineer or product developer to focus on what it’s satisfying to build rather than what’s going to be a good product in the market… But that’s for another day.)
Where to start and where to go (follow the numbers)
The first step is to produce a minimum viable product.
The minimum viable product is not what you think
It is not really a “product”.
It’s the minimum you can produce to prove your hypothesis.
The canonical example is google ads and a landing page. If users click your ads and try to buy from your landing page, you have real proof that they want what you’re selling.
With the MVP out there, it’s time to measure. Knowing where you really are is key and this is best done using numbers. Cold, hard numbers which tell you whether you’re really improving or in what way you’re improving.
So, if you want to manage it, measure it. For this reason, lean emphasises measurement and feedback from the outset. As you’d expect from a method built on testing hypotheses rather than free investigation, you state your measures and metrics at the outset rather than conveniently choosing the pretty numbers after the event.
When the numbers come back good… wonderful. Just keep them going in the right direction. When they come back bad, perhaps you need to try something else or to measure something else.
Exactly what you measure matters. Each case is different but there are certain consistencies, summarised in Dave McClure’s Startup Metrics for Pirates, so named because of its AARRR initialism. This gives us:
- Acquisition. Users come to the site from various channels. How much does that cost you and how rapidly do they come through?
- Activation. When they come through, what percentage do anything? What percentage activate some kind of account with you?
- Retention. And how many of those stick around?
- Referral. And how many of those refer other users?
- Revenue. How much revenue do the activated and retained users generate?
This avoids what are termed “vanity metrics”. Vanity metrics are the big exciting numbers which have no bearing on the project or company’s success, such as total number of sign-ups. You’ll notice that the numbers above focus on movement from one stage (e.g. unknown user) to the next (e.g. acquisition). By studying the rate of change through these stages, you know that you’re definitely going in the right direction.
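To make the stage-to-stage idea concrete, here is a tiny sketch with invented counts. The point is that the interesting numbers are the conversion rates between adjacent stages, not any single big total:

```python
# Hypothetical funnel counts for one month (all numbers invented).
funnel = {
    "acquisition": 5000,  # visitors arriving from all channels
    "activation":  1500,  # created an account
    "retention":    600,  # still active after 30 days
    "referral":      90,  # referred at least one other user
}

# The vanity metric would be the 5000; the useful metrics are the
# conversion rates from each stage to the next.
stages = ["acquisition", "activation", "retention", "referral"]
rates = {}
for frm, to in zip(stages, stages[1:]):
    rates[f"{frm}->{to}"] = funnel[to] / funnel[frm]
```

Tracked over time, it’s these rates (30% activating, 40% retained, 15% referring in this made-up month) that tell you whether a change actually moved users through the funnel.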
Viewing users in this lifecycle gives you many things, which I’ll cover in more detail in another post. But two important and much clearer views on what’s really going on are:
Firstly, you get a clearer picture of how much you’re growing and where that growth comes from. For example, knowing that most of your revenue comes from new users and you have no referrals tells you that acquisition is hugely important (and your cost of acquisition matters) but also tells you that there might be an upper limit to the number of people you can reasonably reach given your budgets and the reach of your advertising.
Secondly, these metrics give you the shape of your future growth rather than the size of the current growth. Sudden uptake by 20 people followed by 1 referral per person per year means that in 5 years you’ll have 320 people. Not bad, if the revenue part of the numbers (above) is high enough per customer to pay your costs. But uptake by 20 people with 1 referral per person per month means that in 5 years you have 571,220 people, which gives you the opportunity of making less per user and perhaps some more interesting pricing options.
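The compounding behind those two figures can be checked in a couple of lines. One model that reproduces them is to treat the first year as the initial uptake and then compound annually for the remaining 4 years:

```python
def compound_growth(initial, referrals_per_person_per_year, years):
    # Each person recruits that many new people per year, so the
    # population multiplies by (1 + referrals) annually.
    return initial * (1 + referrals_per_person_per_year) ** years

# 20 initial users, 1 referral per person per year, compounded for 4 years:
yearly = compound_growth(20, 1, 4)    # 320
# 1 referral per person per month is 12 per year:
monthly = compound_growth(20, 12, 4)  # 571,220
```

The takeaway is how violently the referral rate dominates the outcome: the same 20-person start ends up three orders of magnitude apart.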
This isn’t new
Since reading up on the lean startup methodology, I’ve found that this isn’t a new thing.
In Doing Capitalism in the Innovation Economy William Janeway notes that “cash and control” are critical to any firm. This is: access to cash when it’s needed and the ability to use that cash to try out new things and get out of a difficult situation. (There’s quite a lot more to it but that’s for another time.) The similarity here is that the lean method puts huge emphasis on using less cash upfront, refining the product or company and only scaling up when you’ve hit the product sweet spot.
Another interesting and similar case is in the Innovator’s Dilemma [Ref]. In this, [some firm..?] went very wrong because they bet (almost) the company on a single new type of disk drive bringing in lots of new revenue. In contrast [another firm – who?] tested the water first by asking customers if they would buy something. Clearly the latter’s case is more lean than the former.
If you want to read more
Start with the Lean Startup book: The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses
I’ll be posting soon on some maths behind the metrics and case studies of lean startups.
I wrote this for Dot Net magazine’s blog, but it’s hard to find there, so here it is. Easy to find. Originally published at http://www.netmagazine.com/opinions/heffalumps-and-user-experience
Piglet, I have decided something
You’re Pooh. Christopher Robin is anyone with some half-baked idea. The Heffalump is your customer.
Don’t let Christopher Robin trick you.
“I saw a Heffalump today, Piglet,” he said carelessly. Piglet says he’s seen one and so does Pooh, both undoubtedly lying.
Pooh later looks round to see that nobody else is listening and says in a very solemn voice: “Piglet, I have decided something.”
Lacking any real idea of what the Heffalump would do, Pooh hits on the idea that _he_ is a good model of the Heffalump’s behaviour.
“Suppose,” he said to Piglet, “you wanted to catch me, how would you do it?” What follows is the devising of plans, the building of traps and the catching (or not) of a Heffalump.
By means of a trap. And it must be a Cunning Trap.
“We are finding a Heffalump and I know how Heffalumps think” is what gets said to justify someone’s product idea, and we just have to believe Christopher Robin. Later on in the project, Pooh has been indoctrinated and the hunt for the Heffalump has spread.
You can recognise a Heffalump. They get talked about a lot more than anything which is actually _known_. We go around the room adding layers of convincing, but empty justification:
“I think the user would expect…”
“We think our users want…”
“If it were raining already, the Heffalump would be looking at the sky wondering if it would clear up…”
The Heffalump is the platonic customer who has faults and strange behaviours, all of which are completely understood. I, Pooh, understand what our Heffalump wants and how they think. I can find us a customer.
We shall build this Cunning Trap – this Very Deep Pit and catch a Heffalump. We shall build this app and catch ourselves a huge load of users.
I think Heffalumps come if you whistle
If only Pooh knew the difference between Heffalump and fact. If he could test his ideas before we spend months building this Heffalump trap, we could all feel better about what we’re working on and the muttering of “it’ll never work” can be silenced.
Pooh thinks that changing the registration will sell more licenses? Let’s test it.
He thinks that 50% of users hitting our site from organic search don’t know how to buy? We can test that.
Things can be tested.
A/B testing, proper UAT, bits of paper and guerrilla testing give product designers enough information about what will and won’t work to keep the project team from building Very Deep Pits.
You can test the wording of your campaign on a small landing page. The usability can be compared in UAT sessions and when it comes down to squabbling just use A/B testing to arbitrate between the designer and the MD.
I’ve come to see this as spotting the difference between a Heffalump and a fact. A Heffalump is something that fills up a meeting, that everyone has an opinion on and that we can all talk about endlessly because _nothing can be proven_. By definition, everyone’s opinion is as right as everyone else’s.
It is opinion. A Heffalump is a hunch.
A fact can be acted upon. It’s the knowledge that we _need_ to change the registration or that the type faces genuinely confuse users.
To get fact from Heffalump you ask specific questions and run small tests: What can we ask 5% of our users which will answer this? What AdWord campaign can we run as a test for the new offer? What A/B testing can we do which tells us people want free delivery instead of more customer service?
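As a sketch of what arbitrating with numbers rather than opinions can look like, here is a minimal two-proportion z-test for an A/B test. The conversion counts are entirely made up:

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B really convert better than A?
    Returns the z-score and a one-sided p-value (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # P(Z > z)
    return z, p_value

# Hypothetical numbers: 120/2000 bought with the old page, 160/2000 with
# the new registration flow.
z, p = ab_test(120, 2000, 160, 2000)
```

A small p-value is a fact you can act on; a large one sends the designer and the MD back to generate a better hypothesis, rather than a longer meeting.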
Why Heffalumps hardly ever get caught
The Heffalump hypothesis doesn’t mean you shouldn’t take risks, but that you shouldn’t lie to yourself about what is and isn’t a fact. A risk is a wonderful thing. An idiot is not.
There’s still room for the stupid, insane and widely ridiculed ideas, but too many people spend months on small and provably pointless changes, leaving the team no time, budget or willpower for the big ideas.
If a Heffalump idea comes up, say what it is. Turn it into proven fact before hours in meetings and days on the project are wasted so you can save more time for bigger things.
These Heffalump ideas – new products or changes to apps – can often be tested in isolation. So much of what is a product comes down to UX and relatively simple application changes, but the heavy lifting happens when you move all your users to this new idea.
Read anything on lean or agile business models and there’s the recurring theme of reacting to facts: test, refine, repeat. Everyone should do this, not just the trendy startups.
Use paper, UAT, MVT and heaps of stats. You should live knee deep in analytics and mock ups so when the project starts in earnest, you’re as close to right as possible.
And when you realise you’re a little wrong, do a test of what you think is more right before changing everything.
But don’t go hunting Heffalumps.
When I started coding no one mentioned the HUGE amount of time I would have to spend debugging. Really huge. Vast. Really big. In fact, it’s most of the time. Most of the time you spend coding is debugging.
When I started debugging, I started collecting everything I could on how to do it better. Surely I’m not the first person to debug? This little site is the collection of my notes on the subject of debugging. The techniques here are useful daily to me, to the extent that if you don’t know these I really don’t understand how you get through a day at work.
Hopefully it’s useful to you.
Most of the ideas are taken from books on debugging listed on the site.