The Two Most Important Requirements When Making a Totally New Thing

TL;DR:

Very Important: Iteration Speed, Decision Quality.

Not-so-important-as-you-might-think: Initial thing you build.

So you're going to make a New Thing. Let's assume you're going to use the Process Known as Agile.

Design > Build > Release > Adapt > Repeat

Design: You have some hypotheses, or you wouldn’t be in this business. A scan of adjacent technologies, some real-world analogs, current attempts to solve said problem, a survey, customer demand and behavioral modeling. You have your starting point.

Build: Build something. Best guess.

Release: Give it to real users.

Adapt: Learn what works, what doesn't. 

Repeat. (Except now the design part is better-informed)

Nothing new so far. If you’re doing this right, there are two major factors (within your control) that will influence how rapidly you meet with adulation. This can be proven, with math (see the toy model after the list).

  1. Cycle speed

  2. The quality of your iterative decisions
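To make that "proven with math" claim concrete, here's a toy model (my illustrative sketch, with assumed symbols, not a rigorous proof): let c be your cycle time, and suppose each cycle's decisions multiply your product's value by (1 + q), where q captures decision quality. After elapsed time T, you've completed T/c cycles:

```latex
% Toy model (illustrative assumption): value compounds once per cycle.
% V_0 = value of the initial build, c = cycle time, q = per-cycle gain
V(T) = V_0 \, (1 + q)^{T/c}
```

Faster cycles (smaller c) and better decisions (larger q) both compound exponentially, while the initial build V_0 is just a constant factor out front. Which is the whole point.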

Speed: So we need to go fast. This means we do everything fast. We work fast, we automate everything, we fire fast, we eat lunch fast, we eat breakfast… er, fast.

We are very attentive to speeding up the processes we use, adopting every expeditious technology available. Did I mention how much we <3 DevOps services?

Speed is really important. And this speed is mostly affected by Kinda Boring Stuff. Day-to-day things, and the efficiency and motivation of your people.

QoD: Quality of Decisions.

This is how you interpret what happens with your builds after release.

The quality of your iterative decisions is directly proportional to the quality of the signals you're getting from your current version, and depends on a few sub-factors. Let’s break it down. You released it to some users, right? 

  1. Are they the right users?

  2. Are there enough of them generating enough interactions to be reasonably representative of the market at large?

  3. Are you validating your adaptations based on where your users are leading you by their behavior?

  4. Is this a realistic experiment? Or are your users' motivations skewed?

Here's an example.

Once upon a time I was employed by a company that loved making displays: screens, monitors, OLEDs.

As such, my team tasked itself with looking for a new use for screens. Thanks to some recently-concluded experiments, we had a hypothesis - that a large, interactive, viewable, beautifully-designed touch screen in the home could serve as a 10x more useful and delightful digital ‘domestic ops center’. Furthermore, this could become a valuable platform.

Lemma - Definition Time

Domestic Ops Center: That place in the home where family information is registered, stored, communicated. Where memories are shared, and communication is often both informative, and emotional. A pin board, a calendar, photos, reminders, schedules.

Here is a picture of a Domestic Ops Center so I’m sure we’re all on the same page.

Examples of both ‘Real world analog’, and ‘existing solution to problem’

From a standing start, we committed to begin conscripted user trials in 2 months, and here’s how we spent those precious 8 weeks.

We quickly found a suitable prototyping device. A large tablet. We bought 20.

We set about finding and vetting our 20 test-user families. This involved running ads and surveys and conducting 30 hours of interviews, with a fair amount of discernment. It was serious and sustained work, but we wanted to make sure we had the right users: if there were only to be 20 families, they would need to be representative of the market, and uncorrupted.

Meanwhile on the development side, we came up with our plan.

We had 8 weeks, and 3 developers, to get our MVP, with a connected mobile app, in front of users.

But we didn’t plan to start building our MVP for another 4 weeks, for two important reasons:

1. We had more important work to do first

2. We didn't know what to build anyway (designers were still thinking…).

The important work was to make sure when we did start, we had a really good quality signal from our users. Additionally, this was that most difficult kind of experimental product - the geographically-distributed, internet-connected, cross-platform, high-maintenance, potentially-fiddly, duct-tape prototype hardware experiment. (Thar be dragons!!)

I knew success would crucially depend on excellent operations software.

We set to work writing our own system software. It checked for its own updates, forced installs and restarts, and would automatically and persistently launch, check, and update our MVP app. It monitored the user’s wifi connection and signal strength, and pinged us every hour to report status. We connected a missed-check-in alarm to Slack and SMS, so we knew immediately if there was any hint of a problem on any device. Plus all the analytics and DevOps: PubNub, Cloudinary. By the time we were ready to build the MVP, our platform was indestructible. Codename: Roly-Poly-Rasputin-Hydra
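To give a flavor of that system software, here's a minimal sketch of a device-side check-in loop (the endpoint, device ID, and payload fields are hypothetical stand-ins, not our actual code):

```python
import time
import socket

import requests  # third-party HTTP client, assumed available on the device

CHECKIN_URL = "https://ops.example.com/api/checkin"  # hypothetical endpoint
DEVICE_ID = "proto-07"                               # hypothetical device ID
CHECKIN_INTERVAL_S = 3600                            # report once an hour

def collect_status() -> dict:
    """Gather a minimal health snapshot for this device."""
    return {
        "device_id": DEVICE_ID,
        "hostname": socket.gethostname(),
        "timestamp": time.time(),
        # the real agent also reported wifi signal strength, app version, etc.
    }

def checkin_loop() -> None:
    """Phone home every hour; stay quiet on transient failures."""
    while True:
        try:
            requests.post(CHECKIN_URL, json=collect_status(), timeout=30)
        except requests.RequestException:
            # Swallow transient errors: the server-side missed-check-in
            # alarm (Slack + SMS) catches any device that stays silent.
            pass
        time.sleep(CHECKIN_INTERVAL_S)

if __name__ == "__main__":
    checkin_loop()
```

The server side is just the mirror image: record each check-in, and fire the Slack/SMS alarm for any device that misses its window.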

We then built and tested the software that paired the MVP devices with the companion mobile app, and we made sure it was suuuper easy for users to re-associate the devices, and invite other family members. We ensured we could remotely rectify any anticipated problem. We knew that any need for on-site tech support would be fatal to our plan!
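Conceptually, pairing was a short-code handshake. Here's a hedged sketch of the shape of it (the in-memory stores and function names are mine, for illustration, not our actual implementation):

```python
import secrets

# In-memory stores for illustration; a real system would persist these.
pending_codes: dict[str, str] = {}    # pairing code -> device_id
households: dict[str, set[str]] = {}  # device_id -> associated user_ids

def start_pairing(device_id: str) -> str:
    """Device generates and displays a short code for the user to type into the app."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending_codes[code] = device_id
    return code

def claim_pairing(code: str, user_id: str) -> bool:
    """Mobile app submits the code; on a match, associate the user with the device."""
    device_id = pending_codes.pop(code, None)
    if device_id is None:
        return False  # wrong or expired code; the device just shows a new one
    households.setdefault(device_id, set()).add(user_id)
    return True
```

Re-association and family invites can reuse the same flow: the device shows a fresh code, and any phone that claims it joins the household.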

Even when we did start building the user-facing app, any resource decision that pitted the robustness of the platform software against the user app was heavily biased in favor of the platform. Until the devices were deployed, the quality of the feedback channel was paramount.

After 8 weeks of development we drove hither and thither (only more optimally-routed) around the Bay Area and installed 20 wall-mounted, heavily-modified prototype devices, complete with a beautiful (if rudimentary) MVP and iOS app. We had affixed artsy white matting onto the front and picture-hanging clips on the back. We had a bag of extension cords and multi-plugs, wall fasteners, screws, wire, a hammer, and a drill. We took off our shoes, we moved tables around, we climbed on chairs, and we drilled a surprising number of holes in walls. These were definitely a new (very prototypical) thing!

Then, and only then, could we start a meaningful iteration process. Because we knew:

  1. We had the right users

  2. We would know whether or not we were generating enough interactions, and we had significantly reduced the chance of false negatives.

  3. We could make decisions based on what users did, IRL. Not what they said or didn't say. 

Then we started running daily sprints on the user apps.

We were confident that our platform and feedback channel were solid and well-maintained, so we could step on the gas with user features. We deployed well over a hundred versions of our apps in the next 5 weeks. Here are a couple of illustrative stories from the trial period:

One user said she loved being able to post reminder messages to the device from her phone, but didn’t like not knowing whether they had been read back at home. Within 4 hours we pushed a feature where tapping on the message tile on the display sent a ‘Message read!’ notification back to the phone-of-origin. It was immediately very popular with all users.
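Mechanically, that feature is small: a tap handler on the display routes a receipt back to the phone that posted the message. A sketch (the push helper and field names are hypothetical stand-ins for whatever push service you use):

```python
def on_message_tile_tapped(message: dict) -> None:
    """Display-side handler: tell the phone-of-origin its message was read."""
    send_push(
        device_token=message["origin_phone_token"],  # hypothetical field name
        title="Message read!",
        body=f'"{message["text"]}" was seen on the home display.',
    )

def send_push(device_token: str, title: str, body: str) -> None:
    """Stand-in for a real push call (APNs, FCM, PubNub, ...)."""
    print(f"push -> {device_token}: {title} | {body}")
```

Four hours from user comment to deployed feature is exactly the kind of payoff the earlier platform work bought us.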

Another time we deployed a feature suggested by a user. Not only did no one use it, even the person who suggested it didn’t use it. In total, that feature consumed 2 hours of developer time before it was deemed pointless and removed, proven useless by actual (non-)usage.

By the end of the trial we had a product significantly divergent from our MVP, but people loved it. Here are some quotes from the user exit interviews:

“I loved it”

“I would like to buy one, where can I buy one?”

“You’re leaving a small hole in my wall, and a big hole in my heart”

In summary:

If you're going to synthesize a product out of thin air, Speed and Decision Quality are more important than your initial guess. And they're within your control.

James Flynn