UX design methods are cheaper, faster, and more effective at gathering data than “build, measure, learn” — but agile is designed not to take advantage of them. — Pavel Samsonov
Lowercase-a agile has become synonymous with software. A lot has been added on top of the original manifesto, but even without a Scrum certification, research into the Toyota Way, or a Wikipedia search for “lean”, everyone who has been in the industry for more than a few months understands the basic concept as it is practiced today: we deliver value to customers through working software, so reducing the scope of deliverables puts software in the hands of our users more quickly, which in turn creates more value.
This simple concept stuck because it’s very appealing to both programmers (avoiding the goalie problem and scope creep) and project managers (improving time to revenue). Now, every product org will tell you that they are Agile, or at least that they have been undergoing an Agile Transformation for the last ten years. But agile is a development method, not a product design method, and allowing “agility” to dominate how products are conceived has created a glaring contradiction that has persisted, unresolved, for over a decade.
Looking at agile through the eyes of a designer, it’s easy to spot the problem: the dogma of shipping small stuff fast so that you can gather feedback on what to do next more quickly. Agilists believe that finished software is not only the best way to deliver value, but also the best way to learn and iterate.
In reality, it is probably the worst way.
While software is a prerequisite to value, the value is only created when users use it to achieve their goals. Only software which is fit for purpose — the right software — is valuable. But the way we learn is by being wrong. By trying to deliver value (be right) and learn (be wrong) at the same time, we end up accomplishing neither.
To succeed at both learning and delivery, we must untangle them from one another, with the help of design methods.
Agile iteration, or waterfall with extra steps
“There’s never enough time to do it right, but always enough time to do it over.” — John Bergman
Iteration is perhaps the best-known pan-agile best practice. The broad strokes of vertical slicing, the lean skateboard, or the Mona Lisa sketch are the same: focus our efforts on quickly solving one problem, and use feedback from that release to guide what we need to do next.
While iteration certainly gets code into production more quickly (because it starts small), it doesn’t help much when it comes to delivering value to customers. Agile pays lip service to “build-measure-learn”, but in practice conversations always revolve around build velocity (how can we get the thing out the door faster?) rather than measure velocity or learning velocity.
In other words, agile faithfully reproduces Waterfall’s core tenet: the idea that the only real measure of progress is producing an output to be handed over (in the dev team’s case, the software product that goes to the customer). The iteration frameworks reinforce this notion: we can sketch the Mona Lisa, but we mustn’t erase any part of it, only add detail.
Unfortunately, though the “Big Design Up Front” phase has been eliminated, these vertical slicing models do not provide any alternative design phase. They center the entire conversation on a very narrow piece of the development process: what happens after we have defined the problem and the broad-strokes solution, and are discussing only the sequence in which that solution should be delivered. The question of whether we understand the problem is swept under the rug: the important thing is to build, and then all our questions will be answered by data.
The weakness of this method of data-gathering should be immediately apparent to any researcher: regardless of the leanness of our MVP, we are still testing one idea at a time, in sequence. While it is certainly cheaper and faster to launch a failed “skateboard” than a failed “car”, it is still orders of magnitude slower and costlier than almost any other method of data collection. Prototyping can only ever answer the question “in what ways were we right or wrong?” while the question you want to answer at this stage is “what are the ways in which it is possible to be right or wrong?”
To answer that question, and truly take advantage of iteration, the team must be willing to set learning rather than outputs as their main goal. Instead of drawing one sketch and incrementally building upon it, they should draw a hundred sketches, learn what not to draw, and then be willing to throw them all away to apply those learnings.
As practiced in the industry, agile methods create a system of incentives that accomplishes the opposite.
The cult of validation
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” — Upton Sinclair
While you may have heard about “design within agile,” you will struggle to find “business within agile”: no agile transformation allows companies to make decisions on a two-week cycle. The aspiration of freely pivoting every sprint as you gather data meets the reality of quarterly roadmaps and annual budgets. Teams can’t just pitch the “skateboard”; they need to show the roadmap towards the “car” version.
Now imagine that the team’s proposal has been approved, and they’ve spent a few sprints (and plenty of FTE hours) developing one MVP. They put it into production and are beginning to gather data — but the roadmap already has the next set of features for them to work on. Most likely, they are also tracking trailing indicators that take a long time to be adequately reflected in their dashboards.
The team is learning at a much slower pace than they are delivering, which means they are incapable of effective action. By the time the team has gathered enough data to conclude that they built the wrong thing, they may be several sprints into delivering that next phase. Now they must either work double-time to simultaneously fix the problems and deliver the promised features, or renege on their commitments to stakeholders (this is known as a career-limiting move).
Rather than enable agility, the team’s approach to learning has locked them into an outdated commitment, which any solution must be contorted to fit. And if that commitment was sufficiently off the mark, they run into a “can’t get there from here” problem — there’s nothing to be done but start from scratch or keep a zombie project going forever.
In any organization without extreme degrees of psychological safety, the agile ideal will not be upheld. Instead of identifying mistakes and fixing them, teams are best served by quietly ignoring them. To conceal the resultant user experience debt, teams trade out their user research for validation — finding the ways in which they were right, to prove that everything is fine.
There’s an easy, two-question test to see whether your team is doing research or validation. Before starting work on a new product, ask yourself:
- How would we know if we are wrong?
- What will we do if we are wrong?
Validation takes place after the work has already been done, so the only possible answers are “we can’t” and “nothing.” Regardless of how honest the research itself is, the context of validation (development bound to the feature roadmap) ensures that the only insight permitted to be drawn from the data is “we are doing the right thing.”
Agile within design
“The skill of drawing is of making thousands of pencil marks and erasing all but a few of them.” — Joana Weber
Agility is not how quickly we move forward, but how quickly we can turn. Recall the twin objectives of agile: value for the customer and learning for the team. Shipped code is only valuable insofar as it helps the customer accomplish their goals, so we must start with the latter: the learning, the being wrong. And to make sure that being wrong doesn’t cost us our jobs, we must be wrong in a way that is palatable to the org: cheaply, quickly, and safely.
Once we let go of “time to shipped code” as the governing metric of success and turn to “time to customer value,” the feedback loops of the design process become the ideal methodology for achieving our goal. Designers are not limited by the binary of working/not-working software, and their artifacts do not require any minimum level of conceptual fidelity. And because these artifacts are cheap, they are also not required to look anything like the finished product. UX methods run the fidelity gamut, from storyboards to higher-fidelity provotyping or non-finito prototyping, catalyzing new ways of thinking about the problem without constraining the exploration by committing to any one concept too early.
This also means that we need to rethink how our feedback loops fit into our timeboxes. There are many possible points of failure within a single timebox: the discovery didn’t size the opportunity right, the design didn’t address the opportunity well, the delivery didn’t match the design closely enough. To identify and avoid each of these pitfalls in turn, we need multiple feedback loops within a single timebox, tracking clearly defined leading indicators to understand when we need to stop and try a different approach to achieving the desired outcome.
Regardless of how many developers you have, this kind of agility is impossible if you are shipping pieces of software on a linear roadmap. But user research methods make it relatively trivial to test a dozen options at a time to independently nail down the problem frame, the solution hypothesis, and the necessary capabilities so that the agile delivery team can do what it does best.
This is a very different approach to producing software than the one most organizations practice today. But the true “ROI of design” cannot be attained by sticking design (a decision-making process) into agile (a delivery process).
Companies that put agile within design will run circles around those who don’t.