Why we fail

Review of the book “Why we fail: Learning from Experience Design Failures” by Victor Lombardi.

Books · Reviews · Design

Ligia Fascioni

2/6/2025 · 7 min read


A well-designed book, with a beautiful cover, and about a subject I love: how to resist?

I came across “Why we fail: Learning from Experience Design Failures” by Victor Lombardi, with a foreword by none other than Don Norman.

The author has been working in digital product development for over 30 years. To write this book, he conducted extensive research and selected 10 examples: four websites, two services, one software package, one operating system, and two hardware-based products (likely embedded software).

I really liked the selection criteria: all of them were innovative products (not attempts to copy or improve upon a competitor), and none failed due to incompetence (which is very common).

Right at the beginning, Lombardi makes one important thing clear: in this book, he is only considering failures in customer experience; in other words, the product failed to provide a good experience.

Bad experience = poor design?

Okay, but isn’t a failure in customer experience just a more recent term for poor design?

He explains that this is not the case: in the past, when products were simpler, bad design could explain many failures.

However, as products become increasingly complex, a product can have a good physical design and function perfectly well (like a smartphone), yet that is no guarantee that users will enjoy using it.

The reason is that new products are complex and multifaceted, and our experiences and interactions are emotional and subjective.

And would the blame for these failures be on poorly trained or incompetent designers? Victor says that nowadays the answer is not so simple.

A product can be loved by one audience and detested by another. An aspect of the experience that the designer believes is vital, such as a website always being available, may not be enough to beat a competitor whose site goes offline multiple times for maintenance. Or two similar products may produce similar experiences, yet one fails due to social or cultural issues.

First Failure

He recounts an experience where, in 2000, he was hired by a digital design agency to create the website for a portal that was supposed to revolutionize financial information research; the target audience consisted of investors from large companies. In the briefing, the client made it clear that they wanted something resembling a Bloomberg Terminal (the data system used by American financial traders); in other words, multiple screens with colorful numbers on a dark background and constantly moving charts.

The author and his team suggested something cleaner and friendlier (the dark background made it difficult to read), based on previous experiences with similar products. But the harsh truth is that:

Neither the development team nor the client had conducted any research to truly understand the user’s perception.

The result? The client didn’t like it, changed the development team twice, and ended up using off-the-shelf software.

An absolute failure.

It was one of the author’s first projects, and he was somewhat traumatized: nobody in the company liked to talk about failures, so they never tried to deeply understand what had actually happened. The easiest, most popular, and conventional route is to blame the client and call them ignorant.

He started seeing this story repeat itself many times: projects being canceled, dissatisfied clients, and users systematically being ignored while agencies turned a blind eye. And worse: people focusing on studying successful cases to replicate them, always avoiding discussing failures.

Survivorship bias

Initially, looking only at what went right seems like a good idea, but it’s not quite the case. To illustrate how this approach can be dangerous, he tells the classic story of the World War II bomber (it’s a bit cliché, but there’s always someone who hasn’t heard it yet).

The B-29 planes returning from combat in World War II always had bullet holes in specific parts of the fuselage. The team’s idea was to reinforce those areas, since they seemed to be targeted most often. But a mathematician on the team noticed that, statistically, they were only looking at part of the sample: the planes that survived and returned. Nobody knew where the planes that went down had been hit.

The conclusion was that the lost planes were probably being hit in other parts of the fuselage, causing them to crash, while the surviving planes were precisely the ones hit in the areas the team wanted to reinforce.

In other words: reinforcing the rest of the plane, not the most bullet-ridden parts of the survivors, is what would increase the aircraft’s resilience. Had they succumbed to what is called “survivorship bias” (analyzing only what succeeded), they would have wasted precious resources without any improvement.

I distinctly remember many people using this fallacy during the pandemic, claiming that we were only counting the dead, not the survivors.

In other words, we need to carefully analyze the context in a comprehensive way to avoid failures. The answers to complex problems are never obvious.

So, the author asks: why do we need a book talking about experience failures when there are so many out there? His answer is quite blunt: because most of those books are about DESIGN, not about EXPERIENCE.

Here, he illustrates the difference between failures:

Engineering failure: when the product doesn’t work as expected from a physical point of view.

Design failure: when the product does work, but it’s so bad that people don’t want to use it. It’s when a disaster happens, and we blame it on “human error” when, in fact, the design led the user to that outcome.

Experience failure: the product works, and people can use it, but using it is an experience no one desires.

Having the right experience

Here, the author provides a detailed account of the launch of the navigation, telemetry, and entertainment system called iDrive, introduced by BMW in 2002 in their flagship model.

Elegant and highly integrated, with few physical controls: most functions were accessed through a small number of knobs and buttons that could be pressed, rotated, or pulled. A simple, clean design, a true dream… that quickly turned into a nightmare!

The public hated having to wait for the onboard computer’s Windows-based operating system to boot every time the ignition was turned on. A series of alert messages had to be acknowledged before everything started functioning. Nobody could understand the logic behind the controls, which left customers extremely irritated. Not to mention that every time the system needed an update, the owner had to take the car to the dealership. Imagine!

In truth, telematics and navigation systems in cars continue to be a major challenge. But in 2002, there were fewer resources and perhaps higher expectations. As positive points, it can be said that they succeeded in placing controls in accessible locations for the driver, provided clear feedback (on/off button, active/inactive function), introduced dedicated controls for some functions, made the software capable of incorporating new features, and the design was beautiful and clean.

However, they made many mistakes: unfamiliar controls; an excess of information (the driver constantly had to glance at and dismiss some warning window on the dashboard, diverting attention from the road); an unintuitive way of organizing and grouping information; commands with invisible side effects on the configuration; acronyms that made no sense; very slow processing; inefficient controls; and unnecessary redundancies. In short, it was a horror.

Finally, the test

The result was that, after all the complaints and uproar, the company finally conducted a usability study with 500 users worldwide, made some modifications, and improved many things. However, even with these changes, comparative reviews against competing systems still rated iDrive a poor experience.

The end of the story? Between 2002 and 2008, sales of the 7 Series model, which featured iDrive, dropped by almost half and have not recovered to this day.

Why did it all go wrong? The author diagnoses that the software system, outside the company’s traditional area of expertise (automobiles), was launched with the concept of being a technological advancement, rather than a better experience for customers. There were likely not enough usability tests before the launch, due to the company’s overconfidence.

In this chapter, he also discusses the case of Google Wave, a collaborative platform for creating documents (in some ways a precursor of the real-time collaboration we know from Google Docs today), which also turned out to be a major failure due to its complexity; people couldn’t figure out how to use it.

In both cases, the focus was on technological advancement rather than the user experience. It’s a common mistake in startups, but as you can see, it’s not uncommon in large companies either.

Other cases

The author thoroughly analyzes other famous cases of failure, such as OpenID, the single sign-on standard (adopted by Yahoo, among others) meant to simplify account access by sparing users from managing separate usernames and passwords for every site. However, it became too complicated even for the technically inclined audience it was designed for, and even worse when non-technical users started using it. Still, the idea was good and useful: it paved the way for what today lets us log in to a website with a social media account, like Facebook.

The rest of the book follows this direction: the context, the history, what went wrong, why it went wrong (the different reasons analyzed one by one), and the lessons learned.

Conclusions

Lombardi concludes the book with a blunt reminder of why we fail: experience matters.

Three facts need to be acknowledged:

  1. errors happen

  2. openly sharing information is essential

  3. collecting verifiable data is necessary to correct errors

BUT THEN, HOW TO AVOID FAILURES?

The author finally presents the method he developed from these studies to help prevent similar failures from occurring.

It’s essentially the framework of the scientific method, which involves observing, formulating a hypothesis, conducting experiments and tests, and finally interpreting the results. However, it is adapted to the development of new products with a focus on the experience.

The adapted version would look like this:

  1. Observe the context (business, technology, and users)

  2. Develop hypotheses

  3. Test prototypes with users

  4. Measure and interpret the results

  5. Repeat as necessary.

What did I think? Nothing groundbreaking, but I liked the way he organizes the information and analyzes the cases.

What’s more, he recommends some books that are already on my reading list, waiting their turn.

In my opinion, it’s an important reference for designers, developers, UX professionals, product managers, and everyone involved in the field. It’s easy to read and very enjoyable.

Go for it!