Mapping Innovation: A Playbook for Navigating a Disruptive Age

Business May 12, 2017 / By Greg Satell

We're entering a new era of innovation. Learn the strategies of the world's most inventive startups, corporations and scientific institutions.

Introduction to "Mapping Innovation: A Playbook for Navigating a Disruptive Age" by Greg Satell

The “Mother of All Demos”

On December 9th, 1968, a research project funded by the US Department of Defense launched a revolution. Its focus was not a Cold War adversary or even a resource-rich banana republic, but rather an effort to “augment human intellect,” and the man driving it was not a general, but a mild-mannered engineer named Douglas Engelbart.

It’s hard to fully grasp what happened that day without understanding the context of the time. In those days, very few people ever saw a computer. They were, in large part, mysterious machines to be used only by a select priesthood who were conversant in the strange mathematical languages required to communicate with them. The tasks they performed were just as obscure, carrying out complex calculations for scientific experiments and managing mundane back-office tasks for large organizations.

But here was Engelbart, dressed in a short-sleeved white shirt and a thin black tie, standing in front of a 20-foot-high screen and explaining in his low-key voice how “intellectual workers” could actually interact with computers. What’s more, he began to show them. As he typed a document on a simple keyboard, words started to appear, which he could then edit, rearrange, and augment with graphics and sound, all while navigating around the screen with a small device he called a “mouse.” Nobody had ever seen anything remotely like it.

The presentation would prove to be so consequential that it is now called “The Mother of All Demos.” Two of those in attendance, Bob Taylor and Alan Kay, would go on to further develop Engelbart’s ideas into the Alto, the first truly personal computer. Later, Steve Jobs would take many elements of the Alto to create the Macintosh.

So who deserves credit? Engelbart for coming up with the idea? Taylor and Kay for engineering solutions around it? Jobs for turning it all into a marketable product that created an impact on the world?

Maybe none of them. Engelbart got the ideas that led to “The Mother of All Demos” from Vannevar Bush’s famous essay, “As We May Think,”[1] so maybe we should consider Bush the father of the personal computer. But why stop there? After all, it was John von Neumann who invented the eponymous architecture that made modern computers possible. And that, in turn, relied on Alan Turing’s breakthrough concept of a “universal computer.” Or maybe we should credit Robert Noyce and Jack Kilby for developing the microchip that powered the digital revolution? Or Bill Gates, who built the company that made much of the software that allowed businesses to use computers productively?

The story doesn’t seem any clearer when we try to look at the events that led to modern computing as a linear sequence going forward. Turing never set out to invent a machine. He was, in fact, trying to solve a problem in mathematical logic, the question of whether all numbers are computable. He created his idea of a universal computer — now known as a Turing machine — to show that it was possible to create a device that could “compute all computable numbers,” but ironically, in doing so, he proved that not all numbers are computable. His work was an extension of Kurt Gödel’s famous incompleteness theorems, which showed that logical systems themselves were broken. It was these two insights about the illogic of logical systems and the incomputability of numbers that led to the powerful logic of modern computers that we see all around us every day. Confusing, to be sure.

The waters muddy even further when we try to gauge the impact of personal computing. We know that Xerox built the first Alto in 1973 and Apple launched the Macintosh, with great fanfare, in 1984, but as late as 1987 the economist Robert Solow remarked, “You can see the computer age everywhere but in the productivity statistics.”[2] And, in fact, economists didn’t start seeing any real economic impact from information technology until the late 1990s—nearly 30 years after “The Mother of All Demos.” So what happened in the interim?

It seems that any time we try to understand an innovation through events, the story only gets more tangled and bewildering. And it doesn’t get any clearer if we look at the innovators themselves. Some were highly trained PhDs, but others were college dropouts. Some were introverts. Others were extroverts. Some worked for the government, others in industry. Some worked in groups, but others largely alone.

Yet that brings us to an even more important question: How should we pursue innovation? Some companies, like IBM, invest heavily in basic research and always seem to be able to invent new businesses to replace the old ones that inevitably run out of steam. Others, like Procter & Gamble, are able to effectively partner with researchers and engineers outside their organizations to develop billion-dollar products. Apple became the world’s most valuable company by limiting the number of products it sells and relentlessly focusing on the end user to make things that are “insanely great.” Google continuously experiments to develop a seemingly endless stream of new innovations. Which path should you pursue?

Fortunately, there is an answer and it starts with asking the right questions to define the problems you seek to solve and map the innovation space. From there, it is mostly a matter of choosing the right tools for the right jobs to develop an innovation playbook that will lead to success in the marketplace. This book will show you how to do that.

What Is Innovation?

In The Little Black Book of Innovation, Scott Anthony defines innovation as “something different that has impact.”[3] That seems like a reasonable definition. After all, to innovate we do need to come up with something different, if not a completely new invention, then a process for using an existing technology in a new way. That would cover significant technologies, like the Internet and the World Wide Web, while also making room for services like Uber and Facebook that harness those earlier inventions for new purposes.

And clearly, innovation needs to have an impact. Yet how are we to judge that? Did Engelbart’s “Mother of All Demos” have an impact in 1968? Maybe it did on the people who were there to witness it, but few others. But Anthony insists that innovations need to have a measurable impact,[4] which probably didn’t happen until 1984, with the launch of the Macintosh. So does that mean that Steve Jobs was an innovator and Engelbart was not? That certainly doesn’t sound right. Maybe the Macintosh was the impact of “The Mother of All Demos.” But that would mean that Engelbart didn’t become an innovator until 16 years after he completed the work and that, in fact, Steve Jobs is responsible for making Engelbart’s work important and not the other way around. That doesn’t sound right either.

This is not, to be sure, a new debate, but one that’s been raging for over a century. In 1939, Abraham Flexner published an article in Harper’s Magazine entitled “The Usefulness of Useless Knowledge,”[5] in which he recounted a conversation he had with the great industrialist George Eastman. He asked Eastman who he thought was the man most useful to science, to which Eastman replied that he felt it was Marconi, the inventor of radio. Flexner then argued that Marconi was inevitable, given the work of Maxwell and Hertz, who discovered the basic principles that made radio possible. Further, he argued that these men were driven not by practicality—or, as Anthony would put it, by the impact of their work—but merely by curiosity.

Flexner went on to describe an institution he was building in Princeton, New Jersey, called the Institute for Advanced Study, where minds like Albert Einstein, John von Neumann, Kurt Gödel, and many others could pursue any subject they liked in any manner they chose, without any responsibility to teach or publish or show any impact at all from their work.

It was there that von Neumann developed a computer with a revolutionary new architecture that could store programs. He devised his new machine using other ideas once thought useless, like the vacuum tube technology that Vladimir Zworykin and others had advanced in the 1920s. This design, now known as the von Neumann architecture, was openly published and led to the development of the first commercial computers that were sold to businesses. Just about every computing device in the world is still organized according to the scheme that von Neumann came up with in 1945.

Today, hundreds of scholars come to the Institute for Advanced Study each year to work on abstract problems in fields like string theory and geometry. Will there ever be a measurable impact from their work? We won’t know for decades, but clearly there is an incredible amount of innovative thinking about some very tough problems going on there.

So, I think a better definition for innovation would be “a novel solution to an important problem.” But that leads to the question: Important to whom? Well, first to a particular industry or field. Engelbart’s work was innovative because it was both new and considered incredibly important to the field of computer science, for which it created an entirely new paradigm. Also, innovations are important to the next innovator. Engelbart made Taylor and Kay’s work on the Alto possible, which made Steve Jobs’ work on the Macintosh possible, which in turn helped unleash the creativity of millions of others.

That’s why it’s so hard to understand where innovation begins and ends. The truth is that any significant innovation involves an incredible diversity of problems that need to be solved, from theoretical and engineering challenges to manufacturing and distribution hurdles. There is no silver bullet and no one person—or even a single organization—can provide all the answers alone.  

Still—and this is a crucial point—we all must pursue our own path to innovation alone. We have to choose what problems we intend to solve, who we will work with, the manner in which we will work with them and how we will bring our solutions to market. Those are decisions that we need to make and no one else can do it for us.

This book will show you how to map the innovation space in order to make those decisions in a more rational, informed manner. It will also help you build a strategy around those decisions that can help you win in the marketplace.

A New Era Of Innovation

As we have seen, innovation is far more difficult and complex than most people give it credit for. It takes more than a single big idea to change the world and it can take decades after the initial breakthroughs for the true impact of an idea to become clear.

Still, in some ways we’ve had it easy. Our basic computer architecture has not changed since John von Neumann created it in 1945. Moore’s Law, the regular doubling of chip performance that Gordon Moore postulated in 1965, has effectively given innovators a roadmap for developing new technology. Since the 1970’s, engineers have depended on it to tell them how to focus their efforts. Other key technologies, such as the lithium-ion batteries that have made mobile devices predictably smaller and more powerful with each generation, have been in use since 1991. Over the last quarter century, these technologies have dramatically improved, but the basic paradigm of their design has not changed in any significant way.

The next decade or two, however, will look more like the fifties and sixties than it will the nineties or the aughts. We’ll essentially be starting over. Moore’s Law, that trusty old paradigm that we’ve come to depend on, will likely come to an end around the year 2020, as transistors become so small that quantum effects between molecules will cause them to malfunction. Lithium-ion batteries will hit theoretical limits soon after that. They will be replaced by fundamentally new technologies, like quantum computing, neuromorphic chips and new materials for energy storage that nobody really knows how to work with yet.

At the same time, new fields, such as genomics, nanotechnology and robotics are just beginning to hit their stride, leading to revolutionary new cures, advanced materials and completely new ways to produce products. Artificial intelligence services, like Apple’s Siri and Google Now, will become thousands of times more powerful and change the way we work and collaborate—with machines as well as each other. I’ve talked to many of the people developing these revolutionary technologies and, despite the amazing potential of the breakthroughs, each time I’ve been struck by how much work there is still to do. We’re just beginning to scratch the surface.

Over the past 25 years, we’ve struggled to keep up with the pace of change. But over the next few decades, we will struggle to even understand the nature of change as fundamentally new technologies begin to influence the way we work, live, and strive to innovate. It will no longer be enough to simply move fast; we will have to develop a clear sense of where we’re going, how we intend to get there and what role we will be able to play. We’ll need, in other words, to learn how to map innovation.


The purpose of this book is threefold. First, it will help you get a better understanding of innovation by dispelling destructive innovation myths. Innovations don’t happen just because someone comes up with one big idea. It takes many ideas to solve an important problem and that requires a collective effort.

Second, this book will give you valuable tools to help you frame the problems that are important to you. As you will see, it is only by framing problems effectively that you can find the approach most likely to solve them. Finally, it will help explain how innovation in the digital age is different from what it was in previous generations. Simply put, technology has given us powerful new tools and we need to learn how to use them effectively.

In Part I, we will see that, contrary to the innovation fairy tales we often hear of single flashes of insight and “Eureka!” moments, innovation is never a single event and that rather than following a linear path, effective innovators combine the wisdom of diverse fields to synthesize information across domains. If a problem is difficult enough, it needs to borrow from multiple fields of expertise. Innovation, more than anything else, is combination.

Part II offers a powerful framework, the “innovation matrix,” that will help you map the innovation space and define your innovation approach. It explains that we first have to ask the right questions—how well is the problem defined, and how well is the domain defined?—to determine the innovation strategy most likely to yield results.

It will also give you a set of tools to navigate the often confusing—and jargon-laden—world of innovation and find the right path for you and your organization. You will be shown how to access path-breaking new research, pursue open innovation strategies, develop new business models, and seek out new horizons without forsaking your core business.

Part III will focus on the challenges of innovating in the digital age. In earlier generations, we could get by with just a few collaborators with whom we worked closely. Today, however, we must use platforms to access ecosystems of talent, technology, and information in order to tackle increasingly complex problems. Finally, Chapter 9 will explain how, as we enter a new era of innovation, collaboration itself is becoming a source of competitive advantage.

In the afterword, you will be shown how to use the principles explained in this book to create your own innovation playbook.

So let’s get started…




[1] Vannevar Bush, “As We May Think,” The Atlantic Monthly, July 1945.

[2] Robert Solow, “We’d Better Watch Out,” New York Times Book Review, July 12, 1987, p. 36.

[3] Scott Anthony, The Little Black Book of Innovation, Harvard Business Press, 2012, p. 16.

[4] Ibid., p. 17.

[5] Abraham Flexner, “The Usefulness of Useless Knowledge,” Harper’s Magazine, August 1939.

Today, managers are often told that they must "innovate or die," but are given little useful guidance on how to go about it. Sure, there are many books and articles that champion one approach or another, but until now there has been no effective guide to help executives find their way through the tangled jungle of competing ideas.

In this book you will find: 

  • Insights into how the world's top innovators implement their innovation strategies.
  • A step-by-step guide to creating your own innovation playbook to win markets and run circles around your competition!
  • A simple-to-use framework for identifying the innovation strategy most likely to lead to a successful outcome.

The truth is that there is no one "true path" to innovation, no silver bullets and no shortcuts. There are, however, effective strategies that managers can pursue to dramatically increase their chances of success. Thoroughly researched, backed by original reporting and told through compelling stories of innovative organizations such as Google, IBM, Experian, Argonne National Laboratory and MD Anderson Cancer Center, Mapping Innovation will give managers what they have been looking for: a strategic playbook for navigating a disruptive age.