The Collective and the Future of Creativity


March 21, 2012 / By Mark Changizi
SYNOPSIS

Is creativity going the way of the collective?

Imagine, if you will, a Borg cube from Star Trek humming along through space, part of a fleet of such cubes, each with millions of drones participating in a spatially non-localized brain of billions.

Now imagine that this collective Borg brain has a headache. The camera zooms inside one of the cubes and we see the source of the problem: a dreadlocked alien has awakened, and he’s raging through the ship, ripping up the neural wiring that connects the Borg drones to one another. Suddenly disconnected from the collective, the drones are waking up and finding themselves for the first time.

Although this rabble-rousing nerve-cutter might sound like a Klingon, as the camera gets closer we realize it’s actually a human.

Closer still and we realize that—holy crap!—it’s Jaron Lanier, author of You Are Not a Gadget: A Manifesto (Knopf, 2010).

Lanier may be physically grounded on the Earth and not battling Borg, but he is battling a collective, trying to wake up somnolent drones. The collective in his sights is the Web. And the drones? Well, that’s you and me.

Although Lanier is more civil than your average Klingon on a righteous ransack, one senses (even without the “manifesto” tip-off) that just below the surface is an individual bent on a revolution.

In fact, what comes across most clearly in the book is Lanier’s distinctive individuality. He brims with novel ideas, from the origins of speech and music (he speculates that they connect to color signaling in cephalopods), to radical kinds of programming languages (ones without protocols), to new ideas for virtual reality (e.g., altering our perceptions so that we experience life as billowy clouds). Although many of these ideas are not crucial to his central thesis, they illustrate that it is in individuals, not collectives, that we find the lion’s share of creativity. They also help persuade the reader to trust Lanier’s intuitions about where creativity comes from.

And Lanier’s main thesis? It is that the Web is a creative bust.

Twenty years ago one would have expected something like the Web to have liberated the creative spirits inside tremendous numbers of people who had not previously had such an outlet. Instead, Lanier argues, the Web has caused the evolution of creativity to stagnate.

Where, Lanier asks, are the radically new musical genres since the 1990s? Technological change seems to be accelerating, and yet the development of novel forms of musical expression appears to have not merely slowed, but stalled altogether. We’re listening to non-radical variations on themes that existed two decades ago, Lanier argues. (Although I agree with his feeling in this regard, he and I may be too far removed from the last two decades of music to distinguish any ongoing revolutions, something he is quick to admit is a possibility.)

The music has stopped, Lanier suggests, because the Web—as currently structured, at least—does not value individual creators. Sure, you can now put your music onto the Web and, in principle, have the world listen. But because very few people have figured out how to make any money by giving away their creative products on the Web, most composers end up setting aside their musical endeavors for any paying job as soon as Mom kicks them out of the house.

More generally, he challenges readers to point to the new generation of people living off their creativity on the Web. They’re harder to find than a Borg at a bowling alley.

Contrary to the prevailing idea that consumers on the Web should not be obligated to pay individual human content-creators for their work, Lanier is adamant that music and human-created information should not be free. Creativity that goes unpaid leads to a novelty- and diversity-impoverished intellectual world dominated by material that takes minimal effort to produce—think LOLcats. Creative artists get cut out, and all that remains are the content distributors, like YouTube, who become fabulously rich.

That might be something some of us would be willing to live with if it were nevertheless the case that the Web, by virtue of its vast interconnectivity and complex emergent properties, were smart enough to turn our drone-ishly dull works into things greater and more beautiful. That’s what clouds, or colonies, or collectives, are meant to do: take large numbers of meager parts and uplift them into something larger and smarter at the level of the whole.

And “smart collectives” do indeed exist. Social-insect colonies are the common example of stunning intelligence emanating from underwhelming individuals. Even our human-built machines are smart collectives: For instance, my personal computer’s hardware is built from hundreds of thousands of unimpressive parts, and my word-processing software is a collective of operators and terms working together as a single functional whole. I’ve even shown in my research that city road networks are rather like brains, in terms of their scaling laws.

If all these collectives can be, in some capacity, brilliant, why can’t the Web itself? Who needs creativity from individuals when there’s a super-creative hive-mind capable of doing it much better?

The beliefs underlying these questions are increasingly common in today’s techno-utopian world, and they need to be dispelled.

But as important as dispelling them is to Lanier’s thesis, he never quite pins down the source of the misconception about where smart collectives come from.

Here’s what, in my experience, people tend to tell themselves: Smart collectives result from liberal servings of self-organization and complexity. Why? Because the most brilliant collectives that exist—those found in biology, such as our bodies and brains built out of hundreds of billions of cells—are steeped in self-organization and complexity. And, the intuition continues, the Web also drips with self-organization and complexity. The Web therefore must be smart. And because the Web is growing and evolving over time, it must be getting ever smarter. Perhaps some day it will even become self-aware!

This is entirely wrong. Although “self-organization” and “complexity” are strangely seductive (even if no one is quite sure what they mean), neither is key to a smart collective. My computer is a smart collective without having self-organized, and crystals are dumb despite having done so. And although “complex” may well apply to all smart collectives, most intuitive notions of “complex” also apply to countless ridiculously stupid collectives, such as creamer poured into a cup of coffee.

No, the key to smart collectives is not to be found in these buzz words, but rather in something often overlooked or strangely derided: design.

The smart and amazing machines in the world—whether functioning as software, hardware, organisms, insect colonies, or creative brains—have undergone tremendous design, whether via deliberate engineering or some variety of selection (e.g., natural or cultural). Smart biological collectives do indeed self-organize, but only a negligible fraction of all self-organizing creatures makes it through the process of natural selection. The result is “design,” without which the parts would “self-organize” into some functionless mass, like the unwieldy tangles that power cables seem inevitably to form when thrown loosely into a drawer.

And to be mind-like, the collective must not only have undergone design, but design for mindhood.

The problem with the Web is simply this: The Web is not really designed for anything. The structure of the Web is characterized by its interconnectivity, and that depends on how individual sites choose to connect to one another. The Web’s large-scale interconnectivity isn’t designed by engineers, and it is also not a consequence of selection mechanisms capable of implicitly leading to anything one would call “design.” Selection does happen, but at the level of the individual sites within the Web, not at the level of the entire Web.

If there were, say, many functionally distinct Webs, and they competed with one another over time, with some growing and some dying, Web selection would be possible. The surviving Webs might well end up exceedingly good at some things. But that’s not happening. And even if it were, there’s no reason to expect that the surviving Webs would be selected to fill the creativity gap that humans left open. It’s more plausible that they would be selected to minimize the time taken for consumers to find products—to be efficient shopping mechanisms, not mind-like entities at all.

The Web itself hasn’t been designed to do anything. And so it doesn’t do anything, much less anything smart, creative, or suggestive of awareness.

If the Web is crushing human creativity, as Lanier argues, then there’s no solace to be found in looking to the Web itself, as a collective, to fill the creative shoes humankind needs.

The question we are left with is whether the Web’s current trajectory can be reversed, and a new human-centrism brought back. Or, instead, might it be that resistance is futile?

~~~

Mark Changizi is Director of Human Cognition at 2AI, and the author of Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man and The Vision Revolution. This piece originally appeared at Seed Magazine.
