Our Storytelling Minds: Do We Ever Really Know What’s Going on Inside?

Psychology | September 16, 2012 | By Maria Konnikova
SYNOPSIS

We formulate stories about our own behavior and that of others all the time. If we’re not sure about the details, we make them up – or rather, our brain does, without so much as thinking about asking our permission.

W.J. was a veteran of World War II. He was gregarious, charming, and witty. He also happened to suffer from a debilitating form of epilepsy—so incapacitating that, in 1960, he elected to have a drastic form of brain surgery: his corpus callosum—the connecting fabric between the left and right hemispheres of the brain that allows the two halves to communicate—would be severed. In the past, this form of treatment had been shown to have a dramatic effect on the incidence of seizures. Patients who had been unable to function could all of a sudden lead seizure-free lives. But did such a dramatic change to the brain’s natural connectivity come at a cost?

At the time of W.J.’s surgery, no one really knew the answer. But Roger Sperry, a neuroscientist at Caltech who would go on to win the Nobel Prize in Physiology or Medicine for his work on hemispheric connectivity, suspected that it might. In animals, at least, a severing of the corpus callosum meant that the hemispheres became unable to communicate. What happened in one hemisphere was now a complete mystery to the other. Could this effective isolation occur in humans as well?

No one had ever checked, but the prevailing wisdom was a resounding no. Our human brains were not animal brains. They were far more complicated, far too smart, far too evolved, really. And what better proof than all of the high-functioning patients who had undergone the surgery? This was no frontal lobotomy. These patients emerged with IQ intact and reasoning abilities aplenty. Their memory seemed unaffected. Their language abilities were normal.

The prevailing wisdom seemed intuitive and accurate. Except, of course, it was resoundingly wrong. And it was proven so by a young neuroscientist who had just started working in Sperry’s lab, Michael Gazzaniga. When a new patient was brought in to Sperry’s lab for preoperative testing—he was to have his corpus callosum severed to treat his epilepsy—the task fell to Gazzaniga. Could he find a way to show what had never been shown: that the pre- and post-operative brain were not one and the same, that severing this densely packed network of fibers could actually affect brain functioning?

Photo Credit: Mike Licht, Creative Commons
W.J. came into the Sperry lab from his home in Southern California to find Gazzaniga waiting with a tachistoscope, a device that could present visual stimuli for specific periods of time—and, crucially, could present a stimulus to the right or the left visual field separately. The patient had no problem identifying objects presented to either visual field and could easily name items that he held in either hand when his hands were out of view. Gazzaniga was satisfied. W.J. went in for surgery, where both the corpus callosum and the anterior commissure (a thin tract of white matter that connects the olfactory areas of each hemisphere) were severed. One month later, he came back to the lab.

The results were striking. The same man who had sailed through his tests weeks earlier could no longer describe a single object that was presented to his left visual field. When Gazzaniga flashed an image of a spoon to the right field, W.J. named it easily, but when the same picture was presented to the left, the patient seemed to have, in essence, gone blind. His eyes were fully functional, but he could neither verbalize nor recall having seen a single thing.

But he could do something else: when Gazzaniga asked W.J. to point to the stimulus instead of naming it, he completed the task with ease. In other words, his hand knew what his head and mouth did not. His brain had effectively been split into two independently functioning halves. It was as if W.J. had become two individuals: one the sum of his left brain, and one the sum of his right.

W.J. was Gazzaniga’s patient zero, the first in a long line of initials who all pointed in one direction: the two halves of our brain are not created equal. And here’s where things get really tricky. If you show a picture of, say, a chicken claw to just the right visual field (which means the picture will only be processed by the left hemisphere of the brain), and one of a snowy driveway to just the left visual field (which means it will only be processed by the right hemisphere), and then ask the individual to point at an image most closely related to what he’s seen, the two hands don’t agree: the right hand (guided by the left hemisphere, which saw the claw) will point to a chicken, while the left hand (guided by the right hemisphere, which saw the snow) will point to a shovel. Ask the person why he’s pointing to two objects, and instead of being confused, he’ll at once create an entirely plausible explanation: you need a shovel to clean out the chicken coop. His mind has created an entire story, a narrative that makes plausible sense of his hands’ discrepancy, when, in reality, it all goes back to those silent images.

Gazzaniga calls the left hemisphere our left-brain interpreter, driven to seek causes and explanations—even for things that may not have them, or at least not ones readily available to our minds—in a natural and instinctive fashion. The interpreter is responsible for deciding that a shovel is needed to clean out a chicken coop, that you’re laughing because the machine in front of you is funny (the explanation given by a female patient when a pinup girl was flashed to her right hemisphere, causing her to snicker even though she swore she had seen nothing), that you’re thirsty because the air is dry and not because your right hemisphere has just been presented with a glass of water (another study in confabulation run by Gazzaniga and colleagues). But while the interpreter’s explanations make perfect sense, they are more often than not flat-out wrong.

The left and right brain aren't created equal. Image credit: Bernard Goldbach, Creative Commons

Split-brain patients provide some of the best evidence of our extreme proficiency at narrative self-deception, at creating explanations that make sense but are in reality far from the truth. But we don’t even need to have our corpus callosum severed to act that way. We do it all the time, as a matter of course.

Consider a famous problem-solving experiment, originally designed by Norman Maier in 1931: A participant was placed in a room where two strings were hanging from the ceiling. The participant’s job was to tie the two strings together. However, it was impossible to reach one string while holding the other. Several items were also available in the room, such as a pole, an extension cord, and a pair of pliers. What would you have done?

Most participants struggled with the pole or the extension cord, trying their best to reach the far string while holding on to the other. It was a tricky business.

The most elegant solution? Tie the pliers to the bottom of one string, then set it swinging like a pendulum and catch it as it swings toward you while you hold the other string. Simple, insightful, quick.

But very few people could visualize the change in object use (here, imagining the pliers as something other than pliers: a weight that could be tied to a string) – unless, that is, the experimenter, seemingly by accident, brushed one of the strings to induce a swinging motion. Then participants appeared to spontaneously think of the pliers solution. I say spontaneously because they did not actually remember the stimulus that prompted them to do so. It was a so-called unconscious cue. When subjects were then asked where their insight came from, they cited many causes. “It was the only thing left.” “I just realized the cord would swing if I fastened a weight to it.” “I thought of the situation of swinging across a river. I had imagery of monkeys swinging from trees.”

All plausible enough. None correct. No one mentioned the experimenter’s ploy, and even when told about it in a debriefing session, over two-thirds continued to insist that they had not noticed it and that it had had no impact at all on their own solutions – even though they had reached those solutions, on average, within 45 seconds of the hint. What’s more, even the third who admitted the possibility of influence proved susceptible to false explanation: when a decoy cue was presented (twirling the weight on a cord), one that had no impact on the solution (no one solved the problem with its help; they were only able to do so after the real, swinging cue), they cited the decoy, and not the cue that had actually helped them, as having prompted their behavior. Explanation is often a post-hoc process.

Our minds form cohesive narratives out of disparate elements all the time: one of the things we are best at is telling ourselves just-so stories about our own behavior and that of others. If we’re not sure, we make it up – or rather, our brain does, without so much as thinking about asking our permission to do so.

***

In 1916, Sigmund Freud delivered the eighteenth of his Introductory Lectures on Psychoanalysis. There, he spoke about the three great blows that “the naïve self-love of men has had to submit to … at the hands of science.” The first: the Copernican Revolution. The second: Darwinism. And the third: Freud himself. He said:

Human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in its mind.

The excerpt opens the recently published The Freud Files: An Inquiry into the History of Psychoanalysis and is meant to illustrate Freud’s ability to create a myth around his name and his work, to lay claim to a greatness (the Darwin of psychology) that others had tried, unsuccessfully, to claim before him. And while I agree about Freud’s mythmaking prowess, I think the point is a much greater one, and one that suggests the book’s ultimate conclusion (that psychoanalysis never really existed as anything more than a story) is either entirely inaccurate or a truism that can be applied to almost any movement or school in history: all our minds have are the stories we tell ourselves.

When Freud claimed that the ego is not master in its own house, that it must be content with only scanty information about what is actually going on in the mind, he didn’t have access to any of the neuroscience or cognitive psychology research that would follow his words. He couldn’t know what W.J. would come to teach us about the mind, what researchers like Richard Nisbett and Timothy Wilson (along with Daniel Gilbert, Daniel Wegner, Tania Lombrozo, and many others) would tell us about our inability to access even something as seemingly simple as the cue that prompted us to solve a problem. He didn’t know that we really were masters of storytelling – and of telling stories that, more often than not, had little to do with the truth. He didn’t know any of it, but in a way, he had guessed what would happen more than forty years after he delivered his lecture, when Gazzaniga first met his historic patient.

In a way, we are all, to a greater or lesser extent, W.J.

This post has been modified and expanded from a draft of my forthcoming book on Sherlock Holmes, to be published by Viking in 2013.

----------

Gazzaniga, M. (2011). Interview with Michael Gazzaniga. Annals of the New York Academy of Sciences, 1224(1), 1-8. DOI: 10.1111/j.1749-6632.2011.05998.x

LeDoux, J.E., Wilson, D.H., & Gazzaniga, M.S. (1977). A divided mind: Observations on the conscious properties of the separated hemispheres. Annals of Neurology, 2(5), 417-421. PMID: 103484

Nisbett, R., & Wilson, T. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231-259. DOI: 10.1037/0033-295X.84.3.231

This post originally appeared on the Scientific American Blog Network.
