Working Memory and Fluid Reasoning: Same or Different?


Psychology February 24, 2014 / By Scott Barry Kaufman
SYNOPSIS

Can fluid intelligence be separated from working memory? Perhaps, if we remove time constraints.

In 1990, researchers Patrick Kyllonen and Raymond Christal found a striking correlation.

They gave large groups of American Air Force recruits various tests of working memory, in which participants performed simple operations on letters held in mind. For instance, in the “alphabet recoding” task, the computer briefly displayed three letters:

H, N, C

Followed by an instruction, such as:

Add 4

In which the answer would be:

L, R, G

Of course, adding four to a letter is a piece of cake. The difficult part is remembering the first transformed letter while performing the next mental operation, and holding both of those in mind while operating on the third. This gets increasingly difficult with more complex instructions and more letters to transform in your head.
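For readers who want to see the transformation itself spelled out, here is a minimal Python sketch of the letter arithmetic (the easy part of the task). Wrapping around past “Z” is my own assumption; the article does not say how the task handles letters near the end of the alphabet. What the sketch cannot capture is the hard part: doing all of this in your head while remembering the intermediate results.

```python
# Toy sketch of the "alphabet recoding" transformation described above.
# It computes only the letter arithmetic; the real difficulty of the task
# is holding the intermediate letters in working memory.
# Wrapping past 'Z' back to 'A' is an assumption, not stated in the article.
def alphabet_recode(letters, shift):
    recoded = []
    for ch in letters:
        offset = (ord(ch.upper()) - ord("A") + shift) % 26
        recoded.append(chr(ord("A") + offset))
    return recoded

print(alphabet_recode(["H", "N", "C"], 4))  # ['L', 'R', 'G']
```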

Across four different studies, they found extremely high correlations, ranging from .80 to .90, between their measures of working memory and various measures of reasoning. In fact, the correlations were so high that they titled their paper “Reasoning ability is (little more than) working-memory capacity?!”

Many studies since then have confirmed that working memory is an important contributor to fluid reasoning. Of all the cognitive abilities ever measured by intelligence researchers, fluid reasoning is the most general, explaining more of the variance in the other cognitive abilities than any other factor. The ability to infer relations and spot patterns on problems that draw on minimal prior knowledge and expertise plays a role, in varying degrees, across virtually all areas of human intellectual functioning.

But just how strong is the relationship between working memory and fluid reasoning? As is often the case in science, estimates of the correlation have been all over the map, making the true relationship between the two difficult to pin down.

There are many reasons for the inconsistencies. Different studies include different selections of tests, different numbers of tests, and different ranges of cognitive ability among the participants. These sorts of methodological details matter.

A new study suggests an additional factor at play: the timing of the tests. Adam Chuderski reviewed 26 studies that administered measures of working memory and the Raven’s Progressive Matrices test, which is the most widely used measure of fluid reasoning.*

On each Raven’s question, you are presented with a 3×3 matrix and have to identify the missing piece that completes the pattern.

What does it take to do well on this test? It turns out that only a handful of rules are required to solve all of the items. The easier problems require you to apply a single rule, such as adding or subtracting one attribute (a line, say). The harder ones require combining multiple rules and juggling multiple attributes (shapes, sizes, and colors). The difficulty is that you have to sort the relevant attributes from the irrelevant ones and hold the candidate rules in mind while testing them. And when some rules don’t work out, you have to know when to stop going down that path and start over. Because the task requires discovering abstract relations among novel stimuli, it is a good measure of nonverbal fluid reasoning.
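To make “combining rules over attributes” concrete, here is a toy illustration of my own (not an actual Raven’s item and not taken from the article): each cell in a row is described by a few attributes, each rule changes one attribute from column to column, and predicting the missing cell means applying every inferred rule at once while ignoring the attributes that no rule touches.

```python
# Toy illustration of rule combination in Raven's-style matrix problems.
# Each cell is a dict of attributes; each rule transforms one attribute
# as you move across a row. Harder items force you to track several rules
# (and ignore irrelevant attributes) at the same time.

def apply_rules(cell, rules):
    """Return the next cell in the row by applying every rule to this cell."""
    next_cell = dict(cell)
    for attribute, transform in rules.items():
        next_cell[attribute] = transform(next_cell[attribute])
    return next_cell

# Two hypothetical rules that must be held in mind simultaneously:
rules = {
    "lines": lambda n: n + 1,  # one extra line in each successive column
    "shade": lambda s: {"white": "grey", "grey": "black"}[s],  # shading darkens
}

first_cell = {"lines": 1, "shade": "white", "size": "small"}  # "size" is irrelevant here
second_cell = apply_rules(first_cell, rules)
missing_cell = apply_rules(second_cell, rules)
print(missing_cell)  # {'lines': 3, 'shade': 'black', 'size': 'small'}
```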

Chuderski found that studies imposing greater time pressure on the Raven’s test showed significantly higher correlations between working memory and fluid reasoning. In other words, when people were given more time to reason, working memory capacity was not as strong a contributor to fluid reasoning.

Intrigued by this finding, he decided to dig deeper across two studies.

In his first study, he administered multiple tests of working memory and fluid reasoning to 1,377 people ranging in age from 15 to 46. Using a statistical technique called confirmatory factor analysis, he confirmed that the time pressure of the fluid reasoning tests affects the strength of the correlation between working memory and fluid reasoning.

In the case of the “highly speeded group” (20 minutes), working memory explained all of the variance in fluid reasoning, whereas in the “unspeeded group” (60 minutes), working memory accounted for only 38% of the variance in fluid reasoning.
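One way to make these percentages concrete (a standard statistical conversion, not a figure reported in the paper): the proportion of variance one factor explains in another is the square of their correlation, so

$$r_{\text{unspeeded}} \approx \sqrt{0.38} \approx 0.62, \qquad r_{\text{speeded}} \approx \sqrt{1.00} = 1.00.$$

Read the same way, Kyllonen and Christal’s correlations of .80 to .90 correspond to roughly 64% to 81% shared variance, which puts the drop to 38% under relaxed timing into perspective.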

Chuderski replicated this finding in a second study: when the fluid reasoning test was administered under no time pressure, working memory explained only about a third of the differences in reasoning performance. He also found that a measure of “relational learning” (the ability to use previously learned letter relations to process subsequent number relations more efficiently) contributed to the variation in fluid reasoning independently of working memory.

Why does this matter?

These results suggest we may be seriously underestimating fluid reasoning ability in people by imposing strict time pressures. This study is consistent with other recent research suggesting that “fast intelligence” can be distinguished from “slow intelligence”.

Researchers, educators, and business leaders attempting to assess a person’s level of fluid reasoning face a dilemma: do you measure fluid reasoning with a highly speeded task, or do you give the individual more of an opportunity to display his or her reasoning power? As Chuderski notes,

“The former testing method [highly speeded tests] will measure the ability to cope with complexity in a dynamic environment, thus having a high real-world validity, as the technological and informational pressure of the world increases rapidly, but it may underestimate people who regardless of their limited capacity would work out good solutions in less dynamic environments. The latter method [more relaxed time pressures] will give a more comprehensive account of reasoning ability, including the contribution of intellectual faculties that lay beyond WM, and seem to be complementary to it, but it could also include a lot of noise (e.g., learned task-dependent strategies) negatively influencing the evaluation of future effectiveness of an individual in demanding, timed, and completely novel tasks.”

This is important because, given more time, people can compensate for their working memory limitations. For instance, people show large improvements in fluid reasoning after learning how to draw diagrams to represent a problem. When Kenneth Gilhooly and his colleagues presented syllogisms orally, the task placed a higher demand on working memory because participants had to hold the premises in their heads. But when the syllogisms were presented with all the premises remaining on the projector screen, people performed better: they could unload the premises from working memory and free up limited resources to construct an efficient mental model of the problem.

Over the past decade, John Sweller and colleagues have designed instructional techniques that relieve working memory burdens on students and increase learning and interest. Drawing on both the expertise and working memory literatures, they match the complexity of learning situations to the learner, attempting to reduce unnecessary working memory load that may interfere with reasoning and learning and to optimize the cognitive processes most relevant to learning.

There are also implications for brain training interventions. As I’ve mentioned in an earlier article, the cognitive training literature is a swamp. While some studies find that improving working memory improves fluid reasoning, other studies find a lack of transfer.

A potential cause of the inconsistencies in the cognitive training literature may be the timing of the tasks. For instance, Susanne Jaeggi and colleagues administered their fluid reasoning tests under extreme time constraints (e.g., 10-11 minutes for 18 Raven’s items) and found that working memory training increased fluid reasoning performance. In contrast, Roberto Colom and colleagues administered fluid reasoning tests under standard time limits and found no effect of working memory training on fluid reasoning.

These contradictory findings make sense in light of Chuderski’s study: when fluid reasoning tasks have strict time limits, they are essentially tests of working memory. So you would expect more of a transfer from working memory to fluid reasoning under such conditions. But when fluid reasoning tasks have more relaxed time pressures, working memory is more weakly associated with fluid reasoning, and other cognitive mechanisms come into play, such as relational learning and associative learning. Also, external aids can be employed, such as the use of diagrams to facilitate the construction of more elaborate and efficient mental models.

Conclusion

Working memory and fluid reasoning: same or different? It depends. Imposing extreme time pressure on an IQ test forces people to draw almost exclusively on their limited-capacity working memory, whereas giving people more time to think and reason gives them more of a chance to bring to the table the other cognitive functions that contribute to their intellectual brilliance.

© 2013 Scott Barry Kaufman, All Rights Reserved.

Note: Portions of this article were excerpted from Ungifted: Intelligence Redefined.



This post originally appeared at Scientific American.
