Published Oct 29, 2009
From the magazine issue dated Nov 9, 2009
Sharon Begley
Thanks to recent discoveries that they were canny hunters, clever toolmakers, and probably endowed with the gift of language, Neanderthals have overcome some of the nastier calumnies hurled at them, especially that they were the "dumb brutes of the North," as evolutionary ecologist Clive Finlayson describes their popular image. But they have never managed to shake the charge that their extinction 30,000 years ago, when our subspecies of Homo sapiens replaced them in their European home, was their own dumb fault. Modern humans mounted a genocidal assault on them, goes one explanation, triumphing through superior skills. Moderns drove them into extinction through greater evolutionary fitness, says another, especially the moderns' greater intelligence or social advances like the sexual division of labor.
Winners—of prehistory no less than history—get to write the textbooks. So it is no surprise that we, the children of the humans who replaced Neanderthals, "portray ourselves in the role of victors and reduce the rest [of the human lineage] to the lower echelons of vanquished," Finlayson writes. "To accept our existence as the product of chance requires a large dose of humility." But in a provocative new book, The Humans Who Went Extinct: Why Neanderthals Died Out and We Survived, he argues that chance is precisely what got us here. "A slight change of fortunes and the descendants of the Neanderthals would today be debating the demise of those other people that lived long ago," he argues.
Evolutionary biologists have long recognized the role serendipity plays in which species thrive and which wither on the Darwinian vine. Without the asteroid impact 65 million years ago, for instance, mammals would not have spread so soon into almost every ecological niche on Earth (dinosaurs were in the way). Yet when the subject strikes as close to home as why our ancestors survived and Neanderthals did not, scientists have resisted giving chance a starring role, preferring to credit the superiority of ancient H. sapiens. Both lineages descend from Homo erectus: one spread across Eurasia beginning 1.8 million years ago and evolved into Neanderthals by 300,000 years ago; the other evolved in Africa, becoming anatomically modern by 200,000 years ago and reaching Europe some 45,000 years ago.
These arrivistes are often portrayed as technologically and culturally more advanced, with their bone and ivory (not just stone) tools and weapons, their jewelry making and cave painting—the last two evidence of symbolic thought. Finlayson has his doubts. Neanderthals may have painted, too (but on perishable surfaces); they were no slouches as toolmakers; and studies of their DNA show they had the same genes for speech that we do. "They survived for nearly 300,000 years," Finlayson says by phone from Gibraltar, where he is director of the Gibraltar Museum. "That modern humans got to Australia before they penetrated Europe suggests that Neanderthals held them off for millennia. That suggests they weren't that backward."
Instead, moderns were very, very lucky—so lucky that Finlayson calls what happened "survival of the weakest." About 30,000 years ago, the vast forests of Eurasia began to retreat, leaving treeless steppes and tundra and forcing forest animals to disperse over vast distances. Because they evolved in the warm climate of Africa before spreading into Europe, modern humans had the bodies of marathon runners, adapted to tracking prey over such distances. But Neanderthals were built like wrestlers. That was great for ambush hunting, which they practiced in the once ubiquitous forests, but a handicap on the steppes, where endurance mattered more. This is the luck part: the open, African type of terrain in which modern humans evolved their less-muscled, more-slender body type "subsequently expanded so greatly" in Europe, writes Finlayson. And that was "pure chance."
Because Neanderthals were not adept at tracking herds on the tundra, they had to retreat with the receding woodlands. They made their last stand where pockets of woodland survived, including in a cave in the Rock of Gibraltar. There, Finlayson and colleagues discovered in 2005, Neanderthals held on at least 2,000 years later than anywhere else before going extinct, victims of bad luck more than any evolutionary failings, let alone any inherent superiority of their successors.
Tuesday, November 3, 2009
What Does a Smart Brain Look Like?
November 2009
Scientific American Mind
By: Richard J. Haier
A new neuroscience of intelligence is revealing that not all brains work in the same way
We all know someone who is not as smart as we are—and someone who is smarter. At the same time, we all know people who are better or worse than we are in a particular area or task, say, remembering facts or performing rapid mental math calculations. These variations in abilities and talents presumably arise from differences among our brains, and many studies have linked certain very specific tasks with cerebral activity in localized areas. Answers about how the brain as a whole integrates activity among areas, however, have proved elusive. Just what does a “smart” brain look like?
Now, for the first time, intelligence researchers are beginning to put together a bigger picture. Imaging studies are uncovering clues to how neural structure and function give rise to individual differences in intelligence. The results so far are confirming a view many experts have held for decades: not all brains work in the same way. People with the same IQ may solve a problem with equal speed and accuracy yet use different combinations of brain areas to do so. [For more on IQ and intelligence, see “Rational and Irrational Thought: The Thinking That IQ Tests Miss,” by Keith E. Stanovich.]
Men and women show group average differences on neuroimaging measures, as do older and younger groups, even at the same level of intelligence. But newer studies are demonstrating that individual differences in brain structure and function, as they relate to intelligence, are key—and the latest studies have exposed only the tip of the iceberg. These studies hint at a new definition of intelligence, based on the size of certain brain areas and the efficiency of information flow among them. Even more tantalizing, brain scans soon may be able to reveal an individual’s aptitude for certain academic subjects or jobs, enabling accurate and useful education and career counseling. As we learn more about intelligence, we will better understand how to help individuals fulfill or perhaps enhance their intellectual potential and success.
For 100 years intelligence research relied on pencil-and-paper testing for metrics such as IQ. Psychologists used statistical methods to characterize the different components of intelligence and how they change over people’s lifetimes. They determined that virtually all tests of mental ability, irrespective of content, are positively related to one another—that is, those who score high on one test tend to score high on the others. This fact implies that all tests share a common factor, which was dubbed g, a general factor of intelligence. The g factor is a powerful predictor of success and is the focus of many studies. [For more on g, see “Solving the IQ Puzzle,” by James R. Flynn; Scientific American Mind, October/November 2007.]
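The statistical reasoning behind g — that uniformly positive correlations among tests point to a shared factor — can be sketched with a small simulation. This is purely illustrative and not from the article: the factor loadings, sample size, and number of tests below are invented, and the first principal component of the correlation matrix stands in for the more elaborate factor-analytic methods psychologists actually used.

```python
# Toy illustration of the "positive manifold": simulate scores on several
# mental tests that all tap one latent ability, then recover a general
# factor from the correlation matrix via its first principal component.
# All numbers here are made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 5

g = rng.normal(size=n_people)                    # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5])   # how strongly each test taps g
noise = rng.normal(size=(n_people, n_tests))     # test-specific variation
# Each test score mixes the shared factor with independent noise,
# scaled so every test has unit variance.
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

R = np.corrcoef(scores, rowvar=False)            # test-by-test correlations

eigvals, eigvecs = np.linalg.eigh(R)             # ascending eigenvalues
first_pc = eigvecs[:, -1]                        # largest-eigenvalue component
first_pc = first_pc * np.sign(first_pc.sum())    # orient positively

print("all pairwise correlations positive:",
      bool((R[np.triu_indices(n_tests, k=1)] > 0).all()))
print("estimated g loadings per test:",
      np.round(first_pc * np.sqrt(eigvals[-1]), 2))
```

Every off-diagonal correlation comes out positive, and the leading component's loadings track how strongly each simulated test drew on the latent ability — the signature pattern that led psychologists to posit g.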
In addition to the g factor, psychologists have established other primary components of intelligence, including spatial, numerical and verbal factors, reasoning abilities known as fluid intelligence, and knowledge of factual information, called crystallized intelligence. But the brain mechanisms and structures underlying g and the other factors could not be inferred from test scores, or even from studies of individuals with brain damage, and thus remained hidden.
The advent of neuroscience techniques about 20 years ago finally offered a way forward. New methods, particularly neuroimaging, now allow a different approach to defining intelligence based on physical properties of the brain. In 1988 my colleagues and I at the University of California, Irvine, conducted one of the first studies to use such techniques. Using positron-emission tomography (PET), which produces images of metabolism in the brain by detecting the amount of low-level radioactive glucose used by neurons as they fire, we traced the brain’s energy use while a small sample of volunteers solved nonverbal abstract reasoning problems on a test called the Raven’s Advanced Progressive Matrices.
This test is known to be a good indicator of g, so we were hoping to answer the question of where general intelligence arises in the brain by determining which areas showed increased activation while solving the test problems. To our surprise, greater energy use (that is, increased glucose metabolism) was associated with poorer test performance. Smarter people were using less energy to solve the problems—their brains were more efficient.
The next obvious question was whether energy efficiency can arise through practice. In 1992 we used PET before and after subjects learned the computer game Tetris (a fast-paced visuospatial puzzle), and we found less energy use in several brain areas after 50 days of practice and increased skill. The data suggest that over time the brain learns what areas are not necessary for better performance, and activity in those areas diminishes—leading to greater overall efficiency. Moreover, the individuals in the study with high g showed more brain efficiency after practice than the people with lower g.
- Brain structure and metabolic efficiency may underlie individual differences in intelligence, and imaging research is pinpointing which regions are key players.
- Smart brains work in many different ways. Women and men who have the same IQ show different underlying brain architectures.
- The latest research suggests that an individual’s pattern of gray and white matter might underlie his or her specific cognitive strengths and weaknesses.