Friday, November 6, 2009
Survival of the Weakest - Why Neanderthals went extinct
From the magazine issue dated Nov 9, 2009
Sharon Begley
Thanks to recent discoveries that they were canny hunters, clever toolmakers, and probably endowed with the gift of language, Neanderthals have overcome some of the nastier calumnies hurled at them, especially that they were the "dumb brutes of the North," as evolutionary ecologist Clive Finlayson describes their popular image. But they have never managed to shake the charge that their extinction 30,000 years ago, when our subspecies of Homo sapiens replaced them in their European home, was their own dumb fault. Modern humans mounted a genocidal assault on them, goes one explanation, triumphing through superior skills. Moderns drove them into extinction through greater evolutionary fitness, says another, especially the moderns' greater intelligence or social advances like the sexual division of labor.
Winners—of prehistory no less than history—get to write the textbooks. So it is no surprise that we, the children of the humans who replaced Neanderthals, "portray ourselves in the role of victors and reduce the rest [of the human lineage] to the lower echelons of vanquished," Finlayson writes. "To accept our existence as the product of chance requires a large dose of humility." But in a provocative new book, The Humans Who Went Extinct: Why Neanderthals Died Out and We Survived, he argues that chance is precisely what got us here. "A slight change of fortunes and the descendants of the Neanderthals would today be debating the demise of those other people that lived long ago," he argues.
Evolutionary biologists have long recognized the role serendipity plays in which species thrive and which wither on the Darwinian vine. Without the asteroid impact 65 million years ago, for instance, mammals would not have spread so soon into almost every ecological niche on Earth (dinosaurs were in the way). Yet when the subject strikes as close to home as why our ancestors survived and Neanderthals did not, scientists have resisted giving chance a starring role, preferring to credit the superiority of ancient H. sapiens. Both lineages are descendants of Homo erectus: one spread across Eurasia beginning 1.8 million years ago and evolved into Neanderthals by 300,000 years ago; the other evolved in Africa, becoming anatomically modern by 200,000 years ago and reaching Europe some 45,000 years ago.
These arrivistes are often portrayed as technologically and culturally more advanced, with their bone and ivory (not just stone) tools and weapons, their jewelry making and cave painting—the last two evidence of symbolic thought. Finlayson has his doubts. Neanderthals may have painted, too (but on perishable surfaces); they were no slouches as toolmakers; and studies of their DNA show they had the same genes for speech that we do. "They survived for nearly 300,000 years," Finlayson says by phone from Gibraltar, where he is director of the Gibraltar Museum. "That modern humans got to Australia before they penetrated Europe suggests that Neanderthals held them off for millennia. That suggests they weren't that backward."
Instead, moderns were very, very lucky—so lucky that Finlayson calls what happened "survival of the weakest." About 30,000 years ago, the vast forests of Eurasia began to retreat, leaving treeless steppes and tundra and forcing forest animals to disperse over vast distances. Because they evolved in the warm climate of Africa before spreading into Europe, modern humans had the bodies of marathon runners, adapted to track prey over such distances. Neanderthals, in contrast, were built like wrestlers. That build was great for ambush hunting, which they practiced in the once ubiquitous forests, but a handicap on the steppes, where endurance mattered more. This is the luck part: the open, African type of terrain in which modern humans evolved their less-muscled, more-slender body type "subsequently expanded so greatly" in Europe, writes Finlayson. And that was "pure chance."
Because Neanderthals were not adept at tracking herds on the tundra, they had to retreat with the receding woodlands. They made their last stand where pockets of woodland survived, including in a cave in the Rock of Gibraltar. There, Finlayson and colleagues discovered in 2005, Neanderthals held on at least 2,000 years later than anywhere else before going extinct, victims of bad luck more than any evolutionary failings, let alone any inherent superiority of their successors.
Tuesday, November 3, 2009
What Does a Smart Brain Look Like?
Scientific American Mind
By: Richard J. Haier
A new neuroscience of intelligence is revealing that not all brains work in the same way
- Brain structure and metabolic efficiency may underlie individual differences in intelligence, and imaging research is pinpointing which regions are key players.
- Smart brains work in many different ways. Women and men who have the same IQ show different underlying brain architectures.
- The latest research suggests that an individual’s pattern of gray and white matter might underlie his or her specific cognitive strengths and weaknesses.
We all know someone who is not as smart as we are—and someone who is smarter. At the same time, we all know people who are better or worse than we are in a particular area or task, say, remembering facts or performing rapid mental math calculations. These variations in abilities and talents presumably arise from differences among our brains, and many studies have linked certain very specific tasks with cerebral activity in localized areas. Answers about how the brain as a whole integrates activity among areas, however, have proved elusive. Just what does a “smart” brain look like?
Now, for the first time, intelligence researchers are beginning to put together a bigger picture. Imaging studies are uncovering clues to how neural structure and function give rise to individual differences in intelligence. The results so far are confirming a view many experts have had for decades: not all brains work in the same way. People with the same IQ may solve a problem with equal speed and accuracy, yet use different combinations of brain areas to do so. [For more on IQ and intelligence, see “Rational and Irrational Thought: The Thinking That IQ Tests Miss,” by Keith E. Stanovich.]
Men and women show group average differences on neuroimaging measures, as do older and younger groups, even at the same level of intelligence. But newer studies are demonstrating that individual differences in brain structure and function, as they relate to intelligence, are key—and the latest studies have exposed only the tip of the iceberg. These studies hint at a new definition of intelligence, based on the size of certain brain areas and the efficiency of information flow among them. Even more tantalizing, brain scans soon may be able to reveal an individual’s aptitude for certain academic subjects or jobs, enabling accurate and useful education and career counseling. As we learn more about intelligence, we will better understand how to help individuals fulfill or perhaps enhance their intellectual potential and success.
For 100 years intelligence research relied on pencil-and-paper testing for metrics such as IQ. Psychologists used statistical methods to characterize the different components of intelligence and how they change over people’s lifetimes. They determined that virtually all tests of mental ability, irrespective of content, are positively related to one another—that is, those who score high on one test tend to score high on the others. This fact implies that all tests share a common factor, which was dubbed g, a general factor of intelligence. The g factor is a powerful predictor of success and is the focus of many studies. [For more on g, see “Solving the IQ Puzzle,” by James R. Flynn; Scientific American Mind, October/November 2007.]
In addition to the g factor, psychologists also have established other primary components of intelligence, including spatial, numerical and verbal factors, reasoning abilities known as fluid intelligence, and knowledge of factual information, called crystallized intelligence. But the brain mechanisms and structures underlying g and the other factors could not be inferred from test scores, or even from studies of individuals with brain damage, and thus remained hidden.
The advent of neuroscience techniques about 20 years ago finally offered a way forward. New methods, particularly neuroimaging, now allow a different approach to defining intelligence based on physical properties of the brain. In 1988 my colleagues and I at the University of California, Irvine, conducted one of the first studies to use such techniques. Using positron-emission tomography (PET), which produces images of metabolism in the brain by detecting the amount of low-level radioactive glucose used by neurons as they fire, we traced the brain’s energy use while a small sample of volunteers solved nonverbal abstract reasoning problems on a test called the Raven’s Advanced Progressive Matrices.
This test is known to be a good indicator of g, so we were hoping to answer the question of where general intelligence arises in the brain by determining which areas showed increased activation while solving the test problems. To our surprise, greater energy use (that is, increased glucose metabolism) was associated with poorer test performance. Smarter people were using less energy to solve the problems—their brains were more efficient.
The next obvious question was whether energy efficiency can arise through practice. In 1992 we used PET before and after subjects learned the computer game Tetris (a fast-paced visuospatial puzzle), and we found less energy use in several brain areas after 50 days of practice and increased skill. The data suggest that over time the brain learns what areas are not necessary for better performance, and activity in those areas diminishes—leading to greater overall efficiency. Moreover, the individuals in the study with high g showed more brain efficiency after practice than the people with lower g.
Thursday, October 29, 2009
Naked Mole Rat Wins the War on Cancer
ScienceNOW Daily News
26 October 2009
With its wrinkled skin and buck teeth, the naked mole rat isn't going to win any beauty contests. But the burrowing desert rodent is exceptional in another way: It doesn't get cancer. The naked mole rat's cells hate to be crowded, it turns out, so they stop growing before they can form tumors. The details could someday lead to a new strategy for treating cancer in people.
In search of clues to aging, cell biologists Vera Gorbunova, Andrei Seluanov, and colleagues at the University of Rochester have been comparing rodents that vary in size and life span, from mice to beavers. The naked mole rat stands out because it's small yet can live more than 28 years--seven times as long as a house mouse. Resistance to cancer could be a major factor; whereas most laboratory mice and rats die from the disease, it has never been observed in naked mole rats.
Gorbunova's team looked at the mole rat's cells for an answer. Normal human and mouse cells will grow and divide in a petri dish until they mash tightly against one another in a single, dense layer--a mechanism known as "contact inhibition." Naked mole rat cells are even more sensitive to their neighbors, the researchers found. The cells stop growing as soon as they touch. The strategy likely helps keep the rodents cancer-free, as contact inhibition fails in cancerous cells, causing them to pile up.
The reason, the researchers discovered, is that naked mole rat cells rely on two proteins--named p27Kip1 and p16Ink4a--to stop cell growth when they touch, whereas human and mouse cells rely mainly on p27Kip1. "They use an additional checkpoint," says Gorbunova, whose study appears online today in the Proceedings of the National Academy of Sciences (PNAS). When the team mutated the naked mole rat cells so that they grew much closer together than they had before, levels of p16Ink4a dropped.
The naked mole rat's kind of cancer prevention may prove relevant to humans because the same genes are involved, says Brown University cancer biologist John Sedivy. The rat's defenses "evolved separately but use the same nuts and bolts," he says. Sedivy writes in an accompanying commentary in PNAS that it may be possible to "tweak the entire network [of tumor-suppressing pathways] to develop new prevention strategies."
The next step, Gorbunova says, is to find other proteins and molecules that make up this new contact inhibition pathway. One obstacle is that little is known about the naked mole rat's genes. The critter has been proposed for genome sequencing but so far has been turned down. "I hope Vera's study will put the naked mole rat higher up in the queue," says George Martin, a researcher who studies aging and a professor emeritus at the University of Washington.
Tuesday, October 27, 2009
Secrets of frog killer laid bare
BBC News website
Scientists have unravelled the mechanism by which the fungal disease chytridiomycosis kills its victims.
The fungus is steadily spreading through populations of frogs and other amphibians worldwide, and has driven some species to extinction in just a few years.
Researchers now report in the journal Science that the fungus kills by changing the animals' electrolyte balance, resulting in cardiac arrest.
The finding is described as a "key step" in understanding the epidemic.
Karen Lips, one of the world's authorities on the spread of chytridiomycosis, said the research was "compelling".
"They've done an incredible amount of work, been very thorough, and I don't think anybody will have problems with this.
"We suspected something like this all along, but it's great to know this is in fact what is happening," the University of Maryland professor told BBC News.
Skin deep
Amphibian skin plays several roles in the animals' life.
Most species can breathe through it, and it is also used as a membrane through which electrolytes such as sodium and potassium are exchanged with the outside world.
The mainly Australian research group took skin samples from healthy and diseased green tree frogs, and found that these compounds passed through the skin much less readily when chytrid was present.
Samples of blood and urine from infected frogs showed much lower sodium and potassium concentrations than in healthy animals - potassium was down by half.
In other animals including humans, this kind of disturbance is known to be capable of causing cardiac arrest.
The scientists also took electrocardiogram recordings of the frogs' hearts in the hours before death; and found changes to the rhythm culminating in arrest.
Drugs that restore electrolyte balance brought the animals a few hours or days of better health, some showing enough vigour to climb out of their bowls of water; but all died in the end.
Grail quest
Lead scientist Jamie Voyles, from James Cook University in Townsville, said the next step was to look for the same phenomenon in other species.
"This is lethal across a broad range of hosts, whether terrestrial or aquatic, so it's really important to look at what's happening in other susceptible amphibians," she said.
Another step will be to examine how the chytrid fungus (Batrachochytrium dendrobatidis - Bd) impairs electrolyte transfer.
"What this work doesn't tell us is the mechanism by which chytrid causes this problem with sodium," said Matthew Fisher from Imperial College London.
"It could be that Bd is excreting a toxin, or it could be causing cell damage. This causative action is actually the 'holy grail' - so that's another obvious next step."
The finding is unlikely to provide an immediate route to preventing, treating, or curing the disease in the wild.
Curing infected amphibians in captivity is straightforward using antifungal chemicals; but currently there is no way to tackle it outside.
Various research teams are exploring the potential of bacteria that occur naturally on the skin of some amphibians, and may play a protective role.
Understanding the genetics of how Bd disrupts electrolyte balance might lead to more precise identification of protective bacteria, suggested Professor Lips, and so eventually play a role in curbing the epidemic.
Gene therapy transforms eyesight of 12 born with rare defect
By Thomas H. Maugh II
October 25, 2009
Pennsylvania researchers using gene therapy have made significant improvements in vision in 12 patients with a rare inherited visual defect, a finding that suggests it may be possible to produce similar improvements in a much larger number of patients with retinitis pigmentosa and macular degeneration.
The team last year reported success with three adult patients, an achievement that was hailed as a major accomplishment for gene therapy. They have now treated an additional nine patients, including five children, and find that the best results are achieved in the youngest patients, whose defective retinal cells have not had time to die off.
The youngest patient, 9-year-old Corey Haas, was considered legally blind before the treatment began. He was confined largely to his house and driveway when playing, had immense difficulties in navigating an obstacle course and required special enlarging equipment for books and help in the classroom.
Today, after a single injection of a gene-therapy product in one eye, he rides his bike around the neighborhood, needs no assistance in the classroom, navigates the obstacle course quickly and has even played his first game of softball.
The results are "astounding," said Stephen Rose, chief scientific officer of Foundation Fighting Blindness, which supported the work but was not involved directly. "The big take-home message from this is that every individual in the group had improvement… and there were no safety issues at all."
The study "holds great promise for the future" and "is appealing because of its simplicity," wrote researchers from the Nijmegen Medical Center in the Netherlands in an editorial accompanying the report, which was published online Saturday by the journal Lancet.
The 12 patients had Leber's congenital amaurosis, which affects about 3,000 people in the United States and perhaps 130,000 worldwide. Victims are born with severely impaired vision that deteriorates until they are totally blind, usually in childhood or adolescence. There is no treatment. Leber's is a good candidate for gene therapy because most of the visual apparatus is intact, particularly at birth and in childhood. Mistakes in 13 different genes are known to cause it, but all 12 of the patients suffered a defect in a gene called RPE65. This gene encodes a protein that produces a vitamin A derivative crucial for detecting light.
About five children are born each year in the United States with that defect. It was chosen for the trial because researchers at the Children's Hospital of Philadelphia and the University of Pennsylvania School of Medicine had cloned the gene, making copies available for use.
The study, led by Dr. Katherine A. High, Dr. Albert M. Maguire and Dr. Jean Bennett of those two institutions, enrolled five people in the United States, five from Italy and two from Belgium. Five were children, and the oldest was 44.
The good copy of the RPE65 gene was inserted into a defanged version of an adeno-associated virus. The engineered virus then invaded retinal cells, delivering the working gene.
Maguire used a long, thin needle to insert the preparation into the retina of the worst eye in each of the patients. Within two weeks, the treated eyes began to become more sensitive to light, and within a few more weeks, vision began to improve. The younger the patients were, the better they responded. That was expected, Bennett said, because similar results had been observed in dogs and rodents.
By both objective and subjective measures, vision improved for all the patients. They were able to navigate obstacle courses, read eye charts and perform most of the tasks of daily living. The improvement has now persisted for as long as two years.
The children who were treated "are now able to walk and play just like any normally sighted child," Maguire said.
Bennett noted that the oldest patient in the trial, a mother, had not been able to walk down the street to meet her children at school. "Now she can. She also achieved her primary goal, which was to see her daughter hit a home run."
There are clear limitations to the study. The patients' vision was not corrected to normal because of the damage that had already been done to the retina, and only one eye was treated.
"The big elephant in the room is: Can you treat the other eye?" Rose said.
The foundation will put more funding into the research "to make sure that if you go back and treat the other eye, it won't ablate the positive results in the first eye due to an immune reaction or something else."
Researchers also have not optimized the dosage of the viral vector used to carry the gene into the eye. Those issues will be studied in Phase 2, a larger clinical trial that they hope to begin soon.
Meanwhile, the team has begun treating some patients at the University of Iowa.
Researchers also hope they will be able to translate the results to other congenital conditions using different genes.
Leber's is one form of retinitis pigmentosa, which affects an estimated 100,000 Americans.
The findings might be applicable to macular degeneration, which affects an estimated 1.25 million Americans and is the major cause of visual impairment in the elderly.
Friday, October 2, 2009
World’s oldest human-linked skeleton found
By Randolph E. Schmid
updated 6:23 p.m. CT, Thurs., Oct. 1, 2009

J.H. Matternes
An artist's rendering shows Ardipithecus ramidus as it might have looked in life.
The 110-pound, 4-foot female roamed forests a million years before the famous Lucy, long studied as the earliest skeleton of a human ancestor.
This older skeleton reverses the common wisdom of human evolution, said anthropologist C. Owen Lovejoy of Kent State University.
Rather than humans evolving from an ancient chimplike creature, the new find provides evidence that chimps and humans evolved from some long-ago common ancestor — but each evolved and changed separately along the way.
“This is not that common ancestor, but it’s the closest we have ever been able to come,” said Tim White, director of the Human Evolution Research Center at the University of California, Berkeley.
The lines that evolved into modern humans and living apes probably shared an ancestor 6 million to 7 million years ago, White said in a telephone interview.
But Ardi has many traits that do not appear in modern-day African apes, leading to the conclusion that the apes evolved extensively since we shared that last common ancestor.
A study of Ardi, under way since the first bones were discovered in 1994, indicates the species lived in the woodlands and could climb on all fours along tree branches, but the development of their arms and legs indicates they didn’t spend much time in the trees. And they could walk upright, on two legs, when on the ground.
Formally dubbed Ardipithecus ramidus — which means root of the ground ape — the find is detailed in 11 research papers published Thursday by the journal Science.
“This is one of the most important discoveries for the study of human evolution,” said David Pilbeam, curator of paleoanthropology at Harvard’s Peabody Museum of Archaeology and Ethnology.
“It is relatively complete in that it preserves head, hands, feet and some critical parts in between. It represents a genus plausibly ancestral to Australopithecus — itself ancestral to our genus Homo,” said Pilbeam, who was not part of the research teams.
Scientists assembled the skeleton from 125 pieces.
The area where "Ardi" was found is rich in sites where the fossils of human ancestors have been found.
Lucy, also found in Africa, thrived a million years after Ardi and was of the more humanlike genus Australopithecus.
“In Ardipithecus we have an unspecialized form that hasn’t evolved very far in the direction of Australopithecus. So when you go from head to toe, you’re seeing a mosaic creature that is neither chimpanzee, nor is it human. It is Ardipithecus,” said White.
White noted that Charles Darwin, whose research in the 19th century paved the way for the science of evolution, was cautious about the last common ancestor between humans and apes.
“Darwin said we have to be really careful. The only way we’re really going to know what this last common ancestor looked like is to go and find it. Well, at 4.4 million years ago we found something pretty close to it,” White said. “And, just like Darwin appreciated, evolution of the ape lineages and the human lineage has been going on independently since the time those lines split, since that last common ancestor we shared.”
Some details about Ardi in the collection of papers:
- Ardi was found in Ethiopia’s Afar Rift, where many fossils of ancient plants and animals have been discovered. Findings near the skeleton indicate that at the time it was a wooded environment. Fossils of 29 species of birds and 20 species of small mammals were found at the site.
- Geologist Giday WoldeGabriel of Los Alamos National Laboratory was able to use volcanic layers above and below the fossil to date it to 4.4 million years ago.
- Ardi’s upper canine teeth are more like the stubby ones of modern humans than the long, sharp, pointed ones of male chimpanzees and most other primates. An analysis of the tooth enamel suggests a diverse diet, including fruit and other woodland-based foods such as nuts and leaves.
- Paleoanthropologist Gen Suwa of the University of Tokyo reported that Ardi’s face had a projecting muzzle, giving her an ape-like appearance. But it didn’t thrust forward quite as much as the lower faces of modern African apes do. Some features of her skull, such as the ridge above the eye socket, are quite different from those of chimpanzees. The details of the bottom of the skull, where nerves and blood vessels enter the brain, indicate that Ardi’s brain was positioned in a way similar to modern humans, possibly suggesting that the hominid brain may have been already poised to expand areas involving aspects of visual and spatial perception.
- Ardi’s hand and wrist were a mix of primitive traits and a few new ones, but they don’t include the hallmark traits of the modern tree-hanging, knuckle-walking chimps and gorillas. She had relatively short palms and fingers which were flexible, allowing her to support her body weight on her palms while moving along tree branches, but she had to be a careful climber because she lacked the anatomical features that allow modern-day African apes to swing, hang and easily move through the trees.
- The pelvis and hip show the gluteal muscles were positioned so she could walk upright.
- Her feet were rigid enough for walking but still had a grasping big toe for use in climbing.
The research was funded by the National Science Foundation, the Institute of Geophysics and Planetary Physics of the University of California, Los Alamos National Laboratory, the Japan Society for the Promotion of Science and others.
Monday, September 21, 2009
Born to be Big
By Sharon Begley NEWSWEEK
Published Sep 11, 2009
From the magazine issue dated Sep 21, 2009
It’s easy enough to find culprits in the nation's epidemic of obesity, starting with tubs of buttered popcorn at the multiplex and McDonald's 1,220-calorie deluxe breakfasts, and moving on to the couch potatofication of America. Potent as they are, however, these causes cannot explain the ballooning of one particular segment of the population, a segment that doesn't go to movies, can't chew, and was never that much into exercise: babies. In 2006 scientists at the Harvard School of Public Health reported that the prevalence of obesity in infants under 6 months had risen 73 percent since 1980. "This epidemic of obese 6-month-olds," as endocrinologist Robert Lustig of the University of California, San Francisco, calls it, poses a problem for conventional explanations of the fattening of America. "Since they're eating only formula or breast milk, and never exactly got a lot of exercise, the obvious explanations for obesity don't work for babies," he points out. "You have to look beyond the obvious."
The search for the non-obvious has led to a familiar villain: early-life exposure to traces of chemicals in the environment. Evidence has been steadily accumulating that certain hormone-mimicking pollutants, ubiquitous in the food chain, have two previously unsuspected effects. They act on genes in the developing fetus and newborn to turn more precursor cells into fat cells, which stay with you for life. And they may alter metabolic rate, so that the body hoards calories rather than burning them, like a physiological Scrooge. "The evidence now emerging says that being overweight is not just the result of personal choices about what you eat, combined with inactivity," says Retha Newbold of the National Institute of Environmental Health Sciences (NIEHS) in North Carolina, part of the National Institutes of Health (NIH). "Exposure to environmental chemicals during development may be contributing to the obesity epidemic." They are not the cause of extra pounds in every person who is overweight—for older adults, who were less likely to be exposed to so many of the compounds before birth, the standard explanations of genetics and lifestyle probably suffice—but environmental chemicals may well account for a good part of the current epidemic, especially in those under 50. And at the individual level, exposure to the compounds during a critical period of development may explain one of the most frustrating aspects of weight gain: you eat no more than your slim friends, and exercise no less, yet are still unable to shed pounds.
The new thinking about obesity comes at a pivotal time politically. As the debate over health care shines a light on the country's unsustainable spending on doctors, hospitals, and drugs, the obese make tempting scapegoats. About 60 percent of Americans are overweight or obese, and their health-care costs are higher: $3,400 in annual spending for a normal-weight adult versus $4,870 for an obese adult, mostly due to their higher levels of type 2 diabetes, heart disease, and other conditions. If those outsize costs inspire greater efforts to prevent and treat obesity, fine. But if they lead to demonizing the obese—caricaturing them as indolent pigs raising insurance premiums for the rest of us—that's a problem, and not only for ethical reasons: it threatens to obscure that one potent cause of weight gain may be largely beyond an individual's control.
That idea did not have a very auspicious genesis. In 2002 an unknown academic published a paper in an obscure journal. Paula Baillie-Hamilton, a doctor at Stirling University in Scotland whose only previous scientific paper, in 1997, was titled "Elimination of Firearms Would Do Little to Reduce Premature Deaths," reported a curious correlation. Obesity rates, she noted in The Journal of Alternative and Complementary Medicine, had risen in lockstep with the use of chemicals such as pesticides and plasticizers over the previous 40 years. True enough. But to suggest that the chemicals caused obesity made as much sense as blaming the rise in obesity on, say, hip-hop. After all, both of those took off in the 1970s and 1980s.
Despite that obvious hole in logic, the suggestion of a link between synthetic chemicals and obesity caught the eye of a few scientists. For one thing, there was no question that exposure in the womb to hormonelike chemicals can cause serious illness decades later. Women whose mothers took the antimiscarriage, estrogenlike drug DES during pregnancy, for instance, have a high risk of cervical and vaginal cancer. In that context, the idea that exposure to certain chemicals during fetal or infant development might "program" someone for obesity didn't seem so crazy, says Jerrold Heindel of NIEHS. In 2003 he therefore wrote a commentary, mentioning Baillie-Hamilton's idea, in a widely read toxicology journal, bringing what he called its "provocative hypothesis" more attention. He underlined one fact in particular. When many of the chemicals Baillie-Hamilton discussed had been tested for toxicity, researchers focused on whether they caused weight loss, which is considered a toxic effect. They overlooked instances when the chemicals caused weight gain. But if you go back to those old studies, Heindel pointed out, you see that a number of chemicals caused weight gain—and at low doses, akin to those that fetuses and newborns are exposed to, not the proverbial 800 cans of diet soda a day. Those results, he says, had "generally been overlooked."
Scientists in Japan, whose work Heindel focused on, were also finding that low levels of certain compounds, such as bisphenol A (the building block of hard, polycarbonate plastic, including that in baby bottles), had surprising effects on cells growing in lab dishes. Usually the cells become fibroblasts, which make up the body's connective tissue. These prefibroblasts, however, are like the kid who isn't sure what he wants to be when he grows up. With a little nudge, they can take an entirely different road. They can become adipocytes—fat cells. And that's what the Japanese team found: bisphenol A, and some other industrial compounds, pushed prefibroblasts to become fat cells. The compounds also stimulated the proliferation of existing fat cells. "The fact that an environmental chemical has the potential to stimulate growth of 'preadipocytes' has enormous implications," Heindel wrote. If this happened in living animals as it did in cells in lab dishes, "the result would be an animal [with] the tendency to become obese."
It took less than two years for Heindel's "if" to become reality. For 30 years his colleague Newbold had been studying the effects of estrogens, but she had never specifically looked for links to obesity. Now she did. Newbold gave low doses (equivalent to what people are exposed to in the environment) of hormone-mimicking compounds to newborn mice. In six months, the mice were 20 percent heavier and had 36 percent more body fat than unexposed mice. Strangely, these results seemed to contradict the first law of thermodynamics, which implies that weight gain equals calories consumed minus calories burned. "What was so odd was that the overweight mice were not eating more or moving less than the normal mice," Newbold says. "We measured that very carefully, and there was no statistical difference."
On the other side of the country, Bruce Blumberg of the University of California, Irvine, had also read the 2002 Baillie-Hamilton paper. He wasn't overly impressed. "She was peddling a book with questionable claims about diets that 'detoxify' the body," he recalls. "And to find a correlation between rising levels of obesity and chemicals didn't mean much. There's a correlation between obesity and a lot of things." Nevertheless, her claim stuck in the back of his mind as he tested environmental compounds for their effects on the endocrine (hormone) system. "People were testing these compounds for all sorts of things, saying, 'Let's see what they do in my [experimental] system,' " Blumberg says. "But cells in culture are not identical to cells in the body. We had to see whether this occurred in live animals."
In 2006 he fed pregnant mice tributyltin, a disinfectant and fungicide used in marine paints, plastics production, and other products, which enters the food chain in seafood and drinking water. "The offspring were born with more fat already stored, more fat cells, and became 5 to 20 percent fatter by adulthood," Blumberg says. Genetic tests revealed how that had happened. The tributyltin activated a receptor called PPAR gamma, which acts like a switch for cells' fate: in one position it allows cells to remain fibroblasts, in another it guides them to become fat cells. (It is because the diabetes drugs Actos and Avandia activate PPAR gamma that one of their major side effects is obesity.) The effect was so strong and so reliable that Blumberg thought compounds that reprogram cells' fate like this deserved a name of their own: obesogens. As later tests would show, tributyltin is not the only obesogen that acts on the PPAR pathway, leading to more fat cells. So do some phthalates (used to make vinyl plastics, such as those used in shower curtains and, until the 1990s, plastic food wrap), bisphenol A, and perfluoroalkyl compounds (used in stain repellents and nonstick cooking surfaces).
Programming the fetus to make more fat cells leaves an enduring physiological legacy. "The more adipocytes, the fatter you are," says UCSF's Lustig. But adipocytes are more than passive storage sites. They also fine-tune appetite, producing hormones that act on the brain to make us feel hungry or sated. With more adipocytes, an animal is doubly cursed: it is hungrier more often, and the extra food it eats has more places to go—and remain.
Within a year of Blumberg's groundbreaking work, it became clear that altering cells' fate isn't the only way obesogens can act, and that exotic pollutants aren't the only potential obesogens. In 2005 Newbold began feeding newborn mice genistein, an estrogenlike compound found in soy, at doses like those in soy milk and soy formula. By the age of 3 or 4 months, the mice had higher stores of fat and a noticeable increase in body weight. And once again, mice fed genistein did not eat significantly more—not enough more, anyway, to account for their extra avoirdupois, suggesting that the compound threw a wrench in the workings of the body's metabolic rate. "The only way to gain weight is to take in more calories than you burn," says Blumberg. "But there are lots of variables, such as how efficiently calories are used." Someone who uses calories very efficiently, and burns fewer to stay warm, has more left over to turn into fat. "One of the messages of the obesogens research is that prenatal exposure can reprogram metabolism so that you are predisposed to become fat," says Blumberg.
The jury is still out on whether soy programs babies to be overweight—some studies find that it does, other studies that it doesn't—but Newbold didn't want her new grandchild to be a guinea pig in this unintentional experiment. When her daughter mentioned that she was planning to feed the baby soy formula, as about 20 percent of American mothers do, Newbold said she would cover the cost of a year's worth of regular formula if her daughter would change her mind. (She did.) As a scientist rather than a grandmother, however, Newbold hedged her bets. "Whether our results can be extrapolated to humans," she said in 2005, "remains to be determined."
Another challenge to the simplistic calories-in/calories-out model came just this month. The time of day when mice eat, scientists reported, can greatly affect weight gain. Mice fed a high-fat diet during their normal sleeping hours gained more than twice as much weight as mice eating the same type and amount of food during their normal waking hours, Fred Turek of Northwestern University and colleagues reported in the journal Obesity. And just as Newbold found, the two groups did not differ enough in caloric intake or activity levels to account for the difference in weight gain. Turek suspects that one possible cause of the difference is the disruption in the animals' circadian rhythms. Genes that govern our daily cycle of sleeping and waking "also regulate at least 10 percent of the other genes in our cells, including metabolic genes," says Turek. "Mess up the cellular clock and you may mess up metabolic rate." That would account for why the mice that ate when they should have slept gained more weight: the disruption in their clock genes lowered their metabolic rate, so they burned fewer calories to keep their body running. Studies in people have linked eating at odd times with weight gain, too.
Mice are all well and good, but many a theory has imploded when results in lab animals failed to show up in people. Unfortunately, obesogen research is not imploding. In 2005 scientists in Spain reported that the more pesticides children were exposed to as fetuses, the greater their risk of being overweight as toddlers. And last January scientists in Belgium found that children exposed to higher levels of PCBs and DDE (the breakdown product of the pesticide DDT) before birth were fatter than those exposed to lower levels. Neither study proves causation, but they "support the findings in experimental animals," says Newbold. They "show a link between exposure to environmental chemicals … and the development of obesity."
Given the ubiquity of obesogens, traces of which are found in the blood or tissue of virtually every American, why isn't everyone overweight? For now, all scientists can say is that even a slight variation in the amounts and timing of exposures might matter, as could individual differences in physiology. "Even in genetically identical mice," notes Blumberg, "you get a range of reactions to the same chemical exposure." More problematic is the question of how to deal with this cause of obesity. If obesogens have converted more precursor cells into fat cells, or have given you a "thrifty" metabolism that husbands calories like a famine victim, you face an uphill climb. "It doesn't mean you can't work out like a demon and strictly control what you eat," says Blumberg, "but you have to work at it that much harder." He and others are quick to add that obesogens do not account for all cases of obesity, especially in adults. "I'd like to avoid the simplistic story that chemicals make you fat," says Blumberg. For instance, someone who was slim throughout adolescence and then packed on pounds in adulthood probably cannot blame it on exposure to obesogens prenatally or in infancy: if that were the cause, the extra fat cells and lower metabolic rate that obesogens cause would have shown themselves in childhood chubbiness.
This fall, scientists from NIH, the Food and Drug Administration, the Environmental Protection Agency, and academia will discuss obesogens at the largest-ever government-sponsored meeting on the topic. "The main message is that obesogens are a factor that we hadn't thought about at all before this," says Blumberg. But they're one that could clear up at least some of the mystery of why so many of us put on pounds that refuse to come off.