From Ars Technica
by Beth Mole – Jun 20, 2016 3:00pm EDT
Sampling bias and a belief in malleable intelligence may be behind small IQ changes.
Who wouldn’t want to be smarter? After all, high intelligence can help you get better grades in school, more promotions at work, fatter paychecks throughout your career, and a cushier life overall. Those are pretty good outcomes by any measure.
For years, scientific studies suggested that smarts were mostly heritable and fixed through young adulthood—nothing one could willfully boost. But some recent studies hint that a segment of smarts, called fluid intelligence—where you use logic and patterns, rather than knowledge, to analyze and solve novel problems—can improve slightly with memory exercises. The alluring finding quickly gave life to a $1 billion brain training industry. This industry, including companies such as Lumosity, Cogmed, and NeuroNation, has since promised everything from higher IQs to the ability to stay sharp through aging. The industry even boasts that it can help users overcome mental impairments from health conditions, such as attention deficit hyperactivity disorder (ADHD), traumatic brain injury, and the side effects of chemotherapy.
Those claims are clearly overblown and have been roundly criticized by scientists, the media, and federal regulators. Earlier this year, Lumosity agreed to pay $2 million to the Federal Trade Commission over claims of deceptive advertising. The FTC said Lumosity “preyed on consumers’ fears about age-related cognitive decline.” In the settlement, the FTC forbade the company from making any further claims that its training could sharpen consumers’ minds in life-altering ways.
But what of the initial research that suggested slight positive effects of such brain training? While brain training companies have publicly taken heat for their hyped-up claims, recent scientific reviews of the literature have largely upheld the initial findings. In fact, a 2015 meta-analysis concluded that the training could increase IQ scores by three to four points.
With a new report published Monday in the Proceedings of the National Academy of Sciences, that research might be nearing a blistering rebuff of its own.
In a study designed to assess the experimental methods of earlier brain-training studies, researchers found that sampling bias and the placebo effect explained the positive results seen in the past. “Indeed, to our knowledge, the rigor of double-blind randomized clinical trials is nonexistent in this research area,” the authors report. They even suggest that the overblown claims from brain training companies may have created a positive feedback loop, convincing people that brain training works and biasing follow-up research on the topic.
“The specter of a placebo may arise in any intervention when the desired outcome is known to the participant—an intervention like cognitive training,” the authors note. Coupled with evidence that “people tend to hold strong implicit beliefs regarding whether or not intelligence is malleable” and that those beliefs may skew research findings, the authors conclude that past research is basically bunk.
In their study, the authors—psychologists at George Mason University—recruited 50 participants using two different posters put up around campus. One poster advertised the study using the specific terms “brain training” and “cognitive enhancement” and then stated that previous research has shown brain training to be effective. “Participate in a study today!” the poster concluded. The second poster was visually similar to the first but merely encouraged viewers to participate in a study in order to earn credits.
The 25 recruits lured by the first poster formed what the authors called a “placebo” group, while the 25 brought in by the second, boring poster acted as controls.
The researchers set up the study this way for a couple of reasons. First, when the researchers looked back at the 19 studies included in the 2015 meta-analysis, they found that 17 of them used such “overt” recruitment strategies. Second, they picked two groups of 25 because most of those studies also used groups of 25 participants or fewer and because that number is large enough to statistically detect large effects.
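To see why 25 participants per group can only reliably reveal large effects, consider a quick statistical power calculation. The sketch below uses a standard normal approximation for a two-sample comparison; the effect-size values (Cohen’s d of 0.5 for “medium” and 0.8 for “large”) and the 80% power convention are common rules of thumb in psychology, not figures from the study itself.

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test
    for standardized effect size d with n participants per group."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    noncentrality = d * math.sqrt(n_per_group / 2)  # shift of the test statistic
    return NormalDist().cdf(noncentrality - z_crit)

# With 25 per group, a large effect (d = 0.8) reaches the conventional
# 80% power threshold, but a medium effect (d = 0.5) usually goes undetected.
print(round(power_two_sample(0.8, 25), 2))
print(round(power_two_sample(0.5, 25), 2))
```

In other words, samples of this size are tuned to catch only large differences; smaller but real effects (or, conversely, modest placebo effects) can easily be missed or mistaken for one another.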
All 50 participants were first given standardized tests to measure their fluid intelligence. Participants were then allowed to play a brain training game for an hour and then had their fluid intelligence retested. After just an hour of training, the placebo group scored better on the fluid intelligence test, with improvements equivalent to about five to ten IQ points. The control group saw no such improvement.
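The “five to ten IQ points” framing comes from converting a raw test-score gain onto the IQ scale, which by convention has a mean of 100 and a standard deviation of 15. A minimal sketch of that conversion (the raw-score numbers below are hypothetical, chosen for illustration, and are not taken from the study):

```python
IQ_SD = 15  # IQ scores are conventionally scaled to a standard deviation of 15

def gain_in_iq_points(raw_gain, test_sd):
    """Convert a raw test-score gain into IQ-point equivalents
    by expressing it in standard-deviation units of the test."""
    return (raw_gain / test_sd) * IQ_SD

# Hypothetical example: a 2-item improvement on a test whose scores
# have a standard deviation of 5 items.
print(gain_in_iq_points(2, 5))  # 6.0 IQ points
```

The conversion makes gains on different tests comparable, which is also how the meta-analysis's three-to-four-point figure and this study's five-to-ten-point placebo bump can be set side by side.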
When the researchers surveyed the recruits on their beliefs about intelligence, those in the placebo group had the highest confidence that intelligence is malleable.
Together, the researchers conclude, the findings suggest that recruitment methods used in past studies created self-selected groups of participants who believed the training would improve cognition and thus were susceptible to the placebo effect.
Such a placebo effect isn’t worthless, the authors caution. It may be useful for future studies to assess how far the placebo effect alone can carry brain-training believers. But to truly assess effects of the training, researchers need to turn to trials where participants don’t self-select their group or know the point of the study—randomized, controlled studies. “By using such methods, we can begin to understand whether true training effects exist and are generalizable to samples (and perhaps populations) beyond those who expect to improve,” the authors argue.
In the meantime, brain training companies should “temper their claims,” the authors suggest.