Thursday, November 20, 2014

Our Futile Efforts to Boost Children's IQ

The twin studies have always shown little influence from family environment -- both as regards IQ and personality. Charles Murray notes more evidence to that effect below:

It’s one thing to point out that programs to improve children's cognitive functioning have had a dismal track record. We can always focus on short-term improvements, blame the long-term failures on poor execution or lack of follow-up and try, try again. It’s another to say that it's impossible to do much to permanently improve children's intellectual ability through outside interventions. But that’s increasingly where the data are pointing.

Two studies published this year have made life significantly more difficult for those who continue to be optimists. The first one is by Florida State University’s Kevin Beaver and five colleagues, who asked how much effect parenting has on IQ independently of genes. The database they used, the National Longitudinal Study of Adolescent Health, is large, nationally representative and highly regarded. The measures of parenting included indicators for parental engagement, attachment, involvement and permissiveness. The researchers controlled for age, sex, race and neighborhood disadvantage. Their analytic model, which compares adoptees with biological children, is powerful, and their statistical methods are sophisticated and rigorous.

The answer to their question? Not much. “Taken together,” the authors write, “the results … indicate that family and parenting characteristics are not significant contributors to variations in IQ scores.” It gets worse: Some of the slight effects they did find were in the “wrong” direction. For example, maternal attachment was negatively associated with IQ in the children.

There’s nothing new in the finding that the home environment doesn’t explain much about a child’s IQ after controlling for the parents’ IQ, but the quality of the data and analysis in this study addresses many of the objections that the environmentalists have raised about such results. Their scholarly wiggle-room for disagreement is shrinking.

The second study breaks new ground. Six of its eight authors come from King’s College London, home to what is probably the world’s leading center for the study of the interplay among genes, environment and developmental factors. The authors applied one of the powerful new methods enabled by the decoding of the genome, “Genome-wide Complex Trait Analysis,” to ask how much effect socioeconomic status has on IQ independently of genes. The technique does not identify the causal role of specific genes, but rather enables researchers to identify patterns that permit conclusions like the one they reached in this study: “When genes associated with children’s IQ are identified, the same genes will also be likely to be associated with family SES.” Specifically, the researchers calculated that 94 percent of the correlation between socioeconomic status and IQ was mediated by genes at age 7 and 56 percent at age 12.

How can parenting and socioeconomic status play such minor roles in determining IQ, when scholars on all sides of the nature-nurture debate agree that somewhere around half of the variation in IQ is environmental? The short answer is that the environment that affects IQ doesn’t consist of the advantages that most people have in mind -- parents who talk a lot to their toddlers, many books in the house for the older children, high-quality schools and the like.

Instead, studies over the past two decades have consistently found that an amorphous thing called the “nonshared” environment accounts for most (in many studies, nearly all) of the environmentally grounded variation. Scholars are still trying to figure out what features of the nonshared environment are important. Peers? Events in the womb? Accidents? We can be sure only of this: The nonshared environment does not lend itself to policy interventions intended to affect education, parenting, income or family structure.

The relevance of these findings goes beyond questions of public policy. As a parent of four children who all turned out great (in my opinion), I’d like to take some credit. With every new study telling me that I can’t legitimately do so with regard to IQ or this or that personality trait, I try to come up with something, anything, about my children for which I can still believe my parenting made a positive difference. It’s hard.

There’s no question that we know how to physically and psychologically brutalize children so that they are permanently damaged. But it increasingly appears that once we have provided children with a merely OK environment, our contribution as parents and as society is pretty much over. I’m with most of you: I viscerally resist that conclusion. But my resistance is founded on a sustained triumph of hope over evidence.


Monday, November 3, 2014

Did rationing in World War 2 increase intelligence of Britons?

The journal article is "Aging trajectories of fluid intelligence in late life: The influence of age, practice and childhood IQ on Raven's Progressive Matrices" and the key passage is reproduced below:

"Standardizing the MHT [original] scores indicated a difference between the cohorts of 3.7 points. This is slightly smaller than expected and may be brought about by survival and selection bias discussed above. Late life comparisons indicate a significantly greater difference between the cohorts, comparing the cohorts at age 77; where there is overlap in data we find a difference of 10.4 raw RPM points or 16.5 IQ points, which is surprisingly large."

What this says is that both groups started out pretty much the same but by the time they had got into their 70s the younger group was much brighter. The authors below attribute the difference to nutrition, which is pretty nonsensical. They say that eating "rich, sugary and fatty foods" lowers IQ, but where is the evidence for that? The only studies I know of are epidemiological and overlook important third factors such as social class. So those studies can only be relied on if you believe that correlation is causation, which it is not. And one might note that average IQs in Western nations have been RISING even as consumption of fast food has been rising. So even the epidemiology is not very supportive of the claims below.

Where important micronutrients (iodine and iron particularly) are largely absent from the food of a population -- as in Africa -- nutritional improvements can make a big difference, but the idea that Aberdonians in the 1920s were severely deprived of such micronutrients seems fanciful. Aberdeen has long been an important fishing port and fish are a major source of iodine -- and iron is mostly got from beef, which Scots have long raised and eaten in quantity. The traditional diet of poor Scots -- "mince 'n tatties" -- is certainly humble but it does include beef. Aberdeen even has an important beef breed originating there: the widely praised "Aberdeen Angus". You can eat meat from them in most McDonald's restaurants these days.

So why was the IQ divergence between the two groups below not observed in early childhood when it was so strong in later life? A divergence of that kind (though not of that magnitude) is not unprecedented, for a number of reasons: IQ measurement at age 11 is less reliable than measures taken in adulthood, and IQ becomes more and more a function of genetics as we get older. In early life environmental factors have more impact, and it takes a while for (say) a handicapping early environment to be overcome.

But I suspect that the main influence on the finding was that two different tests were used. IQ was measured at age 11 by an educational aptitude test and in their 70s it was measured by a non-verbal test. The two were correlated, but only at about .75, which does allow for considerable divergence. So the oldsters (1921 cohort) were simply not good at non-verbal puzzles, probably because they had little experience with them. The tests they did in 1921, however, mostly used problems similar to ones they had already encountered many times in the course of their schooling.
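To see how much divergence a correlation of about .75 actually permits, here is a minimal sketch (assumed figures: standardized scores, a simulated bivariate-normal model -- not the study's actual data). With r = .75 the two tests share only about 56 percent of their variance, and a sizeable fraction of simulated test-takers land more than a full standard deviation apart on the two measures:

```python
import math
import random

random.seed(42)
r = 0.75  # reported correlation between the age-11 and late-life tests

# Shared variance: r^2 is the proportion of variance in one test
# that the other accounts for.
shared_variance = r ** 2  # = 0.5625, i.e. about 56%

# Simulate paired z-scores for two tests correlated at r = 0.75.
n = 100_000
diverged = 0  # count of pairs whose two scores differ by more than 1 SD
for _ in range(n):
    a = random.gauss(0, 1)                                  # score on test A
    b = r * a + math.sqrt(1 - r ** 2) * random.gauss(0, 1)  # score on test B
    if abs(a - b) > 1.0:
        diverged += 1

print(f"shared variance: {shared_variance:.0%}")        # 56%
print(f"pairs differing by >1 SD: {diverged / n:.0%}")  # analytically ~16%
```

So even a reassuringly high test-retest correlation leaves plenty of room for one cohort to look much weaker on an unfamiliar kind of test.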

The 1936 cohort, by contrast, had most of their education in the postwar era when people spent longer in the educational system. And IQ testing in the schools was much in vogue up until the 1960s so that generation would have had a much wider testing experience.

The retest was, in other words, invalid. It was not comparing like with like.


A study by the University of Aberdeen and NHS Grampian has found that children who grew up during the Second World War became far more intelligent than those who were born just 15 years before.

Researchers think that cutting rich, sugary and fatty foods out of the diets of growing children had a hugely beneficial impact on their growing brains.

The University of Aberdeen team examined two groups of people raised in Aberdeen, one born in 1921 and one born in 1936. These people are known as the Aberdeen Birth Cohort and were tested when they were aged 11 and when they were adults after the age of 62. The study consisted of 751 people all tested aged 11 and who were retested between 1998 and 2011 on up to five occasions.

Researchers comparing the two groups at age 11 found an increase in IQ of 3.7 points, which was marginally below what was expected but within the range seen in other studies. However, comparison in late life found an increase in IQ of 16.5 points, which is over three times what was expected.

Before the war, more than two thirds of British food was imported. But enemy ships targeting merchant vessels prevented fruit, sugar, cereals and meat from reaching the UK.

The Ministry of Food issued ration books and rationing for bacon, butter and sugar began in January 1940.

But it was the MoF’s Dig For Victory campaign, encouraging self-sufficiency, which really changed how Britain ate. Allotment [mini farm] numbers rose from 815,000 to 1.4 million.

Pigs, chickens and rabbits were reared domestically for meat, whilst vegetables were grown anywhere that could be cultivated. By 1940 wasting food was a criminal offence.