Sunday, December 14, 2014



Japanese north–south gradient in IQ predicts differences in stature, skin color, income, and homicide rate

By Kenya Kura 

A fascinating academic journal article from Japan below.  The Japanese and Chinese are less politically correct in talking about race than Americans are -- if only because they mostly believe that THEY are a superior race.  And in average IQ terms, they are.

And the finding below, that high IQ people in Japan are taller, richer and less prone to crime and divorce, agrees well with American findings going back as far as the 1920s.

Not mentioned in the abstract below, but mentioned in the body of the article, is that the Koreans and Chinese score a touch higher on IQ than the Japanese do -- only by about one or two points, but that is in the opposite direction to what one would expect.  The Japanese are more Westernized than the Chinese are -- though that difference is diminishing rapidly -- so if there were any "Western" bias in the tests (which Leftists often assert there is), one would have expected the Japanese to be slightly ahead.  Clearly, any "bias" in the tests is not detectable in the Far East -- being detectable only by American Ivy League "wisdom".

But there is one point inferable from the findings below that seems at first completely regular  -- the finding that the closer you get to the equator, the browner and dumber you get.  The Japanese archipelago does cover a very considerable North/South range so there is plenty of room for that to emerge. So the really smart Japanese are in the Northern Prefectures of Honshu while the dumbest are in Okinawa.

And in South-East Asia we find the same phenomenon.  Filipinos and Malaysian Bumiputras are notably browner and less bright than North-East Asians.

But that is not as regular as one might think.  There are a number of exceptions to the rule.  South Africa has a climate similar to Europe (if you have experienced a Bloemfontein winter you will know what I mean) yet the Bantu (South African negroes) are no brighter than any other Africans as far as we can tell.  But that is only a superficial puzzle.  The Bantu are recent immigrants originating in central Africa.  The whites in fact arrived in South Africa before the Bantu did.

The Bushmen (original inhabitants) of South Africa are a little more of a puzzle as they are very primitive indeed.  They are short of stature and live these days in extremely arid regions.  Perhaps they always did live in arid regions to escape the many fierce predators in the rest of Africa.

And Tasmanian Aborigines were also at an extremely low civilizational level (they did not even use fire) before white-man diseases killed them all off.  Yet Tasmania has a climate quite similar to England.  Tasmania is however a rather small island that was cut off from the rest of Australia for many millennia -- and isolated populations are often backward.  It appears that lots of invasions are needed to perk up average IQ  -- which is why Eurasia is home to all the high IQ populations.  Invaders can very easily sweep for long distances across Eurasia -- as Genghis Khan showed.

So the "exceptions" I have noted so far are all explicable by special factors.  But there is one exception that absolutely breaks the rule:  South India.  South Indians can be very dark in skin color indeed.  Yet they are far and away the brightest populations in India. The computer programmers, scientists and technologists in India come overwhelmingly from the South.  The recent amazing Indian Mars shot was almost entirely the work of Southerners.  It is no coincidence that Bangalore, India's science and technology hub, is in the South. 

So what went on in the South to push them up the IQ scale is hard to say.  The nearest I can come to an explanation is to note that they all hate one another.  The various regions have different languages and were often at war with one another over the centuries.  So perhaps invasions did the trick there too.  But then West Africans are always fighting one another as well ... 

So perhaps we have to bring into the discussion the possibility that some evolutionarily recent DNA mutations affecting brain complexity did not spread to Africa.  Evolution can of course work via natural selection, via new mutations, or both.

A final note about the correlations reported below.  They seem unusually high.  That is common in "ecological" correlations (correlations between groups rather than individuals). It was Prefecture averages that formed the raw data below.  Individual correlations between similar variables can normally be expected to be much lower -- JR
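
To make the ecological-correlation point concrete, here is a minimal simulation (invented numbers, not the prefecture data): a modest individual-level relationship turns into a much larger correlation once it is computed on group averages, because averaging strips out the individual-level noise.

```python
# Hypothetical sketch of ecological (aggregate-level) correlation inflation.
# Invented numbers for illustration only -- not the Japanese prefecture data.
import numpy as np

rng = np.random.default_rng(0)
n_groups = 47        # e.g. prefectures
n_per_group = 500    # individuals sampled per group

# Each group has its own underlying level of trait X; Y depends only weakly on X
group_shift = rng.normal(0, 1, n_groups)
x_groups, y_groups = [], []
for g in range(n_groups):
    x = group_shift[g] + rng.normal(0, 2, n_per_group)  # lots of individual noise
    y = 0.3 * x + rng.normal(0, 2, n_per_group)         # weak individual-level link
    x_groups.append(x)
    y_groups.append(y)

x_all = np.concatenate(x_groups)
y_all = np.concatenate(y_groups)
r_individual = np.corrcoef(x_all, y_all)[0, 1]

# "Ecological" correlation: correlate the 47 pairs of group means instead
x_means = np.array([x.mean() for x in x_groups])
y_means = np.array([y.mean() for y in y_groups])
r_ecological = np.corrcoef(x_means, y_means)[0, 1]

print(f"individual-level r:        {r_individual:.2f}")   # about 0.3 in this setup
print(f"group-mean (ecological) r: {r_ecological:.2f}")   # far higher
```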


>>>>>>>>>>>>>>>>>>>>>>>>>>

Abstract

Regional differences in IQ are estimated for 47 prefectures of Japan. IQ scores obtained from official achievement tests show a gradient from north to south. Latitudes correlate with height, IQ, and skin color at r = 0.70, 0.44, and 0.47, respectively. IQ also correlates with height (0.52), skin color (0.42), income (0.51) after correction, less homicide rate (−0.60), and less divorce (−0.69), but not with fertility or infant mortality. The lower IQ in southern Japanese islands could be attributable to warmer climates with less cognitive demand for more than fifteen hundred years.

SOURCE

Wednesday, December 10, 2014



Genetic determination of social class

Using twin studies, Charles Murray showed 2 decades ago that IQ is mainly genetically inherited and that IQ underlies social class.  The rich are brighter;  the poor are dumber.  The findings below reinforce that. The researchers were able to identify the actual DNA behind that relationship.  High IQ people and high status people had different DNA to low status and low IQ people. 

The research also showed something else that people find hard to digest: That family environment matters hardly at all.  That repeatedly emerges in the twin studies but flies in the face of what people have believed for millennia: That your kid's upbringing matters.  It may matter in some ways (value acquisition?) but it has no influence on how bright the kid will be.  So now we have confirmation from a DNA study which shows that both IQ and social status are genetically determined.  Home environment has nothing to do with it.  The genes which give you a high IQ are the same ones that lead to high social status. 

People can perhaps accept the genetic determination of IQ but accepting the genetic determination of social status will be more jarring.  The wise men all tell us that a good upbringing will make you more likely to get rich.  It won't.  What you have inherited in your genes (principally IQ) is what will make you rich or poor.

To specify exactly what was found:  In a representative sample of the UK population, children from high status homes were found to be genetically different from children from low status homes -- and the DNA differences concerned were also determinants of IQ.



Genetic influence on family socioeconomic status and children's intelligence

Maciej Trzaskowski et al.

Abstract

Environmental measures used widely in the behavioral sciences show nearly as much genetic influence as behavioral measures, a critical finding for interpreting associations between environmental factors and children's development. This research depends on the twin method that compares monozygotic and dizygotic twins, but key aspects of children's environment such as socioeconomic status (SES) cannot be investigated in twin studies because they are the same for children growing up together in a family. Here, using a new technique applied to DNA from 3000 unrelated children, we show significant genetic influence on family SES, and on its association with children's IQ at ages 7 and 12. In addition to demonstrating the ability to investigate genetic influence on between-family environmental measures, our results emphasize the need to consider genetics in research and policy on family SES and its association with children's IQ.

SOURCE

Tuesday, December 9, 2014


MEGA-PESKY for the Left!  Republicans found to be brighter than Democrats

Leftists never give up asserting that they are the brightest but the research results below are well founded and are clearly against them.  The findings even held among whites only.  And the ardent Democrats were dumbest of all!  The author is a bit apologetic about measuring mainly verbal ability but verbal ability is the best proxy for IQ as a whole so that need not detain us. 

The final comment below about different types of Republicans is just a speculation.  It was not examined in the research. 

The differences found were slight, however, so they are not something for anyone to hang their hat on.  The findings are primarily useful for shooting back at Leftist claims of superiority -- claims which are in fact intrinsic to Leftism.  They claim to "know best".

For my previous discussions of IQ and politics see here and here and here and here.


Cognitive ability and party identity in the United States

Noah Carl

Abstract

Carl (2014) analysed data from the U.S. General Social Survey (GSS), and found that individuals who identify as Republican have slightly higher verbal intelligence than those who identify as Democrat. An important qualification was that the measure of verbal intelligence used was relatively crude, namely a 10-word vocabulary test. This study examines three other measures of cognitive ability from the GSS: a test of probability knowledge, a test of verbal reasoning, and an assessment by the interviewer of how well the respondent understood the survey questions. In all three cases, individuals who identify as Republican score slightly higher than those who identify as Democrat; the unadjusted differences are 1–3 IQ points, 2–4 IQ points and 2–3 IQ points, respectively. Path analyses indicate that the associations between cognitive ability and party identity are largely but not totally accounted for by socio-economic position: individuals with higher cognitive ability tend to have better socio-economic positions, and individuals with better socio-economic positions are more likely to identify as Republican. These results are consistent with Carl's (2014) hypothesis that higher intelligence among classically liberal Republicans compensates for lower intelligence among socially conservative Republicans.

SOURCE

Monday, December 8, 2014



Church-goers are NOT dumber

That people are religious because they are stupid has been a frequent assertion, particularly from the Left.  Some recent high-quality research (below), however, refutes that.  The researchers found no association between church attendance and IQ, though they did find a weak negative association between religious belief and IQ.  And religious people are also NOT more likely to go ga-ga as they get older.  See also here and here.

Religiosity is negatively associated with later-life intelligence, but not with age-related cognitive decline

Abstract

A well-replicated finding in the psychological literature is the negative correlation between religiosity and intelligence. However, several studies also conclude that one form of religiosity, church attendance, is protective against later-life cognitive decline.

No effects of religious belief per se on cognitive decline have been found, potentially due to the restricted measures of belief used in previous studies. Here, we examined the associations between religiosity, intelligence, and cognitive change in a cohort of individuals (initial n = 550) with high-quality measures of religious belief taken at age 83 and multiple cognitive measures taken in childhood and at four waves between age 79 and 90.

We found that religious belief, but not attendance, was negatively related to intelligence. The effect size was smaller than in previous studies of younger participants. Longitudinal analyses showed no effect of either religious belief or attendance on cognitive change either from childhood to old age, or across the ninth decade of life.

We discuss differences between our cohort and those in previous studies – including in age and location – that may have led to our non-replication of the association between religious attendance and cognitive decline.

SOURCE

Sunday, December 7, 2014


Kids from affluent families start out smarter than poor kids, and the gap between them widens further as they grow up

It has long been known that the rich are smarter.  Charles Murray got heavy flak when he showed that two decades ago but it's logical that people who are in general smart should also be smart with money.  But the gorgeous Sophie von Stumm has amplified that in the research below.  My previous comments about some of her research were rather derogatory but I find no fault with the work below.

Explaining the finding is the challenge.  An obvious comment is that measuring the IQ of young children is difficult -- but not impossible -- and that the widening gap may simply reflect more accurate measurement in later life. 

I would reject the explanation that the better home life in a rich family helped improve the child's IQ -- because all the twin studies show that the family environment is a negligible contributor to IQ -- counter-intuitive though that might be. 

The present findings do, however, tie in well with previous findings that the genetic influence on IQ gets greater as people get older.  People shed some environmental influences as they get older and become more and more what their genetics would dictate.




Poverty affects the intelligence of children as young as two, a study has found - and its impact increases as the child ages.  Deprived young children were found to have IQ scores six points lower, on average, than children from wealthier families.

And the gap got wider throughout childhood, with the early difference tripling by the time the children reached adolescence.

Scientists from Goldsmiths, University of London compared data on almost 15,000 children and their parents as part of the Twins Early Development Study (Teds).  The study is an on-going investigation of socio-economic and genetic links to intelligence.

Children were assessed nine times between the ages of two and 16, using a mixture of parent-administered, web and telephone-based tests.

The results, published in the journal Intelligence, revealed that children from wealthier backgrounds with more opportunities scored higher in IQ tests at the age of two, and experienced greater IQ gains over time.

Dr Sophie von Stumm, from Goldsmiths, University of London, who led the study, said: 'We’ve known for some time that children from low socioeconomic status (SES) backgrounds perform on average worse on intelligence tests than children from higher SES backgrounds, but the developmental relationship between intelligence and SES had not been previously shown.  'Our research establishes that relationship, highlighting the link between SES and IQ.

SOURCE

Socioeconomic status and the growth of intelligence from infancy through adolescence

By Sophie von Stumm &  Robert Plomin

Abstract

Low socioeconomic status (SES) children perform on average worse on intelligence tests than children from higher SES backgrounds, but the developmental relationship between intelligence and SES has not been adequately investigated. Here, we use latent growth curve (LGC) models to assess associations between SES and individual differences in the intelligence starting point (intercept) and in the rate and direction of change in scores (slope and quadratic term) from infancy through adolescence in 14,853 children from the Twins Early Development Study (TEDS), assessed 9 times on IQ between the ages of 2 and 16 years. SES was significantly associated with intelligence growth factors: higher SES was related both to a higher starting point in infancy and to greater gains in intelligence over time. Specifically, children from low SES families scored on average 6 IQ points lower at age 2 than children from high SES backgrounds; by age 16, this difference had almost tripled. Although these key results did not vary across girls and boys, we observed gender differences in the development of intelligence in early childhood. Overall, SES was shown to be associated with individual differences in intercepts as well as slopes of intelligence. However, this finding does not warrant causal interpretations of the relationship between SES and the development of intelligence.

SOURCE


Monday, December 1, 2014


There is NO American Dream?

Gregory Clark is very good at both social history and economic history.  His latest work, however, leans on what I see as a very weak reed.  He finds surnames that are associated with wealth and tracks those surnames down the generations.  And he finds that in later generations those surnames continue to be associated with wealth. 

That is all well and good but he is using only a very small sample of the population, so he can tell us nothing about the society at large.  The well-known effect of a man making a lot of money only for his grandchildren to blow the lot is not captured by his methods. 

So if the American dream consists of raising up a whole new lineage of wealth, we can agree that such a raising up is rare, though not unknown.  But if we see the American Dream as just one man "making it" (regardless of what his descendants do) Clark has nothing to tell us about it.  And I think that latter version of the dream is the usual one.

But his finding that SOME lineages stay wealthy is an interesting one.  And he explains it well.  He says (to simplify a little) that what is inherited is not wealth but IQ.  As Charles Murray showed some years back, smarter people tend to be richer and tend to marry other smart people.  So their descendants stay smart, and smart people are mostly smart about money too.

And note that although IQ is about two thirds genetically inherited, genetic inheritance can throw up surprises at times.  I once for instance knew two brown-haired parents who had three red-headed kids.  The hair was still genetically inherited (there would have been redheads among their ancestors), but just WHICH genes you get out of the parental pool when you are conceived seems to be random.  So you do get the phenomenon of two ordinary people having a very bright child.  And that child can do very well in various ways -- monetary and otherwise.  I was such a child.


>>>>>>>>>>>>>>>>>>>>>

It has powered the hopes and dreams of U.S. citizens for generations.  But the American Dream does not actually exist, according to one economics professor.

Gregory Clark, who works at the University of California, Davis, claims the national ethos is simply an illusion and that social mobility in the country is no higher than in the rest of the world.

'America has no higher rate of social mobility than medieval England or pre-industrial Sweden,' he said. 'That’s the most difficult part of talking about social mobility - it's shattering people's dreams.'

After studying figures from the past 100 years and applying a formula to them, Mr Clark concluded that disadvantaged Americans will not be granted more opportunities if they are hard-working.

Instead, they will be stuck in their social status for the rest of their lives - and their position will, in turn, affect the statuses of their children, grandchildren and great-grandchildren, he said.

'The United States is not exceptional in its rates of social mobility,' the professor wrote in an essay published by the Council on Foreign Relations.  'It can perform no special alchemy on the disadvantaged populations of any society in order to transform their life opportunities.'

Speaking to CBS Sacramento, he added: 'The status of your children, grandchildren, great grandchildren, great-great grandchildren will be quite closely related to your average status now.'

However, not all of Mr Clark's students agree with his findings, with some pointing out that although parents' wealth has an effect on a child's life, 'it is not the ultimate deciding factor'.

 SOURCE.  More HERE.

Thursday, November 20, 2014

Our Futile Efforts to Boost Children's IQ

The twin studies have always shown little influence from family environment -- both as regards IQ and personality.  Charles Murray notes more evidence to that effect below.

It’s one thing to point out that programs to improve children's cognitive functioning have had a dismal track record. We can always focus on short-term improvements, blame the long-term failures on poor execution or lack of follow-up and try, try again. It’s another to say that it's impossible to do much to permanently improve children's intellectual ability through outside interventions. But that’s increasingly where the data are pointing.

Two studies published this year have made life significantly more difficult for those who continue to be optimists. The first one is by Florida State University’s Kevin Beaver and five colleagues, who asked how much effect parenting has on IQ independently of genes. The database they used, the National Longitudinal Study of Adolescent Health, is large, nationally representative and highly regarded. The measures of parenting included indicators for parental engagement, attachment, involvement and permissiveness. The researchers controlled for age, sex, race and neighborhood disadvantage. Their analytic model, which compares adoptees with biological children, is powerful, and their statistical methods are sophisticated and rigorous.

The answer to their question? Not much. “Taken together,” the authors write, “the results … indicate that family and parenting characteristics are not significant contributors to variations in IQ scores.” It gets worse: Some of the slight effects they did find were in the “wrong” direction. For example, maternal attachment was negatively associated with IQ in the children.

There’s nothing new in the finding that the home environment doesn’t explain much about a child’s IQ after controlling for the parents’ IQ, but the quality of the data and analysis in this study address many of the objections that the environmentalists have raised about such results. Their scholarly wiggle-room for disagreement is shrinking.

The second study breaks new ground. Six of its eight authors come from King’s College London, home to what is probably the world’s leading center for the study of the interplay among genes, environment and developmental factors. The authors applied one of the powerful new methods enabled by the decoding of the genome, “Genome-wide Complex Trait Analysis,” to ask how much effect socioeconomic status has on IQ independently of genes. The technique does not identify the causal role of specific genes, but rather enables researchers to identify patterns that permit conclusions like the one they reached in this study: “When genes associated with children’s IQ are identified, the same genes will also be likely to be associated with family SES.” Specifically, the researchers calculated that 94 percent of the correlation between socioeconomic status and IQ was mediated by genes at age 7 and 56 percent at age 12.

How can parenting and socioeconomic status play such minor roles in determining IQ, when scholars on all sides of the nature-nurture debate agree that somewhere around half of the variation in IQ is environmental? The short answer is that the environment that affects IQ doesn’t consist of the advantages that most people have in mind -- parents who talk a lot to their toddlers, many books in the house for the older children, high-quality schools and the like.

Instead, studies over the past two decades have consistently found that an amorphous thing called the “nonshared” environment accounts for most (in many studies, nearly all) of the environmentally grounded variation. Scholars are still trying to figure out what features of the nonshared environment are important. Peers? Events in the womb? Accidents? We can be sure only of this: The nonshared environment does not lend itself to policy interventions intended to affect education, parenting, income or family structure.

The relevance of these findings goes beyond questions of public policy. As a parent of four children who all turned out great (in my opinion), I’d like to take some credit. With every new study telling me that I can’t legitimately do so with regard to IQ or this or that personality trait, I try to come up with something, anything, about my children for which I can still believe my parenting made a positive difference. It’s hard.

There’s no question that we know how to physically and psychologically brutalize children so that they are permanently damaged. But it increasingly appears that once we have provided children with a merely OK environment, our contribution as parents and as society is pretty much over. I’m with most of you: I viscerally resist that conclusion. But my resistance is founded on a sustained triumph of hope over evidence.

SOURCE

Monday, November 3, 2014

Did rationing in World War 2 increase intelligence of Britons?

The journal article is "Aging trajectories of fluid intelligence in late life: The influence of age, practice and childhood IQ on Raven's Progressive Matrices" and the key passage is reproduced below:

"Standardizing the MHT [original] scores indicated a difference between the cohorts of 3.7 points. This is slightly smaller than expected and may be brought about by survival and selection bias discussed above. Late life comparisons indicate a significantly greater difference between the cohorts, comparing the cohorts at age 77; where there is overlap in data we find a difference of 10.4 raw RPM points or 16.5 IQ points, which is surprisingly large."


What this says is that both groups started out pretty much the same but by the time they had got into their 70s the younger group was much brighter.  The authors below attribute the difference to nutrition, which is pretty nonsensical.  They say that eating "rich, sugary and fatty foods" lowers IQ but where is the evidence for that?  The only studies I know are epidemiological and overlook important third factors such as social class. So those studies can only be relied on if you believe that correlation is causation, which it is not.  And one might note that average IQs in Western nations have been RISING even as consumption of fast food has been rising.  So even the epidemiology is not very supportive of the claims below.
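
A purely hypothetical sketch of that third-factor problem (my invented numbers, not any study's data): if social class influences both diet and measured IQ, the two will correlate even though diet has no effect on IQ at all, and the correlation largely vanishes once social class is controlled for.

```python
# Hypothetical sketch of confounding by a third factor (social class).
# Diet has NO causal effect on IQ in this simulation, yet the two correlate,
# because social class drives both.  Invented numbers for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

social_class = rng.normal(0, 1, n)                        # latent SES score
fast_food = -0.5 * social_class + rng.normal(0, 1, n)     # lower SES -> more fast food
iq = 100 + 7 * social_class + rng.normal(0, 12, n)        # IQ tracks SES, not diet

r_raw = np.corrcoef(fast_food, iq)[0, 1]
print(f"raw fast-food/IQ correlation: {r_raw:.2f}")        # clearly negative

# Crude adjustment: correlate the residuals after regressing both on social class
def residualize(v, covariate):
    slope, intercept = np.polyfit(covariate, v, 1)
    return v - (slope * covariate + intercept)

r_adj = np.corrcoef(residualize(fast_food, social_class),
                    residualize(iq, social_class))[0, 1]
print(f"after controlling for social class: {r_adj:.2f}")  # close to zero
```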

Where important micronutrients (iodine and iron particularly) are largely absent from the food of a population -- as in Africa -- nutritional improvements can make a big difference, but the idea that Aberdonians in the 1920s were severely deprived of such micronutrients seems fanciful. Aberdeen has long been an important fishing port and fish are a major source of iodine -- and iron comes mostly from beef, which Scots have long raised and eaten in quantity.  The traditional diet of poor Scots -- "mince 'n tatties" -- is certainly humble but it does include beef. Aberdeen even has an important beef breed that originated there: the widely praised "Aberdeen Angus".  You can eat meat from them in most of McDonald's restaurants these days.

So why was the IQ divergence between the two groups below not observed in early childhood when it was so strong in later life?  A divergence of that kind (though not of that magnitude) is not unprecedented for a number of reasons:  IQ measurement at age 11 is less reliable than measures taken in adulthood; IQ becomes more and more a function of genetics as we get older.  In early life environmental factors have more impact and it takes a while for (say) a handicapping early environment to be overcome. 

But I suspect that the main influence on the finding was that two different tests were used.  IQ was measured at age 11 by an educational aptitude test, and in the subjects' 70s it was measured by a non-verbal test.  The two are correlated, but only at about .75, which allows for considerable divergence.  So the oldsters (the 1921 cohort) were simply not good at non-verbal puzzles, probably because they had little experience with them.  The tests they took at age 11, however, mostly used problems similar to ones they had already encountered many times in the course of their schooling.

The 1936 cohort, by contrast, had most of their education in the postwar era when people spent longer in the educational system. And IQ testing in the schools was much in vogue up until the 1960s so that generation would have had a much wider testing experience.

The retest was, in other words, invalid.  It was not comparing like with like.
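
To give a rough, purely illustrative sense of how much divergence the .75 correlation mentioned above still permits (simulated IQ-style scores, not the Aberdeen data): even at that correlation, a large minority of individuals score ten or more IQ points apart on the two tests.

```python
# Rough sketch of the divergence an inter-test correlation of ~.75 still allows.
# Simulated IQ-style scores (mean 100, SD 15) -- not the Aberdeen cohort data.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
r = 0.75

z_a = rng.normal(0, 1, n)
z_b = r * z_a + np.sqrt(1 - r**2) * rng.normal(0, 1, n)  # correlated at ~r with z_a

iq_a = 100 + 15 * z_a   # e.g. an educational aptitude test
iq_b = 100 + 15 * z_b   # e.g. a non-verbal test

gap = np.abs(iq_a - iq_b)
print(f"observed correlation: {np.corrcoef(iq_a, iq_b)[0, 1]:.2f}")
print(f"mean absolute gap between the two scores: {gap.mean():.1f} IQ points")
print(f"share differing by 10+ IQ points: {(gap >= 10).mean():.0%}")
```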


>>>>>>>>>>>>>>>>>>>>>>>>>>

A study by the University of Aberdeen and NHS Grampian has found that children who grew up during the Second World War became far more intelligent than those who were born just 15 years before.

Researchers think that cutting rich, sugary and fatty foods out of the diets of growing children had a hugely beneficial impact on their growing brains.

The University of Aberdeen team examined two groups of people raised in Aberdeen, one born in 1921 and one born in 1936. These people are known as the Aberdeen Birth Cohort and were tested when they were aged 11 and when they were adults after the age of 62. The study consisted of 751 people all tested aged 11 and who were retested between 1998 and 2011 on up to five occasions.

Researchers compared the two groups at age 11 and found an increase in IQ of 3.7 points, which was marginally below what was expected but within the range seen in other studies. However, comparison in late life found an increase in IQ of 16.5 points, which is over three times what was expected.

Before the war, more than two thirds of British food was imported. But enemy ships targeting merchant vessels prevented fruit, sugar, cereals and meat from reaching the UK.

The Ministry of Food issued ration books and rationing for bacon, butter and sugar began in January 1940.

But it was the MoF’s Dig For Victory campaign, encouraging self-sufficiency, which really changed how Britain ate. Allotment [mini  farm] numbers rose from 815,000 to 1.4 million.

Pigs, chickens and rabbits were reared domestically for meat, whilst vegetables were grown anywhere that could be cultivated. By 1940 wasting food was a criminal offence.

More HERE

Sunday, October 19, 2014



America's most "incorrect" man reflects

"The Bell Curve" 20 years later: A Q&A with Charles Murray

October marks the 20th anniversary of “The Bell Curve: Intelligence and Class Structure in American Life,” the extraordinarily influential and controversial book by AEI scholar Charles Murray and Richard Herrnstein. Here, Murray answers a few questions about the predictions, controversy, and legacy of his book.

Q. It’s been 20 years since “The Bell Curve” was published. Which theses of the book do you think are the most relevant right now to American political and social life?

American political and social life today is pretty much one great big “Q.E.D.” for the two main theses of “The Bell Curve.” Those theses were, first, that changes in the economy over the course of the 20th century had made brains much more valuable in the job market; second, that from the 1950s onward, colleges had become much more efficient in finding cognitive talent wherever it was and shipping that talent off to the best colleges. We then documented all the ways in which cognitive ability is associated with important outcomes in life — everything from employment to crime to family structure to parenting styles. Put those all together, we said, and we’re looking at some serious problems down the road. Let me give you a passage to quote directly from the close of the book:

Predicting the course of society is chancy, but certain tendencies seem strong enough to worry about:

An increasingly isolated cognitive elite.

A merging of the cognitive elite with the affluent.

A deteriorating quality of life for people at the bottom end of the cognitive distribution.

Unchecked, these trends will lead the U.S. toward something resembling a caste society, with the underclass mired ever more firmly at the bottom and the cognitive elite ever more firmly anchored at the top, restructuring the rules of society so that it becomes harder and harder for them to lose. (p. 509)

Remind you of anything you’ve noticed about the US recently? If you look at the first three chapters of the book I published in 2012, “Coming Apart,” you’ll find that they amount to an update of “The Bell Curve,” showing how the trends that we wrote about in the early 1990s had continued and in some cases intensified since 1994. I immodestly suggest that “The Bell Curve” was about as prescient as social science gets.

Q. But none of those issues has anything to do with race, and let’s face it: the firestorm of controversy about “The Bell Curve” was all about race. We now have 20 more years of research and data since you published the book. How does your position hold up?

First, a little background: Why did Dick and I talk about race at all? Not because we thought it was important on its own. In fact, if we lived in a society where people were judged by what they brought to the table as individuals, group differences in IQ would be irrelevant. But we were making pronouncements about America’s social structure (remember that the book’s subtitle is “Intelligence and Class Structure in American Life”). If we hadn’t discussed race, “The Bell Curve” would have been dismissed on grounds that “Herrnstein and Murray refuse to confront the reality that IQ tests are invalid for blacks, which makes their whole analysis meaningless.” We had to establish that in fact IQ tests measure the same thing in blacks as in whites, and doing so required us to discuss the elephant in the corner, the mean difference in test scores between whites and blacks.

Here’s what Dick and I said: "There is a mean difference in black and white scores on mental tests, historically about one standard deviation in magnitude on IQ tests (IQ tests are normed so that the mean is 100 points and the standard deviation is 15). This difference is not the result of test bias, but reflects differences in cognitive functioning. The predictive validity of IQ scores for educational and socioeconomic outcomes is about the same for blacks and whites."

Those were our confidently stated conclusions about the black-white difference in IQ, and none of them was scientifically controversial. See the report of the task force on intelligence that the American Psychological Association formed in the wake of the furor over “The Bell Curve.”

What’s happened in the 20 years since then? Not much. The National Assessment of Educational Progress shows a small narrowing of the gap between 1994 and 2012 on its reading test for 9-year-olds and 13-year-olds (each by the equivalent of about 3 IQ points), but hardly any change for 17-year-olds (about 1 IQ-point-equivalent). For the math test, the gap remained effectively unchanged for all three age groups.

On the SAT, the black-white difference increased slightly from 1994 to 2014 on both the verbal and math tests. On the reading test, it rose from .91 to .96 standard deviations. On the math test, it rose from .95 to 1.03 standard deviations.

If you want to say that the NAEP and SAT results show an academic achievement gap instead of an IQ gap, that’s fine with me, but it doesn’t change anything. The mean group difference for white and African American young people as they complete high school and head to college or the labor force is effectively unchanged since 1994. Whatever the implications were in 1994, they are about the same in 2014.

There is a disturbing codicil to this pattern. A few years ago, I wrote a long technical article about black-white changes in IQ scores by birth cohort. I’m convinced that the convergence of IQ scores for blacks and whites born before the early 1970s was substantial, though there’s still room for argument. For blacks and whites born thereafter, there has been no convergence.

Q. The flashpoint of the controversy about race and IQ was about genes. If you mention “The Bell Curve” to someone, they’re still likely to say “Wasn’t that the book that tried to prove blacks were genetically inferior to whites?” How do you respond to that?

Actually, Dick and I got that reaction even while we were working on the book. As soon as someone knew we were writing a book about IQ, the first thing they assumed was that it would focus on race, and the second thing they assumed was that we would be talking about genes. I think psychiatrists call that “projection.” Fifty years from now, I bet those claims about “The Bell Curve” will be used as a textbook case of the hysteria that has surrounded the possibility that black-white differences in IQ are genetic. Here is the paragraph in which Dick Herrnstein and I stated our conclusion:

"If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate." (p. 311)

That’s it. The whole thing. The entire hateful Herrnstein-Murray pseudoscientific racist diatribe about the role of genes in creating the black-white IQ difference. We followed that paragraph with a couple pages explaining why it really doesn’t make any difference whether the differences are caused by genes or the environment. But nothing we wrote could have made any difference. The lesson, subsequently administered to James Watson of DNA fame, is that if you say it is likely that there is any genetic component to the black-white difference in test scores, the roof crashes in on you.

On this score, the roof is about to crash in on those who insist on a purely environmental explanation of all sorts of ethnic differences, not just intelligence. Since the decoding of the genome, it has been securely established that race is not a social construct, evolution continued long after humans left Africa along different paths in different parts of the world, and recent evolution involves cognitive as well as physiological functioning.

The best summary of the evidence is found in the early chapters of Nicholas Wade’s recent book, “A Troublesome Inheritance.” We’re not talking about another 20 years before the purely environmental position is discredited, but probably less than a decade. What happens when a linchpin of political correctness becomes scientifically untenable? It should be interesting to watch. I confess to a problem with schadenfreude.

Q. Let’s talk about the debate over the minimum wage for a moment. You predicted in the book that the “natural” wage for low-skill labor would be low, and that raising the wage artificially could backfire by “making alternatives to human labor more affordable” and “making the jobs disappear altogether.” This seems to be coming true today. What will the labor landscape look like in the next 20 years?

Terrible. I think the best insights on this issue are Tyler Cowen’s in “Average Is Over.” He points out something that a lot of people haven’t thought about: it’s not blue-collar jobs that are going to be hit the hardest. In fact, many kinds of skilled blue-collar work are going to be needed indefinitely. It’s mid-level white-collar jobs that are going to be hollowed out. Think about travel agents. In 1994, I always used a travel agent, and so did just about everybody who traveled a lot. But then came Expedia and Orbitz and good airline websites, and I haven’t used a travel agent for 15 years.

Now think about all the white collar jobs that consist of applying a moderately complex body of interpretive rules to repetitive situations. Not everybody is smart enough to do those jobs, so they have paid pretty well. But now computers combined with machines can already do many of them—think about lab technicians who used to do your blood work, and the machines that do it now. For that matter, how long is it before you’re better off telling a medical diagnostic software package your symptoms than telling a physician?

Then Cowen points out something else I hadn’t thought of: One of the qualities that the new job market will value most highly is conscientiousness. Think of all the jobs involving personal service—working in homes for the elderly or as nannies, for example—for which we don’t need brilliance, but we absolutely need conscientiousness along with basic competence. Cowen’s right—and that has some troubling implications for guys, because, on average, women in such jobs are more conscientious than men.

My own view is that adapting to the new labor market, and making sure that working hard pays a decent wage, are among the most important domestic challenges facing us over the next few decades.

Q. In the book you ask, “How should policy deal with the twin realities that people differ in intelligence for reasons that are not their fault and that intelligence has a powerful bearing on how well people do in life?” How would you answer this question now?

I gave my answer in a book called “In Our Hands: A Plan to Replace the Welfare State,” that I published in 2006. I want to dismantle all the bureaucracies that dole out income transfers, whether they be public housing benefits or Social Security or corporate welfare, and use the money they spend to provide everyone over the age of 21 with a guaranteed income, deposited electronically every month into a bank account. It takes a book to explain why such a plan could not only work, but could revitalize civil society, but it takes only a few sentences to explain why a libertarian would advocate such a plan.

Certain mental skillsets are now the “open sesame” to wealth and social position in ways that are qualitatively different from the role they played in earlier times. Nobody deserves the possession of those skillsets. None of us has earned our IQ. Those of us who are lucky should be acutely aware that it is pure luck (too few are), and be committed to behaving accordingly. Ideally, we would do that without government stage-managing it. That’s not an option. Massive government redistribution is an inevitable feature of advanced postindustrial societies.

Our only option is to do that redistribution in the least destructive way. Hence my solution. It is foreshadowed in the final chapter of “The Bell Curve” where Dick and I talk about “valued places.” The point is not just to pass out enough money so that everyone has the means to live a decent existence. Rather, we need to live in a civil society that naturally creates valued places for people with many different kinds and levels of ability. In my experience, communities that are left alone to solve their own problems tend to produce those valued places. Bureaucracies destroy them. So my public policy message is: Let government do what it does best, cut checks. Let individuals, families, and communities do what they do best, respond to human needs on a one-by-one basis.

Q. Reflecting on the legacy of “The Bell Curve,” what stands out to you?

I’m not going to try to give you a balanced answer to that question, but take it in the spirit you asked it—the thing that stands out in my own mind, even though it may not be the most important. I first expressed it in the Afterword I wrote for the softcover edition of “The Bell Curve.” It is this: The reaction to “The Bell Curve” exposed a profound corruption of the social sciences that has prevailed since the 1960s. “The Bell Curve” is a relentlessly moderate book — both in its use of evidence and in its tone — and yet it was excoriated in remarkably personal and vicious ways, sometimes by eminent academicians who knew very well they were lying. Why? Because the social sciences have been in the grip of a political orthodoxy that has had only the most tenuous connection with empirical reality, and too many social scientists think that threats to the orthodoxy should be suppressed by any means necessary. Corruption is the only word for it.

Now that I’ve said that, I’m also thinking of all the other social scientists who have come up to me over the years and told me what a wonderful book “The Bell Curve” is. But they never said it publicly. So corruption is one thing that ails the social sciences. Cowardice is another.

SOURCE

Friday, October 17, 2014


"Slate" rediscovers IQ -- though they dare not to call it that

They recoil in horror from applying the findings to intergroup differences, however, and claim without explanation that what is true of individuals cannot be true of groups of individuals.  That is at least counterintuitive.  They even claim that there is no evidence of IQ differences between groups being predictive of anything. 

I suppose that one has to pity their political correctness, however, because the thing they are greatly at pains to avoid -- the black-white IQ gap -- is superb validation of the fact that group differences in IQ DO matter.  From their abysmal average IQ score, we would predict that blacks would be at the bottom of every heap (income, education, crime etc.) -- and that is exactly where they are.  Clearly, group differences in IQ DO matter and the IQ tests are an excellent and valid measure of them.



We are not all created equal where our genes and abilities are concerned.

A decade ago, Magnus Carlsen, who at the time was only 13 years old, created a sensation in the chess world when he defeated former world champion Anatoly Karpov at a chess tournament in Reykjavik, Iceland, and the next day played then-top-rated Garry Kasparov—who is widely regarded as the best chess player of all time—to a draw. Carlsen’s subsequent rise to chess stardom was meteoric: grandmaster status later in 2004; a share of first place in the Norwegian Chess Championship in 2006; youngest player ever to reach World No. 1 in 2010; and highest-rated player in history in 2012.

What explains this sort of spectacular success? What makes someone rise to the top in music, games, sports, business, or science? This question is the subject of one of psychology’s oldest debates. In the late 1800s, Francis Galton—founder of the scientific study of intelligence and a cousin of Charles Darwin—analyzed the genealogical records of hundreds of scholars, artists, musicians, and other professionals and found that greatness tends to run in families. For example, he counted more than 20 eminent musicians in the Bach family. (Johann Sebastian was just the most famous.) Galton concluded that experts are “born.” Nearly half a century later, the behaviorist John Watson countered that experts are “made” when he famously guaranteed that he could take any infant at random and “train him to become any type of specialist [he] might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents.”

The experts-are-made view has dominated the discussion in recent decades. In a pivotal 1993 article published in Psychological Review—psychology’s most prestigious journal—the Swedish psychologist K. Anders Ericsson and his colleagues proposed that performance differences across people in domains such as music and chess largely reflect differences in the amount of time people have spent engaging in “deliberate practice,” or training exercises specifically designed to improve performance. To test this idea, Ericsson and colleagues recruited violinists from an elite Berlin music academy and asked them to estimate the amount of time per week they had devoted to deliberate practice for each year of their musical careers. The major finding of the study was that the most accomplished musicians had accumulated the most hours of deliberate practice. For example, the average for elite violinists was about 10,000 hours, compared with only about 5,000 hours for the least accomplished group. In a second study, the difference for pianists was even greater—an average of more than 10,000 hours for experts compared with only about 2,000 hours for amateurs. Based on these findings, Ericsson and colleagues argued that prolonged effort, not innate talent, explained differences between experts and novices.

These findings filtered their way into pop culture. They were the inspiration for what Malcolm Gladwell termed the “10,000 Hour Rule” in his book Outliers, which in turn was the inspiration for the song “Ten Thousand Hours” by the hip-hop duo Macklemore and Ryan Lewis, the opening track on their Grammy-award winning album The Heist. However, recent research has demonstrated that deliberate practice, while undeniably important, is only one piece of the expertise puzzle—and not necessarily the biggest piece. In the first study to convincingly make this point, the cognitive psychologists Fernand Gobet and Guillermo Campitelli found that chess players differed greatly in the amount of deliberate practice they needed to reach a given skill level in chess. For example, the number of hours of deliberate practice to first reach “master” status (a very high level of skill) ranged from 728 hours to 16,120 hours. This means that one player needed 22 times more deliberate practice than another player to become a master.             

A recent meta-analysis by Case Western Reserve University psychologist Brooke Macnamara and her colleagues (including the first author of this article for Slate) came to the same conclusion. We searched through more than 9,000 potentially relevant publications and ultimately identified 88 studies that collected measures of activities interpretable as deliberate practice and reported their relationships to corresponding measures of skill. (Analyzing a set of studies can reveal an average correlation between two variables that is statistically more precise than the result of any individual study.) With very few exceptions, deliberate practice correlated positively with skill. In other words, people who reported practicing a lot tended to perform better than those who reported practicing less. But the correlations were far from perfect: Deliberate practice left more of the variation in skill unexplained than it explained. For example, deliberate practice explained 26 percent of the variation for games such as chess, 21 percent for music, and 18 percent for sports. So, deliberate practice did not explain all, nearly all, or even most of the performance variation in these fields. In concrete terms, what this evidence means is that racking up a lot of deliberate practice is no guarantee that you’ll become an expert. Other factors matter.

What are these other factors? There are undoubtedly many. One may be the age at which a person starts an activity. In their study, Gobet and Campitelli found that chess players who started playing early reached higher levels of skill as adults than players who started later, even after taking into account the fact that the early starters had accumulated more deliberate practice than the later starters. There may be a critical window during childhood for acquiring certain complex skills, just as there seems to be for language.

There is now compelling evidence that genes matter for success, too. In a study led by the King’s College London psychologist Robert Plomin, more than 15,000 twins in the United Kingdom were identified through birth records and recruited to perform a battery of tests and questionnaires, including a test of drawing ability in which the children were asked to sketch a person. In a recently published analysis of the data, researchers found that there was a stronger correspondence in drawing ability for the identical twins than for the fraternal twins. In other words, if one identical twin was good at drawing, it was quite likely that his or her identical sibling was, too. Because identical twins share 100 percent of their genes, whereas fraternal twins share only 50 percent on average, this finding indicates that differences across people in basic artistic ability are in part due to genes. In a separate study based on this U.K. sample, well over half of the variation between expert and less skilled readers was found to be due to genes. 

In another study, a team of researchers at the Karolinska Institute in Sweden led by psychologist Miriam Mosing had more than 10,000 twins estimate the amount of time they had devoted to music practice and complete tests of basic music abilities, such as determining whether two melodies carry the same rhythm. The surprising discovery of this study was that although the music abilities were influenced by genes—to the tune of about 38 percent, on average—there was no evidence they were influenced by practice. For a pair of identical twins, the twin who practiced music more did not do better on the tests than the twin who practiced less. This finding does not imply that there is no point in practicing if you want to become a musician. The sort of abilities captured by the tests used in this study aren’t the only things necessary for playing music at a high level; things such as being able to read music, finger a keyboard, and commit music to memory also matter, and they require practice. But it does imply that there are limits on the transformative power of practice. As Mosing and her colleagues concluded, practice does not make perfect.

Along the same lines, biologist Michael Lombardo and psychologist Robert Deaner examined the biographies of male and female Olympic sprinters such as Jesse Owens, Marion Jones, and Usain Bolt, and found that, in all cases, they were exceptional compared with their competitors from the very start of their sprinting careers—before they had accumulated much more practice than their peers.

What all of this evidence indicates is that we are not created equal where our abilities are concerned. This conclusion might make you uncomfortable, and understandably so. Throughout history, so much wrong has been done in the name of false beliefs about genetic inequality between different groups of people—males vs. females, blacks vs. whites, and so on. War, slavery, and genocide are the most horrifying examples of the dangers of such beliefs, and there are countless others. In the United States, women were denied the right to vote until 1920 because too many people believed that women were constitutionally incapable of good judgment; in some countries, such as Saudi Arabia, they still are believed to be. Ever since John Locke laid the groundwork for the Enlightenment by proposing that we are born as tabula rasa—blank slates—the idea that we are created equal has been the central tenet of the “modern” worldview. Enshrined as it is in the Declaration of Independence as a “self-evident truth,” this idea has special significance for Americans. Indeed, it is the cornerstone of the American dream—the belief that anyone can become anything they want with enough determination.

It is therefore crucial to differentiate between the influence of genes on differences in abilities across individuals and the influence of genes on differences across groups. The former has been established beyond any reasonable doubt by decades of research in a number of fields, including psychology, biology, and behavioral genetics. There is now an overwhelming scientific consensus that genes contribute to individual differences in abilities. The latter has never been established, and any claim to the contrary is simply false.

Another reason the idea of genetic inequality might make you uncomfortable is because it raises the specter of an anti-meritocratic society in which benefits such as good educations and high-paying jobs go to people who happen to be born with “good” genes. As the technology of genotyping progresses, it is not far-fetched to think that we will all one day have information about our genetic makeup, and that others—physicians, law enforcement, even employers or insurance companies—may have access to this information and use it to make decisions that profoundly affect our lives. However, this concern conflates scientific evidence with how that evidence might be used—which is to say that information about genetic diversity can just as easily be used for good as for ill.

Take the example of intelligence, as measured by IQ. We know from many decades of research in behavioral genetics that about half of the variation across people in IQ is due to genes. Among many other outcomes, IQ predicts success in school, and so once we have identified specific genes that account for individual differences in IQ, this information could be used to identify, at birth, children with the greatest genetic potential for academic success and channel them into the best schools. This would probably create a society even more unequal than the one we have. But this information could just as easily be used to identify children with the least genetic potential for academic success and channel them into the best schools. This would probably create a more equal society than the one we have, and it would do so by identifying those who are likely to face learning challenges and provide them with the support they might need. Science and policy are two different things, and when we dismiss the former because we assume it will influence the latter in a particular and pernicious way, we limit the good that can be done. 

Wouldn’t it be better to just act as if we are equal, evidence to the contrary notwithstanding? That way, no one will be discouraged from chasing their dreams—competing in the Olympics or performing at Carnegie Hall or winning a Nobel Prize. The answer is no, for two reasons. The first is that failure is costly, both to society and to individuals. Pretending that all people are equal in their abilities will not change the fact that a person with an average IQ is unlikely to become a theoretical physicist, or the fact that a person with a low level of music ability is unlikely to become a concert pianist. It makes more sense to pay attention to people’s abilities and their likelihood of achieving certain goals, so people can make good decisions about the goals they want to spend their time, money, and energy pursuing. The second reason is that genes influence not only our abilities, but the environments we create for ourselves and the activities we prefer—a phenomenon known as gene-environment correlation. For example, yet another recent twin study (and the Karolinska Institute study) found that there was a genetic influence on practicing music. Pushing someone into a career for which he or she is genetically unsuited will likely not work.

SOURCE

Thursday, September 11, 2014


IQ in decline across the world as scientists say we’re getting dumber

This is a generally good article below but it needs a little more background.  In particular, one needs to know why IQ scores rose for most of the 20th century (the "Flynn effect").  The evidence seems to converge on more schooling. As people got more and more schooling (as they mostly did throughout the 20th century) they learned more and more test-taking strategies, and that helped when they did IQ tests.  But that process obviously had its limits and that limit has now generally been reached.  Now that the Flynn effect has run its course we see what the underlying tendency is -- towards a dumbing down of the population.  With dumb women having most of the babies, any other result would be a surprise.

FOR at least a century, average IQ has been on the rise, thanks to improved nutrition, living conditions and technology.  But now, scientists think the trend is going into reverse.

In Denmark, every man aged 18 is given an IQ test to assess his suitability for military conscription. It means around 30,000 people have been taking the same test for years — and scores have fallen by 1.5 points since 1998.

The pattern is repeated around the world, according to New Scientist, with tests showing the same thing happening everywhere from Australia and the UK to Brazil and China.

The most rapid signs of IQ growth in the US appeared between the 1950s and 1980s, the magazine reported, with “intelligence” rocketing by around 3 points per decade.

The trend for rising IQs was first documented by New Zealand scientist James Flynn, and is known as the Flynn Effect. It has been attributed to advances in health and medicine, as well as ever-expanding technology and culture forcing us to contend with a multi-layered world.

Now, the theory is that in developed countries, improvements such as public sanitation and more stimulating environments may have gone as far as they can in terms of increasing our intelligence.

The first evidence of a dip in IQ was reported in Norway in 2004, closely followed by similar studies emerging from developed countries including Sweden and the Netherlands.

Dr Flynn has said that such minor decreases could be attributable to reversible issues with social conditions, such as falling income, unhealthy diet or problems with education.

But some experts believe our IQs are in a state of permanent decline.

Some researchers suggest that the Flynn effect has masked an underlying decline in our genetic intelligence — meaning more people have been developing closer to their full potential, but that potential has been dropping.

This has been attributed in some quarters to the fact that the most highly educated people in society are having fewer children than the general population.

It is an uncomfortable thought, and one that strays worryingly close to controversial theories on genetic modification and even eugenics.

Richard Lynn of the University of Ulster in the UK says our IQ has declined by 1 point between 1950 and 2000, which seems very small.

But Michael Woodley, a psychologist at the Free University of Brussels in Belgium, said even such a small drop can mean a dramatic reduction in the number of highly intelligent people — those geniuses who are responsible for our greatest innovations.

In fact, Dr Woodley says our IQ has been in decline since Victorian times, while Professor Gerald Crabtree says it happened as soon as we started to live in densely populated areas with a steady supply of food — 5000 to 12,000 years ago.

The importance of IQ trends is up for debate in itself, since IQ tests can be an unreliable measure of intelligence, skewed by education and preparation for solving certain kinds of problems.

Furthermore, many experts say there are multiple forms of intelligence. While academic intelligence is important, it is often people with other qualities, such as determination and self-control, who are most successful or socially productive.

When we say we are becoming more intelligent, are we simply learning different ways of thinking?

As Dr Flynn himself said: “There are other intellectual qualities, namely, critical acumen and wisdom, that IQ tests were not designed to measure and do not measure and these are equally worthy of attention.

“Our obsession with IQ is one indication that rising wisdom has not characterised our time.”

SOURCE

Tuesday, July 22, 2014



Genes and Race: The Distant Footfalls of Evidence:  A review of Nicholas Wade’s book, “A Troublesome Inheritance: Genes, Race and Human History”.

Despite the great care the author below took not to tread on any toes, waves of shrieks emanated from the always irrational Left in response to it.  As a result SciAm issued an apology for publishing it.  The author, Ashutosh Jogalekar,  was eventually fired over it.  He is a chemist of apparently Indian origin so has obviously missed some of the political indoctrination that dominates the social sciences and humanities in America today.

SciAm is not really interested in science, however, as their advocacy for the global warming cult shows. Theory contradicted by the evidence does not bother them. They are really The Unscientific American.  A conservative boycott of the publication  would be fitting -- JR


In this book NYT science writer Nicholas Wade advances two simple premises: firstly, that we should stop looking only toward culture as a determinant of differences between populations and individuals, and secondly, that those who claim that race is only a social construct are ignoring increasingly important findings from modern genetics and science. The guiding thread throughout the book is that “human evolution is recent, copious and regional” and that this has led to the genesis of distinct differences and classifications between human groups. What we do with this evidence should always be up for social debate, but the evidence itself cannot be ignored.

That is basically the gist of the book. It’s worth noting at the outset that at no point does Wade downplay the effects of culture and environment in dictating social, cognitive or behavioral differences – in fact he mentions culture as an important factor at least ten times by my count – but all he is saying is that, based on a variety of scientific studies enabled by the explosive recent growth of genomics and sequencing, we need to now recognize a strong genetic component to these differences.

The book can be roughly divided into three parts. The first part details the many horrific and unseemly uses that the concept of race has been put to by loathsome racists and elitists ranging from Social Darwinists to National Socialists. Wade reminds us that while these perpetrators had a fundamentally misguided, crackpot definition of race, that does not mean race does not exist in a modern incarnation. This part also clearly serves to delineate the difference between a scientific fact and what we as human beings decide to do with it, and it tells us that an idea should not be taboo just because murderous tyrants might have warped its definition and used it to enslave and decimate their fellow humans.

The second part of the book is really the meat of the story and Wade is on relatively firm ground here. He details a variety of studies based on tools like tandem DNA repeats and single nucleotide polymorphisms (SNPs) that point to very distinctive genetic differences between populations dictating both physical and mental traits. Many of the genes responsible for these differences have been subject to selection in the last five thousand years or so, refuting the belief that humans have somehow “stopped evolving” since they settled down into agricultural communities. For me the most striking evidence that something called race is real comes from the fact that when you ask computer algorithms to cluster genes based on differences and similarities in an unbiased manner, these statistical programs consistently settle on the five continental races as genetically distinct groups – Caucasian, East Asian, African, Native American and Australian Aboriginal. Very few people would deny that there are clear genetic underpinnings behind traits like skin color or height among people on different continents, but Wade’s achievement here is to clearly explain how it’s not just one or two genes underlying such traits but a combination of genes – the effects of many of which are not obvious – that distinguish between races. The other point that he drives home is that even minor differences between gene frequencies can lead to significant phenotypic dissimilarities because of additive effects, so boiling down these differences to percentages and then interpreting these numbers can be quite misleading.
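For readers who want to see the logic of such clustering, here is a toy sketch in Python. It substitutes k-means on a small synthetic genotype matrix for the specialised model-based programs used in the published analyses, and every population, allele frequency, and genotype in it is simulated, so it illustrates only the general idea of unsupervised clusters recovering population structure, not any particular study's result.

import numpy as np
from sklearn.cluster import KMeans

# Toy sketch: unsupervised clustering of a synthetic genotype matrix.
# Real analyses use far larger SNP panels and model-based programs;
# the populations and allele frequencies below are invented.
rng = np.random.default_rng(42)
n_per_group, n_snps, n_groups = 100, 500, 5

# Each synthetic "population" gets its own allele-frequency profile.
group_freqs = rng.uniform(0.05, 0.95, size=(n_groups, n_snps))

# Individuals carry 0, 1 or 2 copies of each allele, drawn from their
# population's frequencies (binomial with n = 2).
genotypes = np.vstack([
    rng.binomial(2, group_freqs[g], size=(n_per_group, n_snps))
    for g in range(n_groups)
])

# Ask k-means, with no labels supplied, to find 5 clusters.
labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(genotypes)

# Cross-tabulate recovered clusters against the (hidden) generating groups.
true_groups = np.repeat(np.arange(n_groups), n_per_group)
for g in range(n_groups):
    counts = np.bincount(labels[true_groups == g], minlength=n_groups)
    print(f"generating group {g}: cluster counts {counts}")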

Wade also demolishes the beliefs of many leading thinkers who would rather have differences defined almost entirely by culture – these include Stephen Jay Gould who thought that humans evolved very little in the last ten thousand years (as Wade points out, about 14% of the genome has been under active selection since modern humans appeared on the scene), and Richard Lewontin who perpetuated a well-known belief that the dominance of intra-group as opposed to inter-group differences makes any discussion of race meaningless. As Wade demonstrates through citations of solid research, this belief is simply erroneous since even small differences between populations can translate to large differences in physical, mental and social features depending on what alleles are involved; Lewontin and his followers’ frequent plea that inter-group differences are “only 15%” thus ends up essentially translating to obfuscation through numbers. Jared Diamond’s writings are also carefully scrutinized and criticized; Diamond’s contention that the presence of the very recently evolved gene for malaria resistance can somehow be advanced as a dubious argument for race is at best simplistic and at worst a straw man. The main point is that just because there can be more than one method to define race, or because definitions of race seem to fray at their edges, does not mean that race is non-existent and there is no good way to parse it.

The last part of the book is likely to be regarded as more controversial because it deals mainly with effects of genetics on cognitive, social and personality traits and is much more speculative. However Wade fully realizes this and also believes that “there is nothing wrong with speculation, of course, as long as its premises are made clear”, and this statement could be part of a scientist’s credo. The crux of the matter is to logically ask why genes would also not account for mental and social differences between races if they do account for physical differences. The problem there is that although the hypothesis is valid, the evidence is slim for now. Some of the topics that Wade deals with in this third part are thus admittedly hazy in terms of corroboration. For instance there is ample contemplation about whether a set of behavioral and genetic factors might have made the West progress faster than the East and inculcated its citizens with traits conducive to material success. However Wade also makes it clear that “progressive” does not mean “superior”; what he is rather doing is sifting through the evidence and asking if some of it might account for these more complex differences in social systems. Similarly, while there are pronounced racial differences in IQ, one must recognize the limitations of IQ, but more importantly should recognize that IQ says nothing about whether one human is “better” or “worse” than another; in fact the question is meaningless.

Wade brings a similar approach to exploring genetic influences on cognitive abilities and personality traits; evidently, as he recognizes, the evidence on this topic is just emerging and therefore not definitive. He looks at the effects of genes on attributes as diverse as language, reciprocity and propensity to dole out punishment. This discussion makes it clear that we are just getting started and there are many horizons that will be uncovered in the near future; for instance, tantalizing hints of links between genes for certain enzymes and aggressive or amiable behavior are just emerging. Some of the other paradigms Wade writes about, such as the high intelligence of Ashkenazi Jews, the gene-driven contrast between chimp and human societies and the rise of the West are interesting but have been covered by authors like Steven Pinker, Greg Cochran and Gregory Clark. If I have a criticism of the book it is that in his efforts to cover extensive ground, Wade sometimes gives short shrift to research on interesting topics like oxytocin and hormonal influences. But what he does make clear is that the research opportunities in the field are definitely exciting, and scientists should not have to tiptoe around these topics for political reasons.

Overall I found this book extremely well-researched, thoughtfully written and objectively argued. Wade draws on several sources, including the peer reviewed literature and work by other thinkers and scientists. The many researchers whose work Wade cites makes the writing authoritative; on the other hand, where speculation is warranted or noted he usually explicitly points it out as such. Some of these speculations such as the effects of genetics on the behavior of entire societies are quite far flung but I don’t see any reason why, based on what we do know about the spread of genes among groups, they should be dismissed out of hand. At the very least they serve as reasonable hypotheses to be pondered, thrashed out and tested. Science is about ideas, not answers.

But the real lesson of the book should not be lost on us: A scientific topic cannot be declared off limits or whitewashed because its findings can be socially or politically controversial; as Wade notes, “Whether or not a thesis might be politically incendiary should have no bearing on the estimate of its scientific validity.” He gives nuclear physics as a good analogy; knowledge of the atom can lead to both destruction and advancement, but without this knowledge there will still be destruction. More importantly, one cannot hide the fruits of science; how they are used as instruments of social or political policy is a matter of principle and should be decoupled from the science itself. In fact, knowing the facts provides us with a clear basis for making progressive decisions and gives us a powerful weapon for defeating the nefarious goals of demagogues who would use pseudoscience to support their dubious claims. In that sense, I agree with Wade that even if genetic differences between races become enshrined into scientific fact, it does not mean at all that we will immediately descend into 19th-century racism; our moral compass has already decided the direction of that particular current.

Ultimately Wade’s argument is about the transparency of knowledge. He admonishes some of the critics – especially some liberal academics and the American Anthropological Association – for espousing a “culture only” philosophy that is increasingly at odds with scientific facts and designed mainly for political correctness and a straitjacketed worldview. I don’t think liberal academics are the only ones guilty of this attitude but some of them certainly embrace it. Liberal academics, however, have also always prided themselves on being objective examiners of the scientific truth. Wade rightly says that they should join hands with all of us in bringing that same critical and honest attitude to examining the recent evidence about race and genetics. Whatever it reveals, we can be sure that as human beings we will try our best not to let it harm the cause of our fellow beings. After all we are, all of us, human beings first and scientists second.

SOURCE

Sunday, July 13, 2014


Chimpanzees’ intelligence is determined by their genes, not their environment, researchers say

A chimpanzee’s intelligence is largely determined by the genes it inherits from its parents, reveals a new study.

It found that chimpanzees raised by humans turn out to be no cleverer than those given an ape upbringing.

Research into chimp intelligence could help scientists get a better handle on human IQ, the researchers say.

The study involved 99 chimpanzees, ranging in age from nine to 54, who completed 13 cognitive tasks designed to test a variety of abilities.

The scientists then analysed the genetics of the chimps and compared their ability to complete the tasks in relation to their genetic similarities.

Genes were found to play a role in overall cognitive abilities, as well as the performance on tasks in several categories, the scientists discovered.

Chimps are useful for this purpose because, while genes also play a major role in human intelligence, factors such as schooling, home life, economic status, and the culture a person is born into complicate the picture in humans.

Previous studies have suggested that genetics account for around a quarter to a half of the variation in human intelligence.

The new research involving 99 chimpanzees from a wide range of ages showed that genes explained about 50% of the differences seen in their intelligence test scores.
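The study itself estimated heritability from the genetic relatedness among all 99 animals. As a much simpler stand-in for that logic, the sketch below simulates the classic midparent-offspring regression, whose slope estimates heritability under an additive genetic model; the heritability value, family structure, and scores are all invented for illustration and this is not the chimp study's actual method.

import numpy as np

# Simplified illustration of the quantitative-genetics logic behind such
# estimates: under an additive model, regressing offspring scores on
# midparent scores estimates narrow-sense heritability (h^2). This is NOT
# the kinship-based method of the chimp study; all numbers are simulated.
rng = np.random.default_rng(1)
n_families = 500
h2_true = 0.5                      # heritability used to generate the data

# Parents' "cognitive scores": genetic part plus environmental noise.
g_mother, g_father = rng.normal(0, np.sqrt(h2_true), (2, n_families))
mother = g_mother + rng.normal(0, np.sqrt(1 - h2_true), n_families)
father = g_father + rng.normal(0, np.sqrt(1 - h2_true), n_families)

# Offspring inherit the average of parental genetic values, plus
# segregation noise and their own environmental noise.
g_child = (g_mother + g_father) / 2 + rng.normal(0, np.sqrt(h2_true / 2), n_families)
child = g_child + rng.normal(0, np.sqrt(1 - h2_true), n_families)

midparent = (mother + father) / 2
slope = np.polyfit(midparent, child, 1)[0]
print(f"estimated h^2 from midparent-offspring regression: {slope:.2f}")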

Chimps raised by human caretakers did no better in the tasks than individuals brought up by their chimpanzee mothers.

'Intelligence runs in families,' Dr. William Hopkins from the Yerkes National Primate Research Center in Atlanta, who ran the study, said.

'The suggestion here is that genes play a really important role in their performance on tasks while non-genetic factors didn’t seem to explain a lot. So that’s new.'

He believes the experiment could shed new light on human intelligence.  'Chimps offer a really simple way of thinking about how genes might influence intelligence without, in essence, the baggage of these other mechanisms that are confounded with genes in research on human intelligence.

'What specific genes underlie the observed individual differences in cognition is not clear, but pursuing this question may lead to candidate genes that changed in human evolution and allowed for the emergence of some human-specific specialisations in cognition.'

SOURCE

Monday, June 16, 2014


Are Conservatives Dumber Than Liberals?

It depends on how you define "conservative." The research shows that libertarian conservatives are smartest of all

Ronald Bailey

Conservatives exhibit less cognitive ability than liberals do. Or that's what it says in the social science literature, anyway. A 2010 study using data from the National Longitudinal Study of Adolescent Health, for example, found that the IQs of young adults who described themselves as "very liberal" averaged 106.42, whereas the mean of those who identified as "very conservative" was 94.82. Similarly, when a 2009 study correlated cognitive capacity with political beliefs among 1,254 community college students and 1,600 foreign students seeking entry to U.S. universities, it found that conservatism is "related to low performance on cognitive ability tests." In 2012, a paper reported that people endorse more conservative views when drunk or under cognitive pressure; it concluded that "political conservatism may be a process consequence of low-effort thought."

So have social scientists really proved that conservatives are dumber than liberals? It depends crucially on how you define "conservative."

For an inkling of what some social scientists think conservatives believe, parse a 2008 study by the University of Nevada at Reno sociologist Markus Kemmelmeier. To probe the political and social beliefs of nearly 7,000 undergraduates at an elite university, Kemmelmeier devised a set of six questions asking whether abortion, same-sex marriage, and gay sex should be legal, whether handguns and racist/sexist speech on campus should be banned, and whether higher taxes should be imposed on the wealthy. The first three were supposed to measure the students' views of "conservative gender roles," and the second set was supposed to gauge their "anti-regulation" beliefs. Kemmelmeier clearly thought that "liberals" would tend to be OK with legal abortion, same-sex marriage, and gay sex, and would opt to ban handguns and offensive speech and to tax the rich. Conservatives would supposedly hold the opposite views.

Savvy readers may recognize a problem with using these questions to sort people into just two ideological categories. And sure enough, Kemmelmeier got some results that puzzled him. He found that students who held more traditional views on gender and sex roles averaged lower on their verbal SAT and Achievement Test scores. "Surprisingly," he continued, this was not true of students with anti-regulation attitudes. With them, "all else being equal, more conservative respondents scored higher than more liberal respondents." Kemmelmeier ruefully notes that "this result was not anticipated" and "diametrically contradicts" the hypothesis that conservatism is linked to lower cognitive ability. Kemmelmeier is so evidently lost in the intellectual fog of contemporary progressivism that he does not realize that his questionnaire is impeccably designed to identify classical liberals, a.k.a. libertarians, who endorse liberty in both the social and economic realms.

So how smart are libertarians compared to liberals and conservatives? In a May 2014 study in the journal Intelligence, the Oxford sociologist Noah Carl attempts to answer that question. Because research has "consistently shown that intelligence is positively correlated with socially liberal beliefs and negatively correlated with religious beliefs," Carl suggests that in the American political context, social scientists would expect Republicans to be less intelligent than Democrats. Instead, Republicans have slightly higher verbal intelligence scores (2–5 IQ points) than Democrats. How could that be?

Carl begins by pointing out that there is data suggesting that a segment of the American population holding classical liberal beliefs tends to vote Republican. Classical liberals, Carl notes, believe that an individual should be free to make his own lifestyle choices and to enjoy the profits derived from voluntary transactions with others. He proposes that intelligence actually correlates with classically liberal beliefs.

To test this hypothesis, Carl uses data on political attitudes and intelligence derived from the General Social Survey, which has been administered to representative samples of American adults every couple of years since 1972. Using GSS data, respondents are classified on a continuum ranging from strong Republican through independent to strong Democrat. Carl then creates a measure of socially liberal beliefs based on respondents' attitudes toward homosexuality, marijuana consumption, abortion, and free speech for communists, racists, and advocates for military dictatorship. He similarly probes liberal economic views, with an assessment of attitudes toward government provision of jobs, industry subsidies, income redistribution, price controls, labor unions, and military spending. Verbal intelligence is evaluated using the GSS WORDSUM test results.

Comparing strong Republicans with strong Democrats, Carl finds that Republicans have a 5.48 IQ point advantage over Democrats. Broadening party affiliation to include moderate and merely leaning respondents still yields Republican advantages of 3.47 and 2.47 IQ points respectively. Carl reconciles his findings with the social science literature that reports that liberals are more intelligent than conservatives by proposing that Americans with classically liberal beliefs are even smarter. Carl further reports that those who endorse both social conservatism and economic statism also have lower verbal IQ scores.
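For readers wondering how a 10-item vocabulary scale ends up being reported in "IQ points", the sketch below shows the usual conversion: standardise raw scores against the whole sample and rescale to a mean of 100 and standard deviation of 15. The scores and group labels are simulated, not actual GSS records, and the group difference is an arbitrary illustrative value rather than Carl's estimate.

import numpy as np

# Toy sketch of the conversion described above: put a short vocabulary scale
# on an IQ metric (mean 100, SD 15) and compare group means. The scores and
# group labels are simulated, not actual GSS records.
rng = np.random.default_rng(7)

# Simulated 0-10 vocabulary scores for two hypothetical respondent groups.
group_a = rng.binomial(10, 0.62, size=2000)
group_b = rng.binomial(10, 0.58, size=2000)

pooled = np.concatenate([group_a, group_b])

def to_iq_metric(x):
    # Standardise against the pooled sample, then rescale to mean 100, SD 15.
    return 100 + 15 * (x - pooled.mean()) / pooled.std()

iq_a, iq_b = to_iq_metric(group_a), to_iq_metric(group_b)
print(f"group A mean: {iq_a.mean():.2f}")
print(f"group B mean: {iq_b.mean():.2f}")
print(f"difference:   {iq_a.mean() - iq_b.mean():.2f} IQ points")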

"Overall, my findings suggest that higher intelligence among classically liberal Republicans compensates for lower intelligence among socially conservative Republicans," concludes Carl. If the dumb, I mean socially conservative, Republicans keep disrespecting us classical liberals, we'll take our IQ points and go home.

As gratifying as Carl's research findings are, it is still a deep puzzle to me why it apparently takes high intelligence to understand that the government should stay out of both the bedroom and the boardroom.

SOURCE

Bailey covers the issues pretty well above but could have emphasized even more strongly that it all depends on how you define conservative.  Most of the relevant research has been done by Leftists and, thanks to their general lack of contact with reality, most of them have not got a blind clue about what conservatism is.  All they know is what they have picked up from their fellow Leftists.  So they define conservatism very narrowly and miss the fact that the central issue for conservatives is individual liberty.

One result of that is that their lists of questions that are supposed to index conservatism usually show no correlation with vote!  Many of the people who are critical of homosexuality, for instance, are Democrat voters, not Republicans.  Blacks, for example, are often religious and are also conservative on many social issues, so a low average IQ score for religious conservatives could simply reflect the low average IQ score of blacks while telling us nothing about whites.

Just to give you the feel of black attitudes, a common Caribbean word for a homosexual is "Poopman"


Friday, April 25, 2014



Charles Murray on allegations of racism

Since the flap about Paul Ryan’s remarks last week, elements of the blogosphere, and now Paul Krugman in The New York Times, have stated that I tried to prove the genetic inferiority of blacks in The Bell Curve.

The position that Richard Herrnstein and I took about the role of race, IQ and genes in The Bell Curve is contained in a single paragraph in an 800-page book. It is found on page 311, and consists in its entirety of the following text:

If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not justify an estimate.

That’s it. The four pages following that quote argue that the hysteria about race and genes is misplaced. I think our concluding paragraph (page 315) is important enough to repeat here:

In sum: If tomorrow you knew beyond a shadow of a doubt that all the cognitive differences between races were 100 percent genetic in origin, nothing of any significance should change. The knowledge would give you no reason to treat individuals differently than if ethnic differences were 100 percent environmental. By the same token, knowing that the differences are 100 percent environmental in origin would not suggest a single program or policy that is not already being tried. It would justify no optimism about the time it will take to narrow the existing gaps. It would not even justify confidence that genetically based differences will not be upon us within a few generations. The impulse to think that environmental sources of differences are less threatening than genetic ones is natural but illusory.

Our sin was to openly discuss the issue, not to advocate a position. But for the last 40 years, that’s been sin enough.

I’ll be happy to respond at more length to allegations of racism made by anyone who can buttress them with a direct quote from anything I’ve written. I’ll leave you with this thought: in all the critiques of The Bell Curve in particular and my work more generally, no one ever accompanies their charges with direct quotes of what I’ve actually said. There’s a reason for that.

SOURCE

Wednesday, April 23, 2014


Yes, IQ Really Matters

Critics of the SAT and other standardized testing are disregarding the data.  Leftists hate it because it shows that all men are NOT equal

By David Z. Hambrick and Christopher Chabris writing in "Slate" (!)

The College Board—the standardized testing behemoth that develops and administers the SAT and other tests—has redesigned its flagship product again. Beginning in spring 2016, the writing section will be optional, the reading section will no longer test “obscure” vocabulary words, and the math section will put more emphasis on solving problems with real-world relevance. Overall, as the College Board explains on its website, “The redesigned SAT will more closely reflect the real work of college and career, where a flexible command of evidence—whether found in text or graphic [sic]—is more important than ever.” 

A number of pressures may be behind this redesign. Perhaps it’s competition from the ACT, or fear that unless the SAT is made to seem more relevant, more colleges will go the way of Wake Forest, Brandeis, and Sarah Lawrence and join the “test optional admissions movement,” which already boasts several hundred members. Or maybe it’s the wave of bad press that standardized testing, in general, has received over the past few years.

Critics of standardized testing are grabbing this opportunity to take their best shot at the SAT. They make two main arguments. The first is simply that a person’s SAT score is essentially meaningless—that it says nothing about whether that person will go on to succeed in college. Leon Botstein, president of Bard College and longtime standardized testing critic, wrote in Time that the SAT “needs to be abandoned and replaced,” and added:

"The blunt fact is that the SAT has never been a good predictor of academic achievement in college. High school grades adjusted to account for the curriculum and academic programs in the high school from which a student graduates are. The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated 20th century social scientific assumptions and strategies."

Calling use of SAT scores for college admissions a “national scandal,” Jennifer Finney Boylan, an English professor at Colby College, argued in the New York Times that:

"The only way to measure students’ potential is to look at the complex portrait of their lives: what their schools are like; how they’ve done in their courses; what they’ve chosen to study; what progress they’ve made over time; how they’ve reacted to adversity.
Along the same lines, Elizabeth Kolbert wrote in The New Yorker that “the SAT measures those skills—and really only those skills—necessary for the SATs.”

But this argument is wrong. The SAT does predict success in college—not perfectly, but relatively well, especially given that it takes just a few hours to administer. And, unlike a “complex portrait” of a student’s life, it can be scored in an objective way. (In a recent New York Times op-ed, the University of New Hampshire psychologist John D. Mayer aptly described the SAT’s validity as an “astonishing achievement.”)

In a study published in Psychological Science, University of Minnesota researchers Paul Sackett, Nathan Kuncel, and their colleagues investigated the relationship between SAT scores and college grades in a very large sample: nearly 150,000 students from 110 colleges and universities. SAT scores predicted first-year college GPA about as well as high school grades did, and the best prediction was achieved by considering both factors.

Botstein, Boylan, and Kolbert are either unaware of this directly relevant, easily accessible, and widely disseminated empirical evidence, or they have decided to ignore it and base their claims on intuition and anecdote—or perhaps on their beliefs about the way the world should be rather than the way it is. 

Furthermore, contrary to popular belief, it’s not just first-year college GPA that SAT scores predict. In a four-year study that started with nearly 3,000 college students, a team of Michigan State University researchers led by Neal Schmitt found that test score (SAT or ACT—whichever the student took) correlated strongly with cumulative GPA at the end of the fourth year. If the students were ranked on both their test scores and cumulative GPAs, those who had test scores in the top half (above the 50th percentile, or median) would have had a roughly two-thirds chance of having a cumulative GPA in the top half. By contrast, students with bottom-half SAT scores would have had only a one-in-three chance of making it to the top half in GPA.
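Those two fractions are roughly what a simple statistical model predicts. If test score and cumulative GPA are treated as bivariate normal with correlation rho, the chance of a top-half GPA given a top-half score is 1/2 + arcsin(rho)/pi; a correlation of about .5, which is an illustrative assumption rather than a figure taken from the Michigan State study, reproduces the two-thirds versus one-third split.

import numpy as np

# Back-of-the-envelope check of the "two-thirds vs one-third" split: model
# test score and cumulative GPA as bivariate normal with correlation rho.
# Then P(GPA in top half | score in top half) = 1/2 + arcsin(rho)/pi.
# The values of rho tried below are illustrative assumptions.
def p_top_half_given_top_half(rho: float) -> float:
    return 0.5 + np.arcsin(rho) / np.pi

for rho in (0.3, 0.4, 0.5, 0.6):
    p = p_top_half_given_top_half(rho)
    print(f"rho = {rho:.1f}: top-half GPA given top-half score = {p:.2f}, "
          f"given bottom-half score = {1 - p:.2f}")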

Test scores also predicted whether the students graduated: A student who scored in the 95th percentile on the SAT or ACT was about 60 percent more likely to graduate than a student who scored in the 50th percentile. Similarly impressive evidence supports the validity of the SAT’s graduate school counterparts: the Graduate Record Examinations, the Law School Admissions Test, and the Graduate Management Admission Test. A 2007 Science article summed up the evidence succinctly: “Standardized admissions tests have positive and useful relationships with subsequent student accomplishments.”

SAT scores even predict success beyond the college years. For more than two decades, Vanderbilt University researchers David Lubinski, Camilla Benbow, and their colleagues have tracked the accomplishments of people who, as part of a youth talent search, scored in the top 1 percent on the SAT by age 13. Remarkably, even within this group of gifted students, higher scorers were not only more likely to earn advanced degrees but also more likely to succeed outside of academia. For example, compared with people who “only” scored in the top 1 percent, those who scored in the top one-tenth of 1 percent—the extremely gifted—were more than twice as likely, as adults, to have an annual income in the top 5 percent of Americans.

The second popular anti-SAT argument is that, if the test measures anything at all, it’s not cognitive skill but socioeconomic status. In other words, some kids do better than others on the SAT not because they’re smarter, but because their parents are rich. Boylan argued in her Times article that the SAT “favors the rich, who can afford preparatory crash courses” like those offered by Kaplan and the Princeton Review. Leon Botstein claimed in his Time article that “the only persistent statistical result from the SAT is the correlation between high income and high test scores.” And according to a Washington Post Wonkblog infographic (which is really more of a disinfographic) “your SAT score says more about your parents than about you.” 

It’s true that economic background correlates with SAT scores. Kids from well-off families tend to do better on the SAT. However, the correlation is far from perfect. In the University of Minnesota study of nearly 150,000 students, the correlation between socioeconomic status, or SES, and SAT was not trivial but not huge. (A perfect correlation has a value of 1; this one was .25.) What this means is that there are plenty of low-income students who get good scores on the SAT; there are even likely to be low-income students among those who achieve a perfect score on the SAT.
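A quick simulation makes the point vivid. Treating SES and SAT as bivariate normal with a correlation of .25, and using illustrative cutoffs for "low income" (bottom third of SES) and "high score" (top tenth of SAT) that are not definitions from the study, a substantial minority of the top scorers still come from the low-income group.

import numpy as np

# Quick simulation of what a correlation of .25 looks like in practice.
# SES and SAT are modelled as bivariate normal; the "low-income" cutoff
# (bottom third of SES) and "high score" cutoff (top tenth of SAT) are
# illustrative choices, not definitions from the study.
rng = np.random.default_rng(3)
n = 1_000_000
rho = 0.25

ses = rng.normal(size=n)
sat = rho * ses + np.sqrt(1 - rho**2) * rng.normal(size=n)

low_income = ses < np.quantile(ses, 1 / 3)
high_score = sat > np.quantile(sat, 0.9)

share = np.mean(low_income[high_score])
print(f"share of top-decile SAT scorers from the bottom third of SES: {share:.1%}")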

Thus, just as it was originally designed to do, the SAT in fact goes a long way toward leveling the playing field, giving students an opportunity to distinguish themselves regardless of their background. Scoring well on the SAT may in fact be the only such opportunity for students who graduate from public high schools that are regarded by college admissions offices as academically weak. In a letter to the editor, a reader of Elizabeth Kolbert’s New Yorker article on the SAT made this point well:

The SAT may be the bane of upper-middle-class parents trying to launch their children on a path to success. But sometimes one person’s obstacle is another person’s springboard. I am the daughter of a single, immigrant father who never attended college, and a good SAT score was one of the achievements that catapulted me into my state’s flagship university and, from there, on to medical school. Flawed though it is, the SAT afforded me, as it has thousands of others, a way to prove that a poor, public-school kid who never had any test prep can do just as well as, if not better than, her better-off peers.

The sort of admissions approach that Botstein advocates—adjusting high school GPA “to account for the curriculum and academic programs in the high school from which a student graduates” and abandoning the SAT—would do the opposite of leveling the playing field. A given high school GPA would be adjusted down for a poor, public-school kid, and adjusted up for a rich, private-school kid. 

Furthermore, contrary to what Boylan implies in her Times piece, “preparatory crash courses” don’t change SAT scores much. Research has consistently shown that prep courses have only a small effect on SAT scores—and a much smaller effect than test prep companies claim they do. For example, in one study of a random sample of more than 4,000 students, average improvement in overall score on the “old” SAT, which had a range from 400 to 1600, was no more than about 30 points.

Finally, it is clear that SES is not what accounts for the fact that SAT scores predict success in college. In the University of Minnesota study, the correlation between high school SAT and college GPA was virtually unchanged after the researchers statistically controlled for the influence of SES. If SAT scores were just a proxy for privilege, then putting SES into the mix should have removed, or at least dramatically decreased, the association between the SAT and college performance. But it didn’t. This is more evidence that Boylan overlooks or chooses to ignore. 
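The "statistically controlled" step here is a partial correlation. The sketch below applies the standard formula with ballpark inputs: the .25 SES-SAT correlation quoted above plus assumed SAT-GPA and SES-GPA correlations of .35 and .20 that are not taken from the study. With inputs in that range the adjusted SAT-GPA correlation changes only modestly, which is the pattern the researchers reported.

import math

# The "statistically controlled for SES" step amounts to a partial correlation:
# r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2)).
# The input correlations below are illustrative ballpark figures, not the
# exact values from the Minnesota study.
def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_sat_gpa = 0.35   # assumed SAT-GPA correlation
r_ses_sat = 0.25   # SES-SAT correlation quoted above
r_ses_gpa = 0.20   # assumed SES-GPA correlation

adjusted = partial_corr(r_sat_gpa, r_ses_sat, r_ses_gpa)
print(f"SAT-GPA correlation before controlling for SES: {r_sat_gpa:.3f}")
print(f"SAT-GPA correlation after  controlling for SES: {adjusted:.3f}")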

What this all means is that the SAT measures something—some stable characteristic of high school students other than their parents’ income—that translates into success in college. And what could that characteristic be? General intelligence. The content of the SAT is practically indistinguishable from that of standardized intelligence tests that social scientists use to study individual differences, and that psychologists and psychiatrists use to determine whether a person is intellectually disabled—and even whether a person should be spared execution in states that have the death penalty. Scores on the SAT correlate very highly with scores on IQ tests—so highly that the Harvard education scholar Howard Gardner, known for his theory of multiple intelligences, once called the SAT and other scholastic measures “thinly disguised” intelligence tests. 

One could of course argue that IQ is also meaningless—and many have. For example, in his bestseller The Social Animal, David Brooks claimed that “once you get past some pretty obvious correlations (smart people make better mathematicians), there is a very loose relationship between IQ and life outcomes.” And in a recent Huffington Post article, psychologists Tracy Alloway and Ross Alloway wrote that

IQ won’t help you in the things that really matter: It won’t help you find happiness, it won’t help you make better decisions, and it won’t help you manage your kids’ homework and the accounts at the same time. It isn’t even that useful at its raison d'être: predicting success.

But this argument is wrong, too. Indeed, we know as well as anything we know in psychology that IQ predicts many different measures of success. Exhibit A is evidence from research on job performance by the University of Iowa industrial psychologist Frank Schmidt and his late colleague John Hunter. Synthesizing evidence from nearly a century of empirical studies, Schmidt and Hunter established that general mental ability—the psychological trait that IQ scores reflect—is the single best predictor of job training success, and that it accounts for differences in job performance even in workers with more than a decade of experience. It’s more predictive than interests, personality, reference checks, and interview performance. Smart people don’t just make better mathematicians, as Brooks observed—they make better managers, clerks, salespeople, service workers, vehicle operators, and soldiers.

IQ predicts other things that matter, too, like income, employment, health, and even longevity. In a 2001 study published in the British Medical Journal, Scottish researchers Lawrence Whalley and Ian Deary identified more than 2,000 people who had taken part in the Scottish Mental Survey of 1932, a nationwide assessment of IQ. Remarkably, people with high IQs at age 11 were considerably more likely to survive to old age than were people with lower IQs. For example, a person with an IQ of 100 (the average for the general population) was 21 percent more likely to live to age 76 than a person with an IQ of 85. And the relationship between IQ and longevity remains statistically significant even after taking SES into account. Perhaps IQ reflects the mental resources—the reasoning and problem-solving skills—that people can bring to bear on maintaining their health and making wise decisions throughout life. This explanation is supported by evidence that higher-IQ individuals engage in more positive health behaviors, such as deciding to quit smoking.

IQ is of course not the only factor that contributes to differences in outcomes like academic achievement and job performance (and longevity). Psychologists have known for many decades that certain personality traits also have an impact. One is conscientiousness, which reflects a person’s self-control, discipline, and thoroughness. People who are high in conscientiousness delay gratification to get their work done, finish tasks that they start, and are careful in their work, whereas people who are low in conscientiousness are impulsive, undependable, and careless (compare Lisa and Bart Simpson). The University of Pennsylvania psychologist Angela Duckworth has proposed a closely related characteristic that she calls “grit,” which she defines as a person’s “tendency to sustain interest in and effort toward very long-term goals,” like building a career or family.  

Duckworth has argued that such factors may be even more important as predictors of success than IQ. In one study, she and UPenn colleague Martin Seligman found that a measure of self-control collected at the start of eighth grade correlated more than twice as strongly with year-end grades as IQ did. However, the results of meta-analyses, which are more telling than the results of any individual study, indicate that these factors do not have a larger effect than IQ does on measures of academic achievement and job performance. So, while it seems clear that factors like conscientiousness—not to mention social skill, creativity, interest, and motivation—do influence success, they cannot take the place of IQ.

None of this is to say that IQ, whether measured with the SAT or a traditional intelligence test, is an indicator of value or worth. Nobody should be judged, negatively or positively, on the basis of a test score. A test score is a prediction, not a prophecy, and doesn’t say anything specific about what a person will or will not achieve in life. A high IQ doesn’t guarantee success, and a low IQ doesn’t guarantee failure. Furthermore, the fact that IQ is at present a powerful predictor of certain socially relevant outcomes doesn’t mean it always will be. If there were less variability in income—a smaller gap between the rich and the poor—then IQ would have a weaker correlation with income. For the same reason, if everyone received the same quality of health care, there would be a weaker correlation between IQ and health.

But the bottom line is that there are large, measurable differences among people in intellectual ability, and these differences have consequences for people’s lives. Ignoring these facts will only distract us from discovering and implementing wise policies.

Given everything that social scientists have learned about IQ and its broad predictive validity, it is reasonable to make it a factor in decisions such as whom to hire for a particular job or admit to a particular college or university. In fact, disregarding IQ—by admitting students to colleges or hiring people for jobs in which they are very likely to fail—is harmful both to individuals and to society. For example, in occupations where safety is paramount, employers could be incentivized to incorporate measures of cognitive ability into the recruitment process. Above all, the policies of public and private organizations should be based on evidence rather than ideology or wishful thinking.

SOURCE