Tuesday, July 22, 2014



Genes and Race: The Distant Footfalls of Evidence: A review of Nicholas Wade’s book, “A Troublesome Inheritance: Genes, Race and Human History”.

Despite the great care the author below took not to tread on any toes, waves of shrieks emanated from the always irrational Left in response to it. As a result SciAm issued an apology for publishing it. The author, Ashutosh Jogalekar, was eventually fired over it. He is a chemist, apparently of Indian origin, so has obviously missed some of the political indoctrination that dominates the social sciences and humanities in America today.

SciAm is not really interested in science, however, as their advocacy for the global warming cult shows. Theory contradicted by the evidence does not bother them. They are really The Unscientific American. A conservative boycott of the publication would be fitting -- JR


In this book NYT science writer Nicholas Wade advances two simple premises: firstly, that we should stop looking only toward culture as a determinant of differences between populations and individuals, and secondly, that those who claim that race is only a social construct are ignoring increasingly important findings from modern genetics and science. The guiding thread throughout the book is that “human evolution is recent, copious and regional” and that this has led to the genesis of distinct differences and classifications between human groups. What we do with this evidence should always be up for social debate, but the evidence itself cannot be ignored.

That is basically the gist of the book. It’s worth noting at the outset that at no point does Wade downplay the effects of culture and environment in dictating social, cognitive or behavioral differences – in fact he mentions culture as an important factor at least ten times by my count – but all he is saying is that, based on a variety of scientific studies enabled by the explosive recent growth of genomics and sequencing, we need to now recognize a strong genetic component to these differences.

The book can be roughly divided into three parts. The first part details the many horrific and unseemly uses that the concept of race has been put to by loathsome racists and elitists ranging from Social Darwinists to National Socialists. Wade reminds us that while these perpetrators had a fundamentally misguided, crackpot definition of race, that does not mean race does not exist in a modern incarnation. This part also clearly serves to delineate the difference between a scientific fact and what we as human beings decide to do with it, and it tells us that an idea should not be taboo just because murderous tyrants might have warped its definition and used it to enslave and decimate their fellow humans.

The second part of the book is really the meat of the story and Wade is on relatively firm ground here. He details a variety of studies based on tools like tandem DNA repeats and single nucleotide polymorphisms (SNPs) that point to very distinctive genetic differences between populations dictating both physical and mental traits. Many of the genes responsible for these differences have been subject to selection in the last five thousand years or so, refuting the belief that humans have somehow “stopped evolving” since they settled down into agricultural communities. For me the most striking evidence that something called race is real comes from the fact that when you ask computer algorithms to cluster genes based on differences and similarities in an unbiased manner, these statistical programs consistently settle on the five continental races as genetically distinct groups – Caucasian, East Asian, African, Native American and Australian Aboriginal. Very few people would deny that there are clear genetic underpinnings behind traits like skin color or height among people on different continents, but Wade’s achievement here is to clearly explain how it’s not just one or two genes underlying such traits but a combination of genes – the effects of many of which are not obvious – that distinguish between races. The other point that he drives home is that even minor differences between gene frequencies can lead to significant phenotypic dissimilarities because of additive effects, so boiling down these differences to percentages and then interpreting these numbers can be quite misleading.
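To make the clustering claim concrete, here is a minimal sketch of the idea using simulated data and invented parameters (not the pipeline of any study Wade cites): many loci, each differing only slightly in allele frequency between hypothetical populations, are enough for an unsupervised method, here PCA followed by k-means standing in for model-based tools such as STRUCTURE or ADMIXTURE, to recover the groups without being told they exist.

```python
# Illustrative sketch only: simulated genotypes, arbitrary parameters.
# Shows how many small per-locus allele-frequency differences let an
# unsupervised method recover population clusters without labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_pops, n_per_pop, n_snps = 3, 100, 1000

# Shared base allele frequencies plus small population-specific shifts.
base = rng.uniform(0.2, 0.8, n_snps)
shifts = rng.normal(0.0, 0.08, (n_pops, n_snps))      # modest per-locus differences
freqs = np.clip(base + shifts, 0.01, 0.99)

# Genotypes coded 0/1/2 = copies of the alternate allele at each SNP.
genotypes = np.vstack([rng.binomial(2, freqs[p], (n_per_pop, n_snps))
                       for p in range(n_pops)])
true_labels = np.repeat(np.arange(n_pops), n_per_pop)

# Reduce to the top principal components, then cluster without using the labels.
pcs = PCA(n_components=2).fit_transform(genotypes)
pred = KMeans(n_clusters=n_pops, n_init=10, random_state=0).fit_predict(pcs)

print("agreement with the true groups (adjusted Rand index):",
      round(adjusted_rand_score(true_labels, pred), 2))
```

With these made-up numbers the adjusted Rand index comes out at or near 1.0 even though no single locus is anywhere near diagnostic on its own, which illustrates how many individually small frequency differences add up to a clear group signal.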

Wade also demolishes the beliefs of many leading thinkers who would rather have differences defined almost entirely by culture – these include Stephen Jay Gould, who thought that humans evolved very little in the last ten thousand years (as Wade points out, about 14% of the genome has been under active selection since modern humans appeared on the scene), and Richard Lewontin, who perpetuated the well-known belief that the dominance of intragroup over intergroup differences makes any discussion of race meaningless. As Wade demonstrates through citations of solid research, this belief is simply erroneous, since even small differences between populations can translate to large differences in physical, mental and social features depending on what alleles are involved; Lewontin and his followers’ frequent plea that inter-group differences are “only 15%” thus ends up essentially being obfuscation through numbers. Jared Diamond’s writings are also carefully scrutinized and criticized; Diamond’s contention that the very recently evolved gene for malaria resistance can somehow be advanced as an argument about race is at best simplistic and at worst a straw man. The main point is that just because there can be more than one method to define race, or because definitions of race seem to fray at their edges, does not mean that race is non-existent or that there is no good way to parse it.

The last part of the book is likely to be regarded as more controversial because it deals mainly with effects of genetics on cognitive, social and personality traits and is much more speculative. However Wade fully realizes this and also believes that “there is nothing wrong with speculation, of course, as long as its premises are made clear”, and this statement could be part of a scientist’s credo. The crux of the matter is to ask why genes would not also account for mental and social differences between races if they account for physical differences. The problem is that although the hypothesis is valid, the evidence is slim for now. Some of the topics that Wade deals with in this third part are thus admittedly hazy in terms of corroboration. For instance there is ample contemplation about whether a set of behavioral and genetic factors might have made the West progress faster than the East and inculcated its citizens with traits conducive to material success. However Wade also makes it clear that “progressive” does not mean “superior”; what he is rather doing is sifting through the evidence and asking if some of it might account for these more complex differences in social systems. Similarly, while there are pronounced racial differences in IQ, one must recognize the limitations of IQ, and more importantly recognize that IQ says nothing about whether one human is “better” or “worse” than another; in fact the question is meaningless.

Wade brings a similar approach to exploring genetic influences on cognitive abilities and personality traits; as he recognizes, the evidence on this topic is just emerging and therefore not definitive. He looks at the effects of genes on attributes as diverse as language, reciprocity and propensity to dole out punishment. This discussion makes it clear that we are just getting started and that many more findings will emerge in the near future; for instance, tantalizing hints of links between genes for certain enzymes and aggressive or amiable behavior are just appearing. Some of the other paradigms Wade writes about, such as the high intelligence of Ashkenazi Jews, the gene-driven contrast between chimp and human societies and the rise of the West, are interesting but have been covered by authors like Steven Pinker, Greg Cochran and Gregory Clark. If I have a criticism of the book it is that in his efforts to cover extensive ground, Wade sometimes gives short shrift to research on interesting topics like oxytocin and hormonal influences. But what he does make clear is that the research opportunities in the field are exciting, and scientists should not have to tiptoe around these topics for political reasons.

Overall I found this book extremely well-researched, thoughtfully written and objectively argued. Wade draws on several sources, including the peer-reviewed literature and work by other thinkers and scientists. The many researchers whose work he cites make the writing authoritative; on the other hand, where he is speculating he usually says so explicitly. Some of these speculations, such as the effects of genetics on the behavior of entire societies, are quite far-flung, but I don’t see any reason why, based on what we do know about the spread of genes among groups, they should be dismissed out of hand. At the very least they serve as reasonable hypotheses to be pondered, thrashed out and tested. Science is about ideas, not answers.

But the real lesson of the book should not be lost on us: A scientific topic cannot be declared off limits or whitewashed because its findings can be socially or politically controversial; as Wade notes, “Whether or not a thesis might be politically incendiary should have no bearing on the estimate of its scientific validity.” He gives nuclear physics as a good analogy; knowledge of the atom can lead to both destruction and advancement, but without this knowledge there will still be destruction. More importantly, one cannot hide the fruits of science; how they are used as instruments of social or political policy is a matter of principle and should be decoupled from the science itself. In fact, knowing the facts provides us with a clear basis for making progressive decisions and gives us a powerful weapon for defeating the nefarious goals of demagogues who would use pseudoscience to support their dubious claims. In that sense, I agree with Wade that even if genetic differences between races become enshrined into scientific fact, it does not mean at all that we will immediately descend into 19th-century racism; our moral compass has already decided the direction of that particular current.

Ultimately Wade’s argument is about the transparency of knowledge. He admonishes some of the critics – especially some liberal academics and the American Anthropological Association – for espousing a “culture only” philosophy that is increasingly at odds with scientific facts and designed mainly for political correctness and a straitjacketed worldview. I don’t think liberal academics are the only ones guilty of this attitude but some of them certainly embrace it. Liberal academics, however, have also always prided themselves on being objective examiners of the scientific truth. Wade rightly says that they should join hands with all of us in bringing that same critical and honest attitude to examining the recent evidence about race and genetics. Whatever it reveals, we can be sure that as human beings we will try our best not to let it harm the cause of our fellow beings. After all we are, all of us, human beings first and scientists second.

SOURCE

Sunday, July 13, 2014


Chimpanzee intelligence is determined by genes, not environment, researchers say

A chimpanzee’s intelligence is largely determined by the genes it inherits from its parents, a new study reveals.

It found that chimpanzees raised by humans turn out to be no cleverer than those given an ape upbringing.

Research into chimp intelligence could help scientists get a better handle on human IQ, the researchers say.

The study involved 99 chimpanzees, ranging in age from nine to 54, who completed 13 cognitive tasks designed to test a variety of abilities.

The scientists then analysed the genetics of the chimps and compared their ability to complete the tasks in relation to their genetic similarities.

Genes were found to play a role in overall cognitive ability, as well as in performance on tasks in several categories.

Chimps offer a cleaner test of genetic influence than humans do, because while genes also play a major role in human intelligence, factors such as schooling, home life, economic status, and the culture a person is born into complicate the picture.

Previous studies have suggested that genetics account for around a quarter to a half of variations in human intelligence.

The new research involving 99 chimpanzees from a wide range of ages showed that genes explained about 50% of the differences seen in their intelligence test scores.
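The 50% figure is a heritability estimate; the study derived it from the chimps' pedigree using quantitative-genetic models. As a toy illustration of the underlying logic only (invented numbers, not the study's actual method), a simple additive simulation shows how resemblance between relatives recovers an assumed heritability:

```python
# Toy illustration, not the study's method (which used chimp pedigrees and
# quantitative-genetic models): under a purely additive model, the slope of
# offspring trait on the parents' average trait estimates heritability (h^2).
import numpy as np

rng = np.random.default_rng(1)
n_families = 5000
h2 = 0.5                       # assumed heritability of the simulated trait

# Parent traits = additive genetic value + environmental noise (unit total variance).
g_mom = rng.normal(0, np.sqrt(h2), n_families)
g_dad = rng.normal(0, np.sqrt(h2), n_families)
mom = g_mom + rng.normal(0, np.sqrt(1 - h2), n_families)
dad = g_dad + rng.normal(0, np.sqrt(1 - h2), n_families)

# Offspring inherit the mean parental genetic value plus segregation noise.
g_kid = 0.5 * (g_mom + g_dad) + rng.normal(0, np.sqrt(h2 / 2), n_families)
kid = g_kid + rng.normal(0, np.sqrt(1 - h2), n_families)

midparent = 0.5 * (mom + dad)
slope = np.polyfit(midparent, kid, 1)[0]   # should come out near the assumed 0.5
print("estimated heritability:", round(slope, 2))
```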

Chimps raised by human caretakers did no better in the tasks than individuals brought up by their chimpanzee mothers.

'Intelligence runs in families,' Dr. William Hopkins from the Yerkes National Primate Research Center in Atlanta, who ran the study, said.

'The suggestion here is that genes play a really important role in their performance on tasks while non-genetic factors didn’t seem to explain a lot. So that’s new.'

He believes the experiment could shed new light on human intelligence.  'Chimps offer a really simple way of thinking about how genes might influence intelligence without, in essence, the baggage of these other mechanisms that are confounded with genes in research on human intelligence.

'What specific genes underlie the observed individual differences in cognition is not clear, but pursuing this question may lead to candidate genes that changed in human evolution and allowed for the emergence of some human-specific specialisations in cognition.'

SOURCE

Monday, June 16, 2014


Are Conservatives Dumber Than Liberals?

It depends on how you define "conservative." The research shows that libertarian conservatives are smartest of all

Ronald Bailey

Conservatives exhibit less cognitive ability than liberals do. Or that's what it says in the social science literature, anyway. A 2010 study using data from the National Longitudinal Study of Adolescent Health, for example, found that the IQs of young adults who described themselves as "very liberal" averaged 106.42, whereas the mean of those who identified as "very conservative" was 94.82. Similarly, when a 2009 study correlated cognitive capacity with political beliefs among 1,254 community college students and 1,600 foreign students seeking entry to U.S. universities, it found that conservatism is "related to low performance on cognitive ability tests." In 2012, a paper reported that people endorse more conservative views when drunk or under cognitive pressure; it concluded that "political conservatism may be a process consequence of low-effort thought."

So have social scientists really proved that conservatives are dumber than liberals? It depends crucially on how you define "conservative."

For an inkling of what some social scientists think conservatives believe, parse a 2008 study by the University of Nevada at Reno sociologist Markus Kemmelmeier. To probe the political and social beliefs of nearly 7,000 undergraduates at an elite university, Kemmelmeier devised a set of six questions asking whether abortion, same-sex marriage, and gay sex should be legal, whether handguns and racist/sexist speech on campus should be banned, and whether higher taxes should be imposed on the wealthy. The first three were supposed to measure the students' views of "conservative gender roles," and the second set was supposed to gauge their "anti-regulation" beliefs. Kemmelmeier clearly thought that "liberals" would tend to be OK with legal abortion, same-sex marriage, and gay sex, and would opt to ban handguns and offensive speech and to tax the rich. Conservatives would supposedly hold the opposite views.

Savvy readers may recognize a problem with using these questions to sort people into just two ideological categories. And sure enough, Kemmelmeier got some results that puzzled him. He found that students who held more traditional views on gender and sex roles averaged lower on their verbal SAT and Achievement Test scores. "Surprisingly," he continued, this was not true of students with anti-regulation attitudes. With them, "all else being equal, more conservative respondents scored higher than more liberal respondents." Kemmelmeier ruefully notes that "this result was not anticipated" and "diametrically contradicts" the hypothesis that conservatism is linked to lower cognitive ability. Kemmelmeier is so evidently lost in the intellectual fog of contemporary progressivism that he does not realize that his questionnaire is impeccably designed to identify classical liberals, a.k.a. libertarians, who endorse liberty in both the social and economic realms.

So how smart are libertarians compared to liberals and conservatives? In a May 2014 study in the journal Intelligence, the Oxford sociologist Noah Carl attempts to answer that question. Because research has "consistently shown that intelligence is positively correlated with socially liberal beliefs and negatively correlated with religious beliefs," Carl suggests that in the American political context, social scientists would expect Republicans to be less intelligent than Democrats. Instead, Republicans have slightly higher verbal intelligence scores (2–5 IQ points) than Democrats. How could that be?

Carl begins by pointing out that there is data suggesting that a segment of the American population holding classical liberal beliefs tends to vote Republican. Classical liberals, Carl notes, believe that an individual should be free to make his own lifestyle choices and to enjoy the profits derived from voluntary transactions with others. He proposes that intelligence actually correlates with classically liberal beliefs.

To test this hypothesis, Carl uses data on political attitudes and intelligence derived from the General Social Survey, which has been administered to representative samples of American adults every couple of years since 1972. Using GSS data, respondents are classified on a continuum ranging from strong Republican through independent to strong Democrat. Carl then creates a measure of socially liberal beliefs based on respondents' attitudes toward homosexuality, marijuana consumption, abortion, and free speech for communists, racists, and advocates for military dictatorship. He similarly probes liberal economic views, with an assessment of attitudes toward government provision of jobs, industry subsidies, income redistribution, price controls, labor unions, and military spending. Verbal intelligence is evaluated using the GSS WORDSUM test results.

Comparing strong Republicans with strong Democrats, Carl finds that Republicans have a 5.48 IQ point advantage over Democrats. Broadening party affiliation to include moderate and merely leaning respondents still results in Republican advantages of 3.47 and 2.47 IQ points, respectively. Carl reconciles his findings with the social science literature that reports that liberals are more intelligent than conservatives by proposing that Americans with classically liberal beliefs are even smarter. Carl further reports that those who endorse both social conservatism and economic statism also have lower verbal IQ scores.

"Overall, my findings suggest that higher intelligence among classically liberal Republicans compensates for lower intelligence among socially conservative Republicans," concludes Carl. If the dumb, I mean socially conservative, Republicans keep disrespecting us classical liberals, we'll take our IQ points and go home.

As gratifying as Carl's research findings are, it is still a deep puzzle to me why it apparently takes high intelligence to understand that the government should stay out of both the bedroom and the boardroom.

SOURCE

Bailey covers the issues pretty well above but could have emphasized even more strongly that it all depends on how you define conservative. Most of the relevant research has been done by Leftists and, thanks to their general lack of contact with reality, most of them have not got a clue about what conservatism is. All they know is what they have picked up from their fellow Leftists. So they define conservatism very narrowly and miss the fact that the central issue for conservatives is individual liberty.

One result of that is that their lists of questions that are supposed to index conservatism usually show no correlation with vote! Many of the people who are critical of homosexuality, for instance, are Democrat voters, not Republicans. Blacks are often religious and are also conservative on many social issues, so a low average IQ score for religious conservatives could simply reflect the low average IQ score of blacks while telling us nothing about whites.

Just to give you the feel of black attitudes, a common Caribbean word for a homosexual is "Poopman"


Friday, April 25, 2014



Charles Murray on allegations of racism

Since the flap about Paul Ryan’s remarks last week, elements of the blogosphere, and now Paul Krugman in The New York Times, have stated that I tried to prove the genetic inferiority of blacks in The Bell Curve.

The position that Richard Herrnstein and I took about the role of race, IQ and genes in The Bell Curve is contained in a single paragraph in an 800-page book. It is found on page 311, and consists in its entirety of the following text:

If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not justify an estimate.

That’s it. The four pages following that quote argue that the hysteria about race and genes is misplaced. I think our concluding paragraph (page 315) is important enough to repeat here:

In sum: If tomorrow you knew beyond a shadow of a doubt that all the cognitive differences between races were 100 percent genetic in origin, nothing of any significance should change. The knowledge would give you no reason to treat individuals differently than if ethnic differences were 100 percent environmental. By the same token, knowing that the differences are 100 percent environmental in origin would not suggest a single program or policy that is not already being tried. It would justify no optimism about the time it will take to narrow the existing gaps. It would not even justify confidence that genetically based differences will not be upon us within a few generations. The impulse to think that environmental sources of differences are less threatening than genetic ones is natural but illusory.

Our sin was to openly discuss the issue, not to advocate a position. But for the last 40 years, that’s been sin enough.

I’ll be happy to respond at more length to allegations of racism made by anyone who can buttress them with a direct quote from anything I’ve written. I’ll leave you with this thought: in all the critiques of The Bell Curve in particular and my work more generally, no one ever accompanies their charges with direct quotes of what I’ve actually said. There’s a reason for that.

SOURCE

Wednesday, April 23, 2014


Yes, IQ Really Matters

Critics of the SAT and other standardized testing are disregarding the data. Leftists hate it because it shows that all men are NOT equal.

By David Z. Hambrick and Christopher Chabris writing in "Slate" (!)

The College Board—the standardized testing behemoth that develops and administers the SAT and other tests—has redesigned its flagship product again. Beginning in spring 2016, the writing section will be optional, the reading section will no longer test “obscure” vocabulary words, and the math section will put more emphasis on solving problems with real-world relevance. Overall, as the College Board explains on its website, “The redesigned SAT will more closely reflect the real work of college and career, where a flexible command of evidence—whether found in text or graphic [sic]—is more important than ever.” 

A number of pressures may be behind this redesign. Perhaps it’s competition from the ACT, or fear that unless the SAT is made to seem more relevant, more colleges will go the way of Wake Forest, Brandeis, and Sarah Lawrence and join the “test optional admissions movement,” which already boasts several hundred members. Or maybe it’s the wave of bad press that standardized testing, in general, has received over the past few years.

Critics of standardized testing are grabbing this opportunity to take their best shot at the SAT. They make two main arguments. The first is simply that a person’s SAT score is essentially meaningless—that it says nothing about whether that person will go on to succeed in college. Leon Botstein, president of Bard College and longtime standardized testing critic, wrote in Time that the SAT “needs to be abandoned and replaced,” and added:

"The blunt fact is that the SAT has never been a good predictor of academic achievement in college. High school grades adjusted to account for the curriculum and academic programs in the high school from which a student graduates are. The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated 20th century social scientific assumptions and strategies."

Calling use of SAT scores for college admissions a “national scandal,” Jennifer Finney Boylan, an English professor at Colby College, argued in the New York Times that:

"The only way to measure students’ potential is to look at the complex portrait of their lives: what their schools are like; how they’ve done in their courses; what they’ve chosen to study; what progress they’ve made over time; how they’ve reacted to adversity."

Along the same lines, Elizabeth Kolbert wrote in The New Yorker that “the SAT measures those skills—and really only those skills—necessary for the SATs.”

But this argument is wrong. The SAT does predict success in college—not perfectly, but relatively well, especially given that it takes just a few hours to administer. And, unlike a “complex portrait” of a student’s life, it can be scored in an objective way. (In a recent New York Times op-ed, the University of New Hampshire psychologist John D. Mayer aptly described the SAT’s validity as an “astonishing achievement.”)

In a study published in Psychological Science, University of Minnesota researchers Paul Sackett, Nathan Kuncel, and their colleagues investigated the relationship between SAT scores and college grades in a very large sample: nearly 150,000 students from 110 colleges and universities. SAT scores predicted first-year college GPA about as well as high school grades did, and the best prediction was achieved by considering both factors.

Botstein, Boylan, and Kolbert are either unaware of this directly relevant, easily accessible, and widely disseminated empirical evidence, or they have decided to ignore it and base their claims on intuition and anecdote—or perhaps on their beliefs about the way the world should be rather than the way it is. 

Furthermore, contrary to popular belief, it’s not just first-year college GPA that SAT scores predict. In a four-year study that started with nearly 3,000 college students, a team of Michigan State University researchers led by Neal Schmitt found that test score (SAT or ACT—whichever the student took) correlated strongly with cumulative GPA at the end of the fourth year. If the students were ranked on both their test scores and cumulative GPAs, those who had test scores in the top half (above the 50th percentile, or median) would have had a roughly two-thirds chance of having a cumulative GPA in the top half. By contrast, students with bottom-half SAT scores would have had only a one-in-three chance of making it into the top half in GPA.
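Those top-half figures are roughly what a simple bivariate-normal model predicts. Assuming, purely for illustration, a correlation of about 0.5 between test score and cumulative GPA (an assumed value, not a coefficient reported by the study), the conditional probability works out to about two-thirds:

```python
# Back-of-the-envelope check, not the study's analysis: treat test score and
# cumulative GPA as bivariate normal with an assumed correlation of 0.5 and ask
# how often a top-half scorer ends up with a top-half GPA.
import numpy as np

rng = np.random.default_rng(2)
r = 0.5                                    # assumed score-GPA correlation
n = 1_000_000

score = rng.standard_normal(n)
gpa = r * score + np.sqrt(1 - r**2) * rng.standard_normal(n)

top_half_score = score > np.median(score)
p = np.mean(gpa[top_half_score] > np.median(gpa))
print("P(top-half GPA | top-half score) ~", round(p, 2))   # about 0.67 for r = 0.5
```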

Test scores also predicted whether the students graduated: A student who scored in the 95th percentile on the SAT or ACT was about 60 percent more likely to graduate than a student who scored in the 50th percentile. Similarly impressive evidence supports the validity of the SAT’s graduate school counterparts: the Graduate Record Examinations, the Law School Admissions Test, and the Graduate Management Admission Test. A 2007 Science article summed up the evidence succinctly: “Standardized admissions tests have positive and useful relationships with subsequent student accomplishments.”

SAT scores even predict success beyond the college years. For more than two decades, Vanderbilt University researchers David Lubinski, Camilla Benbow, and their colleagues have tracked the accomplishments of people who, as part of a youth talent search, scored in the top 1 percent on the SAT by age 13. Remarkably, even within this group of gifted students, higher scorers were not only more likely to earn advanced degrees but also more likely to succeed outside of academia. For example, compared with people who “only” scored in the top 1 percent, those who scored in the top one-tenth of 1 percent—the extremely gifted—were more than twice as likely, as adults, to have an annual income in the top 5 percent of Americans.

The second popular anti-SAT argument is that, if the test measures anything at all, it’s not cognitive skill but socioeconomic status. In other words, some kids do better than others on the SAT not because they’re smarter, but because their parents are rich. Boylan argued in her Times article that the SAT “favors the rich, who can afford preparatory crash courses” like those offered by Kaplan and the Princeton Review. Leon Botstein claimed in his Time article that “the only persistent statistical result from the SAT is the correlation between high income and high test scores.” And according to a Washington Post Wonkblog infographic (which is really more of a disinfographic) “your SAT score says more about your parents than about you.” 

It’s true that economic background correlates with SAT scores. Kids from well-off families tend to do better on the SAT. However, the correlation is far from perfect. In the University of Minnesota study of nearly 150,000 students, the correlation between socioeconomic status, or SES, and SAT was not trivial but not huge. (A perfect correlation has a value of 1; this one was .25.) What this means is that there are plenty of low-income students who get good scores on the SAT; there are even likely to be low-income students among those who achieve a perfect score on the SAT.

Thus, just as it was originally designed to do, the SAT in fact goes a long way toward leveling the playing field, giving students an opportunity to distinguish themselves regardless of their background. Scoring well on the SAT may in fact be the only such opportunity for students who graduate from public high schools that are regarded by college admissions offices as academically weak. In a letter to the editor, a reader of Elizabeth Kolbert’s New Yorker article on the SAT made this point well:

The SAT may be the bane of upper-middle-class parents trying to launch their children on a path to success. But sometimes one person’s obstacle is another person’s springboard. I am the daughter of a single, immigrant father who never attended college, and a good SAT score was one of the achievements that catapulted me into my state’s flagship university and, from there, on to medical school. Flawed though it is, the SAT afforded me, as it has thousands of others, a way to prove that a poor, public-school kid who never had any test prep can do just as well as, if not better than, her better-off peers.

The sort of admissions approach that Botstein advocates—adjusting high school GPA “to account for the curriculum and academic programs in the high school from which a student graduates” and abandoning the SAT—would do the opposite of leveling the playing field. A given high school GPA would be adjusted down for a poor, public-school kid, and adjusted up for a rich, private-school kid. 

Furthermore, contrary to what Boylan implies in her Times piece, “preparatory crash courses” don’t change SAT scores much. Research has consistently shown that prep courses have only a small effect on SAT scores—and a much smaller effect than test prep companies claim they do. For example, in one study of a random sample of more than 4,000 students, average improvement in overall score on the “old” SAT, which had a range from 400 to 1600, was no more than about 30 points.

Finally, it is clear that SES is not what accounts for the fact that SAT scores predict success in college. In the University of Minnesota study, the correlation between high school SAT and college GPA was virtually unchanged after the researchers statistically controlled for the influence of SES. If SAT scores were just a proxy for privilege, then putting SES into the mix should have removed, or at least dramatically decreased, the association between the SAT and college performance. But it didn’t. This is more evidence that Boylan overlooks or chooses to ignore. 
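The step of statistically controlling for SES is essentially a partial correlation: strip out of both SAT and GPA whatever is linearly predictable from SES, then correlate the residuals. A sketch under assumed numbers (only the 0.25 SES-SAT correlation comes from the article; the other coefficients are invented) shows why a modest SES correlation leaves the SAT-GPA relationship nearly intact:

```python
# Illustration of "controlling for SES" as a partial correlation, using assumed
# coefficients; only the 0.25 SES-SAT correlation is taken from the article.
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z from both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
n = 200_000
ses = rng.standard_normal(n)
# Assumed structure: SAT correlates 0.25 with SES; GPA depends mainly on the same
# ability component the SAT captures and only weakly on SES directly.
sat = 0.25 * ses + np.sqrt(1 - 0.25**2) * rng.standard_normal(n)
gpa = 0.5 * sat + 0.05 * ses + np.sqrt(1 - 0.5**2 - 0.05**2) * rng.standard_normal(n)

print("raw SAT-GPA correlation:     ", round(np.corrcoef(sat, gpa)[0, 1], 2))
print("partial, controlling for SES:", round(partial_corr(sat, gpa, ses), 2))
```

If the SAT were mostly a proxy for SES, the second number would collapse toward zero; with these assumed numbers it barely moves, which is the pattern the Minnesota researchers report.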

What this all means is that the SAT measures something—some stable characteristic of high school students other than their parents’ income—that translates into success in college. And what could that characteristic be? General intelligence. The content of the SAT is practically indistinguishable from that of standardized intelligence tests that social scientists use to study individual differences, and that psychologists and psychiatrists use to determine whether a person is intellectually disabled—and even whether a person should be spared execution in states that have the death penalty. Scores on the SAT correlate very highly with scores on IQ tests—so highly that the Harvard education scholar Howard Gardner, known for his theory of multiple intelligences, once called the SAT and other scholastic measures “thinly disguised” intelligence tests. 

One could of course argue that IQ is also meaningless—and many have. For example, in his bestseller The Social Animal, David Brooks claimed that “once you get past some pretty obvious correlations (smart people make better mathematicians), there is a very loose relationship between IQ and life outcomes.” And in a recent Huffington Post article, psychologists Tracy Alloway and Ross Alloway wrote that

IQ won’t help you in the things that really matter: It won’t help you find happiness, it won’t help you make better decisions, and it won’t help you manage your kids’ homework and the accounts at the same time. It isn’t even that useful at its raison d'ĂȘtre: predicting success.

But this argument is wrong, too. Indeed, we know as well as anything we know in psychology that IQ predicts many different measures of success. Exhibit A is evidence from research on job performance by the University of Iowa industrial psychologist Frank Schmidt and his late colleague John Hunter. Synthesizing evidence from nearly a century of empirical studies, Schmidt and Hunter established that general mental ability—the psychological trait that IQ scores reflect—is the single best predictor of job training success, and that it accounts for differences in job performance even in workers with more than a decade of experience. It’s more predictive than interests, personality, reference checks, and interview performance. Smart people don’t just make better mathematicians, as Brooks observed—they make better managers, clerks, salespeople, service workers, vehicle operators, and soldiers.

IQ predicts other things that matter, too, like income, employment, health, and even longevity. In a 2001 study published in the British Medical Journal, Scottish researchers Lawrence Whalley and Ian Deary identified more than 2,000 people who had taken part in the Scottish Mental Survey of 1932, a nationwide assessment of IQ. Remarkably, people with high IQs at age 11 were considerably more likely to survive to old age than were people with lower IQs. For example, a person with an IQ of 100 (the average for the general population) was 21 percent more likely to live to age 76 than a person with an IQ of 85. And the relationship between IQ and longevity remains statistically significant even after taking SES into account. Perhaps IQ reflects the mental resources—the reasoning and problem-solving skills—that people can bring to bear on maintaining their health and making wise decisions throughout life. This explanation is supported by evidence that higher-IQ individuals engage in more positive health behaviors, such as deciding to quit smoking.

IQ is of course not the only factor that contributes to differences in outcomes like academic achievement and job performance (and longevity). Psychologists have known for many decades that certain personality traits also have an impact. One is conscientiousness, which reflects a person’s self-control, discipline, and thoroughness. People who are high in conscientiousness delay gratification to get their work done, finish tasks that they start, and are careful in their work, whereas people who are low in conscientiousness are impulsive, undependable, and careless (compare Lisa and Bart Simpson). The University of Pennsylvania psychologist Angela Duckworth has proposed a closely related characteristic that she calls “grit,” which she defines as a person’s “tendency to sustain interest in and effort toward very long-term goals,” like building a career or family.  

Duckworth has argued that such factors may be even more important as predictors of success than IQ. In one study, she and UPenn colleague Martin Seligman found that a measure of self-control collected at the start of eighth grade correlated more than twice as strongly with year-end grades as IQ did. However, the results of meta-analyses, which are more telling than the results of any individual study, indicate that these factors do not have a larger effect than IQ does on measures of academic achievement and job performance. So, while it seems clear that factors like conscientiousness—not to mention social skill, creativity, interest, and motivation—do influence success, they cannot take the place of IQ.

None of this is to say that IQ, whether measured with the SAT or a traditional intelligence test, is an indicator of value or worth. Nobody should be judged, negatively or positively, on the basis of a test score. A test score is a prediction, not a prophecy, and doesn’t say anything specific about what a person will or will not achieve in life. A high IQ doesn’t guarantee success, and a low IQ doesn’t guarantee failure. Furthermore, the fact that IQ is at present a powerful predictor of certain socially relevant outcomes doesn’t mean it always will be. If there were less variability in income—a smaller gap between the rich and the poor—then IQ would have a weaker correlation with income. For the same reason, if everyone received the same quality of health care, there would be a weaker correlation between IQ and health.

But the bottom line is that there are large, measurable differences among people in intellectual ability, and these differences have consequences for people’s lives. Ignoring these facts will only distract us from discovering and implementing wise policies.

Given everything that social scientists have learned about IQ and its broad predictive validity, it is reasonable to make it a factor in decisions such as whom to hire for a particular job or admit to a particular college or university. In fact, disregarding IQ—by admitting students to colleges or hiring people for jobs in which they are very likely to fail—is harmful both to individuals and to society. For example, in occupations where safety is paramount, employers could be incentivized to incorporate measures of cognitive ability into the recruitment process. Above all, the policies of public and private organizations should be based on evidence rather than ideology or wishful thinking.

SOURCE

Wednesday, March 12, 2014


Does a low IQ make you right-wing? That depends on how you define left and right

Michael Hanlon makes some interesting points below but overlooks the obvious: People with high IQs are very much advantaged in the educational system and tend to stay in that system longer. And particularly in the later years of education, the Leftist propaganda becomes all but overwhelming. So all that the research really shows is that exposure to overwhelming Leftist propaganda does influence some people's thinking. They adopt Leftist attitudes where they otherwise might not.

So right-wingers are stupid – it’s official. Psychologists in Canada have compared IQ scores of several thousand British children, who were born in 1958 and 1970, with their stated views as adults on things such as treatment of criminals and openness to working with or living near to people of other races. They also looked at some US data which compared IQ scores with homophobic attitudes.

The conclusion: your intelligence as a child correlates strongly with socially liberal views. People with low IQs tend to be more in favour of harsh punishments, more homophobic and more likely to be racist. Interestingly, as these were IQ scores measured when young this does seem to be a measure of something innate, not merely exposure to ‘liberal’ views through education.

The inference is that what we call conservatism is a symptom of limited intellectual ability, signified by fear of the new and of outsiders, a retreat into tradition and tribal loyalty, and an unsophisticated disgust at sexual mores that deviate even slightly from the norm. Put bluntly, stupidity correlates with insecurity, hatred, pessimism and fear; intelligence with confidence, optimism and trust.

Cue howls of outrage and not just from the right. In fact, left-wingers, liberals, call them what you will (and as I will argue these terms are far from interchangeable) have maintained something of an embarrassed silence about this. Liberals tend to dislike talk of innate intelligence and are distrustful of IQ tests and any hints of biological determinism. It might suit them politically to say their opponents are dim, but (they like to think) they are too polite to say so.

So what is going on here? Are conservatives really, statistically and meaningfully, less intelligent than socialists? Or is the story more subtle?

In fact there is nothing new in pointing to a link between social attitudes and intelligence. In 2010 the evolutionary psychologist Satoshi Kanazawa, who works at the London School of Economics, analysed data from 20,000 young Americans and found that average IQ increased steadily from those who described themselves as ‘very conservative’ to those who described themselves as ‘very liberal’. A study looking at British children, carried out by Ian Deary, reached a conclusion neatly summarised by the title of the paper: 'Bright Children become Enlightened Adults'. Other studies have found correlations between strong religiosity (a traditional marker of conservatism) and low intelligence.

Are socialists really more intelligent than conservatives? That depends how you define your terms

So case closed? Not really. The problem here is how we define ‘left’ and ‘right’ thinking, and what this means socially and politically. A moment’s thought shows that the faultlines are not only blurred but legion, criss-crossing traditional political strata and shifting over time.

As Steven Pinker points out in The Better Angels of our Nature, his marvellous book about the history of violence, social liberalism does not necessarily equate with economic socialism. He points to a study by Bryan Caplan, an economist at George Mason University in Virginia, who found that smart people tend to think like economists, being in favour of free trade, globalisation and free markets and against protectionism and state intervention in industry. This matches other findings showing that IQ correlates not with left-wing thinking as such, but with classic Enlightenment liberalism.

So a smart person (all else being equal) will probably be in favour of capitalism generally, and free-trade in particular. He or she will distrust state intervention in the markets, probably be suspicious of welfarism and deeply dislike protectionism, union closed-shops and tariffs. The smart person will believe that the have-nots should be encouraged to become haves by dint of their own labours and by the levelling of economic playing fields, NOT by taking money off the haves and giving it to them. In other words, Thatcherism. Hardly something we equate with the left.

But there is another side to what the Smarts believe. They are pro-immigration (immigration being a form of free trade, in this case in human labour). They are impeccably socially liberal. They do not care what consenting adults get up to in bed and would legalise gay marriage without a thought. They are as near as is possible to be colour blind and strongly favour sexual equality. They are internationalist and despise petty nationalism. And they are suspicious of the war on drugs and in fact of wars in general and do not believe the public should in general be allowed to own firearms. These are the social views, then, of the British metropolitan Left. So what is it then? Are dim people right or left? Here we meet the problem of defining liberalism and left-wingery.

A belief in economic redistribution of wealth does not correlate with social liberalism. The nations of the Cold War Communist bloc were ferociously ‘Left Wing’ in terms of a belief in statism, nationalised industries, basic equality and so forth, but socially and in other ways they were far, far to the ‘right’ of any mainstream European or American party. The Soviet education system was brutally elitist – no wishy-washy mixed-ability nonsense there. Militarism and conscription were the norm. Communist states had an attachment to capital punishment and the repression of homosexuals, and paid only lip service to sexual equality (Russian women were free to work, but they had to go back and do the cleaning and cooking when they had finished).

In today’s world the most ‘right wing’ attitudes are to be found not in the American Bible Belt but in sub-Saharan Africa, the Caribbean and parts of Asia as well as Russia. Across most of Africa the majority has an eye-wateringly brutal view of homosexuality (gays face long terms of imprisonment or worse in many southern and eastern African states). If you want to see robust attitudes to crime, sexuality, feminism, immigration and religious freedom go to somewhere like Sudan or Mauritania, Uganda or even Kenya and Jamaica.

The paradox is that the political discourse in nations such as these has been dominated by a leftish post-colonialism. The epitome of this paradox is, or was (attitudes have relaxed), Communist Cuba, where attitudes to gays, criminals, and people of non-European descent would have warmed the heart of a Mississippi Klansman.

Historical context: Homosexuality was illegal under Clement Attlee's 'left-wing' Labour government, but not under Margaret Thatcher's 'right-wing' Conservative administration

Paradox: In terms of social attitudes, Fidel Castro's communist Cuba was more 'right-wing' than Margaret Thatcher's Conservative administration

The correlation between left-wing views, liberal social attitudes and intelligence probably has a political significance only in advanced industrial societies where the values of the liberal enlightenment (a belief in freedom, fairness, reason, science, free trade, the rule of law, property rights and gentle commerce) govern society. It is probably true to say that in Britain, France, the US, Canada and so forth there is a correlation, and an interesting one, between intelligence and sexual liberalism and openness to people from a different culture and/or race. But these views can be held by some pretty stupid people as well (the politically correct anti-Christmas, coffee-with-milk, crazy-Islamist-welcoming brigade).

We probably need some new words. ‘Left’ and ‘Right’ have become so tarnished by a century of propaganda and ill-advised alliances that they have become almost meaningless. We have a notionally ‘right of centre’ government in the UK and yet in its historical and geographical context the Cameron administration must be one of the most ‘left-wing’ administrations in the history of humanity – a consequence of modernity as much as anything else (under Clement Attlee gays were imprisoned, under Thatcher they were not). Increasingly, traditional right-wing views (blatant racism, sexism and homophobia) are simply seen as beyond the pale. In the US the current crop of Republican candidates mostly come across to us as a bunch of swivel-eyed fruitcakes, but none of them, from Mitt Romney downwards, would express the view that ‘the only good Indian is a dead Indian’, which is what the historically revered future ‘liberal’ president Theodore Roosevelt wrote in 1886.

Liberalism is a function then not only of intelligence but of modernity. Illiberal, ‘stupid’ states such as Mauritania and Saudi Arabia are, quite literally, stuck in the past (even if their citizens are not individually stupid). Plenty of bright people hold illiberal views (attitudes to violent crime do not fall into convenient left-right camps) and a few dim people are impeccably enlightened. Increasingly, clever people hold a series of views that may be construed as ‘right’ or ‘left’ simultaneously. The challenge for the political parties is to find a way of reflecting this and representing this voice on the national level. And that will require some very clever thinking indeed.

SOURCE

Note:  I have a more extensive comment on the research concerned here

Wednesday, February 12, 2014


Is intelligence written in the genes?

The evidence keeps piling up. Many genes are now known to be involved, which reinforces my view that high IQ is just one aspect of general biological good functioning.

A gene which may make people more intelligent has been discovered by scientists.  Researchers have found that teenagers who had a highly functioning NPTN gene performed better in intelligence tests.

It is thought the NPTN gene indirectly affects how brain cells communicate and may control the formation of the cerebral cortex, the outermost layer of the human brain, also known as ‘grey matter.’ It has previously been shown that grey matter plays a key role in memory, attention, perceptual awareness, thought and language.

Studies have also shown that the thickness of the cerebral cortex correlates with intellectual ability. However, until now no genes underlying this link had been identified.

Teens with an underperforming NPTN gene did less well in intelligence tests.

Dr Sylvane DesriviĂšres, from King’s College London’s Institute of Psychiatry and lead author of the study, said: “We wanted to find out how structural differences in the brain relate to differences in intellectual ability.

“It’s important to point out that intelligence is influenced by many genetic and environmental factors. The gene we identified only explains a tiny proportion of the differences in intellectual ability.”

An international team of scientists, led by King’s, analysed DNA samples and MRI scans from 1,583 healthy 14-year-old teenagers.

The teenagers also underwent a series of tests to determine their verbal and non-verbal intelligence.

The researchers looked at over 54,000 genetic variants possibly involved in brain development.

They found that, on average, teenagers carrying a particular gene variant had a thinner cortex in the left cerebral hemisphere, particularly in the frontal and temporal lobes, and performed less well on tests for intellectual ability.

The genetic variation affects the expression of the NPTN gene, which encodes a protein acting at neuronal synapses and therefore affects how brain cells communicate.

Their findings suggest that some differences in intellectual abilities can result from the decreased function of the NPTN gene in particular regions of the left brain hemisphere.

The genetic variation identified in this study, however, only accounts for an estimated 0.5 per cent of the total variation in intelligence.

Nevertheless, the findings may have important implications for understanding the biological mechanisms underlying several psychiatric disorders, such as schizophrenia and autism, where impaired cognitive ability is a key feature of the disorder.

The study was published in Molecular Psychiatry

SOURCE