Wednesday, April 23, 2014


Yes, IQ Really Matters

Critics of the SAT and other standardized testing are disregarding the data.  Leftists hate it because it shows that all men are NOT equal

By David Z. Hambrick and Christopher Chabris writing in "Slate" (!)

The College Board—the standardized testing behemoth that develops and administers the SAT and other tests—has redesigned its flagship product again. Beginning in spring 2016, the writing section will be optional, the reading section will no longer test “obscure” vocabulary words, and the math section will put more emphasis on solving problems with real-world relevance. Overall, as the College Board explains on its website, “The redesigned SAT will more closely reflect the real work of college and career, where a flexible command of evidence—whether found in text or graphic [sic]—is more important than ever.” 

A number of pressures may be behind this redesign. Perhaps it’s competition from the ACT, or fear that unless the SAT is made to seem more relevant, more colleges will go the way of Wake Forest, Brandeis, and Sarah Lawrence and join the “test optional admissions movement,” which already boasts several hundred members. Or maybe it’s the wave of bad press that standardized testing, in general, has received over the past few years.

Critics of standardized testing are grabbing this opportunity to take their best shot at the SAT. They make two main arguments. The first is simply that a person’s SAT score is essentially meaningless—that it says nothing about whether that person will go on to succeed in college. Leon Botstein, president of Bard College and longtime standardized testing critic, wrote in Time that the SAT “needs to be abandoned and replaced,” and added:

"The blunt fact is that the SAT has never been a good predictor of academic achievement in college. High school grades adjusted to account for the curriculum and academic programs in the high school from which a student graduates are. The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated 20th century social scientific assumptions and strategies."

Calling use of SAT scores for college admissions a “national scandal,” Jennifer Finney Boylan, an English professor at Colby College, argued in the New York Times that:

"The only way to measure students’ potential is to look at the complex portrait of their lives: what their schools are like; how they’ve done in their courses; what they’ve chosen to study; what progress they’ve made over time; how they’ve reacted to adversity."

Along the same lines, Elizabeth Kolbert wrote in The New Yorker that “the SAT measures those skills—and really only those skills—necessary for the SATs.”

But this argument is wrong. The SAT does predict success in college—not perfectly, but relatively well, especially given that it takes just a few hours to administer. And, unlike a “complex portrait” of a student’s life, it can be scored in an objective way. (In a recent New York Times op-ed, the University of New Hampshire psychologist John D. Mayer aptly described the SAT’s validity as an “astonishing achievement.”)

In a study published in Psychological Science, University of Minnesota researchers Paul Sackett, Nathan Kuncel, and their colleagues investigated the relationship between SAT scores and college grades in a very large sample: nearly 150,000 students from 110 colleges and universities. SAT scores predicted first-year college GPA about as well as high school grades did, and the best prediction was achieved by considering both factors.

Botstein, Boylan, and Kolbert are either unaware of this directly relevant, easily accessible, and widely disseminated empirical evidence, or they have decided to ignore it and base their claims on intuition and anecdote—or perhaps on their beliefs about the way the world should be rather than the way it is. 

Furthermore, contrary to popular belief, it’s not just first-year college GPA that SAT scores predict. In a four-year study that started with nearly 3,000 college students, a team of Michigan State University researchers led by Neal Schmitt found that test score (SAT or ACT—whichever the student took) correlated strongly with cumulative GPA at the end of the fourth year. If the students were ranked on both their test scores and cumulative GPAs, those who had test scores in the top half (above the 50th percentile, or median) would have had a roughly two-thirds chance of having a cumulative GPA in the top half. By contrast, students with bottom-half SAT scores would have had only a one-third chance of making it into the top half in GPA.
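For a bivariate normal relationship, the link between a correlation coefficient and these top-half/bottom-half odds can be worked out directly. The sketch below is a toy illustration, not the Schmitt study's actual analysis: the correlation of .5 is an assumption, chosen because it is the value at which the analytic formula gives exactly the two-thirds figure quoted above.

```python
import numpy as np

def p_top_half_given_top_half(rho):
    """P(Y above its median | X above its median) for bivariate
    normal X, Y with correlation rho. This is a standard orthant
    probability result: 1/2 + arcsin(rho)/pi."""
    return 0.5 + np.arcsin(rho) / np.pi

rho = 0.5  # assumed test-score/GPA correlation (illustrative only)

# Analytic value: exactly 2/3 when rho = 0.5.
analytic = p_top_half_given_top_half(rho)

# Monte Carlo check with simulated "test scores" and "GPAs".
rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
top_x = x > np.median(x)
simulated = np.mean(y[top_x] > np.median(y))

print(round(analytic, 3), round(simulated, 3))  # both roughly 0.667
```

By symmetry, the same formula gives the one-third chance for students starting in the bottom half.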

Test scores also predicted whether the students graduated: A student who scored in the 95th percentile on the SAT or ACT was about 60 percent more likely to graduate than a student who scored in the 50th percentile. Similarly impressive evidence supports the validity of the SAT’s graduate school counterparts: the Graduate Record Examinations, the Law School Admissions Test, and the Graduate Management Admission Test. A 2007 Science article summed up the evidence succinctly: “Standardized admissions tests have positive and useful relationships with subsequent student accomplishments.”

SAT scores even predict success beyond the college years. For more than two decades, Vanderbilt University researchers David Lubinski, Camilla Benbow, and their colleagues have tracked the accomplishments of people who, as part of a youth talent search, scored in the top 1 percent on the SAT by age 13. Remarkably, even within this group of gifted students, higher scorers were not only more likely to earn advanced degrees but also more likely to succeed outside of academia. For example, compared with people who “only” scored in the top 1 percent, those who scored in the top one-tenth of 1 percent—the extremely gifted—were, as adults, more than twice as likely to have an annual income in the top 5 percent of Americans.

The second popular anti-SAT argument is that, if the test measures anything at all, it’s not cognitive skill but socioeconomic status. In other words, some kids do better than others on the SAT not because they’re smarter, but because their parents are rich. Boylan argued in her Times article that the SAT “favors the rich, who can afford preparatory crash courses” like those offered by Kaplan and the Princeton Review. Leon Botstein claimed in his Time article that “the only persistent statistical result from the SAT is the correlation between high income and high test scores.” And according to a Washington Post Wonkblog infographic (which is really more of a disinfographic) “your SAT score says more about your parents than about you.” 

It’s true that economic background correlates with SAT scores. Kids from well-off families tend to do better on the SAT. However, the correlation is far from perfect. In the University of Minnesota study of nearly 150,000 students, the correlation between socioeconomic status, or SES, and SAT was not trivial but not huge. (A perfect correlation has a value of 1; this one was .25.) What this means is that there are plenty of low-income students who get good scores on the SAT; there are even likely to be low-income students among those who achieve a perfect score on the SAT.

Thus, just as it was originally designed to do, the SAT in fact goes a long way toward leveling the playing field, giving students an opportunity to distinguish themselves regardless of their background. Scoring well on the SAT may in fact be the only such opportunity for students who graduate from public high schools that are regarded by college admissions offices as academically weak. In a letter to the editor, a reader of Elizabeth Kolbert’s New Yorker article on the SAT made this point well:

The SAT may be the bane of upper-middle-class parents trying to launch their children on a path to success. But sometimes one person’s obstacle is another person’s springboard. I am the daughter of a single, immigrant father who never attended college, and a good SAT score was one of the achievements that catapulted me into my state’s flagship university and, from there, on to medical school. Flawed though it is, the SAT afforded me, as it has thousands of others, a way to prove that a poor, public-school kid who never had any test prep can do just as well as, if not better than, her better-off peers.

The sort of admissions approach that Botstein advocates—adjusting high school GPA “to account for the curriculum and academic programs in the high school from which a student graduates” and abandoning the SAT—would do the opposite of leveling the playing field. A given high school GPA would be adjusted down for a poor, public-school kid, and adjusted up for a rich, private-school kid. 

Furthermore, contrary to what Boylan implies in her Times piece, “preparatory crash courses” don’t change SAT scores much. Research has consistently shown that prep courses have only a small effect on SAT scores—and a much smaller effect than test prep companies claim they do. For example, in one study of a random sample of more than 4,000 students, average improvement in overall score on the “old” SAT, which had a range from 400 to 1600, was no more than about 30 points.

Finally, it is clear that SES is not what accounts for the fact that SAT scores predict success in college. In the University of Minnesota study, the correlation between high school SAT and college GPA was virtually unchanged after the researchers statistically controlled for the influence of SES. If SAT scores were just a proxy for privilege, then putting SES into the mix should have removed, or at least dramatically decreased, the association between the SAT and college performance. But it didn’t. This is more evidence that Boylan overlooks or chooses to ignore. 
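The "statistical control" described here is typically a partial correlation: the SAT-GPA correlation recomputed after the linear influence of SES has been removed from both variables. A minimal sketch, using made-up values (the .50 SAT-GPA and .10 SES-GPA correlations are illustrative assumptions, not the study's numbers; only the .25 SES-SAT figure comes from the article), shows why a weak SES link barely moves the result:

```python
from math import sqrt

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y with z partialled out
    of both, via the standard partial-correlation formula."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_sat_gpa = 0.50  # assumed SAT-GPA correlation (illustrative)
r_ses_sat = 0.25  # SES-SAT correlation reported in the article
r_ses_gpa = 0.10  # assumed SES-GPA correlation (illustrative)

r_partial = partial_corr(r_sat_gpa, r_ses_sat, r_ses_gpa)
print(round(r_partial, 3))  # roughly 0.493, barely below 0.50
```

If the SAT really were mostly a proxy for privilege, both SES correlations would be large and the partial correlation would collapse toward zero; with a modest SES link, it hardly changes.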

What this all means is that the SAT measures something—some stable characteristic of high school students other than their parents’ income—that translates into success in college. And what could that characteristic be? General intelligence. The content of the SAT is practically indistinguishable from that of standardized intelligence tests that social scientists use to study individual differences, and that psychologists and psychiatrists use to determine whether a person is intellectually disabled—and even whether a person should be spared execution in states that have the death penalty. Scores on the SAT correlate very highly with scores on IQ tests—so highly that the Harvard education scholar Howard Gardner, known for his theory of multiple intelligences, once called the SAT and other scholastic measures “thinly disguised” intelligence tests. 

One could of course argue that IQ is also meaningless—and many have. For example, in his bestseller The Social Animal, David Brooks claimed that “once you get past some pretty obvious correlations (smart people make better mathematicians), there is a very loose relationship between IQ and life outcomes.” And in a recent Huffington Post article, psychologists Tracy Alloway and Ross Alloway wrote that

IQ won’t help you in the things that really matter: It won’t help you find happiness, it won’t help you make better decisions, and it won’t help you manage your kids’ homework and the accounts at the same time. It isn’t even that useful at its raison d'être: predicting success.

But this argument is wrong, too. Indeed, we know as well as anything we know in psychology that IQ predicts many different measures of success. Exhibit A is evidence from research on job performance by the University of Iowa industrial psychologist Frank Schmidt and his late colleague John Hunter. Synthesizing evidence from nearly a century of empirical studies, Schmidt and Hunter established that general mental ability—the psychological trait that IQ scores reflect—is the single best predictor of job training success, and that it accounts for differences in job performance even in workers with more than a decade of experience. It’s more predictive than interests, personality, reference checks, and interview performance. Smart people don’t just make better mathematicians, as Brooks observed—they make better managers, clerks, salespeople, service workers, vehicle operators, and soldiers.

IQ predicts other things that matter, too, like income, employment, health, and even longevity. In a 2001 study published in the British Medical Journal, Scottish researchers Lawrence Whalley and Ian Deary identified more than 2,000 people who had taken part in the Scottish Mental Survey of 1932, a nationwide assessment of IQ. Remarkably, people with high IQs at age 11 were considerably more likely to survive to old age than were people with lower IQs. For example, a person with an IQ of 100 (the average for the general population) was 21 percent more likely to live to age 76 than a person with an IQ of 85. And the relationship between IQ and longevity remains statistically significant even after taking SES into account. Perhaps IQ reflects the mental resources—the reasoning and problem-solving skills—that people can bring to bear on maintaining their health and making wise decisions throughout life. This explanation is supported by evidence that higher-IQ individuals engage in more positive health behaviors, such as deciding to quit smoking.

IQ is of course not the only factor that contributes to differences in outcomes like academic achievement and job performance (and longevity). Psychologists have known for many decades that certain personality traits also have an impact. One is conscientiousness, which reflects a person’s self-control, discipline, and thoroughness. People who are high in conscientiousness delay gratification to get their work done, finish tasks that they start, and are careful in their work, whereas people who are low in conscientiousness are impulsive, undependable, and careless (compare Lisa and Bart Simpson). The University of Pennsylvania psychologist Angela Duckworth has proposed a closely related characteristic that she calls “grit,” which she defines as a person’s “tendency to sustain interest in and effort toward very long-term goals,” like building a career or family.  

Duckworth has argued that such factors may be even more important as predictors of success than IQ. In one study, she and UPenn colleague Martin Seligman found that a measure of self-control collected at the start of eighth grade correlated more than twice as strongly with year-end grades as IQ did. However, the results of meta-analyses, which are more telling than the results of any individual study, indicate that these factors do not have a larger effect than IQ does on measures of academic achievement and job performance. So, while it seems clear that factors like conscientiousness—not to mention social skill, creativity, interest, and motivation—do influence success, they cannot take the place of IQ.

None of this is to say that IQ, whether measured with the SAT or a traditional intelligence test, is an indicator of value or worth. Nobody should be judged, negatively or positively, on the basis of a test score. A test score is a prediction, not a prophecy, and doesn’t say anything specific about what a person will or will not achieve in life. A high IQ doesn’t guarantee success, and a low IQ doesn’t guarantee failure. Furthermore, the fact that IQ is at present a powerful predictor of certain socially relevant outcomes doesn’t mean it always will be. If there were less variability in income—a smaller gap between the rich and the poor—then IQ would have a weaker correlation with income. For the same reason, if everyone received the same quality of health care, there would be a weaker correlation between IQ and health.

But the bottom line is that there are large, measurable differences among people in intellectual ability, and these differences have consequences for people’s lives. Ignoring these facts will only distract us from discovering and implementing wise policies.

Given everything that social scientists have learned about IQ and its broad predictive validity, it is reasonable to make it a factor in decisions such as whom to hire for a particular job or admit to a particular college or university. In fact, disregarding IQ—by admitting students to colleges or hiring people for jobs in which they are very likely to fail—is harmful both to individuals and to society. For example, in occupations where safety is paramount, employers could be incentivized to incorporate measures of cognitive ability into the recruitment process. Above all, the policies of public and private organizations should be based on evidence rather than ideology or wishful thinking.

SOURCE

Wednesday, March 12, 2014


Does a low IQ make you right-wing? That depends on how you define left and right

Michael Hanlon makes some interesting points below but overlooks the obvious:  People with high IQs are very much advantaged in the educational system and tend to stay in that system longer.  And particularly in the later years of education, the Leftist propaganda gets all but overwhelming.  So all that the research really shows is that an exposure to overwhelming Leftist propaganda does influence some people's thinking.  They adopt Leftist attitudes where they otherwise might not

So right-wingers are stupid – it’s official. Psychologists in Canada have compared IQ scores of several thousand British children, who were born in 1958 and 1970, with their stated views as adults on things such as treatment of criminals and openness to working with or living near to people of other races. They also looked at some US data which compared IQ scores with homophobic attitudes.

The conclusion: your intelligence as a child correlates strongly with socially liberal views. People with low IQs tend to be more in favour of harsh punishments, more homophobic and more likely to be racist. Interestingly, as these were IQ scores measured when young, this does seem to be a measure of something innate, not merely of exposure to ‘liberal’ views through education.

The inference is that what we call conservatism is a symptom of limited intellectual ability, signified by fear of the new and of outsiders, a retreat into tradition and tribal loyalty, and an unsophisticated disgust at sexual mores that deviate even slightly from the norm. Put bluntly, stupidity correlates with insecurity, hatred, pessimism and fear; intelligence with confidence, optimism and trust.

Cue howls of outrage and not just from the right. In fact, left-wingers, liberals, call them what you will (and as I will argue these terms are far from interchangeable) have maintained something of an embarrassed silence about this. Liberals tend to dislike talk of innate intelligence and are distrustful of IQ tests and any hints of biological determinism. It might suit them politically to say their opponents are dim, but (they like to think) they are too polite to say so.

So what is going on here? Are conservatives really, statistically and meaningfully, less intelligent than socialists? Or is the story more subtle?

In fact there is nothing new in pointing to a link between social attitudes and intelligence. In 2010 the evolutionary psychologist Satoshi Kanazawa, who works at the London School of Economics, analysed data from 20,000 young Americans and found that average IQ increased steadily from those who described themselves as ‘very conservative’ to those who described themselves as ‘very liberal’. A study looking at British children, carried out by Ian Deary, reached a conclusion neatly summarised by the title of the paper: 'Bright Children become Enlightened Adults'. Other studies have found correlations between strong religiosity (a traditional marker of conservatism) and low intelligence.

Are socialists really more intelligent than conservatives? That depends how you define your terms

So case closed? Not really. The problem here is how we define ‘left’ and ‘right’ thinking, and what this means socially and politically. A moment’s thought shows that the faultlines are not only blurred but legion, criss-crossing traditional political strata and shifting over time.

As Steven Pinker points out in The Better Angels of our Nature, his marvellous book about the history of violence, social liberalism does not necessarily equate with economic socialism. He points to a study by Bryan Caplan, an economist at George Mason University in Virginia, who found that smart people tend to think like economists, being in favour of free trade, globalisation and free markets and against protectionism and state intervention in industry. This matches other findings that show that IQ correlates not with left-wing thinking as such, but with classic Enlightenment liberalism.

So a smart person (all else being equal) will probably be in favour of capitalism generally, and free-trade in particular. He or she will distrust state intervention in the markets, probably be suspicious of welfarism and deeply dislike protectionism, union closed-shops and tariffs. The smart person will believe that the have-nots should be encouraged to become haves by dint of their own labours and by the levelling of economic playing fields, NOT by taking money off the haves and giving it to them. In other words, Thatcherism. Hardly something we equate with the left.

But there is another side to what the Smarts believe. They are pro-immigration (immigration being a form of free trade, in this case in human labour). They are impeccably socially liberal. They do not care what consenting adults get up to in bed and would legalise gay marriage without a thought. They are as near as is possible to be colour blind and strongly favour sexual equality. They are internationalist and despise petty nationalism. And they are suspicious of the war on drugs and in fact of wars in general and do not believe the public should in general be allowed to own firearms. These are the social views, then, of the British metropolitan Left. So what is it then? Are dim people right or left? Here we meet the problem of defining liberalism and left-wingery.

A belief in economic redistribution of wealth does not correlate with social liberalism. The nations of the Cold War Communist bloc were ferociously ‘Left Wing’ in terms of a belief in statism, nationalised industries, basic equality and so forth, but socially and in other ways they were far, far to the ‘right’ of any mainstream European or American party. The Soviet education system was brutally elitist – no wishy-washy mixed-ability nonsense there. Militarism and conscription were the norm. Communist states had an attachment to capital punishment, repressed homosexuals and paid only lip service to sexual equality (Russian women were free to work, but they had to go back and do the cleaning and cooking when they had finished).

In today’s world the most ‘right wing’ attitudes are to be found not in the American Bible Belt but in sub-Saharan Africa, the Caribbean and parts of Asia as well as Russia. Across most of Africa the majority has an eye-wateringly brutal view of homosexuality (gays face long terms of imprisonment or worse in many southern and eastern African states). If you want to see robust attitudes to crime, sexuality, feminism, immigration and religious freedom go to somewhere like Sudan or Mauritania, Uganda or even Kenya and Jamaica.

The paradox is that the political discourse in nations such as these has been dominated by a leftish post-colonialism. The epitome of this paradox is, or was (attitudes have relaxed), Communist Cuba, where attitudes to gays, criminals, and people of non-European descent would have warmed the heart of a Mississippi Klansman.

Historical context: Homosexuality was illegal under Clement Attlee's 'left-wing' Labour government, but not under Margaret Thatcher's 'right-wing' Conservative administration

Paradox: In terms of social attitudes, Fidel Castro's communist Cuba was more 'right-wing' than Margaret Thatcher's Conservative administration

The correlation between left-wing views, liberal social attitudes and intelligence probably has a political significance only in advanced industrial societies where the values of the liberal enlightenment (a belief in freedom, fairness, reason, science, free trade, the rule of law, property rights and gentle commerce) govern society. It is probably true to say that in Britain, France, the US, Canada and so forth there is a correlation, and an interesting one, between intelligence and sexual liberalism and openness to people from a different culture and/or race. But these views can be held by some pretty stupid people as well (the politically correct anti-christmas, coffee-with-milk, crazy-islamist-welcoming brigade).

We probably need some new words. ‘Left’ and ‘Right’ have become so tarnished by a century of propaganda and ill-advised alliances that they have become almost meaningless. We have a notionally ‘right of centre’ government in the UK and yet in its historical and geographical context the Cameron administration must be one of the most ‘left-wing’ administrations in the history of humanity – a consequence of modernity as much as anything else (under Clement Attlee gays were imprisoned, under Thatcher they were not). Increasingly, traditional right-wing views (blatant racism, sexism and homophobia) are simply seen as beyond the pale. In the US the current crop of Republican candidates mostly come across as a bunch of swivel-eyed fruitcakes to us, but none of them, from Mitt Romney downwards, would express the view that ‘the only good Indian is a dead Indian’ which is what the historically revered future ‘liberal’ president, Theodore Roosevelt wrote in 1886.

Liberalism is a function then not only of intelligence but of modernity. Illiberal, ‘stupid’ states such as Mauritania and Saudi Arabia are, quite literally, stuck in the past (even if their citizens are not individually stupid). Plenty of bright people hold illiberal views (attitudes to violent crime do not fall into convenient left-right camps) and a few dim people are impeccably enlightened. Increasingly, clever people hold a series of views that may be construed as ‘right’ or ‘left’ simultaneously. The challenge for the political parties is to find a way of reflecting this and representing this voice on the national level. And that will require some very clever thinking indeed.

SOURCE

Note:  I have a more extensive comment on the research concerned here

Wednesday, February 12, 2014


Is intelligence written in the genes?

The evidence keeps piling up.  Many genes are now known to be involved, which reinforces my view that high IQ is just one aspect of general biological good functioning

A gene which may make people more intelligent has been discovered by scientists.  Researchers have found that teenagers who had a highly functioning NPTN gene performed better in intelligence tests.

It is thought the NPTN gene indirectly affects how the brain cells communicate and may control the formation of the cerebral cortex, the outermost layer of the human brain, also known as ‘grey matter.’  Previously it has been shown that grey matter plays a key role in memory, attention, perceptual awareness, thought and language.

Studies have also proved that the thickness of the cerebral cortex correlates with intellectual ability. However, until now no genes had been identified.

Teens with an underperforming NPTN gene did less well in intelligence tests.

Dr Sylvane Desrivières, from King’s College London’s Institute of Psychiatry and lead author of the study, said: “We wanted to find out how structural differences in the brain relate to differences in intellectual ability.

“It’s important to point out that intelligence is influenced by many genetic and environmental factors. “The gene we identified only explains a tiny proportion of the differences in intellectual ability.”

An international team of scientists, led by King’s, analysed DNA samples and MRI scans from 1,583 healthy 14 year old teenagers.

The teenagers also underwent a series of tests to determine their verbal and non-verbal intelligence.

The researchers looked at over 54,000 genetic variants possibly involved in brain development.

They found that, on average, teenagers carrying a particular gene variant had a thinner cortex in the left cerebral hemisphere, particularly in the frontal and temporal lobes, and performed less well on tests for intellectual ability.

The genetic variation affects the expression of the NPTN gene, which encodes a protein acting at neuronal synapses and therefore affects how brain cells communicate.

Their findings suggest that some differences in intellectual abilities can result from the decreased function of the NPTN gene in particular regions of the left brain hemisphere.

The genetic variation identified in this study accounts for only an estimated 0.5 per cent of the total variation in intelligence. However, the findings may have important implications for understanding the biological mechanisms underlying psychiatric disorders such as schizophrenia and autism, in which impaired cognitive ability is a key feature.

The study was published in Molecular Psychiatry.

SOURCE

Wednesday, January 15, 2014



Main genes for IQ now isolated

This is much sooner than anyone expected. The .90 correlation between a gene set and IQ mentioned below is historic.  Correlations don't get much better than that in psychology.  The IQ deniers have always looked pretty silly in the light of the evidence but I cannot see that they have any room to move now at all  -- JR

Factor Analysis of Population Allele Frequencies as a Simple, Novel Method of Detecting Signals of Recent Polygenic Selection: The Example of Educational Attainment and IQ

Davide Piffer, Interdisciplinary Bio Central, November 27, 2013

Synopsis

Weak widespread (polygenic) selection is a mechanism that acts on multiple SNPs simultaneously. The aim of this paper is to suggest a methodology to detect signals of polygenic selection using educational attainment as an example. Educational attainment is a polygenic phenotype, influenced by many genetic variants with small effects. Frequencies of 10 SNPs found to be associated with educational attainment in a recent genome-wide association study were obtained from HapMap, 1000 Genomes and ALFRED. Factor analysis showed that they are strongly statistically associated at the population level, and the resulting factor score was highly related to average population IQ (r=0.90). Moreover, allele frequencies were positively correlated with aggregate measures of educational attainment in the population, average IQ, and with two intelligence increasing alleles that had been identified in different studies. This paper provides a simple method for detecting signals of polygenic selection on genes with overlapping phenotypes but located on different chromosomes. The method is therefore different from traditional estimations of linkage disequilibrium. This method can also be used as a tool in gene discovery, potentially decreasing the number of SNPs that are included in a genome-wide association study, reducing the multiple-testing problem and required sample sizes and consequently, financial costs.
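The core computation, extracting a single factor from population-level allele frequencies and correlating the factor scores with an external population variable, can be sketched as follows. The data here are randomly generated toy numbers, not Piffer's SNP frequencies, and the first principal component stands in for the factor score:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: allele frequencies for 10 SNPs across 8 populations,
# generated so that the SNPs share one latent dimension (mimicking
# weak polygenic selection acting on all of them at once).
latent = rng.normal(size=8)
freqs = 0.5 + 0.1 * np.outer(latent, np.ones(10)) \
        + 0.02 * rng.normal(size=(8, 10))

# Standardize each SNP column, then take the first principal
# component as a stand-in factor score for each population.
z = (freqs - freqs.mean(axis=0)) / freqs.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
factor_score = z @ vt[0]

# Correlate the factor score with an external population-level
# variable; here that variable is the latent dimension itself,
# so the correlation should be near 1 in absolute value
# (the sign of a principal component is arbitrary).
r = np.corrcoef(factor_score, latent)[0, 1]
print(round(abs(r), 2))
```

With real data the external variable would be measured population IQ or educational attainment; this sketch only shows the mechanics of pulling a shared factor out of frequencies on different chromosomes.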

SOURCE

Sunday, November 10, 2013



The Flynn effect

The Flynn effect is the fact that average IQ scores throughout the world rose substantially throughout the 20th century.  The scores for both blacks and whites rose but the gap between the two remained essentially the same.

The effect has been something of a puzzle.  Why did it happen?  There are probably a number of processes causing it  -- processes which could be broadly grouped as "modernization".  An interesting part of the effect is that scores on the subtests that load most highly on 'g' (the general factor) have changed least.  This suggests that scores on a perfectly g-loaded test would hardly have changed at all.

A new researcher has fastened onto that fact and looked at what characterizes high-'g' and low-'g' subtests.  He finds that the subtests which have shown the biggest change are those where a small set of strategies allows you to answer most of the items successfully.

And that ties in with an explanation commonly given for the Flynn effect -- that the ever-rising number of years spent in the educational system gives students more and more practice at using test-answering strategies.  And they can apply some of those strategies to IQ tests too. So education increases scores on the least central question types.  On items that strategies cannot help you answer (such as those testing how many hard words you know) there has been virtually no change over the years.

So education has now been fairly conclusively identified as the main cause of the rising scores, and at the same time the rising scores have been shown not to reflect a real rise in underlying abilities.

Steve Sailer has the details

Wednesday, October 23, 2013



Stereotype threat

Putting it bluntly, stereotype threat is an invented process to explain why blacks do poorly on IQ tests.  If blacks know that they are expected to do badly, they allegedly get all anxious and do even worse than they otherwise would.  But shouldn't the knowledge that they are expected to do badly energize them and make them try harder -- just to prove the stereotype wrong?  I would have thought so, but I am not a Leftist.

I have had a bit of a laugh at the theory before (e.g. here) and also see here

The theory has also been used to explain away the fact that women on average do badly on mathematical tasks (those nervous ladies!), and there has recently been some interesting work suggesting that the theory is wrong in that field too.  Steve Sailer summarizes:


"Although the social sciences are considered a bastion of progressivism, it's remarkable how few data-driven ideas they generate in support of their ideology. We can get a feel for this by noting how rare are the "exceptions to the rule" studies that become immensely popular due to bolstering the dominant worldview, such as Hart & Risley's finding that black people don't talk enough and Claude Steele's little study of Stereotype Threat in which he induces black students at Stanford to score lower on a low stakes test of his devising than their high stakes SAT scores would predict. (I wrote about Stereotype Threat in VDARE.com in 2004, suggesting it's not hard to get across the message to black or female students that the professor wants them to not exert themselves fully on this meaningless test. That you can "prime" groups of people to work less hard on an unimportant test does not prove that you know how to make them score higher on an important test.)

Lately, the evidence has been mounting that the existence of Stereotype Threat is quite dependent upon the file drawer function: studies finding its existence are quickly published while studies not finding its existence are in much less demand. A recent article:

An Examination of Stereotype Threat Effects on Girls' Mathematics Performance

By Colleen M. Ganley et al.

... Conclusion

Taken together, the findings from published research, unpublished articles, and the present studies reveal inconsistency in the effects of stereotype threat on girls’ mathematics performance. The discrepancy in results from published and unpublished studies suggests publication bias, which may create an inaccurate picture of the phenomenon. A recent review suggests that this publication bias may also be an issue in the literature on stereotype threat in adult women (Stoet & Geary, 2012). Overall, these results raise the possibility that stereotype threat may not be the cause of gender differences in mathematics performance prior to college. Although we feel that more nuanced research needs to be done to truly understand whether stereotype threat impacts girls’ mathematics performance, we also believe that too much focus on this one explanation may deter researchers from investigating other key factors that may be involved in gender differences in mathematics performance. For example, there are a number of factors (e.g., mathematics anxiety, mathematics interest, spatial skills; see Ceci & Williams, 2010) that have been shown to be consistently related to mathematics performance and mathematics-and science-related career choices and may warrant more research attention than does stereotype threat."

SOURCE

Monday, October 21, 2013



Kees Jan can't

Kees-Jan Kan, a young Dutchman, has recently rediscovered one of the most basic facts of IQ testing: that it is easiest to detect IQ differences if the people you are studying (Ss) have a common background.  So if the Ss are all in the same class at school, for instance, a vocabulary test (finding out how many hard words they know) will give you a quick and easy way to sort them out.  And you will find that the guys who know lots of words are also good at a whole range of puzzles, even mathematical ones.

So a common background optimizes your chances of assessing IQ accurately. And, to be a bit technical, vocab loads highly on 'g' (the general factor in intelligence), meaning that, where it can be used, it is a powerful predictor of other abilities.  Vocab is, however, convenient rather than essential in IQ measurement.  Tests designed for use among people who do not have a common background (such as the Raven Progressive Matrices) don't use it but still work perfectly well.

On those basic facts, KJK has erected an elaborate theory which comes to the conclusion that IQ is mostly cultural, with a genetic component much smaller than is generally thought.  And it is the cultural part which is hereditary.

To arrive at that, KJK goes via the concept of the "cultural load" of each IQ question -- which he assesses by looking at how often a question has to be altered when you are administering it to a new and different population.  And he finds that by removing (statistically) the influence of cultural load, all the other correlations are much reduced.
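"Removing (statistically) the influence of cultural load" amounts to computing a partial correlation: regress the load variable out of both quantities and correlate the residuals. A minimal numpy sketch, using made-up numbers for 11 subtests (not KJK's actual Table 3.1 figures):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both."""
    design = np.column_stack([np.ones_like(z), z])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical numbers for 11 subtests (NOT from KJK's dissertation):
# cultural load drives both g-loading and heritability here, so the
# raw correlation between them is largely an artifact of the load.
rng = np.random.default_rng(1)
cultural_load = rng.uniform(0, 1, 11)       # e.g. fraction of items altered
g_loading = 0.5 + 0.4 * cultural_load + 0.05 * rng.normal(size=11)
heritability = 0.4 + 0.4 * cultural_load + 0.05 * rng.normal(size=11)

raw = np.corrcoef(g_loading, heritability)[0, 1]
adj = partial_corr(g_loading, heritability, cultural_load)
print(f"raw r = {raw:.2f}, partial r = {adj:.2f}")
```

When a single variable drives both measures, partialling it out typically collapses the correlation -- which is the statistical move behind KJK's "much reduced" correlations.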

When we look more closely at his data, however (e.g. Table 3.1 in KJK's doctoral dissertation), we find that only two out of the 11 question types have a high cultural load: vocab and general knowledge.  And the cultural dependency of those two question types has been obvious to everyone since the year dot.

What is interesting, however, is that the remaining nine question types have low to negligible cultural loads.  In other words, we could remove the vocab and knowledge subtests from the overall test and still have a robust test.  So my conclusion is that what KJK should have done from the beginning was to remove those two flawed item types from his calculations altogether.  Once you do that, all his exciting findings melt away.  His findings rely on items that he himself knows to be flawed.

There is a summary of KJK's dissertation at The Unscientific American -- JR