Friday, April 25, 2014
Charles Murray on allegations of racism
Since the flap about Paul Ryan’s remarks last week, elements of the blogosphere, and now Paul Krugman in The New York Times, have stated that I tried to prove the genetic inferiority of blacks in The Bell Curve.
The position that Richard Herrnstein and I took about the role of race, IQ and genes in The Bell Curve is contained in a single paragraph in an 800-page book. It is found on page 311, and consists in its entirety of the following text:
If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not justify an estimate.
That’s it. The four pages following that quote argue that the hysteria about race and genes is misplaced. I think our concluding paragraph (page 315) is important enough to repeat here:
In sum: If tomorrow you knew beyond a shadow of a doubt that all the cognitive differences between races were 100 percent genetic in origin, nothing of any significance should change. The knowledge would give you no reason to treat individuals differently than if ethnic differences were 100 percent environmental. By the same token, knowing that the differences are 100 percent environmental in origin would not suggest a single program or policy that is not already being tried. It would justify no optimism about the time it will take to narrow the existing gaps. It would not even justify confidence that genetically based differences will not be upon us within a few generations. The impulse to think that environmental sources of differences are less threatening than genetic ones is natural but illusory.
Our sin was to openly discuss the issue, not to advocate a position. But for the last 40 years, that’s been sin enough.
I’ll be happy to respond at more length to allegations of racism made by anyone who can buttress them with a direct quote from anything I’ve written. I’ll leave you with this thought: in all the critiques of The Bell Curve in particular and my work more generally, no one ever accompanies their charges with direct quotes of what I’ve actually said. There’s a reason for that.
SOURCE
Wednesday, April 23, 2014
Yes, IQ Really Matters
Critics of the SAT and other standardized testing are disregarding the data. Leftists hate it because it shows that all men are NOT equal
By David Z. Hambrick and Christopher Chabris writing in "Slate" (!)
The College Board—the standardized testing behemoth that develops and administers the SAT and other tests—has redesigned its flagship product again. Beginning in spring 2016, the writing section will be optional, the reading section will no longer test “obscure” vocabulary words, and the math section will put more emphasis on solving problems with real-world relevance. Overall, as the College Board explains on its website, “The redesigned SAT will more closely reflect the real work of college and career, where a flexible command of evidence—whether found in text or graphic [sic]—is more important than ever.”
A number of pressures may be behind this redesign. Perhaps it’s competition from the ACT, or fear that unless the SAT is made to seem more relevant, more colleges will go the way of Wake Forest, Brandeis, and Sarah Lawrence and join the “test optional admissions movement,” which already boasts several hundred members. Or maybe it’s the wave of bad press that standardized testing, in general, has received over the past few years.
Critics of standardized testing are grabbing this opportunity to take their best shot at the SAT. They make two main arguments. The first is simply that a person’s SAT score is essentially meaningless—that it says nothing about whether that person will go on to succeed in college. Leon Botstein, president of Bard College and longtime standardized testing critic, wrote in Time that the SAT “needs to be abandoned and replaced,” and added:
"The blunt fact is that the SAT has never been a good predictor of academic achievement in college. High school grades adjusted to account for the curriculum and academic programs in the high school from which a student graduates are. The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated 20th century social scientific assumptions and strategies."
Calling use of SAT scores for college admissions a “national scandal,” Jennifer Finney Boylan, an English professor at Colby College, argued in the New York Times that:
"The only way to measure students’ potential is to look at the complex portrait of their lives: what their schools are like; how they’ve done in their courses; what they’ve chosen to study; what progress they’ve made over time; how they’ve reacted to adversity.
Along the same lines, Elizabeth Kolbert wrote in The New Yorker that “the SAT measures those skills—and really only those skills—necessary for the SATs.”
But this argument is wrong. The SAT does predict success in college—not perfectly, but relatively well, especially given that it takes just a few hours to administer. And, unlike a “complex portrait” of a student’s life, it can be scored in an objective way. (In a recent New York Times op-ed, the University of New Hampshire psychologist John D. Mayer aptly described the SAT’s validity as an “astonishing achievement.”)
In a study published in Psychological Science, University of Minnesota researchers Paul Sackett, Nathan Kuncel, and their colleagues investigated the relationship between SAT scores and college grades in a very large sample: nearly 150,000 students from 110 colleges and universities. SAT scores predicted first-year college GPA about as well as high school grades did, and the best prediction was achieved by considering both factors.
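To make the point about combining predictors concrete, here is a minimal sketch of a two-predictor least-squares model. It runs on synthetic data in which test scores and high school grades are each noisy readings of one underlying ability; it is an illustration, not the Sackett and Kuncel analysis, and every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One underlying "ability" drives both predictors and the outcome (toy model).
ability = rng.normal(size=n)
sat = ability + rng.normal(size=n)      # noisy measure no. 1
hs_gpa = ability + rng.normal(size=n)   # noisy measure no. 2
col_gpa = ability + rng.normal(size=n)  # first-year outcome

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

# Least-squares fit using both predictors together.
X = np.column_stack([np.ones(n), sat, hs_gpa])
beta, *_ = np.linalg.lstsq(X, col_gpa, rcond=None)
combined = X @ beta

print(f"SAT alone:     r = {r(sat, col_gpa):.2f}")       # ~0.50
print(f"HS GPA alone:  r = {r(hs_gpa, col_gpa):.2f}")    # ~0.50
print(f"Both together: r = {r(combined, col_gpa):.2f}")  # ~0.58, best of the three
```

Because each predictor carries some independent information about the outcome, the combination predicts better than either alone, which is the pattern the study reports.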
Botstein, Boylan, and Kolbert are either unaware of this directly relevant, easily accessible, and widely disseminated empirical evidence, or they have decided to ignore it and base their claims on intuition and anecdote—or perhaps on their beliefs about the way the world should be rather than the way it is.
Furthermore, contrary to popular belief, it’s not just first-year college GPA that SAT scores predict. In a four-year study that started with nearly 3,000 college students, a team of Michigan State University researchers led by Neal Schmitt found that test score (SAT or ACT—whichever the student took) correlated strongly with cumulative GPA at the end of the fourth year. If the students were ranked on both their test scores and cumulative GPAs, those who had test scores in the top half (above the 50th percentile, or median) would have had a roughly two-thirds chance of having a cumulative GPA in the top half. By contrast, students with bottom-half SAT scores would have had only a one-in-three chance of making it to the top half in GPA.
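Those two-thirds and one-third figures are exactly what a correlation of about .5 implies for two bell-curve variables split at their medians. The study reports the chances rather than the correlation, so the .5 below is a back-calculated assumption, not a number from the paper:

```python
import numpy as np

# For two jointly normal variables with correlation r, Sheppard's formula gives
# P(both above their medians) = 1/4 + arcsin(r) / (2*pi), so
# P(top-half GPA | top-half score) = 1/2 + arcsin(r) / pi.
def top_half_given_top_half(r):
    return 0.5 + np.arcsin(r) / np.pi

print(top_half_given_top_half(0.5))  # 0.666..., the two-thirds figure;
                                     # bottom-half scorers get 1 - 2/3 = 1/3

# Monte Carlo check at r = 0.5.
rng = np.random.default_rng(1)
score = rng.normal(size=1_000_000)
gpa = 0.5 * score + np.sqrt(1 - 0.5**2) * rng.normal(size=1_000_000)
top = score > np.median(score)
print((gpa[top] > np.median(gpa)).mean())  # ~0.667
```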
Test scores also predicted whether the students graduated: A student who scored in the 95th percentile on the SAT or ACT was about 60 percent more likely to graduate than a student who scored in the 50th percentile. Similarly impressive evidence supports the validity of the SAT’s graduate school counterparts: the Graduate Record Examinations, the Law School Admission Test, and the Graduate Management Admission Test. A 2007 Science article summed up the evidence succinctly: “Standardized admissions tests have positive and useful relationships with subsequent student accomplishments.”
SAT scores even predict success beyond the college years. For more than two decades, Vanderbilt University researchers David Lubinski, Camilla Benbow, and their colleagues have tracked the accomplishments of people who, as part of a youth talent search, scored in the top 1 percent on the SAT by age 13. Remarkably, even within this group of gifted students, higher scorers were not only more likely to earn advanced degrees but also more likely to succeed outside of academia. For example, compared with people who “only” scored in the top 1 percent, those who scored in the top one-tenth of 1 percent—the extremely gifted—were, as adults, more than twice as likely to have an annual income in the top 5 percent of Americans.
The second popular anti-SAT argument is that, if the test measures anything at all, it’s not cognitive skill but socioeconomic status. In other words, some kids do better than others on the SAT not because they’re smarter, but because their parents are rich. Boylan argued in her Times article that the SAT “favors the rich, who can afford preparatory crash courses” like those offered by Kaplan and the Princeton Review. Leon Botstein claimed in his Time article that “the only persistent statistical result from the SAT is the correlation between high income and high test scores.” And according to a Washington Post Wonkblog infographic (which is really more of a disinfographic) “your SAT score says more about your parents than about you.”
It’s true that economic background correlates with SAT scores. Kids from well-off families tend to do better on the SAT. However, the correlation is far from perfect. In the University of Minnesota study of nearly 150,000 students, the correlation between socioeconomic status, or SES, and SAT was not trivial but not huge. (A perfect correlation has a value of 1; this one was .25.) What this means is that there are plenty of low-income students who get good scores on the SAT; there are even likely to be low-income students among those who achieve a perfect score on the SAT.
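A quick simulation shows what a .25 correlation looks like in practice. It treats SES and SAT as jointly normal, which is a simplification, and is meant only as an illustration:

```python
import numpy as np

# SES and SAT as standard normal variables correlated at .25 (the study's value).
rng = np.random.default_rng(2)
n = 1_000_000
ses = rng.normal(size=n)
sat = 0.25 * ses + np.sqrt(1 - 0.25**2) * rng.normal(size=n)

top1 = sat > np.quantile(sat, 0.99)  # top 1 percent of SAT scores
low_ses = ses < np.median(ses)       # below-median SES
print(f"Top-1% scorers from below-median SES: {low_ses[top1].mean():.0%}")
# Prints roughly 25% in this simulation: about one very high scorer in four
# comes from the bottom half of the SES distribution.
```

Even among the very highest scorers, a substantial minority come from the bottom half of the SES distribution; that is what “far from perfect” means here.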
Thus, just as it was originally designed to do, the SAT in fact goes a long way toward leveling the playing field, giving students an opportunity to distinguish themselves regardless of their background. Scoring well on the SAT may in fact be the only such opportunity for students who graduate from public high schools that are regarded by college admissions offices as academically weak. In a letter to the editor, a reader of Elizabeth Kolbert’s New Yorker article on the SAT made this point well:
The SAT may be the bane of upper-middle-class parents trying to launch their children on a path to success. But sometimes one person’s obstacle is another person’s springboard. I am the daughter of a single, immigrant father who never attended college, and a good SAT score was one of the achievements that catapulted me into my state’s flagship university and, from there, on to medical school. Flawed though it is, the SAT afforded me, as it has thousands of others, a way to prove that a poor, public-school kid who never had any test prep can do just as well as, if not better than, her better-off peers.
The sort of admissions approach that Botstein advocates—adjusting high school GPA “to account for the curriculum and academic programs in the high school from which a student graduates” and abandoning the SAT—would do the opposite of leveling the playing field. A given high school GPA would be adjusted down for a poor, public-school kid, and adjusted up for a rich, private-school kid.
Furthermore, contrary to what Boylan implies in her Times piece, “preparatory crash courses” don’t change SAT scores much. Research has consistently shown that prep courses have only a small effect on SAT scores—and a much smaller effect than test prep companies claim they do. For example, in one study of a random sample of more than 4,000 students, average improvement in overall score on the “old” SAT, which had a range from 400 to 1600, was no more than about 30 points.
Finally, it is clear that SES is not what accounts for the fact that SAT scores predict success in college. In the University of Minnesota study, the correlation between high school SAT and college GPA was virtually unchanged after the researchers statistically controlled for the influence of SES. If SAT scores were just a proxy for privilege, then putting SES into the mix should have removed, or at least dramatically decreased, the association between the SAT and college performance. But it didn’t. This is more evidence that Boylan overlooks or chooses to ignore.
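“Statistically controlling” here means computing a partial correlation: correlating SAT scores with college GPA after regressing SES out of both. The sketch below runs that calculation on synthetic data wired to mimic the reported pattern; the .5 correlation between SAT and GPA is invented for illustration, and only the .25 comes from the study:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlate x and y after regressing z out of both."""
    Z = np.column_stack([np.ones_like(z), z])
    resid = lambda v: v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]
    return np.corrcoef(resid(x), resid(y))[0, 1]

rng = np.random.default_rng(3)
n = 100_000
ses = rng.normal(size=n)
sat = 0.25 * ses + np.sqrt(1 - 0.25**2) * rng.normal(size=n)
gpa = 0.5 * sat + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

print(f"raw r(SAT, GPA):               {np.corrcoef(sat, gpa)[0, 1]:.2f}")  # ~0.50
print(f"partial r(SAT, GPA) given SES: {partial_corr(sat, gpa, ses):.2f}")  # ~0.49
```

If SAT scores were merely a proxy for SES, the partial correlation would collapse toward zero; in the actual study, as in this toy setup, it barely moves.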
What this all means is that the SAT measures something—some stable characteristic of high school students other than their parents’ income—that translates into success in college. And what could that characteristic be? General intelligence. The content of the SAT is practically indistinguishable from that of standardized intelligence tests that social scientists use to study individual differences, and that psychologists and psychiatrists use to determine whether a person is intellectually disabled—and even whether a person should be spared execution in states that have the death penalty. Scores on the SAT correlate very highly with scores on IQ tests—so highly that the Harvard education scholar Howard Gardner, known for his theory of multiple intelligences, once called the SAT and other scholastic measures “thinly disguised” intelligence tests.
One could of course argue that IQ is also meaningless—and many have. For example, in his bestseller The Social Animal, David Brooks claimed that “once you get past some pretty obvious correlations (smart people make better mathematicians), there is a very loose relationship between IQ and life outcomes.” And in a recent Huffington Post article, psychologists Tracy Alloway and Ross Alloway wrote that
IQ won’t help you in the things that really matter: It won’t help you find happiness, it won’t help you make better decisions, and it won’t help you manage your kids’ homework and the accounts at the same time. It isn’t even that useful at its raison d'ĂȘtre: predicting success.
But this argument is wrong, too. Indeed, we know as well as anything we know in psychology that IQ predicts many different measures of success. Exhibit A is evidence from research on job performance by the University of Iowa industrial psychologist Frank Schmidt and his late colleague John Hunter. Synthesizing evidence from nearly a century of empirical studies, Schmidt and Hunter established that general mental ability—the psychological trait that IQ scores reflect—is the single best predictor of job training success, and that it accounts for differences in job performance even in workers with more than a decade of experience. It’s more predictive than interests, personality, reference checks, and interview performance. Smart people don’t just make better mathematicians, as Brooks observed—they make better managers, clerks, salespeople, service workers, vehicle operators, and soldiers.
IQ predicts other things that matter, too, like income, employment, health, and even longevity. In a 2001 study published in the British Medical Journal, Scottish researchers Lawrence Whalley and Ian Deary identified more than 2,000 people who had taken part in the Scottish Mental Survey of 1932, a nationwide assessment of IQ. Remarkably, people with high IQs at age 11 were considerably more likely to survive to old age than were people with lower IQs. For example, a person with an IQ of 100 (the average for the general population) was 21 percent more likely to live to age 76 than a person with an IQ of 85. And the relationship between IQ and longevity remains statistically significant even after taking SES into account. Perhaps IQ reflects the mental resources—the reasoning and problem-solving skills—that people can bring to bear on maintaining their health and making wise decisions throughout life. This explanation is supported by evidence that higher-IQ individuals engage in more positive health behaviors, such as deciding to quit smoking.
IQ is of course not the only factor that contributes to differences in outcomes like academic achievement and job performance (and longevity). Psychologists have known for many decades that certain personality traits also have an impact. One is conscientiousness, which reflects a person’s self-control, discipline, and thoroughness. People who are high in conscientiousness delay gratification to get their work done, finish tasks that they start, and are careful in their work, whereas people who are low in conscientiousness are impulsive, undependable, and careless (compare Lisa and Bart Simpson). The University of Pennsylvania psychologist Angela Duckworth has proposed a closely related characteristic that she calls “grit,” which she defines as a person’s “tendency to sustain interest in and effort toward very long-term goals,” like building a career or family.
Duckworth has argued that such factors may be even more important as predictors of success than IQ. In one study, she and UPenn colleague Martin Seligman found that a measure of self-control collected at the start of eighth grade correlated more than twice as strongly with year-end grades as IQ did. However, the results of meta-analyses, which are more telling than the results of any individual study, indicate that these factors do not have a larger effect than IQ does on measures of academic achievement and job performance. So, while it seems clear that factors like conscientiousness—not to mention social skill, creativity, interest, and motivation—do influence success, they cannot take the place of IQ.
None of this is to say that IQ, whether measured with the SAT or a traditional intelligence test, is an indicator of value or worth. Nobody should be judged, negatively or positively, on the basis of a test score. A test score is a prediction, not a prophecy, and doesn’t say anything specific about what a person will or will not achieve in life. A high IQ doesn’t guarantee success, and a low IQ doesn’t guarantee failure. Furthermore, the fact that IQ is at present a powerful predictor of certain socially relevant outcomes doesn’t mean it always will be. If there were less variability in income—a smaller gap between the rich and the poor—then IQ would have a weaker correlation with income. For the same reason, if everyone received the same quality of health care, there would be a weaker correlation between IQ and health.
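A toy model makes the variability point concrete (this is an illustration, not the authors' model): let income be an IQ-linked component plus factors unrelated to IQ, then shrink the IQ-linked differences while everything else stays fixed. The income spread and the IQ-income correlation fall together:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
iq = rng.normal(size=n)
other = rng.normal(size=n)  # everything unrelated to IQ: luck, location, etc.

# Shrink the IQ-linked component of pay while the unrelated part stays fixed:
# income inequality (the spread) and the IQ-income correlation fall together.
for slope in (1.0, 0.5, 0.2):
    income = slope * iq + other
    print(f"income spread = {income.std():.2f}, "
          f"r(IQ, income) = {np.corrcoef(iq, income)[0, 1]:.2f}")
```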
But the bottom line is that there are large, measurable differences among people in intellectual ability, and these differences have consequences for people’s lives. Ignoring these facts will only distract us from discovering and implementing wise policies.
Given everything that social scientists have learned about IQ and its broad predictive validity, it is reasonable to make it a factor in decisions such as whom to hire for a particular job or admit to a particular college or university. In fact, disregarding IQ—by admitting students to colleges or hiring people for jobs in which they are very likely to fail—is harmful both to individuals and to society. For example, in occupations where safety is paramount, employers might reasonably be encouraged to build measures of cognitive ability into the hiring process. Above all, the policies of public and private organizations should be based on evidence rather than ideology or wishful thinking.
SOURCE