Sunday, October 19, 2014



America's most "incorrect" man reflects

"The Bell Curve" 20 years later: A Q&A with Charles Murray

October marks the 20th anniversary of “The Bell Curve: Intelligence and Class Structure in American Life,” the extraordinarily influential and controversial book by AEI scholar Charles Murray and Richard Herrnstein. Here, Murray answers a few questions about the predictions, controversy, and legacy of his book.

Q. It’s been 20 years since “The Bell Curve” was published. Which theses of the book do you think are the most relevant right now to American political and social life?

American political and social life today is pretty much one great big “Q.E.D.” for the two main theses of “The Bell Curve.” Those theses were, first, that changes in the economy over the course of the 20th century had made brains much more valuable in the job market; second, that from the 1950s onward, colleges had become much more efficient in finding cognitive talent wherever it was and shipping that talent off to the best colleges. We then documented all the ways in which cognitive ability is associated with important outcomes in life — everything from employment to crime to family structure to parenting styles. Put those all together, we said, and we’re looking at some serious problems down the road. Let me give you a passage to quote directly from the close of the book:

Predicting the course of society is chancy, but certain tendencies seem strong enough to worry about:

An increasingly isolated cognitive elite.

A merging of the cognitive elite with the affluent.

A deteriorating quality of life for people at the bottom end of the cognitive distribution.

Unchecked, these trends will lead the U.S. toward something resembling a caste society, with the underclass mired ever more firmly at the bottom and the cognitive elite ever more firmly anchored at the top, restructuring the rules of society so that it becomes harder and harder for them to lose. (p. 509)

Remind you of anything you’ve noticed about the US recently? If you look at the first three chapters of the book I published in 2012, “Coming Apart,” you’ll find that they amount to an update of “The Bell Curve,” showing how the trends that we wrote about in the early 1990s had continued and in some cases intensified since 1994. I immodestly suggest that “The Bell Curve” was about as prescient as social science gets.

Q. But none of those issues has anything to do with race, and let’s face it: the firestorm of controversy about “The Bell Curve” was all about race. We now have 20 more years of research and data since you published the book. How does your position hold up?

First, a little background: Why did Dick and I talk about race at all? Not because we thought it was important on its own. In fact, if we lived in a society where people were judged by what they brought to the table as individuals, group differences in IQ would be irrelevant. But we were making pronouncements about America’s social structure (remember that the book’s subtitle is “Intelligence and Class Structure in American Life”). If we hadn’t discussed race, “The Bell Curve” would have been dismissed on grounds that “Herrnstein and Murray refuse to confront the reality that IQ tests are invalid for blacks, which makes their whole analysis meaningless.” We had to establish that in fact IQ tests measure the same thing in blacks as in whites, and doing so required us to discuss the elephant in the corner, the mean difference in test scores between whites and blacks.

Here’s what Dick and I said: "There is a mean difference in black and white scores on mental tests, historically about one standard deviation in magnitude on IQ tests (IQ tests are normed so that the mean is 100 points and the standard deviation is 15). This difference is not the result of test bias, but reflects differences in cognitive functioning. The predictive validity of IQ scores for educational and socioeconomic outcomes is about the same for blacks and whites."

Those were our confidently stated conclusions about the black-white difference in IQ, and none of them was scientifically controversial. See the report of the task force on intelligence that the American Psychological Association formed in the wake of the furor over “The Bell Curve.”

What’s happened in the 20 years since then? Not much. The National Assessment of Educational Progress shows a small narrowing of the gap between 1994 and 2012 on its reading test for 9-year-olds and 13-year-olds (each by the equivalent of about 3 IQ points), but hardly any change for 17-year-olds (about 1 IQ-point-equivalent). For the math test, the gap remained effectively unchanged for all three age groups.

On the SAT, the black-white difference increased slightly from 1994 to 2014 on both the verbal and math tests. On the reading test, it rose from .91 to .96 standard deviations. On the math test, it rose from .95 to 1.03 standard deviations.

If you want to say that the NAEP and SAT results show an academic achievement gap instead of an IQ gap, that’s fine with me, but it doesn’t change anything. The mean group difference for white and African American young people as they complete high school and head to college or the labor force is effectively unchanged since 1994. Whatever the implications were in 1994, they are about the same in 2014.
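As a rough illustration of what these effect sizes mean on the conventional IQ metric (mean 100, standard deviation 15), the sketch below converts standard-deviation gaps into IQ-point equivalents and percentile terms. The gap values are the SAT figures quoted above; the conversion itself is ordinary normal-distribution arithmetic and is not taken from the interview.

# Rough sketch: converting gaps expressed in standard-deviation units into
# IQ-point equivalents on the conventional scale (mean 100, SD 15).
# The gap values are the SAT figures quoted above; the arithmetic is generic.
from scipy.stats import norm

SD = 15.0  # conventional IQ standard deviation

def describe_gap(gap_in_sd):
    points = gap_in_sd * SD  # IQ-point equivalent of the gap
    # Percentile at which the lower group's mean would fall within the higher
    # group's distribution, assuming two normal distributions with equal SDs.
    percentile = norm.cdf(-gap_in_sd) * 100
    return points, percentile

for label, gap in [("SAT reading 1994", 0.91), ("SAT reading 2014", 0.96),
                   ("SAT math 1994", 0.95), ("SAT math 2014", 1.03)]:
    points, percentile = describe_gap(gap)
    print(f"{label}: {gap:.2f} SD ~ {points:.1f} IQ-point equivalent; "
          f"lower-group mean at roughly the {percentile:.0f}th percentile of the higher group")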

There is a disturbing codicil to this pattern. A few years ago, I wrote a long technical article about black-white changes in IQ scores by birth cohort. I’m convinced that the convergence of IQ scores for blacks and whites born before the early 1970s was substantial, though there’s still room for argument. For blacks and whites born thereafter, there has been no convergence.

Q. The flashpoint of the controversy about race and IQ was about genes. If you mention “The Bell Curve” to someone, they’re still likely to say “Wasn’t that the book that tried to prove blacks were genetically inferior to whites?” How do you respond to that?

Actually, Dick and I got that reaction even while we were working on the book. As soon as someone knew we were writing a book about IQ, the first thing they assumed was that it would focus on race, and the second thing they assumed was that we would be talking about genes. I think psychiatrists call that “projection.” Fifty years from now, I bet those claims about “The Bell Curve” will be used as a textbook case of the hysteria that has surrounded the possibility that black-white differences in IQ are genetic. Here is the paragraph in which Dick Herrnstein and I stated our conclusion:

"If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate." (p. 311)

That’s it. The whole thing. The entire hateful Herrnstein-Murray pseudoscientific racist diatribe about the role of genes in creating the black-white IQ difference. We followed that paragraph with a couple pages explaining why it really doesn’t make any difference whether the differences are caused by genes or the environment. But nothing we wrote could have made any difference. The lesson, subsequently administered to James Watson of DNA fame, is that if you say it is likely that there is any genetic component to the black-white difference in test scores, the roof crashes in on you.

On this score, the roof is about to crash in on those who insist on a purely environmental explanation of all sorts of ethnic differences, not just intelligence. Since the decoding of the genome, it has been securely established that race is not a social construct, that evolution continued long after humans left Africa along different paths in different parts of the world, and that recent evolution involves cognitive as well as physiological functioning.

The best summary of the evidence is found in the early chapters of Nicholas Wade’s recent book, “A Troublesome Inheritance.” We’re not talking about another 20 years before the purely environmental position is discredited, but probably less than a decade. What happens when a linchpin of political correctness becomes scientifically untenable? It should be interesting to watch. I confess to a problem with schadenfreude.

Q. Let’s talk about the debate over the minimum wage for a moment. You predicted in the book that the “natural” wage for low-skill labor would be low, and that raising the wage artificially could backfire by “making alternatives to human labor more affordable” and “making the jobs disappear altogether.” This seems to be coming true today. What will the labor landscape look like in the next 20 years?

Terrible. I think the best insights on this issue are Tyler Cowen’s in “Average Is Over.” He points out something that a lot of people haven’t thought about: it’s not blue-collar jobs that are going to be hit the hardest. In fact, many kinds of skilled blue-collar work are going to be needed indefinitely. It’s mid-level white-collar jobs that are going to be hollowed out. Think about travel agents. In 1994, I always used a travel agent, and so did just about everybody who traveled a lot. But then came Expedia and Orbitz and good airline websites, and I haven’t used a travel agent for 15 years.

Now think about all the white-collar jobs that consist of applying a moderately complex body of interpretive rules to repetitive situations. Not everybody is smart enough to do those jobs, so they have paid pretty well. But now computers combined with machines can already do many of them—think about lab technicians who used to do your blood work, and the machines that do it now. For that matter, how long is it before you’re better off telling a medical diagnostic software package your symptoms than telling a physician?

Then Cowen points out something else I hadn’t thought of: One of the qualities that the new job market will value most highly is conscientiousness. Think of all the jobs involving personal service—working in homes for the elderly or as nannies, for example—for which we don’t need brilliance, but we absolutely need conscientiousness along with basic competence. Cowen’s right—and that has some troubling implications for guys, because, on average, women in such jobs are more conscientious than men.

My own view is that adapting to the new labor market, and making sure that working hard pays a decent wage, are among the most important domestic challenges facing us over the next few decades.

Q. In the book you ask, “How should policy deal with the twin realities that people differ in intelligence for reasons that are not their fault and that intelligence has a powerful bearing on how well people do in life?” How would you answer this question now?

I gave my answer in a book called “In Our Hands: A Plan to Replace the Welfare State,” which I published in 2006. I want to dismantle all the bureaucracies that dole out income transfers, whether they be public housing benefits or Social Security or corporate welfare, and use the money they spend to provide everyone over the age of 21 with a guaranteed income, deposited electronically every month into a bank account. It takes a book to explain why such a plan could not only work, but could revitalize civil society, but it takes only a few sentences to explain why a libertarian would advocate such a plan.

Certain mental skillsets are now the “open sesame” to wealth and social position in ways that are qualitatively different from the role they played in earlier times. Nobody deserves the possession of those skillsets. None of us has earned our IQ. Those of us who are lucky should be acutely aware that it is pure luck (too few are), and be committed to behaving accordingly. Ideally, we would do that without government stage-managing it. That’s not an option. Massive government redistribution is an inevitable feature of advanced postindustrial societies.

Our only option is to do that redistribution in the least destructive way. Hence my solution. It is foreshadowed in the final chapter of “The Bell Curve” where Dick and I talk about “valued places.” The point is not just to pass out enough money so that everyone has the means to live a decent existence. Rather, we need to live in a civil society that naturally creates valued places for people with many different kinds and levels of ability. In my experience, communities that are left alone to solve their own problems tend to produce those valued places. Bureaucracies destroy them. So my public policy message is: Let government do what it does best: cut checks. Let individuals, families, and communities do what they do best: respond to human needs on a one-by-one basis.

Q. Reflecting on the legacy of “The Bell Curve,” what stands out to you?

I’m not going to try to give you a balanced answer to that question, but take it in the spirit you asked it—the thing that stands out in my own mind, even though it may not be the most important. I first expressed it in the Afterword I wrote for the softcover edition of “The Bell Curve.” It is this: The reaction to “The Bell Curve” exposed a profound corruption of the social sciences that has prevailed since the 1960s. “The Bell Curve” is a relentlessly moderate book — both in its use of evidence and in its tone — and yet it was excoriated in remarkably personal and vicious ways, sometimes by eminent academicians who knew very well they were lying. Why? Because the social sciences have been in the grip of a political orthodoxy that has had only the most tenuous connection with empirical reality, and too many social scientists think that threats to the orthodoxy should be suppressed by any means necessary. Corruption is the only word for it.

Now that I’ve said that, I’m also thinking of all the other social scientists who have come up to me over the years and told me what a wonderful book “The Bell Curve” is. But they never said it publicly. So corruption is one thing that ails the social sciences. Cowardice is another.

SOURCE

Friday, October 17, 2014


"Slate" rediscovers IQ -- though they dare not to call it that

They recoil in horror, however, at applying the findings to intergroup differences, and claim without explanation that what is true of individuals cannot be true of groups of individuals. That is at least counterintuitive. They even claim that there is no evidence of IQ differences between groups being predictive of anything.

I suppose that one has to pity their political correctness, however, because the thing they are greatly at pains to avoid -- the black-white IQ gap -- is superb validation of the fact that group differences in IQ DO matter. From their abysmal average IQ score, we would predict that blacks would be at the bottom of every heap (income, education, crime, etc.) -- and that is exactly where they are. Clearly, group differences in IQ DO matter, and the IQ tests are an excellent and valid measure of them.



We are not all created equal where our genes and abilities are concerned.

A decade ago, Magnus Carlsen, who at the time was only 13 years old, created a sensation in the chess world when he defeated former world champion Anatoly Karpov at a chess tournament in Reykjavik, Iceland, and the next day played then-top-rated Garry Kasparov—who is widely regarded as the best chess player of all time—to a draw. Carlsen’s subsequent rise to chess stardom was meteoric: grandmaster status later in 2004; a share of first place in the Norwegian Chess Championship in 2006; youngest player ever to reach World No. 1 in 2010; and highest-rated player in history in 2012.

What explains this sort of spectacular success? What makes someone rise to the top in music, games, sports, business, or science? This question is the subject of one of psychology’s oldest debates. In the late 1800s, Francis Galton—founder of the scientific study of intelligence and a cousin of Charles Darwin—analyzed the genealogical records of hundreds of scholars, artists, musicians, and other professionals and found that greatness tends to run in families. For example, he counted more than 20 eminent musicians in the Bach family. (Johann Sebastian was just the most famous.) Galton concluded that experts are “born.” Nearly half a century later, the behaviorist John Watson countered that experts are “made” when he famously guaranteed that he could take any infant at random and “train him to become any type of specialist [he] might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents.”

The experts-are-made view has dominated the discussion in recent decades. In a pivotal 1993 article published in Psychological Review—psychology’s most prestigious journal—the Swedish psychologist K. Anders Ericsson and his colleagues proposed that performance differences across people in domains such as music and chess largely reflect differences in the amount of time people have spent engaging in “deliberate practice,” or training exercises specifically designed to improve performance. To test this idea, Ericsson and colleagues recruited violinists from an elite Berlin music academy and asked them to estimate the amount of time per week they had devoted to deliberate practice for each year of their musical careers. The major finding of the study was that the most accomplished musicians had accumulated the most hours of deliberate practice. For example, the average for elite violinists was about 10,000 hours, compared with only about 5,000 hours for the least accomplished group. In a second study, the difference for pianists was even greater—an average of more than 10,000 hours for experts compared with only about 2,000 hours for amateurs. Based on these findings, Ericsson and colleagues argued that prolonged effort, not innate talent, explained differences between experts and novices.

These findings filtered their way into pop culture. They were the inspiration for what Malcolm Gladwell termed the “10,000 Hour Rule” in his book Outliers, which in turn was the inspiration for the song “Ten Thousand Hours” by the hip-hop duo Macklemore and Ryan Lewis, the opening track on their Grammy Award-winning album The Heist. However, recent research has demonstrated that deliberate practice, while undeniably important, is only one piece of the expertise puzzle—and not necessarily the biggest piece. In the first study to convincingly make this point, the cognitive psychologists Fernand Gobet and Guillermo Campitelli found that chess players differed greatly in the amount of deliberate practice they needed to reach a given skill level in chess. For example, the number of hours of deliberate practice to first reach “master” status (a very high level of skill) ranged from 728 hours to 16,120 hours. This means that one player needed 22 times more deliberate practice than another player to become a master.

A recent meta-analysis by Case Western Reserve University psychologist Brooke Macnamara and her colleagues (including the first author of this article for Slate) came to the same conclusion. We searched through more than 9,000 potentially relevant publications and ultimately identified 88 studies that collected measures of activities interpretable as deliberate practice and reported their relationships to corresponding measures of skill. (Analyzing a set of studies can reveal an average correlation between two variables that is statistically more precise than the result of any individual study.) With very few exceptions, deliberate practice correlated positively with skill. In other words, people who reported practicing a lot tended to perform better than those who reported practicing less. But the correlations were far from perfect: Deliberate practice left more of the variation in skill unexplained than it explained. For example, deliberate practice explained 26 percent of the variation for games such as chess, 21 percent for music, and 18 percent for sports. So, deliberate practice did not explain all, nearly all, or even most of the performance variation in these fields. In concrete terms, what this evidence means is that racking up a lot of deliberate practice is no guarantee that you’ll become an expert. Other factors matter.
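A quick way to make those "percent of variation explained" figures concrete (my own arithmetic, not part of the article): variance explained is the square of the correlation coefficient, so even the larger figures correspond to correlations that leave most of the variation in skill to other factors.

# Simple arithmetic linking "percent of variance explained" to the underlying
# correlation (r^2 -> r). The percentages are the ones reported above; the
# computation itself is generic and not taken from the meta-analysis.
import math

variance_explained = {"games such as chess": 0.26, "music": 0.21, "sports": 0.18}

for domain, r2 in variance_explained.items():
    r = math.sqrt(r2)
    print(f"{domain}: r^2 = {r2:.2f} -> r ~ {r:.2f}; "
          f"{100 * (1 - r2):.0f}% of the variation is left unexplained")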

What are these other factors? There are undoubtedly many. One may be the age at which a person starts an activity. In their study, Gobet and Campitelli found that chess players who started playing early reached higher levels of skill as adults than players who started later, even after taking into account the fact that the early starters had accumulated more deliberate practice than the later starters. There may be a critical window during childhood for acquiring certain complex skills, just as there seems to be for language.

There is now compelling evidence that genes matter for success, too. In a study led by the King’s College London psychologist Robert Plomin, more than 15,000 twins in the United Kingdom were identified through birth records and recruited to perform a battery of tests and questionnaires, including a test of drawing ability in which the children were asked to sketch a person. In a recently published analysis of the data, researchers found that there was a stronger correspondence in drawing ability for the identical twins than for the fraternal twins. In other words, if one identical twin was good at drawing, it was quite likely that his or her identical sibling was, too. Because identical twins share 100 percent of their genes, whereas fraternal twins share only 50 percent on average, this finding indicates that differences across people in basic artistic ability are in part due to genes. In a separate study based on this U.K. sample, well over half of the variation between expert and less skilled readers was found to be due to genes. 
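The basic logic of comparing identical and fraternal twins can be sketched with the classical Falconer estimate. The numbers below are made-up placeholders rather than figures from the Plomin study, which, like most modern work, fits fuller variance-decomposition (ACE) models rather than this simple formula.

# Sketch of the classical Falconer estimate of heritability from twin data.
# The correlations below are hypothetical placeholders, NOT values from the
# Plomin drawing study; modern analyses typically fit full ACE models instead.
def falconer_heritability(r_mz, r_dz):
    # Identical (MZ) twins share ~100% of their genes, fraternal (DZ) twins
    # ~50% on average, so doubling the difference in twin correlations gives
    # a crude estimate of the share of variance attributable to genes.
    return 2 * (r_mz - r_dz)

r_identical = 0.60   # hypothetical within-pair correlation in drawing scores (MZ)
r_fraternal = 0.35   # hypothetical within-pair correlation (DZ)
print(f"Falconer estimate of heritability: {falconer_heritability(r_identical, r_fraternal):.2f}")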

In another study, a team of researchers at the Karolinska Institute in Sweden led by psychologist Miriam Mosing had more than 10,000 twins estimate the amount of time they had devoted to music practice and complete tests of basic music abilities, such as determining whether two melodies carry the same rhythm. The surprising discovery of this study was that although the music abilities were influenced by genes—to the tune of about 38 percent, on average—there was no evidence they were influenced by practice. For a pair of identical twins, the twin who practiced music more did not do better on the tests than the twin who practiced less. This finding does not imply that there is no point in practicing if you want to become a musician. The sort of abilities captured by the tests used in this study aren’t the only things necessary for playing music at a high level; things such as being able to read music, finger a keyboard, and commit music to memory also matter, and they require practice. But it does imply that there are limits on the transformative power of practice. As Mosing and her colleagues concluded, practice does not make perfect.
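The within-pair comparison described here, often called a co-twin control design, can be sketched roughly as follows; the numbers are invented for illustration and are not the Karolinska data.

# Rough sketch of a co-twin control comparison (invented numbers, not the
# Karolinska data): within each identical-twin pair, does the twin who
# practiced more also score higher on the music-ability tests?
pairs = [
    # (practice_hours_twin_a, score_twin_a, practice_hours_twin_b, score_twin_b)
    (3000, 24, 1200, 25),
    (5000, 31, 4000, 30),
    (800,  18, 2500, 17),
]

for hours_a, score_a, hours_b, score_b in pairs:
    practice_diff = hours_a - hours_b
    score_diff = score_a - score_b
    print(f"practice difference: {practice_diff:+6d} h, score difference: {score_diff:+d}")

# If practice drove ability, within-pair practice differences should track
# within-pair score differences; the study reported no such relationship.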

Along the same lines, biologist Michael Lombardo and psychologist Robert Deaner examined the biographies of male and female Olympic sprinters such as Jesse Owens, Marion Jones, and Usain Bolt, and found that, in all cases, they were exceptional compared with their competitors from the very start of their sprinting careers—before they had accumulated much more practice than their peers.

What all of this evidence indicates is that we are not created equal where our abilities are concerned. This conclusion might make you uncomfortable, and understandably so. Throughout history, so much wrong has been done in the name of false beliefs about genetic inequality between different groups of people—males vs. females, blacks vs. whites, and so on. War, slavery, and genocide are the most horrifying examples of the dangers of such beliefs, and there are countless others. In the United States, women were denied the right to vote until 1920 because too many people believed that women were constitutionally incapable of good judgment; in some countries, such as Saudi Arabia, they still are believed to be. Ever since John Locke laid the groundwork for the Enlightenment by proposing that we are born as tabula rasa—blank slates—the idea that we are created equal has been the central tenet of the “modern” worldview. Enshrined as it is in the Declaration of Independence as a “self-evident truth,” this idea has special significance for Americans. Indeed, it is the cornerstone of the American dream—the belief that anyone can become anything they want with enough determination.

It is therefore crucial to differentiate between the influence of genes on differences in abilities across individuals and the influence of genes on differences across groups. The former has been established beyond any reasonable doubt by decades of research in a number of fields, including psychology, biology, and behavioral genetics. There is now an overwhelming scientific consensus that genes contribute to individual differences in abilities. The latter has never been established, and any claim to the contrary is simply false.

Another reason the idea of genetic inequality might make you uncomfortable is that it raises the specter of an anti-meritocratic society in which benefits such as good educations and high-paying jobs go to people who happen to be born with “good” genes. As the technology of genotyping progresses, it is not far-fetched to think that we will all one day have information about our genetic makeup, and that others—physicians, law enforcement, even employers or insurance companies—may have access to this information and use it to make decisions that profoundly affect our lives. However, this concern conflates scientific evidence with how that evidence might be used—which is to say that information about genetic diversity can just as easily be used for good as for ill.

Take the example of intelligence, as measured by IQ. We know from many decades of research in behavioral genetics that about half of the variation across people in IQ is due to genes. Among many other outcomes, IQ predicts success in school, and so once we have identified specific genes that account for individual differences in IQ, this information could be used to identify, at birth, children with the greatest genetic potential for academic success and channel them into the best schools. This would probably create a society even more unequal than the one we have. But this information could just as easily be used to identify children with the least genetic potential for academic success and channel them into the best schools. This would probably create a more equal society than the one we have, and it would do so by identifying those who are likely to face learning challenges and providing them with the support they might need. Science and policy are two different things, and when we dismiss the former because we assume it will influence the latter in a particular and pernicious way, we limit the good that can be done.

Wouldn’t it be better to just act as if we are equal, evidence to the contrary notwithstanding? That way, no one will be discouraged from chasing their dreams—competing in the Olympics or performing at Carnegie Hall or winning a Nobel Prize. The answer is no, for two reasons. The first is that failure is costly, both to society and to individuals. Pretending that all people are equal in their abilities will not change the fact that a person with an average IQ is unlikely to become a theoretical physicist, or the fact that a person with a low level of music ability is unlikely to become a concert pianist. It makes more sense to pay attention to people’s abilities and their likelihood of achieving certain goals, so people can make good decisions about the goals they want to spend their time, money, and energy pursuing. The second is that genes influence not only our abilities, but the environments we create for ourselves and the activities we prefer—a phenomenon known as gene-environment correlation. For example, yet another recent twin study (and the Karolinska Institute study) found that there was a genetic influence on practicing music. Pushing someone into a career for which he or she is genetically unsuited will likely not work.

SOURCE