The collapse of Global Warming theory puts a harsh spotlight on the involvement of scientist-activists in efforts to deceive opinion leaders and policy-makers.
Later this year, I’ll attempt to explain why scientist-activist groups are always—yes, always—wrong about politics and public policy. This week, here’s a quick look at one of the reasons: that scientists may be good at analyzing the results of replicable research and experimentation, but they’re not good at analyzing things that are unrepeatable.
Consider this problem in political analysis: Was the period of strong economic growth that began in late 1994/early 1995 more the result of President Clinton’s economic plan, which became law in August 1993, or of the election of a Republican Congress in November 1994? Without one factor, or the other, or both, would the economy have grown faster or slower? There is no scientific way to determine the answer. We cannot rewind history, change one or two factors, and run the experiment over. Scientists who get involved in public policy often become confused and frustrated because they do not understand this difference. (Policymakers have little understanding of physical science, too, but that presents different problems.)
Even Albert Einstein, who sought to organize scientists for political causes, recognized the limitation of applying scientific methods to political problems. In 1949, in the first issue of the Marxist publication Monthly Review, Einstein wrote:
It might appear that there are no essential methodological differences between astronomy and economics: scientists in both fields attempt to discover laws of general acceptability for a circumscribed group of phenomena in order to make the interconnection of these phenomena as clearly understandable as possible. But in reality such methodological differences do exist. The discovery of general laws in the field of economics is made difficult by the circumstance that observed economic phenomena are often affected by many factors which are very hard to evaluate separately.
Most political questions are unrelated to science. Science may tell us something about the stages of development of a fetus, but it cannot answer the question of when a fetus (or an embryo) becomes a human being who should be subject to the protection of the law. Thus, science has nothing to say about abortion laws or about whether the federal government should increase funding for embryonic stem cell research. It may be able to tell us whether a particular spot on earth, at a particular time on a particular day of the year, has a higher temperature than on the corresponding day and time a year before. But science has nothing to say about whether the U.S. should ratify the Kyoto treaty.
That limitation is particularly important in light of the fact that there is no evidence that scientists understand politics any better than chicken farmers do, or any better than politicians understand science. We should listen to what the scientific community says on political issues, just as we listen to the janitorial community or the truck-driver community, but no more so.
Einstein wrote that “we should be on our guard not to overestimate science and scientific methods when it is a question of human problems; and we should not assume that experts are the only ones who have a right to express themselves on questions affecting the organization of society.” Einstein, who strongly endorsed centralized economic planning—perhaps the most discredited-yet-popular idea of the 20th Century—knew what he was talking about when he warned us “not to overestimate science and scientific methods.”
Richard Pipes, who headed the famous Team B of intelligence analysts during the Cold War, addressed the problem. [A note on terminology: “Idiographic” is a term from neo-Kantian philosophy describing the effort to understand the meaning of contingent, accidental, and often subjective phenomena. It is contrasted to “nomothetic,” a term that describes the effort to derive general laws governing objective phenomena.] In a chapter in the 1982 book Intelligence Requirements for the 1980s: Analysis and Estimates, Pipes wrote:
In the natural sciences, once you arrive at a validated law, any subsequent experiment will always yield the same result. That assurance of repeatability is the essence of the natural sciences. Once you have found, for example, that water boils at a certain degree on the thermometer (atmospheric pressure being constant) it will always do that. In contrast, the idiographic sciences, of which history is one, have to deal with concrete situations which contain certain elements familiar from other phenomena but the combination of which is always unique and therefore unrepetitive. For this reason, there can be no science of history. There are certain ancillary disciplines within history, such as statistics and demography, which are close to science. There may be a science of analyzing documents. But the actual process of arriving at historical conclusions is never precisely the same because every situation is unique. For this reason history is so very appropriate as training for intelligence analysts, in as much as the situations which confront the intelligence analyst also are forever concrete, unique, and unrepeatable.
Pipes was too generous regarding statistics and demography. The U.S. Census Bureau, for example, manipulates information in those fields by targeting certain groups for inclusion in the Census and by creating or eliminating ethnic categories to create false impressions (e.g., the fake and racist “America will soon turn into a non-white country” meme). But his general point about the different types of science was correct.
To be continued.