Friday, February 1, 2013

The History of the Demarcation Problem, in an Insolent Nutshell



In the 17th century the Enlightenment hit, especially the British Isles. Gone were the days when Plato could sit around in his toga and just make stuff up, claiming that the path to true knowledge was human reason, and that everything was made of earth, air, fire, and water. We started to force natural philosophers to prove their claims by empirically testing them. Empiricism emphasizes the role of experience, especially experience based on perceptual observation by the senses. This technique worked very well, and the world blossomed.

Strong forms of empiricism emerged. A group of philosophers and scientists who called themselves the Vienna Circle, still reeling from Einstein’s theories and quantum mechanics, created the idea of verificationism. Verificationism is the idea that a statement must be empirically verifiable or it is meaningless.

A philosopher named Karl Popper broke from the Vienna Circle. Popper noticed that the Vienna Circle had mixed up two different issues: meaning and demarcation. The Vienna Circle had proposed verificationism as a single solution to both questions. In opposition to this view, Popper observed that many meaningful theories are not scientific, so a criterion of meaningfulness doesn’t necessarily coincide with a criterion of demarcation. Popper was also irritated by unscientific theories created by the likes of Karl Marx and Sigmund Freud, whose adherents kept changing auxiliary theories (moving the goalposts) in order to keep their pet theories alive. Thus, Popper urged that verifiability be replaced with falsifiability as the criterion of demarcation. Popper claimed that a theory is scientific if and only if it is falsifiable. This idea has become very influential.

There are problems with this view, however. Falsifiability blesses too many theories that are falsifiable but wrong, and it excludes some potentially good hypotheses, like string theory. And Popper never explains why a theory that has survived attempts at falsification is any better than one that has never been tested.

Then Thomas Kuhn came along in 1962 and really made trouble for science: he challenged the prevailing view of progress in science. Everyone had accepted that science was making progress by accumulating true facts and theories about reality. Kuhn argued that our scientific theories were not progressing toward some version of the "truth". Kuhn exposed the way that science really worked. Science was better described with an episodic model in which periods of steady “normal” science are interrupted by periods of “revolutionary” science. Normal science consists mostly of solving little puzzles. During the revolutions, the discovery of experimental anomalies leads to a whole new paradigm that changes the jargon, the rules of the game, and the roadmap directing new research. Our theories are rarely updated or corrected; they're just tossed away. Kuhn criticized Popper for defining science only by its revolutions, which are comparatively rare.

Around that time, Paul Feyerabend took the challenge to extremes. He decided that the very question of demarcation was sinister: science itself had no need of demarcation. Instead, some authors were attempting to fashion an unjustified position of authority for science in an attempt to dominate public discussion and policy. Feyerabend asserted that science doesn’t occupy any special place in terms of either its logic or its method, and that no claim to special authority made by scientists can be maintained. Within the history of scientific practice, no rule or method can be found that has not been violated, circumvented, or otherwise mangled in order to advance scientific knowledge. Additionally, Feyerabend claimed, correctly in my opinion, that science is not an independent form of reasoning but is inseparable from the larger body of human thought and inquiry. Despite Feyerabend’s many insights, when you read his papers they are so over the top that it’s hard to believe he is being serious.

Paul R. Thagard tried to simplify things in 1988. Thagard proposed that the following criteria usually hold:

1. science is simple and unified;
2. science is progressive insofar as it predicts novel facts; and
3. adherents of science attempt to develop it so as to solve puzzles, evaluate it with respect to alternatives, and are open to confirmation and falsification.

Around the same time, Larry Laudan made an alternative suggestion. Surveying the failed historical attempts to define demarcation in science, he concluded that "philosophy has failed to deliver the goods". Laudan suggests that the demarcation between science and non-science is a pseudo-problem that would best be replaced by focusing on the distinction between reliable and unreliable knowledge, without bothering to ask whether that knowledge is scientific or not. I wholeheartedly agree with this sentiment, but what a thorny task this is. It seems like asking the question, “Can we devise an oracle that can predict the outcome of any given scientific endeavor?”

In 1993, in the court case Daubert v. Merrell Dow Pharmaceuticals, the judges finally needed to decide what was and wasn’t science. Not able to wait while philosophers hemmed and hawed, seven members of the Court agreed on guidelines for admitting scientific expert testimony: the testimony must be the result of empirical testing, it must have been subjected to peer review and publication, its error rate must be known, standards controlling its operation must exist, and the theory or technique must be generally accepted by a relevant scientific community. These practical criteria seem like a spectacular start to me.

So there you have it. The problem remains, although it is better defined than it was 100 years ago.
