
Wednesday, April 07, 2021

Theology and the Philosophy of Science: The Received View

Discussions within the philosophy of science oftentimes refer to the Received View and the criticisms this view has generated. It was called the "Received View" because there was a great deal of consensus on it prior to the 1960s.  What is this view? 

Simply put, the Received View refers to the attempt by philosophers of science to construe scientific theories as axiomatic calculi which are then given a partial observational interpretation by correspondence rules linking theoretical terms to sets of observations.  This view dominated the philosophy of science from the 1920s all the way through the 1950s, after which a number of criticisms developed that effectively brought the theory to its knees. What is the background of such a view?

The Received View grew out of the school of Logical Positivism, but survived the demise of the latter.  In order to understand the Received View, it is therefore quite helpful to know something about Logical Positivism.  

The origins of Logical Positivism can be traced to Hans Reichenbach's "Berlin School" and to the philosophy of the Vienna Circle.  As a movement, Logical Positivism rejected traditional metaphysics, concerning itself instead with foundational issues in science. It arose within the German-speaking universities and rested oftentimes upon the power and authority of individual professors.    

Ludwig Büchner's mechanistic naturalism from roughly the 1850s to the 1880s can be seen as an important source for Logical Positivism. Rejecting any a priori elements in science, Büchner held that we have an immediate empirical knowledge of the laws of the natural world that govern the movement of matter.  

During the 1880s and 1890s, Helmholtz, Cohen and the Marburg School of Neo-Kantianism generally claimed that webs of a priori logical relations are exemplified in the external world and that it is the aim of science to discover the general structure of sensations.  

At about the same time, Ernst Mach was also arguing that there is no a priori which organizes the facts of science, but rather that science is a conceptual reflection upon the givenness of facts.  He famously denied that either space or time is absolute and claimed that ultimately all empirical statements comprising scientific theory must be reducible to statements about sensations.  

Thus it is that by 1900 the following schools of thought held sway, all of which contributed to Logical Positivism, and all of which were challenged by the theory of relativity and by quantum theory:

  • Mechanistic Materialism with its roots in Büchner's thought; 
  • Neo-Kantianism and its claim of the a priori nature of logical relations; 
  • Machian Positivism rejecting the a priori. 

The Vienna Circle and Reichenbach's Berlin School both sought a philosophy of science that was compatible with the new physics. Both believed that Mach's verifiability criterion of meaningfulness was essentially correct: Putative scientific claims that could be neither empirically confirmed nor disconfirmed were meaningless.  Simply put, scientific statements required truth conditions as the sine qua non of meaningfulness. Both schools also believed that mathematics was very important in science, accepting along the way Poincaré's thesis that one ought to understand scientific law in terms of conventions about how one might talk about phenomena. Accordingly, both supposed that the subject matter of theories is phenomenal regularities and that these regularities can be characterized by theoretical terms. Both claimed that while theoretical terms are mere conventions used to refer to phenomena, the same assertions made employing these theoretical terms could in principle be made within phenomenal language generally, i.e., explicit definitions of theoretical terms were possible in this way.  

The original Received View was heavily influenced by the mathematical logic of Frege and Cantor, as well as the work of Russell and Whitehead. In the early days, the logical axiomatization of theories was particularly important, with correspondence rules granting an observational interpretation of theoretical propositions. 

Logical axiomatization proceeds by using logical and mathematical terms both to formulate scientific laws and to specify the relationships among theoretical terms, which are themselves construed as abbreviations of phenomenal descriptions. Observational terms, on the other hand, are given an explicit phenomenal interpretation.  Correspondence rules are employed to give explicit definitions to theoretical terms and consist in biconditionals.  Thus, for all x, x has theoretical property T if and only if x has observational properties O. The idea here is that all theoretical terms could be correlated with congeries of observational terms.  
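Schematically (the predicate letters here are mere placeholders, not the positivists' own notation), such an explicit definition has the form:

  (x)(Tx ↔ (O₁x ∧ O₂x ∧ … ∧ Oₙx))

where 'T' is the theoretical predicate being defined and 'O₁' through 'Oₙ' are the observational predicates correlated with it. Anything satisfies the theoretical predicate just in case it satisfies the relevant cluster of observational predicates.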

The aim of the approach was always to eschew metaphysics. Since for all x, x has T just in case x has O, one does not need to grant some non-empirical ontological status to theoretical entities.  The existence of these biconditional "bridge rules" giving an observational meaning to theoretical terms meant that no supersensible or nonsensible entities need exist, a happy event for anyone wanting to limit the metaphysical within science.  The verificationist criterion claiming that the meaning of a term is its method of verification also effectively precluded any appeal to the metaphysical. There was a general concern to construct a logically perfect language that would make no reference to metaphysical entities. 

Since all assertions of scientific theory are in principle reducible to assertions about phenomena in the observational language, all assertions of scientific fact are reducible to assertions in a basic phenomenal language.  Theorists called this language a protocol language, one referring directly to the givenness of observational experience.  Within this protocol language, one could distinguish particular assertions from generalizations.  

But storm clouds were gathering on the horizon. Granted that we have a protocol language, what precisely is this language about?  One group claimed that the language referred to incorrigible sense data.  The problem with this is that such sense data appear to be subjective; the incorrigibility of immediate experience carries subjectivity with it.  Another group claimed that we must eschew subjectivity, and that we can only do so by allowing the protocol language to refer to physical things and their properties.  Now the problem of subjectivity has been fixed, but our experience is no longer incorrigible.  Thus it is that there was both a phenomenalist and a physicalist interpretation of the protocol language, the first emphasizing incorrigibility but losing objectivity, and the second emphasizing objectivity but losing incorrigibility.  In what, then, could the givenness of experience consist?   

There were other problems as well.  As is well-known, all inductive arguments are invalid, i.e., it is always possible for the premises of such arguments to be true, while their conclusions are false. The problem is that no matter how many times there is a correlation among our empirical experiences, it is always logically possible that the next experience will be one in which that correlation does not hold.  This "problem of induction" is often called Hume's problem, after the 18th century Scottish philosopher, David Hume.  
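In schematic form (the notation is mine, not Hume's), no finite stock of observed instances deductively entails the corresponding generalization:

  Fa₁, Fa₂, …, Faₙ ⊬ (x)Fx   (for any finite n)

The premises can all be true while the generalization is false, which is just what the invalidity of inductive arguments amounts to.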

Within the philosophy of science, the problem of induction seems to collide with the axiomatic and deductive nature of scientific theory.  While there were attempts to formulate an inductive logic with clear algorithms, on empirical grounds one cannot observe a generalization, and, moreover, it is always logically possible that one's next experience will disconfirm the generalization under consideration.  

Overwhelmingly important in the Received View is the notion of a correspondence rule.  Sometimes called "rules of interpretation," "epistemic correlations," "coordinating definitions," "dictionaries," or "observational definitions," the correspondence rules had a threefold function: 1) They defined the theoretical terms, 2) they guaranteed the cognitive significance of these terms, and 3) they specified the experimental procedures by which the theory might apply to the phenomena.  

Unfortunately, there are some manifest inadequacies of the correspondence rule approach.  Take, for instance, a dispositional term like 'fragility'. How is this disposition definable in first order predicate logic? An object that is fragile and one that is not fragile have no observational differences if neither one breaks.  Fragility is a property concerning what would happen if the object in question were subjected to certain conditions that do not in fact occur. But talk of counterfactual or subjunctive conditionals takes us away from first order predicate logic into the land of intension -- non-extensional meaning -- something most philosophers of science wanted to avoid. 
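A standard way to see the difficulty (the predicate letters are my own): suppose we try to define fragility extensionally with a material conditional,

  (x)(Fx ↔ (Sx → Bx))

where 'F' is 'is fragile', 'S' is 'is struck', and 'B' is 'breaks'. Since a material conditional with a false antecedent is true, any object that is never struck satisfies 'Sx → Bx' vacuously and so counts as fragile, lumps of lead included. What the definition needs is 'if x were struck, x would break', and that subjunctive 'would' is precisely what extensional logic cannot express.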

But there is a bigger issue.  The correspondence rule approach presupposes that a theoretical term is identified with one experimental procedure.  Unfortunately, this is generally not the case.  There is no one-to-one coordinating definition for theoretical terms.  An observational correlate to such terms is not found in one complete observation, but rather the theoretical term connects to many incomplete, partial observations.  Thus, the effort began to give an alternative non-semantic account of how theoretical terms might have only a partial observational interpretation. 

But now even more questions arise.  When theoretical terms are provided only a partial observational interpretation, what then is their status?  Since they can no longer be linked by semantics to observational correlates, what is it to which they link? 

Some, espousing realism, asserted that the terms of the theory actually refer to real entities that presumably exist apart from human awareness, perception, conception and language.  Others, counseling instrumentalism, claimed that the terms are really short-hand ways of calculating and predicting observational results.  While most adherents of the Received View were realists committed to Quine's notion that "to be is to be the value of a bound variable," the fact that a theoretical term is not exhausted by its observational interpretation entails neither realism nor anti-realism (instrumentalism).  

A new modified Received View thus arose espousing indirect verification, a view summarized by A.J. Ayer this way: 

A statement is indirectly verifiable if and only if in conjunction with certain other premises it entails one or more directly verifiable statements which are not deducible from the other premises alone. 
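Put semi-formally (this schematization is mine, not Ayer's): a statement S is indirectly verifiable just in case there are auxiliary premises A₁, …, Aₙ and some directly verifiable statement O such that

  S, A₁, …, Aₙ ⊢ O   while   A₁, …, Aₙ ⊬ O

Everything here turns on which auxiliary premises are allowed, and it was on just this point that later criticisms of the criterion pressed.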

Both the modified and original Received View were nonetheless committed to the legitimacy of the distinction between observational and theoretical terms.  Examples of the former include 'black', 'cold', 'right of', 'shorter than', 'soft', 'volume', 'floats', 'weighs', 'wood', etc.  Examples of the latter are 'mass', 'energy', 'electric charge', 'electron', 'hadron', 'temperature', 'virus', 'gene' or even 'ego'.  

While this distinction might prima facie seem clear, it is not.  It turns out that there is a theoretical component to any observational term. While 'ego' is a theoretical term in psychological theory, 'floats' has a theoretical component as well in the everyday "theory" of our involvement in the world. Moreover, we might speak about "observing" electrons, but what are the identity conditions of 'observe' when applied to electrons? Consider, "I see a leaf." Do all cultures have some non-theoretical identity conditions for 'leaf', or is that also a theoretical term? Simply put, it may be useful for us to talk about leaves as separate entities from the branches upon which they grow, but is there a fact of the matter here? Do putative observational terms come as self-identifying objects?  

We will continue our discussion of the Received View in the next post, which will contrast the syntactic view of scientific theory with the semantic view.  Read on!


Friday, April 02, 2021

Theology and the Philosophy of Science: A First Look at Scientific and Theological Method

Theologians will recognize that the general title of this series of posts is that of the 1976 English translation of a book by Wolfhart Pannenberg published in 1973 under the title Wissenschaftstheorie und Theologie.  My reflections in this blog, however, are entirely my own.  

I have often thought that theologians ought to study the philosophy of science. Why? There are two compelling reasons to do so. The first is that a grasp of scientific methodology and theory formation will help theologians understand the nature of truth claims and justification in the natural and social sciences, and help them avoid such statements as, "Well, it's not confirmed yet; it's only a theory."  

The second reason to study the philosophy of science is the more important. We should study how truth comes to be declared in the natural and social sciences so that we have a better appreciation of how it is declared in theology. In what ways are truth claims similar in science and theology, and in what ways do they differ? The study of the philosophy of science helps both partners in the science and theology discussion understand each other. 

Some people are perhaps surprised to learn that there is such a discipline as the philosophy of science.  They ask what philosophical reflection could contribute to the results and trajectories of science.  Do not scientists already know what they are doing?  What could philosophers possibly add to this?  Clearly, they don't generally have PhDs in the fields of study that scientists are investigating.   

I am of the opinion that philosophy is useful in all kinds of areas about which people do not generally know. In addition to the philosophy of science, there is a philosophy of mathematics, a philosophy of mind, a philosophy of law, and a philosophy of history, to name a few.  While philosophers don't generally add to the content of a field of research, their questions can help those within and outside these disciplines to appreciate the particularity of their own intellectual endeavors and the institutional practices they presuppose.  Philosophers can help all of us gain clarity on the assumptions within an area of research, assumptions that sometimes have empirical support, or assumptions that rest upon human convention.  

I want to provide theologians who do not know much about the philosophy of science some orientation to this important branch of philosophy. I hope that this brief introduction will help theologians think a bit more clearly about the nature of science. Perhaps it will help them think a bit differently about the nature of theology as well.  

All of us probably learned in grade school about the scientific method.  We learned that scientists make observations which allow them to spot regularities.  After finding these regularities, they seek an account of why they hold.  What is it that grounds the regular nature of these regularities?  What explains them?  The search for explanations leads scientists to offer hypotheses which can explain the occurrence of the regularities.  Such hypotheses are explanatory accounts of why the regularity holds.  The hypothesis is then checked via experiment.  The story I learned was that hypotheses are experimentally confirmed or disconfirmed.  

As a first step in understanding science, this account is quite helpful.  It does allow young students to grasp something about the tentative nature of science.  Young students should realize that science is a human procedure whereby human conjectures are checked up upon within human experience.  Perceptive children might even notice the adjective common to 'procedure', 'conjecture' and 'experience', concluding that science is a very human activity.  The really bright and informed among the perceptive might even ask this: "Given that human beings get so much wrong much of the time, how do we know they get so much right in science?" 

Given that science is clearly a human activity, and given that people clearly are ignorant and close-minded much of the time -- notice what passes for political conversation these days -- how could we know that they are not simply being ignorant and close-minded in the doing of science?  Millions of people believed in a species of Marxism during the twentieth century that failed both to explain and to predict phenomena.  How do we know with confidence that we are not all equally duped by the compelling theories of today, e.g., Darwinian theory, cosmological theory, psychological and sociological theory? 

Despite these global skeptical questions, the scientific method enjoys wide consensus as the way to acquire scientific knowledge.  Understanding it more deeply than we did as children is important in evaluating the conflicting truth claims that we sometimes encounter in the sciences; it is important for understanding the basic thrust and orientation of science in general; and it is deeply important in the very human activity of day-to-day living, where we have to link scientific truth claims with the other types of truth claims in which we are daily involved, particularly the truth claims of religion.  

In reflecting upon science we must make an important distinction between data and theory.  The first is what is given in experience, while the second offers a story or an account which both explains how it is that we are given the data we are given, and predicts what our future data might be.  For instance, granted that there is a change in sea level at a particular location, what theory best accounts for this rise in sea level?  A theory (or better, a bundle of theories) appealing to global warming could best explain the increase in sea level. Moreover, the theory might predict that if the causal mechanism it specifies actually holds, we should see this reflected in future data.  A theory consists of an account that can both explain the data as it is currently given and predict, on the basis of the account, what the future data will be.  

Now we might get a bit more technical in our nomenclature and call the scientific method we learned about as children the hypothetico-deductive method.  It consists of observation, hypothesis, deduction, experimental confirmation/disconfirmation, and adjustment.  People outside science don't often recognize the provisional nature of science.  It is always open for adjustment.  Theory-tweaking is how science progresses.  

Both natural and social science begin with data that is gained through observation.  However, the nature of observation itself turns out to be difficult.  In any observational situation, there must be some consensus about the nature of what it is one is seeking to observe.  Such consensus is necessary to know where to look, as it were.  For thousands of years human beings observed the sun rising in the morning, and many thought on the basis of this observation that the sun must go around the earth. But this geocentric hypothesis has long been disproved. The sun does not go around the earth; the earth goes around the sun.  Thus it is that one cannot, strictly speaking, observe the sun rising.  So what exactly is it that we observe?  Is it an appearance of the sun rising? But how do we describe such an appearing?  

The question of what is given in data turns out to be deeply connected to the question of what theories we are assuming.  In the heyday of Logical Positivism in the 20th century, philosophers assumed that they could specify a "given" that was public, objective, and prior to all theory and interpretation.  But such a given has proved very difficult to support. Text implies context. The given is always a given within a context of background scientific theory and practice.  Most philosophers now believe that there is no objective "unvarnished good news" (Quine) of the given upon which we can base scientific theory.  

But scientists have another immediate problem after collecting singular data.  They must look for generalizations of that data, and such looking happens over the course of time.  I can observe data x at time y, and then again at time z, and then at time u, but how do I know that the data I am finding at these three times are the data that I would find at every time, were I able to check?  In other words, how do I know that I am observing a true regularity of nature and not just an accidental generalization I have drawn because of my limited experience?  The problem of generalizing from specific instances is called the problem of induction, or sometimes Hume's Problem after the great Scottish philosopher, David Hume. No matter how many times we observe a critical correlation in nature, we do not know if it will hold the next time.  We might have a theory predicting it will, and find in our next observation that the thing predicted does not happen; perhaps, then, our entire predictive theory is flawed.  Gustav Bergmann said, "Induction is the long arm of science," and he is clearly correct.  If we start from the contingency of empirical experience itself, as we must do in science, we must always recognize that we could get things wrong right out of the chute. We might find ourselves building a theory to account for a regularity in nature that is not a real regularity. 

Given that we have found data and some generalizations to explain, the next thing we must do is hypothesize a theory from which we can deduce ramifications that we can compare to our experience. A good scientific theory has several characteristics.  It must be applicable and adequate to the experimental data, internally consistent and coherent in its formulation, simple in that it posits as few theoretical entities or laws as it can to explain the data, and fecund in that it can ground a continuing research agenda.  All of these characteristics of the "best" theories are put to the task of explaining and predicting the empirical data.  

How does this way of doing things compare with theology?  I would argue that "theological theory" has data too, but that this data is not that which is "given" to the five senses. So what could be the data of theology?  One might argue that its data is revelation, but then one must further specify the identity conditions of the term 'revelation'.  Perhaps we say that there is a "revelation" of God's activity through Scripture and tradition.  But is the first primary over the second, and is there stratification within the former?  Are some parts of Scripture more revelatory than others, and if so, how are these differences normed? Moreover, cannot preaching confront the listener in a revelatory way, perhaps more so than simply reading Scripture? Furthermore, theologians often distinguish specific and general revelation, meaning by the latter some intimations within experience of that source which ultimately transcends experience. Perhaps "limit notions" or senses of ultimacy are part of human experience in ways that are useful data for theological theory.  

Assuming that there is data, there is often generalization of that data.  Talk of the experience of ultimacy or of the sacred is a generalization from particular experiences of this ultimacy or this sacredness.  A little reflection should convince the reader that the same general problems of extending the particular to the general apply in theology just as they do in science generally. 

If there can be some agreement on what theological data might be, we could then go to work on theory construction.  Just as scientific theory should be applicable to empirical experience as well as adequate to it -- that is, it must not only apply to the empirical, but apply deeply to all of the empirical -- so too must theological theory apply to our experience, to revelation, to our experience as human beings haunted by the question of God.  Moreover, the theory must be deep enough to cover all of that experience.  It must give an account of both the experience of the presence of God (immanence) and of God's absence (transcendence).  

The theory must be consistent, coherent, fecund, and simple. Just as with science generally, theology cannot have theoretical statements that contradict each other.  Why is this the case?  It is clear that from a contradiction any proposition whatsoever can be derived. Assume both P and ~P.  If P, then P or any arbitrary statement Q.  But since ~P holds, then given P v Q, Q must hold by disjunctive syllogism.  
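Set out as a derivation (Q is any arbitrary statement whatsoever):

  1. P          assumption
  2. ~P         assumption
  3. P v Q      from 1, by disjunction introduction
  4. Q          from 2 and 3, by disjunctive syllogism

A theory containing a contradiction would thus "prove" everything, and a theory that proves everything explains nothing.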

Moreover, just as in science, theological theory must be coherent.  This means that the fundamental terms of the theory must mutually presuppose each other, and that there are no ad hoc assertions holding this or that simply in order to account for the data.  The theory must also be simple, or at least as simple as a theory can be which asserts both the Trinity and the Incarnation.  Finally, the theory must be fecund.  One might argue that the Chalcedonian Definition has been extremely fruitful in the history of theology generally. Clearly, the notion of the Trinity has generated centuries of ongoing theological reflection.  

Despite these similarities, there are overwhelming dissimilarities between the two disciplines (Wissenschaften) as well, and I do not want to minimize them.  I have given the above sketch of similarities simply to get the conversation going.  I will in the next posts return to the field of the philosophy of science and discuss the Received View, the critique of the Received View, and the new vistas produced by historicism within the philosophy of science.