My notes on Causality and Chance will be limited to Bohm's general philosophy of natural law—his attempt to unify 'causal laws' and 'laws of chance' into a holistic view of nature as an infinitely layered and ever-evolving totality. His philosophy is often mesmerizing and impressive for its unifying effects on different areas of science and philosophy, but Causality and Chance is also eclectic and it never quite resolves the tensions that arise from the influence of conflicting philosophical currents.
His depiction of causality is colored by the positivist and empiricist view of natural laws as empirical regularities that enable one to predict sequences of events (what Carl Hempel would soon call 'covering laws'). Likewise, Bohm defends the frequency theory of probability, which claims that probabilities are objective features of external reality (a view that also appeals to empiricist sensibilities). Yet the book promptly reveals positivism's flaws, and Bohm's complete philosophy (like his own quantum theory) seeks to supersede the positivist framework, if not to exceed its coordinates altogether. His affinity for dialectical reason gives the work an entirely different feel than one would otherwise expect.
Bohm offers a number of insights which anticipated important developments in the philosophy of science. In this post I'll present a short summary of Bohm's argument and note its relevance to contemporary philosophy, including realist theory and Edwin Jaynes' theory of probability as logic.
Causal laws
To begin, he presents a theory of causal laws which proceeds from empirical regularities to successively deeper layers of explanation and understanding.
Empirical regularities
Here is his first stage of description:
The first thing that suggests causal laws is, of course, the existence of a regular relationship that holds within a wide range of variations of conditions. When we find such regularities, we do not suppose that they have arisen in an arbitrary, capricious, or coincidental fashion...we assume, at least provisionally, that they are the result of necessary causal relationships. (p. 4)
The next step in scientific investigation is to form a hypothesis 'which would explain these regularities and permit us to understand their origin in a rational way' (p. 4). Unfortunately, he does not expand much on this point.1 In any case, hypotheses generate new predictions which, when satisfied by careful observation, provide new kinds of evidence in their favor and possibly uncover new regularities which may call for the revision of existing ideas.
The identification of a law of nature with an empirical regularity unravels on closer inspection. Bohm uses the cause of malaria as a running example. To start, controlled observations in the field allowed researchers to learn that the disease could be removed by removing contact with the Anopheles mosquito or with other persons infected with malaria; this is simply an observed 'regularity'. But,
not everybody who is bitten by an infected mosquito gets sick. This fact is explained by a more detailed understanding of the processes involved in getting sick. Thus, the bacteria produces substances that interfere with the functioning of the body and tend to make a person sick. But the body can produce substances which interfere with the functioning of the bacteria. Thus, two opposing tendencies are set up. Which one will win depends on complex factors concerning the functioning of microbes and of the body, which are not yet fully understood. (p. 7)
Of course, from the positivist viewpoint, one might wish to say that the right set of sufficient conditions has simply not been identified. But the possibility has already arisen that every event has a potentially infinite number of causes, since a potentially infinite series of processes must converge at various layers of effective activity in order to form all of the necessary conditions for the event to happen.
To preserve the idea of causality as a regular association, one might try to enrich the narrative by distinguishing between immediate causes and background causes. This distinction allows one to identify those causes that seem most important for specific purposes:
The immediate causes may be defined as those which, when subjected to the changes that take place in a given context, will produce a significant change in the effects. The conditions may be defined as those factors which are necessary for the production of the results in question, but which do not change sufficiently in the context of interest to produce an appreciable change in the effects. (p. 10)
Planting the right seeds is an immediate cause of the crops, while properly watered soil may be considered a (necessary) background condition. But what one takes to be a fixed 'background' condition (like good soil and irrigation) may one day become so unreliable (due to drought, or non-regenerative techniques) that it cannot be taken for granted, and its own contingent nature comes to the foreground of attention. Bohm does not wish to defeat the idea of regularity but to relativize it, making it conditional on a specific system, structure, or 'background'.
The properties of things
If predicting the course of events using causal laws is one aspect of causality, then predicting the qualitative and quantitative properties of things forms a different, equally important aspect of causality. Here Bohm has in mind mundane transformations of things, such as how boiling an egg will cause it to become solid, as well as the ability to use theory (namely of the inner structures of things) to predict properties that have never been seen before; as when it was predicted from theory that 'uranium exposed to neutrons should be transformed into a new element, plutonium, that had not previously been observed or produced anywhere else' (p. 14). He is returning to the value of theories or hypotheses that, if true, enable us to understand things rationally, as distinct from a mere catalogue of observed regularities.
The most important part here is that the properties of things emerge or arise out of the nature of things themselves, and this speaks to the non-arbitrary or rational nature of natural law as a whole:
...the causal laws are not like externally imposed legal restrictions that, so to speak, merely limit the course of events to certain prescribed paths, but that, rather, they are inherent and essential aspects of these things. (p. 14)
He goes on, briefly, to advocate for an evolutionary approach that incorporates an understanding of how things have arisen into our knowledge of that thing and of its mode of being.2 But at this point, one really wants to hear more. Of course he does say more but none of it strikes me as being any more specific than the quoted passage. For example,
the causal laws satisfied by a thing...are inextricably bound up with the basic properties of the thing which helps to define what it is. (p. 14)
What is still not clear is whether these causal laws should still be understood as empirical regularities, or something else.
Summary
Bohm's concept of causal laws begins with the positivist view of causality as regularity, or what Harré calls the successionist view, but he quickly shows that any such regularity depends on the stability of certain background conditions and, no matter how fundamental they might appear to us, every one of them may ultimately prove to be a mere contingency (rather than some kind of universal firmament or heavenly-ordained permanency). This leads nicely to his observation that causality is not like an externally imposed law or restriction but, instead, something internal to things as they currently are. Some ambiguity remains because he does not fully explain (in these passages at least) his own understanding of the role for explanatory theory and why things exhibit the properties they do.
Laws of chance
At this point, Bohm has broken down the notion of 'empirical regularity' but he is still hanging onto it. He is criticizing it (or a deficient version of it) with the intention of preserving the core idea of what he considers to be causal laws. The idea of a one-to-one causal relationship—in which some cause A always has the same effect B—is an idealization which may be convenient but has been thoroughly undermined. If the one-to-one causal relation supposes a closed system, in reality every system is subject to both external changes (background changes that alter or destroy the system) and lower-level disturbances (as the health of an organism depends on lower-level, molecular processes). To adopt some language from critical realism, we could say that in Bohm's philosophy all systems are open systems.
If all systems are open systems, why is there any stability at all? Why are things in one sphere of activity not constantly undermined by fluctuations stemming from other causal spheres? The answer, says Bohm, has to do with the 'extremely complex and manifold character of the external contingencies' (p. 23). When a large number of forces act independently of one another, their effects tend to cancel out. There is just no chance that all molecules of gas in a container will spontaneously move in the same direction. Nor will all of the molecules in a book move upwards simultaneously such that the book would jump off the surface of the table. This stems from the relative independence of molecules from one another, or the effective absence of causal ties between them, and not from a causal necessity per se. The results are so reliable and so fundamental to the nature of our reality that it becomes fair to speak of the laws of chance as an aspect of natural law. Put differently: like causality, chance is a kind of natural necessity.
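Bohm's cancellation argument is, at bottom, the law of large numbers at work, and it can be sketched in a few lines of Python (a toy simulation of my own, not anything from the book): give each of n independent molecules a random push of +1 or −1, and the net displacement per molecule shrinks roughly like 1/√n as n grows.

```python
import random

random.seed(0)

def relative_fluctuation(n_molecules, n_trials=500):
    """Average |net displacement| per molecule when each of
    n_molecules receives an independent +/-1 'push'."""
    total = 0.0
    for _ in range(n_trials):
        net = sum(random.choice((-1, 1)) for _ in range(n_molecules))
        total += abs(net) / n_molecules
    return total / n_trials

# Larger aggregates show proportionally smaller net fluctuations.
for n in (10, 100, 2500):
    print(n, round(relative_fluctuation(n), 3))
```

The same arithmetic explains why the book never jumps off the table: the independent molecular motions overwhelmingly cancel, so the aggregate is stable even though every individual push is a 'chance' event.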
Like causality, chance fluctuation 'is quite often an inherent and indispensable part of the normal functioning of many kinds of things, and of their modes of being':
Thus, it would be impossible for a modern city to continue to exist in its normal condition unless there were a tendency towards the cancellation of chance fluctuations in traffic, in the demand for various kinds of food, clothing, etc., in the times at which various individuals get sick or die, etc. In all kinds of fields we find a similar dependence on the characteristic effects of chance. (p. 23)
This point sets up two further elements of Bohm's view of chance and probability:
- Chance is an objective feature of reality which cannot be reduced to an observer's state of knowledge or ignorance. This is despite the fact that all events across all layers of reality must have definite, if manifold or even infinite, causes.
- Causality and chance are two necessary aspects or sides of natural necessity: just as every chance contingency has a definite cause (outside one's area of focus), laws of chance are part of the way that causal laws are realized.
Predicting frequencies
The first point, that chance or probabilities cannot be reduced to an observer's state of knowledge or ignorance, leads Bohm to defend the frequency theory of probability. The alternative, epistemic view of probability would say that, for example, in the game of dice we assign equal probabilities (1/6) to each possible outcome because the information we have about each outcome is symmetric. According to Bohm, this logical view of probability supposes that probability 'would cease to be necessary or even to have meaning' once we obtain enough information to predict the outcome (p. 26). The logical view fails to exhaust the subject of probability because it supposedly 'gives us no idea at all of why probability can be used to make approximate predictions about the actual relative frequency with which a given face of the die will be obtained after a large number of throws' (p. 26).
As Bohm argues, our ability to apply probability theory in science
depends only on the objective existence of certain regularities that are characteristic of the systems and processes under discussion, regularities which imply that the long run or average behavior in a large aggregate of objects or events is approximately independent of the precise details that determine exactly what will happen in each individual case. (p. 27)
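Bohm's claim that long-run behavior is 'approximately independent of the precise details' of individual cases is easy to illustrate (a hedged sketch of mine, using a simulated die rather than any example from the text): two runs with entirely different sequences of individual outcomes converge on nearly identical face frequencies.

```python
import random
from collections import Counter

def face_frequencies(n_rolls, seed):
    """Relative frequency of each die face over n_rolls,
    for one particular 'detailed history' fixed by the seed."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, 7)}

# Different seeds = different individual histories; the long-run
# frequencies nonetheless cluster around 1/6 in both cases.
for seed in (1, 2):
    freqs = face_frequencies(60000, seed)
    print(seed, {f: round(p, 3) for f, p in freqs.items()})
```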
The emphasis returns to his qualified empirical regularities, which are not 'absolutes' but which are germane to the systems that one is focused on.
Notice that Bohm's critique of logical probability assumes a nominalist perspective for which logic is a purely formal system which is divorced from reality rather than being about reality, as discussed below.
One-to-many and many-to-one relations
The second point is that chance can be found within causal laws as a necessary aspect of how they function.
Again, Bohm holds that 'causal laws' consist of qualified empirical regularities. But because (a) reality is stratified into manifold domains which are relatively independent of one another, and (b) the fluctuations arising from these domains are so numerous that they tend to cancel each other out, enabling stability, we find that the effects of a stable cause are not one-to-one but do stay within certain bounds. Causal laws are necessarily one-to-many relations. He illustrates this with the example of a gun firing in a controlled way at a target—no matter how precise and controlled one becomes, projectiles will cluster around the points predicted from physical laws.
One-to-many causal relations find their complement in many-to-one relations, where
many different kinds of causes can produce essentially the same effect. An example is that all the rain that fall within a certain watershed will, independently of precisely where it drops, reach the sea in a certain place (i.e. where the main river of the watershed flows into the sea)...Thus, in physics, if a body is disturbed or set into motion when it is near a position of stable equilibrium, it will eventually (because of friction) come back to its equilibrium position, independently of a wide range of possible initial motions. (p. 17-18)
Qualitative transformations always have this many-to-one character, as in the way water becomes steam (a singular transformation) when any variable amount of heat is applied, so long as the amount passes a certain threshold.
Usually an example will easily lend itself to either one of these depictions (one-to-many or many-to-one), but on closer inspection the two are found together as different aspects of causal laws. The exact amount of force applied to the trigger of a gun will vary each time it is fired, making this one-to-many relation also a many-to-one relation.
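The many-to-one relation in Bohm's friction example can be verified numerically. Here is a minimal sketch (my own illustration, assuming a simple damped oscillator integrated with semi-implicit Euler steps, not an example from the text): widely different initial positions and velocities all settle to the same equilibrium.

```python
def settle(x0, v0, k=1.0, damping=0.5, dt=0.01, steps=5000):
    """Integrate a damped oscillator x'' = -k*x - damping*x'
    and return the final position after the motion dies out."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * x - damping * v
        v += a * dt   # semi-implicit Euler: update velocity first
        x += v * dt
    return x

# Many different initial motions -> one final state (equilibrium at 0).
for x0, v0 in [(1.0, 0.0), (-2.0, 3.0), (0.5, -1.5)]:
    print(x0, v0, round(settle(x0, v0), 4))
```

The one-to-many aspect would enter the same sketch by adding a small random perturbation to each run: the final states would then scatter within narrow bounds around the equilibrium rather than coincide exactly.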
Philosophy after Bohm
The following are some thoughts on how well Bohm's project has aged and how it relates to some more prominent philosophies.
Quantum theory as normal science
Bohm was writing at a time when positivist philosophy was ascendant. In more than one respect, Causality and Chance offered an alternative. Most important in this respect must be Bohm's own quantum theory, which Louis de Broglie praised in his foreword to the text. After explaining why in the quantum realm we are prevented 'in general from predicting precisely the result...allow[ing] only statistical predictions', de Broglie sets the stage for Bohm's intervention:
The construction of purely probabilistic formulae that all theoreticians use today was thus completely justified. However, the majority of them, often under the influence of preconceived ideas derived from positivist doctrine, have thought that they could go further and assert that the uncertain and incomplete character of the knowledge that experiment at its present stage gives us about what really happens in microphysics is the result of a real indeterminacy of the physical states and of their evolution. Such an extrapolation does not appear in any way to be justified. (p. xi)
For quantum theory, the importance of Bohm's philosophy of natural law is that it views the philosophical 'problem' of quantum mechanics, namely our inability to predict exact outcomes, to be typical of the scientific research process. As de Broglie notes,
He has shrewdly and carefully analysed the idea of chance and has shown that it comes in at each stage in the progress of our knowledge, when we are not aware that we are at the brink of a deeper level of reality, which still eludes us. (p. xi)
The (pseudo-) regularities that are identified by science, and are undergirded by natural necessity, are always and inevitably one-to-many and many-to-one relations, not one-to-one. In this respect, quantum mechanics is not special.3
(I have no stake at all in arguments about physics per se, and little ability to judge them either. Bohm is careful to distinguish between physics and philosophy while recognizing how the two influence one another, all of which makes his philosophy accessible.)
Natural necessity and unpredictability
Given his critique of positivism in physics, it is all the more striking that Bohm's philosophy, in the end, still fixates on empirical regularities as the stuff of natural law. What is absent, or at least not explicit enough for my liking, is an alternative causal concept that stands apart from the notion of the empirical regularity.
Reading Bohm's attempt to escape positivism's orbit leaves me with a greater appreciation for Harré and Madden's Causal Powers and Roy Bhaskar's A Realist Theory of Science, which sever the idea of a regularity from the scientific concept of causality (or show these concepts to be united, but only in a contingent manner). For them, causal knowledge is about the capacities, liabilities, tendencies, or causal powers that things possess by virtue of their structures and the mechanisms embedded in those structures. The idea of a mechanism allows us to understand why empirical regularities can be produced in the laboratory and in nearly-closed systems (where the mechanism can act undisturbed, under constant conditions). It is trivial to see that the world is such that any prediction can be upset by a disturbance from its outside, and that empirical regularities generally do not appear until we try to produce them in a relatively controlled setting. Experiments are important—and this gets us to the importance of the distinction between causality and regularity—because we have reason to believe these causal powers remain active 'in the wild', which is to say, even when empirical regularities break down.
In other respects, Bohm's ideas have reappeared many times since publication. Bohm's view of qualified regularities and stratified reality fits nicely with today's realist philosophy. Nancy Cartwright's The Dappled World argues that regularities per se are not the stuff of natural law; rather, one can build a system that has certain properties or systemic tendencies (e.g., a welfare state). Likewise, one can disrupt or destroy that system; but only a fool would say that one has changed Nature or natural law in doing so (again, natural necessity or mechanism is shown to be distinct from a regularity). Tony Lawson introduces the language of demi-regularities to speak of basically the same thing: regularities that hold as long as a particular system or structure persists. Demi-regularities are very real, but they are contingent and subject to disruption. These ideas reconcile natural necessity with unpredictability and the inevitable transformation of structures.
One of the more unfortunate parts of Bohm's work is that he speaks quite crudely of 'statistical laws', which he suggests are evident any time there is a statistical trend. Appending a phrase like 'one-to-many relation' to such 'statistical laws' only decorates the mistake. Those passages reminded me of the physicists hired by Wall Street who believed that housing prices would continue climbing (forever?) because their statistical models predicted it—until the crash of 2008.
The logic of probability
Finally, Bohm's embrace of the 'objective' frequency theory of probability was temporary. As he concluded in private, with the frequency theory ‘we are led around and around in a conceptual circle, and the real meaning of none of the concepts can be pinned down’ (cited by Talbot 2017, p. 49–50). As with causality, Bohm's perspective on chance was bogged down by a positivist dogma but still managed to be fresh and insightful. Bohm's broader argument about chance and natural law remains interesting because it never actually needed the frequency theory and seems to be even better supported (if modified) by the logical view of probability.
As Bohm notes, probability distribution functions actually do resemble the frequency distributions that are exhibited by all sorts of series of events. If the logical theory claims that probability is only about our ignorance or incomplete information, then it indeed fails to explain this remarkable resemblance. The most complete reply to arguments like Bohm's was given by E. T. Jaynes. Jaynes published his breakthrough papers on this topic in 1957, the same year that Causality and Chance was published.
The flaw in Bohm's critique is that it assumes a nominalist perspective on logic, for which logic is entirely severed from reality.4 Bohm's argument about all causality being one-to-many is a reference to the way physical systems tend towards maximum entropy or chaos; systems being what they are, the entropy reaches some maximum which is set by the constraints of the system (until the system is disrupted, of course). Bohm's discussion of the innumerable and relatively independent layers or nested scales of reality is a way of explaining how even an apparently simple system can be subject to fluctuations which follow the laws of chance. But this still leaves the most important question unanswered: what is it about the common probability distributions (binomial, Poisson, Gaussian...) that accounts for their predictive success? Why do their predictions so often resemble actual frequency distributions of events? Bohm, like other frequency theorists, never attempts such an explanation.
This is the question that Jaynes set himself to answering. What Jaynes argues is that the reason the common probability distributions (as mathematical functions) predict frequency distributions so well is that the distribution functions are also maximum entropy functions (each for a given set of mathematical constraints). Jaynes then develops a very clear analogy between entropy maximization as a property of physical systems and uncertainty maximization as a normative part of logic (i.e., one must acknowledge all uncertainty that is permitted by the known constraints of a problem). Since uncertainty can be measured using an entropy formalism, the two are mathematically the same but substantively distinct.
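Jaynes' principle can be checked in miniature for the die (a sketch of the idea only, not Jaynes' own derivation): with no constraint beyond normalization, the uniform assignment of 1/6 to each face has strictly greater Shannon entropy than any biased alternative, which is exactly why symmetric information yields equal probabilities.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum(p_i * ln(p_i)), in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [1/6] * 6                                # maximum-entropy assignment
biased  = [0.30, 0.20, 0.15, 0.15, 0.10, 0.10]     # an arbitrary alternative

print(round(shannon_entropy(uniform), 4))  # ln(6) ≈ 1.7918
print(round(shannon_entropy(biased), 4))   # strictly smaller
```

Adding constraints (say, a known mean) shrinks the feasible set, and maximizing entropy within it yields the familiar exponential-family distributions, which is the general pattern behind Jaynes' account of why those functions predict frequencies so well.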
References
Bhaskar, Roy. 1975. A Realist Theory of Science. Verso.
Bohm, David. 1957. Causality and Chance. University of Pennsylvania Press.
Jaynes, E. T. 2003. Probability Theory: The Logic of Science. Cambridge University Press.
Talbot, Chris, ed. 2017. David Bohm: Causality and Chance, Letters to Three Women. Springer.
Endnotes
1. It may be tempting to say that this reference to explanatory hypotheses constitutes a non-positivist position (not tied to the idea of cause-as-regularity), but in fact how one understands this point about hypotheses depends entirely on one's own philosophical predilections. He hasn't yet given us enough information to know where he is headed.
2. Bohm's interest in the evolutionary aspect of things hints at his affinity for dialectics. An appropriate example of this viewpoint would be the role of history in Marx's Capital, in the sense that Marx identifies a system of industrial capitalism which has its own law-like tendencies (and counter-tendencies) but also details some of the long-term historical processes which had to happen before such a system could have become possible (namely, the enclosure of the commons and vast accumulations of merchant's capital).
3. Harold Jeffreys shared a similar view in his Theory of Probability. He expresses dismay at the general response to Heisenberg's uncertainty principle because he was under the impression that every physicist already knew this to be true of every possible experiment, not just in quantum physics. His point being: we never make exact predictions (successfully), and the magnitude of error has to be seen as part of the law. This is all quite similar to Bohm's philosophy of natural law, but Jeffreys was squarely in the analytic tradition, with no hint of dialectical thinking.
4. If memory serves, reading Jeffreys' derivation of the normal distribution is another way, apart from reading Jaynes, to dispel this nominalist fallacy. See his Scientific Inference (Cambridge University Press).