The Quietist’s Case

“The alternatives are not placid servitude on the one hand and revolt against servitude on the other. There is a third way, chosen by thousands and millions of people every day. It is the way of quietism, of willed obscurity, of inner emigration.”
J.M. Coetzee

“It is a stupidity second to none, to busy oneself with the correction of the world.”
Molière

The word “quietism” has been used to characterize a number of distinct but related phenomena. Perhaps its oldest use refers to a heretical stream within Catholicism that emphasizes self-sufficiency, mysticism, and a withdrawal from worldly affairs. Quietist tendencies have also been identified in other religions, such as Islam, Jainism, Buddhism, and ultra-Orthodox Judaism, where they denote a conscious separation from social and political engagement. In a more general sense, quietism is often used to characterize those individuals or schools of thought that (passively) accept existing political arrangements and/or refrain from political engagement. When used in this manner, the word quietism usually has a negative connotation. For example, in her take-down of philosopher Judith Butler, Martha Nussbaum repeatedly claims that Butler’s positions can give rise to a passive or hip “quietism.” In his book “Waking Up: A Guide to Spirituality Without Religion,” Sam Harris writes: “Focusing on training the mind to the exclusion of all else can lead to political quietism….” Suffice it to say that Sam Harris does not find this appealing.

In fact, one cannot escape the impression that to some observers the type of quietism that aspires to withdrawal from political engagement is perceived to be as bad as, or even worse than, someone fighting for the wrong cause. Especially in an era that places great emphasis on social and political engagement, quietism is seen as insensitive, immoral, or elitist – a pastime only available to the privileged.

Can a more positive case be made for political quietism? What would this entail? And how might a quietist respond to the negative perception of such a stance?

A number of secular arguments for political quietism can be identified:

1. Political quietism as a consequence of moral nihilism. If there is no objective justification for any kind of normative ethics over another, the case for advancing a particular political ideology is weakened and an individual may decide to simply withdraw from political engagement of any kind. Such an individual may respond to the political engagement of others with incomprehension, amusement, or sadness, depending on temperament.

2. Political quietism as a consequence of the recognition of the futility of political engagement. This position extends the orthodox economic argument about the negligible effect of one’s individual vote in a democracy to political engagement in general. Such a quietist may still have a preference for certain social and political arrangements but has resigned himself to the fact that, as a general rule, he has little influence over them.

3. Political quietism as a response to the irrationality associated with the practice of politics. This position emphasizes the ways in which politics triggers all kinds of ancient tribal instincts, group-think, anger, and violence. This position is well described by Joseph Schumpeter:

“The typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. He becomes primitive again.”

This aversion to politics may not necessarily translate into political quietism and can also give rise to “political” efforts to replace political decision making with some kind of market-based decision making in which participants actually have “skin in the game.” If one considers politics to be a fundamental and unalterable part of life, however, an alternative response would be to withdraw from it altogether.

4. Political quietism as an aesthetic response. In this form of quietism, what is most objected to in politics is its vulgarity. To such a political quietist, debating, organizing, marching, and shouting slogans debase the person involved. As Michael Oakeshott wrote, “Political action involves mental vulgarity, not merely because it entails the occurrence and support of those who are mentally vulgar, but because of the simplification of human life implied in even the best of its purposes.” He might still prefer one social arrangement over another, but it would need to be achieved through education, individual virtuous behavior, and silent (non) consent. This kind of response to politics would be expected to be even stronger if politics is also considered arbitrary, stupid, and ineffective.

5. Political quietism as a response to political alienation. An individual (or group of individuals) may decide that the political environment of their era is so fundamentally opposed to their own political outlook that any kind of political engagement would be pathetic, painful, and meaningless. This form of quietism is distinct from the general economic argument about the futility of political action and is specific to time and place: think of a national socialist in post-war Germany, an advocate of hereditary monarchy in the United States, or a proponent of laissez-faire capitalism during most of the 20th century.

Is political quietism possible? One might object that the “personal is political” and that withdrawal from politics is impossible in principle: not to participate in politics is itself a political act. An obvious rejoinder is that this response does not leave much conceptual space between personal morality and collective action. For political quietism to have meaning, politics must refer to something beyond the rather obvious recognition that each person’s actions (or lack thereof) have effects on others. The “political” in political quietism discussed here refers to the (conscious) shaping and influencing of the structures that enforce norms, collective decision making, laws, and government, i.e., the “supra-individual” realm.

A milder variant of this critique is to state that you may not be interested in politics, but politics is interested in you. It seems rather obvious that if a quietist position on politics is conjoined with ignorance about the political process and about social, political, and cultural trends in general, all kinds of unexpected, bad things can happen in one’s life. This would be a kind of ignorant quietism – one mostly associated with the kind of religious quietism that categorically avoids knowledge of, and participation in, the modern world. A more secular quietism does not need to have this characteristic and can incorporate knowledge of social, economic, and cultural trends to make rational individual decisions. One might even argue that a response that confines itself to what we can meaningfully influence actually empowers the person.

Is political quietism unethical (immoral, wrong, etc.)? To a political quietist of a nihilist persuasion this question is nonsensical because it assumes the very thing that needs to be established: that there is an objective set of normative guidelines that humans can and should translate into political action. As for the other variants of political quietism, a plethora of rejoinders is available to their adherents as well. Is abstaining from futile acts wrong? How can it be wrong to withdraw from the stupidity, violence, and ugliness that is intrinsic to political activity? The political quietist may not have an iron-clad case, but his position can draw from a wide variety of metaphysical, religious, existential, psychological, economic, and cultural-aesthetic traditions.

Does the political quietist even have to “make” or “defend” his case? There is a type of political quietism that follows from the recognition that society does not have a “goal” or “purpose” that political action should bring about (or maintain). The quietist may consider this kind of “teleological” thinking about society naive, quasi-religious, and restrictive. It is often this kind of quietism that upsets people the most, because this quietist refuses to “play the game” at all.

The Unrepentant Nihilist

The topic of nihilism raises two important questions: What do we mean by nihilism? And what are the consequences of nihilism? (Is it a disease or a cure?)

In her book “The Banalization of Nihilism: Twentieth-Century Responses to Meaninglessness” (1992) Karen Carr distinguishes between:

  1. Epistemological nihilism (the denial of the possibility of knowledge)
  2. Alethiological nihilism (the denial of the reality of truth)
  3. Metaphysical or ontological nihilism (the denial of an independently existing world)
  4. Ethical or moral nihilism (the denial of the reality of moral or ethical values)
  5. Existential or axiological nihilism (the feeling that life has no meaning).

Some forms of nihilism imply other forms of nihilism. For example, if one denies the possibility of knowledge or truth then this renders the idea of normative ethics void. On the other hand, one can believe that there is an objective world of which true knowledge is possible but also hold that all moral preferences are subjective and life has no objective meaning. In fact, the desire for knowledge and truth can turn against the idea of an objective morality. As Nietzsche observed: “But among the forces cultivated by morality was truthfulness: this eventually turned against morality, discovered its teleology, its partial perspective–and now the recognition of this inveterate mendaciousness that one despairs of shedding becomes a stimulant.”

The main concern of Carr’s book is whether nihilism is considered a “crisis” with transformative and redemptive powers (as in Nietzsche or Karl Barth) or instead a “rather banal characterization of the human situation” that should be welcomed and celebrated as an antidote to dogmatism, a view she associates with the writings of Richard Rorty and contemporary deconstructionists and anti-foundationalists. Carr herself does not welcome this “joyous affirmation” of nihilism because she believes that such an anti-dogmatic position produces the paradoxical effect of reinforcing “dominant social beliefs and practices of our culture” and the “absolutization of the dominant power structures of the culture to which we belong,” because it cannot appeal to any critical (objective) standard outside of itself.

Carr’s position is puzzling for a number of reasons. It is not clear at all that nihilism would have the effect of reinforcing existing power structures. Most power structures and cultural norms are in fact based on residual beliefs about objective morality. It is also not clear why an abandonment of truth would have a reinforcing effect instead of a transformative effect. Carr herself writes that “one is left with simply the blind assertion of one’s private will; if the particular community to which one belongs does not support one’s will, one simply finds (or creates) a community more sympathetic to one’s tastes.” But this scenario of continuous power struggle and creating one’s own communities sounds rather dynamic, not static.

What she really appears to fear is a situation where critical thinking with universalist aspirations is replaced by a more individualist Hobbesian perspective in which “disagreements…deteriorate into contests of power.” A more cynical (or “nihilistic”) observer may point out that this has always been the condition of mankind and that the kind of critical perspectives she feels are needed have always been rhetorical tools in power struggles and lack credible cognitive merit.

She approvingly quotes Thomas McCarthy, who writes that “critical thought becomes aestheticized and privatized, deprived of any political or social implications. There can be no politically relevant critical theory and hence no theoretically-supported critical practice.” But is this a defect or a virtue of nihilism? Is this a disease or a cure? This assessment essentially captures a modern, scientific view of the world, in which morality and culture are emergent properties of evolution and politics is best understood in a “contractarian” framework where individual preferences, coordination, and bargaining create moral and cultural conventions. Such an outlook might be considered a major improvement over religion, or over the vacuous nature of most “critical theory.”

Moral Rhetoric in the Face of Strategic Weakness

Even people who are inclined to believe in a universal, objective foundation for morality are sometimes prone to the impression that in certain situations invoking “moral” arguments is rather insincere. For example, moral arguments in favor of (income) equality are often dismissed by libertarian-leaning individuals as just a sanitized expression of resentment and envy by “losers.” But can this argument be generalized? Is moral rhetoric simply a way of pulling someone’s leg, and often employed when faced with a poor bargaining position? In a remarkable experimental philosophy paper, Moral Rhetoric in the Face of Strategic Weakness: Experimental Clues for an Ancient Puzzle (1997), Yanis Varoufakis throws some much-needed light on this topic.

A series of elegant games was designed to test the hypothesis that the “strong” would have a tendency to maximize their strategic advantage and the “weak” would have a tendency to choose “quasi-moral acts,” even when this is clearly against their own interests. In all three variants of the game, the cooperative solution was dominated by the solution to “cheat” but, quite remarkably, as the incentive of the “strong” to “cheat” increased, the “weak” displayed even more “cooperating” behavior. In the third version of the game, the tendency of the “weak” to cooperate declined slightly, but only because the payoff for the “strong” to cheat was decreased (though cheating remained a dominant strategy). Since the participants alternated between being “strong” and being “weak,” and long-term reputation effects were ruled out by not allowing the same pair of players to play the game twice in a row, we cannot claim that different kinds of people play the game differently, or that the cooperative behavior of the “weak” was motivated by reputation effects. And since players varied their strategy depending on whether they were in a strong or weak bargaining position, moral theories that would predict that players in both roles would recognize the value of having a cooperative disposition (a la David Gauthier) can be dismissed, too.

Since it never makes sense in these games to cooperate against an uncooperative opponent, the most credible explanation of the tendency of the “weak” to cooperate is that this kind of behavior (or rhetoric) comes with being in an “unfavorable” strategic situation (i.e., one’s “social location”). As the author of the paper notes, “Many (and on occasion most) of our participants swapped happily their cooperative choices for strategic aggression when they moved from the weaker to the stronger role.”
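To make the strategic structure concrete, here is a stylized version of such a game. The payoffs below are purely illustrative and are not the parameters Varoufakis actually used; they are chosen only so that cheating strictly dominates cooperating for both roles and the temptation to cheat is larger for the “strong” player. A few lines of code verify the dominance relation that makes the observed cooperation of the “weak” so puzzling on purely strategic grounds:

```python
# Stylized asymmetric game, loosely inspired by the setup described above.
# Payoffs are hypothetical: "cheat" strictly dominates "cooperate" for both
# roles, and the strong player's temptation to cheat is larger.
payoffs = {
    # (strong_move, weak_move): (strong_payoff, weak_payoff)
    ("cooperate", "cooperate"): (6, 4),
    ("cooperate", "cheat"):     (1, 5),
    ("cheat",     "cooperate"): (9, 1),
    ("cheat",     "cheat"):     (3, 2),
}

def dominates(role, action, other_action):
    """True if `action` yields a strictly higher payoff than `other_action`
    for `role` against every possible move of the opponent."""
    idx = 0 if role == "strong" else 1
    opponent_moves = ["cooperate", "cheat"]
    def pay(a, o):
        key = (a, o) if role == "strong" else (o, a)
        return payoffs[key][idx]
    return all(pay(action, o) > pay(other_action, o) for o in opponent_moves)

for role in ("strong", "weak"):
    print(role, "cheat dominates cooperate:", dominates(role, "cheat", "cooperate"))
# Both lines print True: cooperation is never a best reply, which is what makes
# the cooperation of the "weak" players hard to explain in strategic terms.
```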

What to make of these results? For one thing, they could be seen as evidence that “power corrupts” and that the (formerly) “oppressed” will exhibit the same kind of aggressive behavior when they are in a position to become the oppressors. This is a popular view, and it does not seem these experimental results contradict it. This perspective also seems to reinforce political views that aim for the abolition of political power (anarchism) instead of giving all people (as represented by parties or coalitions) equal access to it (democracy). Of course, differences in bargaining power do not disappear in the absence of political power so even in a stateless society we would still expect to see the tendency of those in a strategically “weak” position to moralize. Also, in the real world there will often be “reputation” effects, and we would also expect people with natural (hereditary) advantages to find themselves more often in a stronger bargaining position.

It is undeniable, however, that “moral rhetoric” is often used by those in power, too (sometimes even more so), instead of just naked appeals to strategic advantage. In a sense one could argue that in modern societies the division of resources is not exclusively settled by strategic advantage (or strength) but by a combination of strategic self-interest and moral rhetoric. We would then expect political actors who reconcile self-interest (or group interest) with evolved (“hardwired”) moral outlooks (egalitarianism) to prevail.

Experimental evidence that those in a weak strategic position tend to play the “morality card” does not necessarily imply that the idea of an objective morality is a chimera. Many people still seem to believe that universal normative ethics is possible. On the other hand, a position of moral skepticism or moral nihilism does not mean that morality can be exclusively explained as a (psychological) response to a weak strategic position. In this sense, studies like these cannot provide definitive answers concerning the truth value of normative statements (or the lack thereof) or the evolutionary psychology of moralizing. Also, the tendency to cooperate is not identical to moral rhetoric (or moral behavior in general), and additional research is needed to further differentiate between the two in the face of strategic weakness.

Our best understanding of moral behavior at this time is that it is an outcome of evolution and specific to species and their life history. In such an evolutionary perspective the question of which moral perspective is “correct” simply does not make sense. As this understanding of morality comes to dominate in society, bargaining will gradually come to replace traditional ethics, and moral rhetoric will increasingly be seen as either ignorance or (deliberate) manipulation. Such a development could be classified as the end of “morality” as we know it, but it can also be seen as the beginning of an era in which modern (secular) humans arrive at a new understanding of what morality means. It is difficult to predict what a society will look like in which “a-moral” humans settle disagreements and conflicts about scarce resources exclusively by strategic interaction and conventions, but some efforts to understand and predict this have been made by writers like David Gauthier, Anthony de Jasay, and James Buchanan (albeit from different perspectives).

David Gauthier revisits Morals by Agreement

“The prohibition on bettering by worsening seems to me to lie at the core of any adequate social morality.” David Gauthier, 2013

In May 2011, York University in Toronto organized a conference celebrating the 25th anniversary of David Gauthier’s Morals by Agreement. Gauthier’s own contribution to the conference, “Twenty-Five On,” was published in the July 2013 issue of Ethics. Since Gauthier has published only sporadically since the start of this millennium, his article provides a valuable resource for understanding how his views have changed since the publication of Morals by Agreement.

Gauthier identifies his contractarian approach as an alternative to “Kantianism or utilitarianism” and contrasts the maximization paradigm of classical game theory with Pareto-optimization:

“Instead of supposing that an action is rational only if it maximizes the agent’s payoff given the actions of the other agents, I am proposing that a set of actions, one for each agent, is fully rational only if it yields a Pareto-optimal outcome….To the maximizer’s charge that it cannot be rational for a person to take less than he can get, the Pareto-optimizer replies that it cannot be rational for each of a group of persons to take less than, acting together, each can get.”
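The contrast between the two paradigms is easiest to see in an ordinary Prisoner’s Dilemma. The payoffs below are the textbook ones, not an example taken from Gauthier: mutual best replies lead both agents to defect, yet mutual defection is precisely the outcome a Pareto-optimizer rules out, since acting together both agents can do better. A minimal sketch:

```python
from itertools import product

# Standard Prisoner's Dilemma payoffs (illustrative, not Gauthier's own numbers).
# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
moves = ["C", "D"]

def best_replies(player, opponent_move):
    """Moves that maximize the player's own payoff given the opponent's move."""
    idx = 0 if player == "row" else 1
    def pay(m):
        key = (m, opponent_move) if player == "row" else (opponent_move, m)
        return payoffs[key][idx]
    best = max(pay(m) for m in moves)
    return [m for m in moves if pay(m) == best]

# The maximizer's solution concept: mutual best replies (Nash equilibrium).
nash = [(r, c) for r, c in product(moves, moves)
        if r in best_replies("row", c) and c in best_replies("col", r)]

# The Pareto-optimizer's criterion: outcomes that no other outcome improves on
# for one agent without making the other worse off.
def pareto_optimal(outcome):
    u = payoffs[outcome]
    return not any(v[0] >= u[0] and v[1] >= u[1] and v != u
                   for v in payoffs.values())

pareto = [o for o in payoffs if pareto_optimal(o)]

print("Nash equilibrium:", nash)          # [('D', 'D')]
print("Pareto-optimal outcomes:", pareto) # every outcome except ('D', 'D')
```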

Gauthier’s rational cooperators (the updated version of his “constrained maximizers”) do not “bargain” and interact on a maximizing basis but seek agreement using the principle of “maximin proportionate gain” (previously called “maximin relative benefit”). Unlike in Morals by Agreement, Gauthier does not really discuss under which conditions these issues are relevant, but perhaps they come into play in the production of “public goods.” After all, as has been argued by philosophers such as Jan Narveson, without such an argument Gauthier’s Lockean proviso can do all the important work without any need to consider the distribution of goods arising from public action. As Anthony de Jasay has written:

“Output is distributed while it is produced. Wage earners get some of it as wages in exchange for their efforts; owners of capital get some of it as interest and rent in exchange for past saving. Entrepreneurs get the residual as profit in exchange for organization and risk bearing. By the time the cake is “baked,” it is also sliced and those who played a part in baking it have all got their slices. No distributive decision is missing, left over for “society” to take.”
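As for the principle Gauthier’s rational cooperators do use, “maximin proportionate gain” can be given a simple numerical illustration. On the standard reading of the principle, each agent’s proportionate gain at an outcome is the share of his maximum feasible gain (over the non-cooperative baseline) that the outcome actually delivers, and the agreed outcome is the one that maximizes the smallest such share. The numbers below are purely hypothetical:

```python
# A minimal numerical sketch of "maximin proportionate gain" (maximin relative
# benefit), on the standard reading of Gauthier's principle. All numbers are
# hypothetical. For each feasible cooperative outcome, an agent's proportionate
# gain is (payoff - baseline) / (best feasible payoff - baseline); the principle
# selects the outcome that maximizes the smallest proportionate gain.

baseline = {"A": 2, "B": 1}   # payoffs without cooperation (the initial position)

# Feasible cooperative outcomes: name -> (payoff to A, payoff to B)
outcomes = {
    "x1": (10, 3),
    "x2": (7, 6),
    "x3": (4, 8),
}

best = {
    "A": max(u for u, _ in outcomes.values()),   # A's claim point: 10
    "B": max(v for _, v in outcomes.values()),   # B's claim point: 8
}

def min_proportionate_gain(outcome):
    u_a, u_b = outcomes[outcome]
    gain_a = (u_a - baseline["A"]) / (best["A"] - baseline["A"])
    gain_b = (u_b - baseline["B"]) / (best["B"] - baseline["B"])
    return min(gain_a, gain_b)

chosen = max(outcomes, key=min_proportionate_gain)
print(chosen, {o: round(min_proportionate_gain(o), 2) for o in outcomes})
# x2 is selected: its worst-off proportionate gain (0.625) beats x1 (about 0.29)
# and x3 (0.25), even though x1 and x3 are better for one of the two agents.
```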

Interestingly enough, Gauthier has strengthened the role of his revised Lockean proviso:

“The proviso is not the whole of morality or even the last word, but it is, I believe, the first word. It provides a default condition that may be appealed to set a baseline for social interaction.”

It does not seem Gauthier has substantially revised his interpretation of the Lockean proviso. In a nutshell, the proviso forbids bettering oneself at the expense of another person. As such, the proviso can be “sharpened as a weapon of reason against parasitism.” As Gauthier appears to recognize in his discussion of “Robin Hood,” the proviso does not seem to leave much room for coerced income re-distribution where one party is worsened for the benefit of another (provided the proviso was not violated prior to this action). In his final remarks in an online discussion that his paper triggered, he writes:

“Any form of redistribution may involve a proviso violation, and so is prima facie wrong. Whether the violation is actually justified depends on (among other considerations) whether it rectifies an earlier wrong.”

While Gauthier has often followed John Rawls in characterizing society as a “cooperative venture for mutual advantage,” he now prefers the phrase “mutual fulfillment” because mutual advantage puts too much emphasis on “competitive or positional orientation” and is too restrictive. This change of wording, however, does not fundamentally change the contractarian framework that Gauthier advocates. In fact, one could argue that the word “contractarianism” suffers from a similar defect in characterizing his approach to morality.

Perhaps the most interesting part of the paper is where Gauthier reflects on the nature of his contractarian enterprise. In Gauthier’s opinion, absent a plausible justification of Kantian and utilitarian approaches, the Hobbesian contractarian approach is the only credible road to a modern, rational approach to morality. As evidenced by his emphasis on the Lockean proviso, Gauthier’s contractarianism is not aimed at conferring legitimacy on whatever outcome results from markets and bargaining, because this would privilege conditions that reflect prior violations of the proviso. As such, his contractarianism is not an exclusively forward-looking approach that uses the status quo as a starting point. He writes:

“The key idea is that the best justification we can offer for any expectation or requirement is that it could be agreed to, or follow from what could be agreed to, by the persons subject to it, were they to be choosing, ex ante, together with their fellows, the terms of their (subsequent) cooperation. The hypothetical nature of the justification is clear—if, per impossible, you were to be choosing, together with your fellow humans, the terms on which you would interact with them, then what terms would you accept? Those are the terms of rational acceptance, the terms that you, as a cooperator, have good reason to accept given that others have like reason. “

In reality this requirement can, of course, produce vigorous discussion because it is rather challenging to demonstrate objectively who has unjustly benefited from violations of the proviso/contractarian approach and to what degree. This challenge is further exacerbated by the fact that, over time, groups that were deprived of their liberties have been granted special privileges by governments to offset such events. It is also not clear how the individualist assumption embodied in Gauthier’s contractarianism can be squared with compensating victims (ranging from taxpayers to minority groups) by anyone other than the specific individual(s) who engaged in behavior that violated the proviso.

Gauthier discusses three different objections to his contractarian approach.

The first is the objection that only actual contracts are binding. Gauthier replies that “actual agreement would not show that the terms agreed to were rational, since it privileges existing circumstances. The contractarian test, in taking the ex ante perspective, removes that privilege.” This perspective may sound overly optimistic because it requires that people who think about ex-ante agreement reach a specific determinate result (see below). In response to Gauthier, however, one could argue that there is an interesting asymmetry here. While the existence of a contract does not necessarily reflect (non-coerced) rational agreement, a person who denies and can demonstrate not having agreed to a certain obligation (as is the case with most government obligations) provides reasonably good evidence that the contractarian test has failed.

A second objection to the contractarian framework is that it is redundant: if it is rational to act in a certain way, then the appeal to a social contract is superfluous. Gauthier answers that this misses the point, because individual rational behavior will not tell us what it would be rational to agree to under “suitably constrained circumstances.” As with the first objection, it is clear that Gauthier, like Rawls, wants to push the reset button on existing circumstances to allow for a social agreement that does not privilege existing conditions. What is really important for Gauthier is to show that a rejection of existing conditions as a starting point does not follow from an (arbitrary) moral conviction but is required by his contractarian framework, a non-trivial challenge.

The third objection, and in my opinion the strongest, is that an appeal to ex-ante agreement does not yield a sufficiently determined result. One might even go further and argue that the substance of hypothetical agreements cannot be established in a meaningful fashion.

Gauthier disagrees and refers the reader to his paper on “Political Contractarianism,” where he outlines which kind of society would pass the contractarian test. Most readers read some kind of (moderate) libertarianism in his political writings (he also wrote a back cover endorsement of Jan Narveson’s “The Libertarian Idea”) so it would seem that in Gauthier’s view rational agreement produces classical liberalism, perhaps with some allowance for a very limited welfare state based on mutual insurance arguments (Gauthier’s own writings are not particularly clear here).

Gauthier may not sufficiently recognize that his emphasis on voluntary association, the Lockean proviso, and the rejection of parasitism puts him at odds with many other philosophers and much of the public. In particular, his position that there is a morally relevant distinction between “harming” and “failing to help” is a core libertarian belief that is not widely shared. When most people think about a (hypothetical) social contract they do not think about the terms of interaction (like Robert Nozick’s side constraints) but about specific conditions they would like society to conform to, such as equality of opportunity or equality of income. Absent these conditions, they will “reject” the society they live in, regardless of whether such conditions can be realized without worsening the position of anyone. Similarly, Gauthier’s writings strongly reflect the perspective that non-zero-sum interactions between people prevail in markets that pass the contractarian test, a position that does not seem to resonate with many people yet.

Both Gauthier’s approach to morality and his view of society as a cooperative venture for mutual fulfillment are far removed from the democratic “churning society” that we live in today. Gauthier seems to be very much a philosopher of the future, or of a society of people of high intelligence. This would be consistent with the perspective of Steven Pinker, who writes in his book “The Better Angels of Our Nature” that the General Social Survey, which tracks the characteristics of society in the United States, contains hints that “intelligence tracks classical liberalism more closely than left-liberalism” (p. 663).

Buddhism, science, and the political mind

One of the complaints about science is that it does not offer any moral guidance. It can describe reality and causal relationships, but it does not tell us how we should behave. One can accept such a situation as a fact of life, but most people are drawn toward belief systems that do offer such moral guidance. What is interesting about Buddhism, or at least its more modern, secular versions, is that it seeks both to understand reality and to offer moral and “spiritual” guidance. This of course presents a problem. Science also seeks to understand reality, but the consensus is that if there is anything we are learning about reality, it is that life has no objective meaning and the idea of an objective, person-independent morality is an illusion.

One of the perplexing things about Buddhism is the assumption that gaining a correct understanding of Reality (typically written with a capital R) will trigger a corresponding change in our moral outlook. For example, when a person comes to realize that the “self” is an illusion, a lot of moral misconduct is supposed to disappear. Unfortunately, getting rid of such “illusions” about the self is neither sufficient nor necessary for moral progress. Great moral progress has been made in countries where people are firm believers in the existence of an unchanging self, and many moral defects have been identified in countries where a belief in the illusion of the self is encouraged. In fact, the belief in a self is interesting because it has been both praised as a guard against nihilism and condemned as an illusion that undermines morality.

Despite its appearance of being a secular open-minded belief system, Buddhism rests on a rather strong premise about the beneficial effects of seeing the “real” nature of reality. But contemporary science does not support such strong statements about reality. Like any other topic in science, our understanding of reality is subject to continuous revision. It might even be possible that we live in a computer simulation and “reality” outside of it is quite different from what Buddhists believe.

One of the most level-headed discussions of Buddhism and science is Donald S. Lopez’s Buddhism and Science: A Guide for the Perplexed. The book is a detailed exposition of the history of discussions about the compatibility of Buddhism and science. The author recognizes that the position that Buddhism is compatible with, or even supported by, science is as old as Buddhism itself and provides reasons why Buddhism, more than any other “religion,” is prone to such statements. In the end, however, Buddhism is recognized as a rather diverse and dynamic belief system, and whether it is compatible with science depends on what exactly is meant by “science” and “Buddhism.” It is clear that a lot of historical expositions of Buddhism contain claims that are now known to be scientifically incorrect. This raises the question of how much of Buddhism can be rejected before it is no longer Buddhism.

One of the most uncomfortable claims in Buddhism concerns the origin and nature of the universe. As Lopez writes, “all of the religions of the world asserted that the world is flat. This belief, in turn, was held so tenaciously that when it was first suggested that the world is not flat, those who made such a suggestion were executed.” Most secular Buddhists would not mind claiming that the Buddha was wrong about this and that these beliefs are not among the essential doctrines of Buddhism, but as Lopez writes, “yet once the process of demythologizing begins, once the process of deciding between the essential and inessential is under way, it is often difficult to know where to stop.” This raises, once more, the question of why not reject Buddhism completely and embrace a thoroughly scientific, empiricist perspective on life.

A counterargument is that Buddhism offers things that science cannot, such as deeper metaphysical insights into the nature of reality and ethical truths. But the modern scientific mind is distinguished precisely by the claim that no objective truths should be expected here. In particular, there is no credible method for deducing such ethical truths from metaphysical “facts.” There are not many rigorous analytic philosophical treatments of Buddhism, but those that exist, such as Mark Siderits’ Buddhism as Philosophy: An Introduction, have identified several problems and challenges. If Buddhism (even in its most modern, secular form) is subjected to the kind of scrutiny that has been applied to thinkers such as Wittgenstein, Carnap, and Kant, it is not likely that it can survive in its current form. At best it will be just another philosophical “school.”

A very sympathetic account of Buddhism, and of its relation to contemporary (neuro)science and philosophy, is Owen Flanagan’s The Bodhisattva’s Brain: Buddhism Naturalized. Flanagan goes out of his way to give the most charitable reading of modern secular Buddhism, but in the end he confesses, “I still do not see, despite trying to see for many years, why understanding the impermanence of everything including myself makes a life of maximal compassion more rational than a life of hedonism.” Perhaps this is because there simply is no necessary, logical connection between recognizing the nature of Reality and specific moral and lifestyle choices. While Buddhists usually do not like being accused of being negative and pessimistic, it can hardly be denied that more cheerful, carefree implications of the idea of impermanence can be imagined (and have been imagined).

What would Buddhism look like if it were really serious about adjusting its (core) beliefs in light of science? For starters, it would treat each belief as a hypothesis to be revised when new evidence becomes available. But how many Buddhist publications are really serious about this? Such work is typically done by sympathetic outsiders, and the result never amounts to a full endorsement of core Buddhist beliefs. Although Buddhism seems able to survive in a modern secular society, it still has its share of ex-Buddhists who feel that it remains too dogmatic and unscientific. In his article “Why I ditched Buddhism,” John Horgan writes:

“All religions, including Buddhism, stem from our narcissistic wish to believe that the universe was created for our benefit, as a stage for our spiritual quests. In contrast, science tells us that we are incidental, accidental. Far from being the raison d’être of the universe, we appeared through sheer happenstance, and we could vanish in the same way. This is not a comforting viewpoint, but science, unlike religion, seeks truth regardless of how it makes us feel. Buddhism raises radical questions about our inner and outer reality, but it is finally not radical enough to accommodate science’s disturbing perspective. The remaining question is whether any form of spirituality can.”

Perhaps the best defense of (secular) Buddhist thinking is to be found in Sam Harris’s “Waking Up: A Guide to Spirituality Without Religion”. The central premise of this book is that the emphasis on examining consciousness in many Asian religions can yield insights that the Abrahamic religions cannot offer. He shows that some Buddhist perspectives on the nature of consciousness and “the self” are consistent with contemporary neuroscience. Harris makes a lot of the insight that “the self is an illusion” and sometimes even equates spirituality with this recognition. What is perplexing about this argument is that he does not really seem to value the difference between the insight, gained through careful introspection, that the self as a causal and unifying concept does not exist, and the use of the phrase as a pragmatic convention that economizes language and the way we talk about experience and flourishing. Evolutionary arguments are often implied in his book but not made explicit, which prevents him from discussing the evolutionary advantages of our concepts of self.

Although Harris sometimes refers to Western philosophy of mind, one wonders if his exposition would have been richer if he had contrasted Western philosophy (Plato, Hume) with Eastern philosophy, because Western religion is a rather easy target when it comes to its neglect of consciousness. Harris reiterates that a clear mind (or “spirituality”) is available to all, but a discussion of whether certain personality types (or even cultures) are more prone to mastering meditation would have been enlightening. His writing is unsurpassed when it comes to separating the metaphysical, pseudo-scientific, guru-centered aspects of Eastern spirituality from its more logical, empirical claims. His analytical debunking of “life-after-death” experiences is worth the price of the book alone. Harris offers personal accounts of meditation and the capacity for moral concern, but these co-exist with his observations of the odd moral behavior of people who have mastered mindfulness, which leaves the question of the relationship between spirituality and morality unresolved.

There is one element in Buddhist thinking, however, that can throw an interesting light on the “political mind.” Buddhism is not explicitly political, although some followers have made attempts to politicize it, culminating in a rather artificial movement called “Engaged Buddhism.” Buddhism teaches that nothing in reality is permanent and emphasizes the continuous birth, transformation, and rebirth of things. What sets the political mind apart is that it looks at society as a whole and wants it to conform to an arbitrary idea about political justice or efficiency. While this aim can be perceived as unrealistic and delusional even for a small group, it borders on insanity for a world composed of billions of people. When political activists recognize that the world cannot be easily manipulated in such a fashion, or run into the unintended consequences of their policies, frustration, anger, and violence often ensue. This “thirst” for control of the external world has often been ridiculed by Zen Buddhist monks, and this kind of “suffering” can be successfully eliminated if the ever-changing nature of reality is recognized.

There is a growing literature on the psychology and even the neuroscience of political beliefs, but much of this work does not examine the most basic questions. What exactly is a political belief (or ideology)? Why do some people choose political engagement while others seek to make less grandiose changes to their personal lives and environment? Can political ideals be satisfied, or does the ever-changing nature of reality (and slight deviations from any ideal) suggest that politically engaged people chase an illusion and that political happiness will be brief at best? To my knowledge, there have not been many publications in which Buddhist premises have been employed to argue against the idea of political ideology and “activism,” although it seems an interesting connection to make. Such a Buddhist argument would emphasize personal kindness instead of the (futile) desire to make the world conform to a specific idea (and the ensuing “suffering” if reality does not conform).

The illusion of free will is itself an illusion

While debates about free will remain prevalent in theology, philosophy, and the popular imagination, the concept of free will does not do any meaningful work in modern science. Even philosophically inclined neuroscientists who write about free will do not invoke this concept in their technical work about the brain. Similarly, we talk about “nature versus nurture,” not “nature versus nurture versus free will.” According to writer, philosopher, and neuroscientist Sam Harris, free will cannot be made conceptually coherent. In his little book “Free Will” he writes that “either our wills are determined by prior causes and we are not responsible for them, or they are the product of chance and we are not responsible for them.” Sam Harris is not the first person to debunk the idea of free will, but what makes his treatment of the subject stand out from that of most hard determinists (or hard incompatibilists) is his no-nonsense treatment of “compatibilism” and his smart take on the view that free will is an “illusion.” He also has a talent for using effective metaphors to make his case, as evidenced by sentences such as, “you are not controlling the storm, and you are not lost in it. You are the storm.”

Harris is not a “compatibilist” and follows philosophers such as Immanuel Kant (“wretched subterfuge,” “word jugglery”) and William James (“quagmire of evasion”) in identifying this position as a (subtle) attempt to change the subject. About the vast compatibilist literature he writes that “more than in any other area of philosophy, the result resembles theology.” Compatibilists like Daniel Dennett have spent considerable time twisting the meaning of free will and putting it in an evolutionary context, but as some of his critics have noted, the “free will” that is compatible with determinism does not capture the kind of free agency and moral responsibility that philosophers feel is worth talking about (see, for example, Paul Russell’s article “Pessimists, Pollyannas, and the New Compatibilism”). “Compatibilism amounts to nothing more than an assertion of the following creed: A puppet is free as long as he loves his strings,” writes Harris.

Harris follows philosophers such as Derk Pereboom in noting that neither determinism nor indeterminism can give rise to free will or moral responsibility. This also applies to more recent attempts to find “free will” in quantum mechanics. “Chance occurrences are by definition ones for which I can claim no responsibility…how would neurological ambushes of this kind make me free?”

While Harris still refers to free will as an illusion, some passages in his book reveal that he does not regard disciplined introspection as a credible source for a belief in free will. “If you pay attention to your inner life, you will see that the emergence of choices, efforts, and intentions is a fundamentally mysterious process…I do not choose to choose what I chose…there is a regress here that always ends in darkness.” This is a distinctly refreshing perspective, because most of the literature is plagued by the belief that regardless of whether free will exists (or can exist) it is nevertheless an illusion, or worse, a necessary illusion. This “illusion of the illusion of free will” remains a mainstay of most discussions of the topic, despite its shaky foundation in introspection or logical analysis. In a rather Buddhist perspective on the matter, Harris concludes his book by observing that

“our sense of our own freedom results from our not paying close attention to what it is like to be us. The moment we pay attention, it is possible to see that free will is nowhere to be found, and our experience is perfectly compatible with this truth. Thoughts and intentions simply arise in the mind. What else could they do? The truth about us is stranger than many suppose: The illusion of free will is itself an illusion.”

So what, then, gives rise to the belief in free will and the desire to prove its existence? According to Harris, a belief in free will is closely associated with the concept of “sin” and retributive punishment. One might also add that “compatibilist” philosophy arises from the recognition that most normative ethical theorizing requires some kind of compatibilism. It is not a coincidence that the most exotic treatments of free will can be found in theological, ethical, and ideological writings. Obviously, Harris denies that a belief in free will is necessary for morality and justice. “Certain criminals must be incarcerated to prevent them from harming other people. The moral justification for this is entirely straightforward: everyone else will be better off this way.” The fact that no criminal has free will does not mean that all crime should be treated the same. The reason why we are interested in, for example, whether the cause of a crime can be attributed to a brain tumor or a psychopathic personality type is that it is important to know what kind of person we are dealing with and under which conditions we should expect such crimes to be most likely to occur. There is no need for a complete overhaul of our criminal justice system, but in a society that placed less emphasis on free will there would be more room for intelligent treatment of crime instead of hatred and retribution.

There is a brief chapter in the book where Harris discusses free will in the context of politics. He identifies modern conservatism as embodying an unrealistic belief in free will, as evidenced by the tendency to hold people responsible for their own choices and to glorify “individualism” and the “self-made man.” It is certainly the case that the concept of free will has clouded the minds of many political thinkers. For example, two writers closely associated with radical capitalism, Ayn Rand and Murray Rothbard, offered rather obscure defenses of free will. Ultimately, however, most dominant ideologies can be restated without a belief in free will. A denial of free will in conjunction with postulated values such as “egalitarianism,” “impartiality,” and “universalism” can give rise to modern liberalism, but a denial of free will is also compatible with an elitist, aggressive, anti-democratic pursuit of human enhancement through state coercion.

Libertarianism does not require a belief in free will either, as evidenced by recent attempts to derive it from Hobbesian contractarianism (Jan Narveson) or from economic efficiency arguments (David Friedman). Incoherent discussions of free will in moral and political theory are easy targets for ridicule, and often an indicator of belief in other mysterious concepts such as “natural rights.” In fact, libertarianism can not only be restated without any appeal to “free will” or “natural rights,” it does not even require the postulate that “freedom” is valuable (or needs to be maximized), as has been shown in the recent writings of Anthony de Jasay.

Voting, cheering, and exploitation

In his little book ‘Game Theory: A Very Short Introduction‘ Ken Binmore writes:

Real people seldom think rational thoughts about whether to vote or not. Even if they did, they might feel that going to the polling booth is a pleasure rather than a pain. But…the pundits who denounce the large minority of people who fail to vote in presidential elections as irrational are talking through their hats. If we want more people to vote, we need to move to a more decentralized system in which every vote really does count enough to outweigh the lack of enthusiasm for voting which so many people obviously feel. If we can’t persuade such folk that they like to vote and we don’t want to change our political system, we will just have to put up with their staying at home on election night. Simply repeating the slogan that ‘every vote counts’ isn’t ever going to work, because it isn’t true.

Later in the book Binmore returns to this topic when he discusses the “myth of the wasted vote”:

If a wasted vote is one that doesn’t affect the outcome of an election, then the only time that your vote can count is when only one vote separates the winner and the runner-up. If they are separated by two or more votes, then a change in your vote would make no difference at all to who is elected. However, an election for a seat in a national assembly is almost never settled by a margin of only one vote….Naive folk imagine that to accept this argument is to precipitate the downfall of democracy. We are therefore told that you are wrong to count only the effect of your vote alone – you should instead count the total number of votes cast by all those people who think and feel as you think and feel, and hence will vote as you vote…This argument is faulty for the same reason that the twins fallacy fails in the Prisoner’s Dilemma. There may be large numbers of people who think and feel like you, but their decisions on whether to go out and vote won’t change if you stay home and watch the television.
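A back-of-the-envelope calculation shows just how improbable a decisive vote is. Under the simplest possible model (this is a minimal sketch, not Binmore’s own calculation): a two-candidate race with n other voters, each equally likely to vote either way, your vote matters only if the others split exactly evenly, and the probability of that shrinks roughly as 1/√n:

```python
import math

def pivotal_probability(n_other_voters, p=0.5):
    """Probability that n other voters split exactly evenly between two
    candidates, i.e. the probability that one additional vote is decisive.
    Simplest possible model: independent voters, each voting for candidate A
    with probability p. Uses log-gamma to avoid overflow for large n."""
    if n_other_voters % 2:       # an odd number of others can never tie
        return 0.0
    k = n_other_voters // 2
    log_prob = (math.lgamma(n_other_voters + 1)
                - 2 * math.lgamma(k + 1)
                + k * math.log(p) + k * math.log(1 - p))
    return math.exp(log_prob)

for n in (100, 10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} other voters: P(pivotal) ≈ {pivotal_probability(n):.2e}")
# With a million other voters the probability is on the order of 8 in 10,000;
# with a hundred million, roughly 8 in 100,000. And this is the most favorable
# case (a dead-even electorate); any systematic lean toward one candidate makes
# the number collapse toward zero.
```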

Faced with the criticism that game theorists who openly disseminate such observations lack “public spirit” he responds by drawing an analogy between voting and cheering at a football game.

“No single voice can make an appreciable difference to how much noise is being made when a crowd of people is cheering. But nobody cheers at a football game because they want to increase the general noise level. They shout words of wisdom and advice at their team even when they are at home in front of a television set. The same goes for voting. You are kidding yourself if you vote because your vote has a significant chance of being pivotal. But it makes perfectly good sense to vote for the same reason that football fans yell advice at their teams.”

Whether or not this analogy is accurate, it is doubtful that such explanations of voting can salvage the idea that participation in an election is a meaningful public activity, let alone a civic duty. A recent Reason article, ‘Your Vote Doesn’t Count,’ is a good survey of this topic and of the desperate attempts to rehabilitate the case for voting. With the exception of economist Gordon Tullock, few scholars are known for publicly admitting to the futility of voting, let alone to not voting themselves.

One explanation of why people vote is that many do not explicitly recognize that they are no longer deciding an issue in a small hunter-gatherer tribe. The fact that the scale of our decision making has changed substantially throughout the history of mankind is increasingly being discussed, though. For example, one presentation at the 2012 Ancestral Health Symposium reads as follows:

Richard Nikoley, B.S. – Paleo Epistemology and Sociology

Primitive peoples evolved to account for the values and actions of a relatively small tribe of family and close acquaintances comprising of 30-60 members whereby, every individual had a critical role and opportunity to influence the behavior and actions of the group or tribe as a whole. This is far removed from the unhealthy social trends in modern society where individuals are fooled into believing that they have real power at the voting booth and other activism when in reality, their influence is insignificant and pales in comparison to the social power a primitive hunter-gatherer would have wielded.

A more sophisticated argument about the conditions under which it would be rational to vote was recently expressed by the social philosopher Anthony de Jasay in a 2011 interview:

It is in fact widely held that because millions vote, no voter can rationally expect to influence the result. Millions nevertheless keep on voting, which looks a bit strange. Many parapsychological stories have been written to explain why they do so. I am not sure that we need them. In a well-oiled democracy, the perfect election result yields a wafer-thin majority because that outcome maximizes the size of the losing coalition ready to be exploited and minimizes the size of the winning coalition whose members share the spoils. This idea, of course, is the well-known median voter theorem. When the majority is literally wafer thin, the displacement of a single vote turns the majority into a minority, and vice versa. Thus, the perfectly oiled democratic mechanism produces outcomes with a majority of one vote; a single vote is decisive; and, hence, the voter is quite rational to cast it. In a less perfectly oiled democracy, where the majority is thicker than a wafer, the probability of a single vote’s being decisive is less than unity (the median voter theorem does not quite hold), but it need not be negligible. Because voting is not very costly, to affirm that it is irrational to vote is much too strong a claim.

Jasay’s explanation of why people vote, or of the conditions under which it would be rational to vote, deserves closer scrutiny because it aims to do more than come up with a “feel-good” story about voting. What Jasay is saying here is that in elections that are purely distributive in nature, it can be rational to vote. Real-world elections, however, do not take place in such “well-oiled” democracies, and virtually all large elections are decided by majorities much thicker than a wafer.

The problem with Jasay’s argument about the rationality of voting is not just that it has little relevance for actually existing democracies; it also raises questions about whether such a cynical form of democracy could be viable at all. Although most people recognize the redistributive component in politics, it is doubtful that a democracy in which politicians operate without any illusion of serving the general interest could persist. Just as it is doubtful whether voting would survive widespread recognition that it is just another form of cheering (or signalling), it is also doubtful that a democracy functioning in the way Jasay describes would be able to secure stable compliance, especially from its victims.

Public choice scholars often pride themselves on doing politics without romance, stripping the political process of its lofty rhetoric and analyzing it purely in terms of interests. But if people actually recognized public institutions solely as vehicles for forming coalitions to exploit others, the resulting governments would bear little resemblance to the Western governments scholars and philosophers currently analyze. In other words, it may be a mistake to assume a distinctly different view of human nature and social interaction while keeping political institutions unchanged.

While economists sometimes recognize the futility of voting in technical works, Bryan Caplan has been one of the few scholars who has developed this fact (and its implications for public policy) into a general theory about the microfoundations of political failure. In two excellent blog entries for EconLog he further reflects on the illusion of choice in American elections and how politics discourages self-correction (as opposed to markets).

The Better Angels of Our Nature

The Summer 2012 issue of the Independent Review features my review essay (PDF) of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. There can be little doubt that this work constitutes one of the most ambitious and credible contributions to social science to date. Although the review essay was written from a classical liberal perspective, I think that one of the main criticisms of Pinker’s project can be sustained without any kind of “ideological” perspective. In fact, one of the concerns I have about his project is that insufficient attention has been given to providing a neutral definition of violence. Why is this important?

If we were to go back in time and challenge some of the violence that was routine in those days, the inevitable objections would be that these acts of cruelty should not be condemned because they simply involved the execution of God’s will, constituted proper punishment, served “the common good,” etc. One of the themes of Pinker’s book is that we have become less tolerant of these kinds of justifications for violence and acts of extreme cruelty. Naturally, this raises the question of whether there are still many acts of violence, cruelty, and punishment that are being rationalized with poor reasoning. In my review I suggest that most of what we consider the normal operation of government, such as collecting taxes and “regulation,” is sustained through violence and threats of violence.

One might object that this perspective reflects a minority position on violence that does not conform to common use of the term. I do not believe that this response would be credible, because the common opinion is not that government operates without threats of violence (and punishment if one fails to obey) but that in this case the use of force is legitimate and socially sanctioned. In that case, however, Pinker’s project would not be about the decline of violence but about the decline in violence not approved by governments. Pinker does not go that far, because he does not exclude warfare by democratic governments from his review of violence, but there is something rather arbitrary about what counts as violence for him.

For example, Pinker writes that “early states were more like protection rackets, in which powerful Mafiosi extorted resources from the locals and offered them safety from hostile neighbors and from each other,” but he does not give a good reason why we should view contemporary states much differently. In fact, one can even argue (as individualist anarchists like Lysander Spooner have done) that modern democratic states not only extort protection money but also turn it against the victim in the form of “regulation.”

I suspect that what makes Pinker exempt the force associated with normal government operations is that the actual use of violence is rather rare. But that is not necessarily because most people prefer paying taxes or complying with regulations; it is because individual resistance is not rational. As Anthony de Jasay writes in his essay “Self-Contradictory Contractarianism” (collected in his book Against Politics: On Government, Anarchy, and Order):

If the cost of rebellion is high, if the expected (“risk-adjusted”) value of its success is not very much higher, and if the very possibility of collective action against the sovereign is problematical (at least in normal peacetime conditions), then two plausible conjectures suggest themselves. The equilibrium strategy of the sovereign will be to use its discretionary power to satisfy its preferences, perhaps by exploiting all its subjects in the service of some holistic end, perhaps by exploiting some of them to benefit others. The equilibrium strategy of the subjects will be, not to resist, but to obey, adjust, and profit from the opportunities for parasitic conduct that coalition forming with the sovereign at the expense of the rest of society may offer.

A potential rejoinder to this argument is that the operation of government is necessary to prevent even more violence. Leaving aside the usual problems with utilitarian arguments of this kind, such a perspective can at best confer legitimacy on a very minimal form of government and would not exempt most other operations of government. If social order and peaceful commerce can arise without government, there is no reason at all to exempt any operation of government from a critical perspective. Pinker does recognize the existence of anarchist perspectives, but his treatment of the topic does not indicate a thorough familiarity with the literature on conflict resolution without the state. This is problematic because reason and commerce (two of Pinker’s drivers of the decline in violence) may be sufficient for a peaceful society. In fact, the advantage of commerce over government (or ‘democracy’) is that commerce itself is a peaceful activity.

One might further object that there is a difference between war and collecting taxes on the one hand and regulating on the other. In a real showdown between individuals and government officials, however, the priority of government is to prevail using as much force as necessary. As mentioned above, this generally does not require a lot of force because most individuals recognize the futile nature of individual resistance. In fact, it may be the increase in intelligence and individualism that Pinker also discusses in his book that makes more people less inclined to mount heroic but ineffective forms of resistance.

This does not mean that Pinker’s claims are completely arbitrary and depend entirely on whether one includes normal government operations in the definition of violence. For example, it is indisputable that the nature of violence and the cruelty of punishment have changed substantially since the Middle Ages. Also, in spite of the increase in public force associated with the growth of modern governments, people’s tolerance for violence is still declining. In fact, many public debates concern forms of harm that can hardly be construed as violence (discrimination, ‘hate speech’, insensitivity, poverty, etc.). This phenomenon itself raises a rather interesting question: how can the widespread tolerance of government force co-exist with increasing sensitivity about acts of human behavior that do not even involve physical harm (or threats thereof)?

There are a lot of other interesting topics in Pinker’s book such as his treatment of the sociobiology of violence, morality, and ideology. On the topic of morality he writes:

The world has far too much morality. If you added up all the homicides committed in pursuit of self-help justice, the casualties of religious and revolutionary wars, the people executed for victimless crimes and misdemeanors, and the targets of ideological genocides, they would surely outnumber the fatalities from amoral predation and conquest.

The Better Angels of Our Nature is not a treatise on (meta)ethics, but Pinker’s evolutionary perspective leaves little room for grandiose moral theories and is more in line with classical liberal views in which morality is an emergent phenomenon that allows for peaceful human interaction. This is evidenced by his observations that modern morality is “a consequence of the interchangeability of perspectives and the opportunity the world provides for positive-sum games” and that “assumptions of self-interest and sociality combine with reason to lay out a morality in which non-violence is the goal.”

He also observes that “to kill by the millions, you need an ideology.” At the same time he notes that “intelligence is expected to correlate with classical liberalism because classical liberalism is itself a consequence of the interchangeability of perspectives that is inherent to reason itself.” He does not discuss the potential tension between his (wholesale) rejection of ideology and his identification with classical liberalism. Perhaps Pinker believes, as does the author of this review, that classical liberalism, conceived in a non-dogmatic fashion, is not so much an ideology as a perspective that starts from the recognition that individuals have different interests and that reason can provide guidance to coordinate these interests to mutual advantage.

Jacques Monod’s Ethics of Knowledge

Nobel Prize winner Jacques Monod concludes his seminal essay on the natural philosophy of modern biology, Chance and Necessity (1970), with a chapter of reflections on evolution, the place of man in nature, culture, ideas, and the nature of morality. He writes:

During entire aeons a man’s lot was identical with that of the group, of the tribe he belonged to and outside of which he could not survive. The tribe, for its part, was able to survive and defend itself only through its cohesion…This evolution must not only have facilitated acceptance of tribal law, but created the need for mythical explanation which gave it foundation and sovereignty. We are the descendants of such men. From them we have probably inherited our need for an explanation, the profound disquiet which goads us to search out the meaning of existence. The same disquiet that has created all the myths, all the religions, all the philosophies, and science itself.

He then goes on to explain how religions, philosophical systems, and ideologies (such as Marxism) that see nature or history unfolding according to a higher plan can be traced back to this innate disposition to look for Meaning. And while science, and the associated postulate of objectivity, has gradually replaced those myths and beliefs, most of our contemporary thinking about values still reflects this kind of animism:

No society before ours was ever rent by contradictions so agonizing. In both primitive and classical cultures the animist tradition saw knowledge and values stemming from the same source. For the first time in history a civilization is trying to shape itself while clinging desperately to the animist tradition to justify its values, and at the same time abandoning it as the source of knowledge, of truth. For their moral bases the “liberal” societies of the West still teach – or pay lip-service to – a disgusting farrago of Judeo-Christian religiosity, scientistic progressism, belief in the “natural” rights of man, and utilitarian pragmatism…All the traditional systems have placed ethics and values beyond man’s reach. Values did not belong to him; he belonged to them.

Obviously, this perspective on the futile attempts to ground values in something beyond man (beyond practical reason, one might say) raises the question of “who shall decide what is good and evil.” Monod clearly struggles with this question because he does not want to admit that “objective truth and the theory of values constitute eternally separate, mutually impenetrable domains.” His answer, however, may strike contemporary readers as something of a cop-out when he tries to argue that the pursuit of science itself implies an ethical postulate:

True knowledge is ignorant of values, but it cannot be grounded elsewhere than upon a value judgment, or rather upon an axiomatic value. It is obvious that the positing of the principle of objectivity as the condition of true knowledge constitutes an ethical choice and not a judgment arrived at from knowledge, since, according to the postulate’s own terms, there cannot have been any “true” knowledge prior to this arbitral choice. In order to establish the norm for knowledge the objectivity principle defines a value: that value is objective knowledge itself. Thus, assenting to the principle of objectivity one announces one’s adherence to the basic statement of an ethical system, one asserts the ethic of knowledge. Hence it is from the ethical choice of a primary value that knowledge starts.

This attempt to derive (or distill) universal normative claims from an activity or pursuit itself is not unique in ethics. Some have tried to derive morals and rights from the nature of human agency (Alan Gewirth), the activity of argumentation (Hans-Hermann Hoppe), and so forth (one might argue that there are even traces of such an approach in Jasay’s argument for the presumption of liberty). Such attempts either produce trivial conclusions or are stretched beyond credibility to make them do far more work than they are capable of, such as deriving specific socio-economic norms concerning welfare rights or absolute property rights. At the end of the day, these writers fail to recognize that morality is an emergent property of social interaction in nature (that is to say, morality is conventional) and that attempts to “justify” moral rules are as futile as trying to “justify” the laws of physics (although one might argue that certain “strategic” advantages can accrue to those who succeed in persuading others of such moral “truths”).

Monod’s ‘ethics of knowledge’ is simply “justified” by its pragmatic advantages (something similar might be said about accepting the principle of causality, as proposed by the philosopher of science Hans Reichenbach). Such a pragmatic explanation for the pursuit of knowledge (and the emergence of values) places morality in the realm of individual practical reason and evolution, where serious philosophers, economists, and biologists have been making efforts to understand it.

In his introduction to the 1997 Penguin edition of Chance and Necessity, the evolutionary biologist and geneticist John Maynard Smith briefly alludes to Monod’s rather clumsy (and dated) attempt to link his ethics of knowledge to scientific socialism in the final pages of the book, which only shows how vacuous the ethics of knowledge is for deciding moral and socio-economic questions.

A more specific concern for Monod is the end of natural selection and degeneration in man:

To the extent that selection is still operative in our midst, it does not favor the “survival of the fittest” – that is to say, in more modern terms, the genetic survival of the “fittest” through a more numerous progeny. Intelligence, ambition, courage, and imagination, are still factors in modern societies, to be sure, but of personal, not genetic success, the only kind that matters for evolution. No, the situation is the reverse: statistics, as everybody knows, show a negative correlation between the intelligence quotient (or cultural level) and the average number of children per couple…A dangerous situation, this, which could gradually edge the highest genetic potential toward concentration within an elite, a shrinking elite in relative numbers.

This is not all. Until not so very long ago, even in relatively “advanced” societies, the weeding out of the physically and also mentally least fit was automatic and ruthless. Most of them did not reach the age of puberty. Today many of these genetic cripples live long enough to reproduce. Thanks to the progress of scientific knowledge and the social ethic, the mechanisms which used to protect the species from degeneration (the inevitable result when natural selection is suspended) now function hardly at all, save where the defect is uncommonly grave.

And since Monod seems to categorically rule out gene therapy in germ cells (“the genome’s microscopic proportions today and probably forever rule out manipulation of this sort”), his only hope resides in “deliberate and severe selection.”

Notwithstanding Monod’s unduly pessimistic perspective on human genetic engineering and the missed opportunity to recognize the evolutionary and conventional nature of morality, Chance and Necessity remains a classic, uncompromising exposition of modern evolutionary biology and of the scientific view of the world that has made this knowledge possible.