The Trouble With Theological Terrorism
By Jan Narveson
Introduction
“Rationality is about means towards ends, not the ends themselves. Ends are simply preferences, and non-rational rather than irrational. If one claims that there is logic and evidence for the proposition that one’s religion is correct and others are incorrect, but one cannot provide the logic and evidence, then the proposition is unwarranted. But being mistaken is not the same as being irrational. What would be irrational would be to believe that X is true yet act in ways inconsistent with that belief.” [from a contributor to a discussion list I’m on, who needn’t be identified here.]
The thesis of this presentation is that we have to reject a significant part of the above, at least in the particular case of morals. Morals is the subject of informal, interpersonally imposed restrictions on behavior.
It is much too widely argued that normative propositions, or perhaps for that matter all propositions whatever, come down in the end to matters of nonrational belief. From this, apparently, it is inferred that one person’s moral beliefs are as good as any other’s.
We must reject that, too.
It doesn’t take much to see that we live in a social world: we come, pretty frequently, into contact with other people. In almost all cases, which is more than enough for present purposes, when x is a person, it is fairly obvious that x is a person rather than something else. And when it is, there is a prospect of interaction with this entity, of a fairly special kind. Unlike the case of our relation to trees, rocks, the moon, and the rest of the universe’s miscellaneous debris, we can enter into cognitively processed interactions with other people. We can proceed on the basis of beliefs, formed on the basis of evidence, about what others will do, and about what they would do were I to take this, that, or the other action.
These processes, leading to decisions about what to do, are “practical reason.” I take the general outline of practical reasoning to be fairly obvious. Taking our cue from Aristotle, among many others, we can put the matter thus. Action involves a sort of “syllogism”, one with two distinguishable sorts of premises:
Premise(s) I: this states (or affirms, or asserts, or manifests) a “value”, or perhaps more generally an interest, a desire, a want, of the agent. If we put it in value terms, we should take account of a tendency in ethical analysis to claim that value terms are just objective statements like anything else and have no particular connection to the agent’s action. If, or insofar as, value-language behaves like that, I mean to exclude it from the ambit of Premise I. Only values that the agent has, in the sense of caring enough about them so that they can or do motivate him to act so as to secure them, are what matter here.
Premise(s) II: These are factual claims, of the following specific kind: they assert that some eligible or available action of the agent’s would or will promote the values asserted in (I), with some intuitively estimated probability.
Conclusion: this will be, as Aristotle noted, an action, or a decision or resolution to perform such.
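A homely instance may help to fix the schema (the umbrella is my example, not part of the analysis):
Premise I: I want to stay dry on my walk.
Premise II: Taking the umbrella will, very probably, keep me dry.
Conclusion: I take the umbrella.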
So far as concerns the central point, efficacy in controlling action, these premises have to be beliefs. They may be formed on the basis of evidence and reasoning, or not.
In the cases in which premise II is in some way faulty, observers party to the agent’s reasoning are in a position to give him some advice: viz., Forget about doing x, because it isn’t going to work!
Again taking my cue from the likes of Aristotle and Kant, I am ready to say that an agent is prima facie rational insofar as his type (II) premises are sound. Some will want to say, Hey, wait a minute: how about the type I premises? Don’t they too have to be true or sound? And at that point, we will get much argument from people like the quotee at the outset of this. Indeed, not a few will maintain that there is no such thing as truth at this level.
Those who say that must, however, make a distinction. Consider these two propositions:
a) cabernet sauvignon is a great wine
b) I really like cabernet sauvignon
No one will deny that premise (b) can be true or false. Moreover, it is pretty clear that we can have evidence for it or against it. Everyone (among the philosophical cognoscenti, that is) also agrees that (a) ≠ (b). Nevertheless, one who affirms (a) while denying (b) will occasion a bit of head-scratching, as noted above.
Let us merely say that we can certainly check type I premises of form (a) for sincerity by checking the correlated premises of form (b) for truth.
Next, with that understanding in mind, let us say that the rational person acts on sound syllogisms: true premises, valid arguments – Conclusion really follows from the true premises in question.
And then let us quickly admit that many of our actions will be done on the basis of premises that are epistemically probable. Among these, let me distinguish two uses of ‘probable’:
(a) those, of interest only to philosophers, in which the proposition has a phenomenal probability of approximately 1.0 (“the sun will rise tomorrow”); and (b) those in which the probability is noticeably lower than that (“This lottery ticket’s probability of winning is .0002”).
These show the need for making finer-grained distinctions. My value premise will have to be something like this:
a) Yes, it would be really nice to have a new Jaguar
b) It would be nice enough that a .0002 chance of a Jaguar is worth $35 to me.
(Note that (b) has to be affirmed in the face of the fact that a new Jaguar would only cost $80,000, making the economic value of the lottery ticket only $16. Assorted people have investigated this matter, and arrived at the not very surprising conclusion that this latter point doesn’t really matter very much to typical purchasers, who are buying, in addition to the probable Jaguar, the certainty of the fun of knowing that you might after all be the winner; plus, in the case I have in mind, of knowing that you will have contributed somewhat to the economic viability of the K-W Symphony.)
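To make the arithmetic explicit: the expected monetary value of the ticket is .0002 × $80,000 = $16, which falls well short of the $35 price; the difference is what the fun, and the support of the symphony, are buying.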
ENTER MORALS
However, we haven’t got to morals yet, and it is time now to do so. Morals, familiarly, involves conclusions mediated by premises like “It is my duty, or I ought, to do [or refrain from] acts like x”. And these premises have the important feature that they need to be deduced from premises like “everybody ought to, or has a duty to, do/refrain from acts like x (in circumstances like this)”.
That is, moral claims are suspended from value-assertions addressed, not just to the agent himself, but to everyone in general. They are, then, “rules”, practical ones, purportedly applicable to people in general, not just to the agent in particular.
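Schematically, the deduction just described runs:
Everybody ought to refrain from acts like x (in circumstances like this).
I am somebody (in circumstances like this).
So: I ought to refrain from acts like x.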
What is a rule?
Well, firstly, it’s (obviously) a generalization a la R. M. Hare and Kant: that is, in the imperative mode, addressed to all: “Everybody, do x!” It should be noted that generalizations of that type can be as crazy as you like: “everybody, eat Crunchies! Everybody, shoot yourself! Everybody, shoot your neighbor!”
It may be suggested, plausibly, that morals claims to do better than that. Its imperatives have widely been touted as claiming the imprimatur of Reason – in Kant’s case, “Pure Practical Reason”. I’m willing to put forward the analysis to follow as a suggestion about what this rather mysterious, and somewhat oracular-sounding, label of Kant’s might usefully be taken to mean.
We can try, as many have, to identify and formulate the principles of a rational morality. If we do, of course, we have to answer the question why any individual should care about a rational morality. But I note also that if we have played our cards right in describing individual practical reasoning, the answer will be obvious: nonrational moralities will have false major or minor premises (or both; or, of course, bad reasoning). When they do, it means, by definition, that there is something you care about which your proposed way of acting will not achieve.
And how would we know it won’t achieve it? By examining the structure of its reasoning.
So, now, step number one here consists in this: Could a rational morality require people to do things which they have no reason to do? I suggest that the answer is: obviously not. However, while it is obvious, it is apparently not obvious to everybody. There appear to be people who don’t give two hoots whether others have any reason whatever to do what they tell them to, when speaking with their moral hats on.
Those are the people I want to talk to.
In order to clarify what I propose to say to them, however, we must add something to our characterization of moral rules. We can add two things, both familiar.
(1) If R is a Moral rule, then R purports, or claims, to be such as to override individual interests that happen to be inconsistent with it. I might very much want the million dollars that Anna Timofyevna has in her sock, but it doesn’t matter how much I want it, or for what, I can’t have it – it’s hers. That’s the sort of thing a moral rule is supposed to be able to do.
Note: any tolerably plausible morality is going to have to make distinctions of degree. Do I get to take, without permission, your $2 bottle of aspirin if it will save the life of some otherwise-innocent person? Yes. It would be unreasonable of you to object. But taking your million dollars because I really want that Stradivarius which has suddenly come on the market for a bargain price won’t cut it. Etc.
Of course, the program for making these distinctions of degree must itself have the imprimatur of reason. It cannot be just arbitrary whether Moral Rule R does or does not override a particular person’s value V on occasion C. And, of course, doing this is a job, sure enough. (Hey, I never said it would be easy!)
(2) Rules involve administration. This is so especially in two broad respects. First, of course, there is what Aquinas calls “promulgation” – getting the word out. We, for one thing, just tell people that what they are doing is right or wrong, or good or bad, or whatever; but also there is some tendency to cite general principles, even though, as no end of recent philosophers have emphasized, it is impossible, or near enough so, to formulate a general principle that remains fully general, reasonably definite, and plausible all at once. Nevertheless, reminding someone that to do x would be to lie, and we shouldn’t do that, is sometimes a reasonable thing to do.
Secondly, there is what we may call reinforcement. We bestow praise and blame, for instance – verbal reinforcement. But also, we sometimes take stronger measures. These measures range up to ones that are extremely strong, such as killing those who go against the rule in question.
The capability of applying enforcement procedures is itself, of course, the source of a major area of moral investigation: what is the right level of “punishment” for a given deviation from the rules? Today’s topic, of course, invites us into that area very quickly, and something will be said about it below (shortly, I hope!).
Why do rules provide a basis for reinforcement – when, of course, they are the right ones? The answer is suggested by the points made above about rules: rules of this sort, the moral sort, are intended to tell us what we are to do whether we like it or not. Now, if we don’t like it enough, we will, of course, have a strong tendency toward noncompliance. Threats of punishment (and the like) are intended to supply some further reason for compliance.
But there is a potential problem lurking here. It is all too easy for someone or some group to put themselves in the role of unilateral legislators of morals. For them, the essence of morals is: Do what We Say, Or Else!
It seems not to be obvious to everyone that there is something wrong with that model. We can see what it is by recalling an insightful argument of Aristotle’s (OK, he got it from Plato, but what the hell). Honor, he said, can’t be what life is all about, because you get an honor as a recognition that you’ve done something worth doing independently of the fact that we’d give you an honor for it. Aristotle’s argument is right on the beam. To go for honor no matter what its source is to put yourself in the hands of people who might be unscrupulous, or tyrants, or of course ignoramuses.
Morals makes the following demand on everyone: it says, Look, here’s a rule such that, as you can see, there’s a damn good reason why everyone ought to comply with it. We insist on your doing this regardless of the prospect of reward for obedience or punishment for disobedience. If you nevertheless insist on deviating, then, alas, we’re going to have to go after you. But if the rule is reasonable in the first place, you won’t be able to complain about this; having seen the first point, you will concede the second.
Which, then, gets us back to the basic question: when is a proposed moral rule, or a candidate for a moral rule, as we might put it, a reasonable, or rational, moral rule?
There is a pretty short answer, at least in form, which we can extract right from the start. A reasonable rule is one that everybody has reason to accept. That is to say, they have reason, upon contemplating the considerations advanced on behalf of this rule, for setting aside some of their private interests, desires, values, passions, and instead doing what the rule calls upon them to do.
And how would that possibly ever be true? Again, the answer is in form pretty easy: when we could only expect to do a good deal worse without it than with it, given, that is, general compliance. One would be better off, according to one’s own value-scheme, in a society in which everyone, including oneself, conformed to its requirements than in a society which had either no rule at all on the subject, or some other, less promising one.
And on top of that, we would be better off in a society in which the reason why people conformed to this rule is precisely that the above is the case, than in a society in which much or most or perhaps all compliance was motivated by the threat of specific punishment. If the rule is genuinely reasonable, we will all benefit from it, and the benefit will be much greater if we do not have to invest anything in devices for apprehending and punishing deviants.
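The structure here is the familiar one from the theory of games. A minimal sketch, with made-up payoffs for a two-person case (the first number in each pair is A’s payoff, the second B’s; higher is better by each party’s own value-scheme):

             B complies    B deviates
A complies     (3, 3)        (0, 4)
A deviates     (4, 0)        (1, 1)

Mutual compliance, (3, 3), beats mutual deviation, (1, 1), for both parties; that is what makes the rule one that each has reason to accept. The lure of unilateral deviation, the 4s, is what makes reinforcement earn its keep.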
Example: The rule against Killing
So: are there any proposed rules of that type?
There is a classic case: the rule against killing people simply because to do so would be beneficial, in some way, to the agent himself. (I mean this to include cases where the “benefit” is really weird, e.g., that you just don’t like his looks, or he reminds you of your wicked stepfather, or ….) This classic case, as I call it, has several points that have to be noted with extreme care.
1. Some people profess to believe that death isn’t all that bad a thing, or indeed might be a benefit. Example: upon dying, we move to a much classier neighborhood, enjoying forevermore the fond attentions of many beautiful sexual partners, etc., etc., etc.
Others have variants on that, or even profess to believe that life itself is an evil, so that dying really does us a favor, even if the subsequent period is distinguished by a total absence of the agent in question.
3. There is no need at all to take, say, an Epicurean attitude toward life, or some other optimistic version. Different people have different spins on life, obviously. However, there is a presumption kicking around in this area whose importance absolutely can’t be overrated: that whatever the agent’s values are, it is reasonable to presume that he or she is more likely to be able to realize them alive than dead. Life, that is to say, is a necessary condition for doing anything good.
3a. This is only a presumption, and a rebuttable one. One very handy feature, however, is this: anyone who genuinely believes he would be better off dead is welcome to kill himself, no problem. If you play your cards right, it needn’t be any skin off our backs.
For all normal people, on the other hand, the presumption is extremely plausible. It is one that we have no business not making.
To recap so far: the presumption that life is better for A is derived, entirely, from consideration of A’s own set of practical beliefs. And that derivation involves, at various points, certain general presumptions about the world around us that some philosophers, evidently, want to claim are questionable: such as that there are trees and other people, and that we need water every now and then.
What’s important about these supplementary presumptions is that they are common-sense: that is, facts familiar to everyone, or readily verifiable by anyone, go into this set. Those in turn often lead us into Science, to be sure. The science goes, in a sense, beyond common sense. But it does so in a way that is accessible to common sense. If it were genuinely questionable that there is any such thing as a “lab”, why on earth should we believe anything that anyone claims to have discovered there?
The Theological Terrorist
The TT, as we may call him, affirms the existence of “God” and draws his beliefs about this being and his “commands” from texts, usually ancient, deemed to be sacred. More accurately, of course, he draws them from what he takes to be reliable copies of those texts, and bases that belief on the say-so of certain persons he regards as authorities on the matter – the preachers, or mullahs, or whatever. In any case, let’s call this body of beliefs R1.
And the TT believes, possibly for some reasons, or more likely for none, that the god in question has ordered him, the individual TT, to go out and kill a bunch of people who, looked at in the ordinary light, would have to be judged innocent of anything that could possibly deserve the death penalty. In the case of the TT, I wish especially to focus on one point: that the victims in question do not believe R1. Not only do they not believe it, but moreover, they have no reason to believe it.
Now, here is where morals gets involved with a very modest bit of epistemology. Suppose that somebody, whose name probably begins with P, claims to have a superduper proof of the existence of the god in question – this proof being well beyond the cognitive processing capabilities of ordinary PhDs, let alone ordinary people. We have the word of about eight people that, yup, by george, it’s valid! And the word of about ten thousand more that, nope, it isn’t. If Mr. Ordinary Person goes with the “nays”, as he almost certainly will, then what? Arguments on this subject, historically speaking, have not done well with the cognoscenti (like me). Moreover, and still more worrisome, there are a zillion different possible religions, and several thousand actual ones with appreciable constituencies, all of which differ. None of these differences can be cognitively resolved in the near future. What to do?
Of course the answer is: do not base moral claims on religion – *period*. Students familiar with my classes or with Plato will know the reason why. The claim that there is a god who would issue a bunch of commands which are – hey, wuddya know! – the commands of morals, is unsupportable. The reason for this is that his only claim to be in a position of authority on such matters is that he is a good, or perfect, etc., being; but that’s a claim that invokes moral beliefs that would have to have been established independently, on logically prior grounds. Those grounds, whatever they are, have to be the foundations of morals, and the fact that god is also acquainted with them adds nothing to their cogency. So we are back to square one.
Which, of course, is where we came in. Now compare the interpersonal plausibility of the claim [1] that we should all do x because some god, whom most people do not believe exists and hardly anybody thinks we can prove exists, tells us to do it, with the claim [2] that we ought not to kill people simply in order to advance our own agendas, whatever the victim thinks of it. We are in the realm of the overwhelmingly implausible, on one view of the matter.
But on my view, the case is something more like an a priori one. You, the TT, are asking people who could kill you to desist from doing so, and instead to allow you to kill them; and why? Because they don’t believe something of which you are utterly convinced, though on the basis of no interpersonally acceptable arguments or evidence. Anyone who subscribes to this is putting himself in for big trouble.
Indeed, it seems to me clear that, in form, the situation is this: if A proclaims to B that there is some alleged reason, inaccessible and unknown to and unknowable by B, why it is A’s moral duty to kill B, then from B’s point of view, A is an extremely dangerous, indeed lethally dangerous, character.
In fact, A has done what Islamicists of this stripe (and before them Crusading Christians, and assorted others, no end) have done: say, in public, and with conviction, that they believe they have a solemn duty to kill all nonbelievers. From the point of view of those nonbelievers, the rational thing to do is to make sure that A does not do this, and a plausible way to do it is to kill A first. In short, A has in effect declared war on the set of nonbelievers.
People who say these things sometimes act on them. Others apparently don’t quite mean them. I think it fairly clear that as long as they go on *saying* them, they make themselves eligible targets for the rest of us; we should get to it before they get to us. Obviously, a society full of people with such syndromes would be unlivable if they all took it seriously.
And that is what has to be explained, perhaps tactfully, to the Theological Terrorist. Morals has some modest epistemic involvements – very rudimentary indeed. People who make themselves aggressors against others on the basis of outright violations of those elementary involvements are asking for it. Logically, they should put up or shut up. Let us hope they do not put up: much better, for them and for us, that they shut up. But that’s the point. No acceptable religion on the world’s stage can proclaim, sincerely, that all unbelievers are to be put to death. On the contrary, all practitioners of religion need to accept the principle of freedom of religion: believe what you wish – but keep morals out of it, thanks!