I really have no qualification to comment on what I write here other than to say that I like to think, and I have been thinking a lot about the following over the last few months.
This piece is a bit of a diversion from my previous pieces, and in any case, it is a bit of a ramble. It is less personal, yet it still carries weight for me, and it does so because I have seen good people and those near them, those who are trying to do good in the world, be hurt.
One response to the question of why bad things happen to good people, one that takes divine considerations off the table, is that many bad things are products of the natural order: lightning strikes can hardly be controlled, for example. Another is the simple statement that the natural world is actually unjust, or at least that there is no guarantee that natural calamity is distributed justly. That is to say that the natural world does not discriminate between good and bad people, whatever these descriptors might mean.
But this is not about the natural world, this is about people, and unlike the natural world, we are led to believe that humans are rational. That is to say that when humans act, those actions are deliberate and they can give rise to the conditions which may cause others to rejoice or to suffer.
I would like to think that such suffering is a byproduct of others’ thoughtlessness, that such actions are not calculated in ways that intentionally cause suffering for others, that no one actually acts in ways that wilfully harm others. I am struggling with this position, however, for, as you will see below, it is clear to me that there are cases where people act within frameworks that weigh the suffering of others against the gains they themselves will experience.
There are a number of contemporary claims that our actions as humans are inevitable, that we cannot blame people for their actions (though we can forcibly stop them if necessary), as all of their actions are the results of certain antecedent conditions. Much of the basis for this position comes from contemporary neuroscience. So, for example, a person who lies to you in order to scam you out of your money really has no choice, as that action is the result of all the pre-existing conditions they have encountered. Their acts, then, can only be changed by changing those conditions and, since we cannot go back in time, we cannot do that. What we can do is judge whether their actions were good or not, and this is where I want to go here.
Such judgement presupposes that we have the ability to determine what we ought to do or ought to have done from empirical facts, and it is this position that troubles me, for while empirical facts can tell us how to achieve our moral goals, it is unclear to me that they can tell us what the goals actually are.
One position for the scientification of morals is posited by Sam Harris, who suggests that the moral thing to do is to increase the well-being of humans. He does this by proposing a “Worst Possible Misery Argument”, in which he suggests that any behaviour that minimises collective misery is moral. Morally good behaviours, then, are those that promote human well-being, and the experience of this well-being exists only in the brain’s chemical and electrical makeup and so can be quantified through scientific technique. Therefore, science, not philosophy or religion, points the way to morality.
Let’s not forget at this point that if the above argument holds, it holds for all values of conscious beings, not just morality, whatever that may mean.
In his book, The Moral Landscape, Harris admitted that in respect of well-being, “it is sometimes hard to know what is being studied”. In order to attend to this little problem, he turned the discussion around and, in a set of tweets sent out in January 2018, conflated the idea of unpleasantness with something morally bad.
“So what is morality? What *ought* sentient beings like ourselves do? Understand how the world works (facts), so that we can avoid what sucks (values)”.
In this way the landscape is reframed so that we no longer define goodness by well-being but by the minimisation of suckiness.
And this immediately identifies the first issue with this position, for in resorting to the vernacular (“it sucks”) in order to simplify and appeal to the masses, we must grant that what “sucks” differs for different people.
Regardless, the argument goes that when we minimise the amount of suckiness in the world we are doing morally good things. Let’s take that at face value and look at what such a stance might lead to. In order to adjudicate between competing claims of what sucks, we simply have to measure how an action that redistributes misery or well-being nets out. If the amount of well-being or bliss generated overall outweighs the amount of misery or suckiness generated, then an action is moral. Put more simply, if suckiness decreases, an action is moral. Whoa, say what?
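The aggregation rule just described can be sketched as a toy calculation. Everything here is hypothetical, not least the idea that “suckiness” collapses onto a single numeric scale:

```python
def aggregate_wellbeing(scores):
    """Naive aggregation: collective well-being is just the sum of individual scores."""
    return sum(scores)

# Hypothetical well-being scores (higher = less "suckiness") for five people.
status_quo = [2, 2, 2, 2, 2]        # everyone equally so-so; total = 10
redistributed = [9, 9, 9, 9, -16]   # four flourish, one is crushed; total = 20

# On a bare sum, the redistribution is declared the moral choice,
# blind to the collapse in the fifth person's well-being.
print(aggregate_wellbeing(status_quo))     # 10
print(aggregate_wellbeing(redistributed))  # 20
```

The second world “wins” on aggregate even though one person’s well-being has cratered, which is exactly the distribution-blindness the essay goes on to worry about.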
The first issue I see, which others have dealt with much better than I ever could, is that such a consequentialist stance is not the only way to determine what “ought” to happen. It is not obvious that the best course of action is the one that leads to the best consequences.
Taking consequentialism, however, there is a clear assumption that there is a causal relationship between action and outcome. That is, if we know what we are going to do, then we will know the outcomes. Not to put too fine a point on it, but history would tell us that we have been woefully inadequate at predicting such outcomes, especially when it comes to human behaviour. The reductionist response to this would be that we simply don’t know enough to predict, and if we knew everything we would be able to.
This obviously assumes that we actually can know ‘everything’, and that we can actually navigate the insane network that exists in a ‘knowledge of everything’ to work out how all the different pieces act to produce a singular outcome. We have trouble predicting all sorts of things infinitely less complex than ‘everything’, so it is fair to ask how we could ever hope to accomplish this.
In principle, the consequentialist might argue, this is simply a matter of time: just because we don’t know how to do it now doesn’t mean we can’t do it. As I mentioned above, however, this assumes that the relationships between all the known variables are causal, but we have no a priori reason to believe this. In fact, if we delve into the micro world, we know that relationships are not quite so simple and that outcomes are probabilistic.
Next, and to condense the claim, our goal as conscious creatures is to attain the maximum well-being, now measured as the least suckiness, that we can. Harris does suggest in his writing that this is a collective well-being, but this is not explicit in his “worst possible misery argument”, and that concerns me, as it glosses over a significant point: how we determine this collective betterment, and who actually determines it. We’ve already suggested that well-being cannot (yet) be defined, but when it can be, who actually gets to define it? Likely not you or I, as our subjective experience is irrelevant in this scheme.
But let’s take it that we can do this and consider the following.
Assuming everyone in the world is at their worst possible misery and we could enhance the life of one person so that their well-being is improved while everyone else’s misery is maintained, the outcome would be morally better than doing nothing – suckiness would decrease overall. Let’s disregard the question of the character of that person and assume we know exactly what the outcome of the decreased suckiness will be, and ask how do we decide who it is? Or let’s enlarge the experiment and privilege a number of people, while others remain subjugated, for this is what it is. How do we decide who they are if they are all at the bottom of the scale?
What this scenario leads us to, in a world of finite resources, is the argument in support of slavery. In this argument, the well-being of an enslaved minority (as measured by overall moral health) is sacrificed for a maximisation of the well-being of a majority, or a rise in the collective well-being. In fact, in an extreme case, bliss may explode in a minority (kind of like what we see when we look at wealth distribution) at the expense of a majority. Put simply, this system does not consider fairness or justice as values that come into play.
Of course it might be argued that the enslaved, for whatever reason, don’t have the psychological capacity to reach the same level of moral bliss as the free; perhaps they are already at their maximal well-being, even if it is misery. Why we can’t actually remove barriers to capacity growth is unclear, and it is equally unclear where we would do so in any case. Schooling offers an analogy: we need to decide whether to help the illiterate or enhance the highly literate. So how do we decide? And again, who decides?
Moving on, while the consequentialist argument is clear in its claim that genocide will result in more misery than its absence (though why remains murky), this argument is premised on the position that people are not in abject misery. Assume that a proportion of the world’s population are at this nadir of the scale, the absolute ‘bottom’ end, and that these people are merely consuming resources to, at best, subsist. There is a powerful claim to be made that their extermination will provide a decrease in the collective suckiness of the world, and a decrease in suckiness is justification enough, because by definition it increases the overall “well-being” of those left.
This position gets more insidious if we consider the ability to maximise the use of resources. Clearly, if one person is using a resource more efficiently than another, then there is a claim to strip the resource from the latter and give it to the former, assuming, of course, that there are no diminishing returns, which is another assumption we might want to look at closely.
I had a discussion about such matters, about genocide in essence, with a colleague who suggested that this was a fine stance to take if society agreed that this was the best possible action. Now this assumes (again) a number of things. Chiefly, it assumes that those at the bottom have equal power to decide as those at the top. It also assumes that everyone actually measures suckiness in the same way, which is problematic, and that the returns from each action can be reliably measured. It also discards any other values which may exist, and which, as I pointed out above, may well be identifiable by the same conditions as moral values are claimed to be. Oops.
If we were to consider rights and liberties as having intrinsic value in themselves, which may yet turn out to be the case, then justice, as Martha Nussbaum would argue, takes on a moral aspect alongside suckiness, and we have a means of going forward, assuming we know how to integrate these. Justice is the means by which we can keep a check on the excesses of Harris’ and similar moral theories.
Nevertheless, there is a lot to like about the feel and the ease of Harris’ theory. It is eminently accessible to a lay person and lays out rather simply what we need to do if we are to act morally. But this very simplicity is its curse, as an uncritical reading of it can easily lead a person to rationalise their own actions at an atomistic level. By latching naively onto this consequentialist stance, it becomes perfectly acceptable to argue that if something benefits me more than it costs you, then it is the moral thing for me to do. And this is where the problem lies: where the individual takes this as a theory for the self, for the promotion of personal well-being, and applies it without any consideration of the complexity of outcomes.
How does this action impact the suckiness of those people who haven’t been born? How does such action impact those conscious beings that are once or twice removed from the situation? How does such action impact the immediate social and environmental fabric, and thus those living within it?
And this then brings us back to the crux of the question I posed at the beginning. With all the above in mind I ask again “Why do bad things happen to good people?”
If, as it is posited above, moral actions are ones that decrease overall suffering, then it is possible that there are people who are actively involved in actually pursuing such actions and doing so by lowering their place on the moral scale to increase others’. It may well be that in an absolute sense, for example, the 100 dollars that you give away will do far more good distributed, say, to malaria prevention than it would if you had kept it. I would then define you as a good person.
I can’t say for sure that the outcome would be good, as I have no way to measure this objectively in a neuroscientific way, but then, and this is important, neither does anybody else have the ability to measure such outcomes. The fact that it intuitively feels like a good action doesn’t necessarily make it one, but by any gross measurement of suffering, it seems obvious that saving ten lives is worth more than a bottle of scotch.
Of course, without such measurements, it is impossible to determine the value of a person’s moral worth, and so, any choice that promotes one individual’s action over another’s is done without any recourse to the full knowledge of consequences. Let me repeat this. Without any objective measurement of suffering, I cannot legitimately privilege my worth over yours and, therefore, acting in the belief that I can is utterly irresponsible, and, I would add, antithetical to the central thesis that neuroconsequentialists propose.
The responsible thing to do, the apt thing to do for those seeking to reduce their ‘suckiness’ at the expense of another, is to abide by the precautionary principle, namely that if there is a possibility that their action will result in an overall increase in misery, then that action should be curtailed. But that would require people to think, to think beyond the obvious, to predict outcomes accurately, and to actually act on the objective data if it were there.
And unfortunately, it seems that this is too much work, or perhaps more sadly, that such work might ultimately lead them to the conclusion that the best course of action would be to increase their own misery in order to lessen that of others. And so, we have people acting in a moral void, and doing so results in bad things happening to good people.