Saturday, 17 April 2010

WTF is desirism?

A little while ago on Facebook, fellow skeptic Martin Freedman posted a link to a quiz that was meant to tell you how "consistent" your moral philosophy is, based on a handful of trolleyology questions. We both came out as 100% consistent, for different reasons. I killed the one man to save the many every time (note that the transplant dilemma was not one of the questions asked), as Martin pointed out "like a good utilitarian should", but he mentioned that he himself did not, as he favoured a philosophy called "Desirism". I hadn't heard the term before, and Wikipedia was no help. Martin helpfully provided references, but most of it seemed to me to be too detailed or not pitched at the right level, so I struggled to get my head around the concepts.
After a round of comments on one of Martin's posts defending desirism from a critic, we decided that we'd have a public exchange about it, so that he could explain it to me, and perhaps in the process explain it to others.

Firstly we thought, by way of introduction, we should explain why we are interested in ethics and morality. Perhaps least importantly, and as should be obvious from my other posts, I utterly reject the idea that what is moral is dictated by some Deity and that it is handed down to us in a holy book, which may need interpretation by a priesthood. For hundreds of years the morality espoused by the big three Abrahamic religions has lagged behind that of the general population. Those books may have been relevant in their time, though I'm not even convinced of that, but they are an anachronism now. As Bertrand Russell said "the moral objection [to religion] is that religious precepts date from a time when men were more cruel than they are and therefore tend to perpetuate inhumanities which the moral conscience of the age would otherwise outgrow." Even now, most of the major religions count homophobia and misogyny among their many faults, though they perceive them as virtues, not to mention the one that seems to think condom use and abortion are worse sins than child-rape and its subsequent cover-up.

So with religion out of the equation, what do we have left? How do we make moral decisions? Some religious people will tell you that without a god there is no reason for atheists to be good. Well, it appears that natural selection has built at least a rudimentary grasp of morality into us. Compassion and empathy of a sort manifest at a very young age, and are also present in some of our closest relatives in the animal kingdom. These tendencies are strongly influenced by society as we grow, but the building blocks of our morality are apparently innate. The problem here is that the rules that evolution has given us were "designed" by that blind watchmaker to cope with the tribal life of early hominids, and have not kept pace with the acceleration of change in the way we live our lives over the last ten thousand years or so. Rules of thumb that helped us propagate our genes by giving aid to those likely to share them do not scale up well to the global economy; they barely scale up to the complex nature of our own local social interactions. How can we tell if banning burqas is a bad thing? If homophobic B&B owners have the right to refuse services to homosexuals? If starting a war against an oppressive and mass-murdering regime in a foreign country is the right thing to do? Our intuition, born of evolution and coloured by our culture, no longer serves us well. How do we know if our instincts are "right"? Especially since many other people's instincts are different? Just because something is a certain way in human nature does not mean that this is how things ought to be; that is the naturalistic fallacy at work. Just how do we resolve these dilemmas?

I've read a little of what various philosophers and other thinkers have to say on the subject of morality, and currently favour a variety of modified utilitarianism. Utilitarianism asserts that the moral worth of an action is determined by its utility, which is to say how much the results of that action increase the sum of happiness among all sentient beings.
My reason for adopting a utilitarian view essentially goes something like this:
  • I know that I can suffer.
  • I assume that others are also capable of suffering (it certainly appears that they are).
  • The (apparent) suffering of others causes me suffering. 
  • I would prefer that others do not inflict suffering on me.
  • Others are less likely to inflict suffering on me, or on others who may subsequently inflict suffering on me, if I do not do so to them.
  • It therefore works in my favour, and everyone else's, for me to try to minimise the suffering of as many others as possible.
In summary, it makes me feel good to be good, and what constitutes "good" is minimising suffering, and thereby maximising the pleasure/happiness/wellbeing of as many sentient beings as possible. It's a bit more complicated than that, but that gets the main point across. Some, particularly the religious, might say that this is a selfish way of looking at morality, and to an extent they may be right, but Darwin and Dawkins have taught us that selfishness is at the root of our morality, in reciprocal altruism (you scratch my back, I'll scratch yours) and kin-selection (advanced nepotism). And in any case, someone who is only good because of the promise of eternity in paradise or the threat of eternity in torment is in no position to criticise.

This still leaves me with a problem. How do I, in a world where the information available to me is often incomplete and imperfect, and where the results of my actions cannot all be accurately predicted, decide which actions will minimise the suffering of the most sentient beings? Well, largely, like most people, I wing it. I make decisions based on the best information I have; if I don't think I have enough, I seek more until I either have all that's available, or I think I have enough, or the effort needed to get more goes beyond what I'm prepared to invest. I guess you could call it "guided intuition". Several people have tried to propose mathematical models for calculating or approximating the balance of suffering/happiness, but they are all so far (IMHO) flawed.

So then I hear of Desirism, apparently also sometimes called "Desire Utilitarianism", which, if I've understood it correctly, seems to offer an answer to this problem by approximating a method of minimising suffering and maximising wellbeing, with a rule of thumb that says we should foster behaviours that will, in most situations, fulfil the desires of the maximum number of people. Therefore we don't have to do complicated maths or reasoning every time we want to make a decision; we just have to do it for a set of given hypothetical situations, and then run our lives by those rules, re-evaluating them as new evidence comes along.

I have my doubts about it, which may simply be down to my lack of grasp of the theory, but before I express them, I'll hand over to Martin to tell me what Desirism is in his own words.
