Utilitarianism

mockingbird

Silver Meritorious Patron
I found an interesting video on utilitarianism, and it even includes the phrase "greatest good for the greatest number" — which was around long before Ron Hubbard and his ethics book.

I found it interesting that it points out the difficulty people have had for centuries in deciding whether morality should be rigid adherence to rules or something more flexible.

It also introduces rule utilitarianism and contrasts it with other ideas. I am really curious what other people think, because I have found no moral system or philosophy that is correct in every situation. People aren't satisfied with "just do the best you can in each situation, and if you learn more, then a different decision may be right in a different situation" — it seems too open-ended. But honestly, I don't know any better approach.

People call deciding based on the situation "situation ethics," and conservative people who value tradition and hierarchy usually hate situation ethics. But if you learn that a behavior would have a bad outcome and then decide to change it, isn't that the right thing to do? Even if it breaks a rule? Or a law?

I will post the transcript and video below:

Should Batman kill the Joker? If you were to ask the Dark Knight himself, with his hard-and-fast no-killing rule, he’d say absolutely not. Actually, in fact, he would say: “absolutely not.” When you think about it, dude is pretty Kantian in his ethics.


Regardless of what the Joker does, there are some lines that good people do not cross, and for Batman, killing definitely falls on the wrong side of that line. But, let’s be real here: the Joker is never gonna stop killing.


Sure, Batman will have him thrown back in Arkham, but we all know that he’s gonna get out – he always gets out – and once he’s free, he will kill again.


And maim and terrorize. And when he does, won’t a little bit of that be Batman’s fault? Batman has been in a position to kill the Joker hundreds of times.


He has had the power to save anyone from ever being a victim of the Joker again. If you have the ability to stop a killer, and you don’t, are you morally pure because you didn’t kill? Or are you morally dirty because you refused to do what needs to be done? So, why do I describe Batman as Kantian? Well, the school of thought laid out by 18th-century German philosopher Immanuel Kant – now known as Kantianism – is pretty straightforward.


More precisely: it’s absolute. Kantianism is all about sticking to the moral rulebook. There are never any exceptions, or any excuses, for violating moral rules.


And our man Batman tries his hardest to stick to his code, no matter what. But there are other ways of looking at ethics. Like, instead of focusing on the intent behind our behavior, what if we paid more attention to the consequences? One moral theory that does this is utilitarianism.
It focuses on the results, or consequences, of our actions, and treats intentions as irrelevant. Good consequences equal good actions, in this view. So, what’s a good consequence? Modern utilitarianism was founded in the 18th century by British philosophers Jeremy Bentham and John Stuart Mill.
But the theory has philosophical ancestors in ancient Greek thinkers such as Epicurus. All of these guys agreed that actions should be measured in terms of the happiness, or pleasure, that they produce. After all, they argued, happiness is our final end – it’s what we do everything else for.
Think about it like this: many things that you do, you do for the sake of something else. You study to get a good grade. You work to get money. But why do you want good grades, or money? There are different answers we could give – like maybe we’re seeking affirmation for our intelligence, or the approval of our parents, or a degree that will give us a career we want.



But why do we want that particular career? Why do we want approval? We can keep asking questions, but ultimately our answer will bottom out in, “I want what I want because I think it will make me happy.” That’s what we all want – it’s one of the few things everyone has in common.
And utilitarians believe that’s what should drive our morality. Like Kant, utilitarians agree that a moral theory should apply equally to everyone. But they thought the way to do that was to ground it in something that’s really intuitive.



And there’s really nothing more basic than the primal desire to seek pleasure and avoid pain. So, it’s often said that utilitarianism is a hedonistic moral theory – this means the good is equal to the pleasant, and we ought, morally, to pursue pleasure and happiness, and work to avoid pain.
But, utilitarianism is not what you’d call an egoistic theory. Egoism says that everyone ought, morally, to pursue their own good. In contrast to that, utilitarianism is other-regarding. It says we should pursue pleasure or happiness – not just for ourselves, but for as many sentient beings as possible.
To put it formally: “we should act always so as to produce the greatest good for the greatest number.” This is known as the principle of utility.


Ok, no one’s gonna argue with a philosophy that tells them to seek pleasure. But, sometimes doing what provides the most pleasure to the most people can mean that you have to take one for the team.
It can mean sacrificing your pleasure, in order to produce more good overall. Like when it’s your birthday and your family says you can choose any restaurant you want.


The thing that would make you happiest is Thai food, but you know that that would make the rest of your family miserable.


So when you choose Chinese – which is nobody’s favorite, but everybody can make do – then you’ve thought like a utilitarian. You’ve chosen the action that would produce the most overall happiness for the group, even though it produced less happiness for you than other alternatives would have.


The problem is, for the most part, we’re all our own biggest fans. We each come pre-loaded with a bias in favor of our own interests.


This isn’t necessarily a bad thing – caring about yourself is a good way to promote survival. But where morality is concerned, utilitarians argue, as special as you are, you’re no more special than anybody else.


So your interests count, but no more than anyone else’s. Now, you might say that you agree with that. I mean, we all like to think of ourselves as being generous and selfless.


But, even though I’m sure you are a totally nice person – you have to admit that things seem way more important – weightier, higher-stakes – when they apply to you, rather than to some stranger.
So, utilitarians suggest that we make our moral decisions from the position of a benevolent, disinterested spectator. Rather than thinking about what I should do, they suggest that I consider what I would think if I were advising a group of strangers about what they should do.


That way, I have a disposition of good will, but I’m not emotionally invested. And I’m a spectator, rather than a participant. This approach is far more likely to yield a fair and unbiased judgment about what’s really best for the group.


Now, to see utilitarianism put to the test, let’s pop over to the Thought Bubble for some Flash Philosophy. 20th-century British philosopher Bernard Williams offered this thought experiment.
Jim is on a botanical expedition in South America when he happens upon a group of 20 indigenous people, and a group of soldiers. The whole group of indigenous people is about to be executed for protesting their oppressive regime.


For some reason, the leader of the soldiers offers Jim the chance to shoot one of the prisoners, since he’s a guest in their land.


He says that if Jim shoots one of the prisoners, he’ll let the other 19 go. But if Jim refuses, then the soldiers will shoot all 20 protesters. What should Jim do? More importantly, what would you do? Williams actually presents this case as a critique of utilitarianism.


The theory clearly demands that Jim shoot one man so that 19 will be saved. But, Williams argues, no moral theory ought to demand the taking of an innocent life. Thinking like a Kantian, Williams argues that it’s not Jim’s fault that the head soldier is a total dirt bag, and Jim shouldn’t have to get literal blood on his hands to try and rectify the situation.


So, although it sounds pretty simple, utilitarianism is a really demanding moral theory. It says, we live in a world where sometimes people do terrible things.


And, if we’re the ones who happen to be there, and we can do something to make things better, we must. Even if that means getting our hands dirty. And if I sit by and watch something bad happen when I could have prevented it, my hands are dirty anyway.


So, Jim shouldn’t think about it as killing one man. That man was dead already, because they were all about to be killed. Instead, Jim should think of his decision as doing what it takes to save 19.
And Batman needs to kill the Joker already. Thanks, Thought Bubble! Now, if you decide you want to follow utilitarian moral theory, you have options. Specifically, two of them. When Bentham and Mill first posed their moral theory, it was in a form now known as act utilitarianism, sometimes called classical utilitarianism.


And it says that, in any given situation, you should choose the action that produces the greatest good for the greatest number.


Period. But sometimes, the act that will produce the greatest good for the greatest number can seem just wrong. For instance, suppose a surgeon has five patients, all waiting for transplants. One needs a heart, another a lung.


Two are waiting for kidneys and the last needs a liver. The doctor is pretty sure that these patients will all die before their names come up on the transplant list.


And he just so happens to have a neighbor who has no family. Total recluse, not even a very nice guy. The doctor knows that no one would miss this guy if he were to disappear.


And by some miracle, the neighbor is a match for all five of the transplant patients. So, it seems like, even though this would be a bad day for the neighbor, an act-utilitarian should kill the neighbor and give his organs to the five patients.


It’s the greatest good for the greatest number. Yes, one innocent person dies, but five innocent people are saved. This might seem harsh, but remember that pain is pain, regardless of who’s experiencing it.


So the death of the neighbor would be no worse than the death of any of those patients dying on the transplant list.


In fact, it’s five times less bad than all five of their deaths. So thought experiments like this led some utilitarians to come up with another framework for their theory.


This one is called rule utilitarianism. This version of the theory says that we ought to live by rules that, in general, are likely to lead to the greatest good for the greatest number.


So, yes, there are going to be situations where killing an innocent person will lead to the greatest good for the greatest number.


But, rule utilitarians want us to think long-term, and on a larger scale. And overall, a whole society where innocent people are taken off the streets to be harvested for their organs is gonna have a lot less utility than one where you don’t have to live in constant fear of that happening to you.
So, rule utilitarianism allows us to refrain from acts that might maximize utility in the short run, and instead follow rules that will maximize utility for the majority of the time.
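
The act-versus-rule contrast in the transcript can be sketched as a toy calculation. Every number and label below is invented purely for illustration; there is no real "utility meter," which is part of the standard objection to the theory:

```python
# Toy contrast of act vs. rule utilitarianism on the transplant case.
# All utility numbers are made-up assumptions, not measurements.

# An act utilitarian scores individual acts by their direct consequences:
acts = {
    "harvest the neighbor": +4,   # five patients saved, one neighbor killed
    "do nothing": -5,             # five patients die
}
act_choice = max(acts, key=acts.get)

# A rule utilitarian instead scores general rules by their long-run
# effect on society, e.g. everyone's fear of being harvested themselves:
rules = {
    "doctors may kill to harvest organs": -100,  # society of constant fear
    "doctors never kill patients": +50,          # occasional losses, but trust
}
rule_choice = max(rules, key=rules.get)

print(act_choice)    # the act with the best immediate consequences
print(rule_choice)   # the rule with the best long-run consequences
```

With these invented numbers, the act utilitarian harvests the neighbor while the rule utilitarian refuses, which is exactly the divergence the transcript describes.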


As an owner of human organs, this approach might make sense to you. But I still gotta say: if Batman were a utilitarian of either kind, it wouldn’t look very good for the Joker. Today we learned about utilitarianism.


We studied the principle of utility, and learned about the difference between act and rule utilitarianism. Next time, we’ll take a look at another moral theory – contractarianism. Crash Course Philosophy is produced in association with PBS Digital Studios.


You can head over to their channel and check out a playlist of the latest episodes from shows like The Good Stuff, Gross Science, and PBS Idea Channel.


This episode of Crash Course was filmed in the Doctor Cheryl C. Kinney Crash Course Studio with the help of all of these awesome people, and our equally fantastic graphics team is Thought Cafe.

 

Karakorum

supressively reasonable
In my opinion, the greatest problem with utilitarianism is not that it is unethical or promotes low moral standards in some extreme cases. The greatest problem is that it isn't practical. We do not have the "computing power" to predict all the consequences of our actions, especially the long-term consequences for people far away from us. Moreover, we do not have a tool that would allow us to measure precisely how much "pleasure/happiness/least suffering" we have caused.

That's the main problem.

The video already presented the "transplant" dilemma.

Let me offer a more light hearted dilemma of the "last hope" sort:

Assume you are a high school student who needs to decide whom to invite to prom. As a morally oriented person, you decide that you should invite a girl who otherwise would not be able to get anyone to take her. So you find yourself with two options:
- Ann, who is popular and has 50 friends, but is ugly, and nobody invited her because of that.
- Betty, who is a lonely introvert with no friends.

1. If you invite Ann, you will make her happy, and her 50 friends quite happy as well. Betty will be miserable.
2. If you invite Betty, she will be immensely happy because you were her "only hope". Ann won't be happy, but she won't be miserable, because she can always hang out with her friends at the prom, even without a date.

"Straightware" utilitarianism would make you choose Ann. Because the added happines of her and her 50 friends should mathemathically outweigh the negative feelings of a single person (Betty).
But is that the case? Do we have a meter that we could measure the happiness of all these people with? Nope.
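
The arithmetic in this dilemma can be written out as a toy calculation. Every happiness number below is an invented assumption, which is precisely the objection: no real meter supplies them.

```python
# Toy "utility meter" for the prom dilemma.
# All happiness deltas are made up for illustration only.

def total_utility(happiness_changes):
    """Sum the hypothetical happiness change of everyone affected."""
    return sum(happiness_changes.values())

# Option 1: invite Ann. She and her 50 friends gain; Betty is miserable.
invite_ann = {"Ann": +5, "Betty": -10}
invite_ann.update({f"Ann's friend #{i}": +1 for i in range(1, 51)})

# Option 2: invite Betty, her "only hope". Ann is mildly disappointed.
invite_betty = {"Betty": +10, "Ann": -2}

options = {"invite Ann": invite_ann, "invite Betty": invite_betty}
choice = max(options, key=lambda name: total_utility(options[name]))
# The sums come out 45 vs. 8, so this "meter" picks Ann, as the
# post argues a straight utilitarian calculation would.
```

Change any of the invented numbers (say, make Betty's misery -60) and the verdict flips, which illustrates how much the "calculation" depends on unmeasurable inputs.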

Moreover "straightware" would have us assume that one person's happiness is comparable to another person, nobody is worth more than anyone else. The underlying assumption of "straightware" is that people are "grey goo" ("grey goo" = everyone is roughly the same as anyone else).

My conclusion:
Everything we can learn from modern psychology suggests that people are very different from one another, with different comfort zones, different personalities, etc. I'd argue that we should thus outright discard any doctrine based on the "grey goo" assumption, which means we can throw "straightware utilitarianism" into the rubbish bin at once.


LRH is vague enough to be interpreted in different ways, but I think the most coherent interpretation is that he does in fact believe in "grey goo". Could one construct a more nuanced version of Scn free from "grey goo"? Sure.
 

Enthetan

Master of Disaster
Karakorum said: (quoted in full above)
Utilitarianism assumes that the practitioner is smart enough to figure out all the ultimate consequences of his acts. It appeals most to those with egos big enough to make that assumption.
 