/r/Utilitarianism
The greatest good for the greatest number!
Basic Introduction and FAQ to Utilitarianism
Utilitarianism: A moral philosophy that says that what matters is the sum of everyone's welfare, or the "greatest good for the greatest number".
Utilitarianism comes in different variants. The most well-known variants are: Total, Hedonistic, CEV, Average, Preference.
Why Utilitarianism? Because our intuitions are wrong. Because we cannot be trusted to make decisions. Because morality is subjectively objective.
Please spread the word of this subreddit!
“Actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness” –John Stuart Mill
The Shrimp Welfare Project may be one of the most effective charities in the world (considering both human and animal charities).
Here’s a blog post making the case for it: https://forum.effectivealtruism.org/posts/qToqLjnxEpDNeF89u/the-case-for-giving-to-the-shrimp-welfare-project
Here’s the organization: https://www.shrimpwelfareproject.org/
To me it seems a significant portion of humanity doesn't want to increase overall pleasure and decrease overall suffering. This often becomes clear during elections. Many people only care about their own pleasure and suffering, but some even want the suffering of others.
This sometimes discourages me. No matter how much harm I reduce or pleasure I create, there will always be people who want to make things worse. Do others feel the same? How do you deal with it?
I think we really need to create a universal symbol of utilitarianism; the current one is not widely used and may be mistaken for a symbol of law and the legal profession.
What do you think? We need to do something significant for our extremely moral movement.
Philosophy is interesting to me and I'm currently in a philosophy class and I keep having this thought so I wanted to get y'all's opinions:
Utilitarianism relies on perfect knowledge of what will and won't occur, which no human has! The trolley problem, the epitome of utilitarian examples, has a million variants regarding the people on the tracks, and the answer changes every time. If I had perfect knowledge of everything, then yes, utilitarianism would be the best way to conduct oneself, but I don't, and the millions of unintended and unpredictable consequences hang over every choice made through this lens. And the way I've seen utilitarian arguments play out is always by treating everything in a vacuum, which the real world is not. For instance, the net-positive argument in favor of markets says that if at least one person in the exchange gets what they want and the other side is neutral or happier, then the exchange is good. But it does not consider that when I buy a jar of salsa, it stops another family from having their Taco Tuesday. This example is benign, but it epitomizes a lot of what I see in utilitarian arguments: why are we deciding how to conduct ourselves based on a calculation whose answer is impossible to know?
Anyways, any reading that acknowledges this argument? Additionally, an idea on where I fall on the philosophical spectrum?
I read most of it for a video I was making the other day and... damn. Knowing how dedicating his life to all of this affected Mill (combined with depression?) hits so hard. Here's a quote from page 138 of my version:
“Suppose [...] that all the changes in institutions and opinions which you are looking forward to, could be completely effected at this very instant: would this be a great joy and happiness to you?” And an irrepressible self-consciousness distinctly answered, “No!” At this my heart sank within me: the whole foundation on which my life was constructed fell down. All my happiness was to have been found in the continual pursuit of this end. The end had ceased to charm, and how could there ever again be any interest in the means? I seemed to have nothing left to live for.
Also here's the link for the video if anyone is curious: https://www.youtube.com/watch?v=aOFc8Glsiwc
Can these two principles/mechanisms be argued against each other?
One of the biggest dilemmas I face when I think about utilitarianism is the issue of collective impact. Take voting, for example: individually, a person's vote will have no utilitarian impact whatsoever; such impact only appears at the collective level. But if none of these individual acts has an impact in itself, is the utility of the collective isolated in itself, without direct correspondence to any individual, or is the impact divided equally among those who contributed to it? How objective would either approach be?
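One standard way to make this accounting concrete is expected value: each voter's expected impact is the probability that their single vote is decisive multiplied by the total utility at stake, and those small shares add back up to the collective impact. A minimal sketch, with every number invented purely for illustration:

```python
# Expected-value accounting for a single vote (illustrative numbers only).

p_decisive = 1 / 1_000_000      # assumed chance that one vote swings the outcome
total_benefit = 5_000_000.0     # assumed total utility at stake in the election
n_voters = 1_000_000            # assumed electorate size

expected_impact_per_voter = p_decisive * total_benefit
print(expected_impact_per_voter)              # 5.0 expected utility units per vote
print(expected_impact_per_voter * n_voters)   # 5,000,000.0: with p_decisive = 1/n_voters,
                                              # the shares sum exactly to the stake

# On this toy view the impact is neither "isolated in the collective" nor divided
# up after the fact: each voter carries a small but nonzero expected share.
```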
What do you think? Is there any difference? I don't think so.
Just the above question. Every biological life tries to avoid pain and seek pleasure. So why do we need to orient our society, or even the human race, toward reducing suffering when that is already the default?
If an evil person was told that stopping 1,000 murders would justify committing one murder, it could potentially lead to fewer total murders.
Evil or morally weak individuals already know they should minimize harm, but this knowledge does not motivate them.
This idea would have many dangerous side effects today, but under what circumstances would this be a reasonable strategy?
Consider a dystopian society, such as during slavery. People could purchase and kill a slave without any consequences. In such a context, would a similar moral trade-off to motivate evil people make sense?
Today animals can be tortured and killed without consequences. Under what circumstances might a utilitarian argue that if an evil or morally weak person stops X instances of animal farming, they may farm an animal themselves?
Edit:
To clarify: I'm not suggesting utilitarians should do evil to create good. I'm asking what utilitarians should tell currently evil/weak people to do if we know they won't be motivated to become virtuous any time soon.
For those who would oppose someone freeing 1,000 slaves as compensation before enslaving 1 person, what should the utilitarian limits be?
Would you oppose someone freeing 1 million slaves as compensation for littering 1 item? Freeing 10 million slaves as compensation for enslaving 1 person?
Or should people never encourage anyone to make such an arbitrary exchange?
The minimum standard of morality in terms of utility would be to do nothing, resulting in a net utility change of zero. [edit: There is a minimum level because utilitarians in real life don't maximize utility at every opportunity. There is an accepted level at which people are immoral even though they could choose not to be.]
If doing nothing [edit: or whatever level the average utilitarian accepts] is morally accepted, performing one negative action offset by two positive actions should also be permissible, as it results in a net increase in utility.
Animal advocacy through digital media is estimated to save ~3.7 animals per $1. Therefore, if one were to donate $3 each time they eat an animal, there would be more total utility, which should also be morally acceptable.
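For what it's worth, here is the arithmetic that the offsetting step leans on, using only the ~3.7 animals-per-dollar estimate quoted above (a back-of-the-envelope sketch, not an endorsement of offsetting):

```python
# Rough offsetting arithmetic using the figure quoted in the post.

animals_saved_per_dollar = 3.7   # the post's estimate for digital advocacy
donation_per_meal = 3.0          # dollars donated each time one animal is eaten
animals_eaten_per_meal = 1

animals_saved = animals_saved_per_dollar * donation_per_meal    # ~11.1
net_animals_spared = animals_saved - animals_eaten_per_meal     # ~10.1
print(net_animals_spared)

# On these numbers the trade comes out strongly net positive, which is what the
# "morally acceptable" step relies on; whether offsetting is actually permissible
# is the separate philosophical question being asked here.
```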
This would also work with humans, to be consistent. Ten murders are worse than one person committing a murder and then stopping ten murders. There should be consequences for murder, but while in prison, such a person could reflect that they increased total utility.
There should be an option for people who are convinced of veganism but too weak to stop eating animals.
Do you agree with this argument? Are there any gaps or flaws?
P1: Utilitarianism seeks to maximize overall well-being and minimize suffering.
P2: To accurately and efficiently maximize well-being and minimize suffering, we must consider the capacities of beings to experience well-being and suffering.
P3: Beings with greater psychological complexity have a higher capacity for experiencing both suffering and well-being, as their complexity enables them to experience these states in more intense and multifaceted ways. Therefore, the magnitude of their suffering or well-being is greater compared to less complex beings.
C1: Maximizing well-being and minimizing suffering in an efficient and accurate manner inherently favors beings with greater psychological complexity, since more well-being and suffering is at stake when something affects them.
P4: Humans are the most psychologically complex beings on Earth, with the highest capacity to experience complex well-being and suffering.
C2: Therefore, maximizing well-being under utilitarianism inherently focuses on or prioritizes humans, as they have the greatest capacity for well-being and suffering.
P5: A system that inherently prioritizes humans can be considered anthropocentric.
C3: Therefore, utilitarianism, when aiming for optimal efficiency in maximizing well-being and minimizing suffering, is inherently anthropocentric because it prioritizes humans due to their greater capacity for well-being and suffering.
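A toy illustration of how the capacity-weighting in P3 drives C1 and C2 (my own construction, with arbitrary capacity numbers, not part of the argument itself):

```python
# If moral weight scales with a being's capacity for well-being/suffering, the
# same nominal relief "counts" for more when it goes to higher-capacity beings.
# The capacity values below are invented purely to show the mechanism.

capacities = {"human": 1.0, "pig": 0.5, "shrimp": 0.01}
suffering_relieved = 1.0   # identical nominal relief delivered to each being

weighted_impact = {being: cap * suffering_relieved for being, cap in capacities.items()}
print(weighted_impact)     # {'human': 1.0, 'pig': 0.5, 'shrimp': 0.01}

# Under P3, an efficiency-minded maximizer directs marginal resources toward the
# top of this ranking, which is the tilt C2/C3 call anthropocentric. The tilt
# depends entirely on the assumed capacity ratios, which is where most objections land.
```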
Flaws found:
What would it mean for utilitarianism to be the objectively correct moral system? Why would you think so/not think so? What arguments are there in favor of your position?
I was hoping a utilitarian could help me with this. I recall reading Mill's Utilitarianism and finding a passage where he talked about how utilitarianism helped him deal with death, stating that it is easier to deal with death when you care about the wellbeing of those who outlive you. Or something similar to that. I may be misremembering, but I found immense comfort in that thought, and I'd love to find the quote again. I've tried using AI to find it, but am still drawing a blank.
Do others find comfort in this thought?
So you would pull the lever in the trolley problem and save 4 people? Perfect. Now let me ask another question - would you kill a guy and harvest his organs to save 5 people? They all need a vital organ, are in critical condition and there aren't any available. Do you kill him?
Be born into an aristocratic family in 1747
Excel in all classes, clearly showing great genius, going as far as reading a book on English history as a toddler, studying Latin at 3, and playing the violin pristinely by 7, supported by University College London and multiple academics
Graduate with a law degree, having been admitted at 12
Be given special legal privileges because of your sheer skill
Criticize British law, American revolutionary ideology, and multiple other systems and ideas with quote-worthy "catchphrases" like "Demon of Chicane" or "nonsense on stilts," a trend continuing into your later life
Move to Russia and work in prisons to reduce mortality
Become outspoken on multiple issues of philosophy, become famous, have relationships with multiple women
Champion welfare, the separation of church and state, freedom of expression, individual and economic freedoms, equal rights for women, the right to divorce, the "decriminalization of homosexual acts," the abolition of slavery, capital punishment, and physical punishment (including for children), strong animal rights, and the reduction of appeals to God in philosophy, all in the 1700s and 1800s
Set the course for utilitarian philosophy for hundreds of years, being progressive centuries beyond your days
Die happily
Refuse to elaborate
Get a reddit greentext story
Regular actions like eating, wearing clothes, using a cell phone, or driving have so many (mostly unintended) negative consequences, from pollution to worker exploitation to damaging chemicals, that it seems impossible to create more good than bad with almost any action. Often it seems to me that the action with the best outcome is to do nothing at all. Is it possible to still act in a way that creates more good than bad?
Most of my idea is in the title. Utilitarian philosophers should come together to create a political entity advocating for things like animal rights, progressivism, socialism, and other things associated with people like Bentham.
I dunno, just some form of organization would be nice.
"Natural" being anything non-human: disease, starvation, being killed by another animal, etc.
Please focus first on the question itself before you comment on the implication of the question!
Suppose that, in exchange for making yourself miserable, you could make your descendants as happy as possible. Your descendants will be offered the same deal should you take it, and so forth for their descendants. If any generation refuses, the deal stops with them.
Suppose that you will indeed have descendants, so that the question is non-trivial.
Would you accept the deal? Why or why not?
I'm sure someone has thought of this, but let's say you have an action that is deemed never justifiable; in this instance, punching a baby. Everyone can agree that punching a baby is always bad, but what if you were in a Saw-like scenario where you either punch a baby or three babies get super-punched? Under utilitarian moral philosophy, it would be more justifiable to punch the baby, as it results in fewer babies getting punched and no babies getting "super" punched. This implies that any action deemed unjustifiable can become justifiable if it is a direct effort to prevent more of that action, meaning that in utilitarian moral philosophy there is theoretically (as this Saw-like scenario is obviously far-fetched) never a truly "unjustifiable" action, since you could always justify it by saying it prevents more of that action. Is my baby-punching paradox stupid? Is this a well-known concept, and is there any retort?
I've been a utilitarian for a while but all my knowledge on it comes from, and I'm embarrassed to admit this, YouTube debates.
Now that you're done writing a justifiably angry comment, where can I start learning about Utilitarianism as someone who's never read any ethical philosophy outside of my college ethics class 8 years ago? Trying to reignite the love for philosophy I had in school
And if we accept that life in the wild is worth living for the animal, eating said animal should then be more ethical than eating a plant-based meal, since by eating animals we create new animal lives through the increased demand.
If, however, we don't think life in the wild is worth living (for any given species), we come to some weird conclusions. Are we then morally obligated to drive this species to extinction, since they are a net harm to themselves?