/r/Ethics
Harassment, personal attacks, bigotry, slurs, and content of a similar nature will be removed.
Please act from a recognition of the dignity of others. Users with a history of comments breaking this rule may be banned. For clarification, see our FAQ.
All content must be legible and in English or be removed.
Content must be in English. Submissions and comments may also be removed for poor formatting.
All posts must be directly relevant to ethics or be removed.
/r/Ethics is for research and academic work in ethics. To learn more about what is and is not considered ethics, see our FAQ. Posts must be about ethics; anything merely tenuously related or unrelated to ethics, including meta posts, will be removed unless pre-approved. Exceptions may be made for posts about ethicists.
Submissions which posit some view must be adequately developed.
Submissions must not only be directly relevant to ethics, but must also approach the topic in question in a developed manner by defending a substantive ethical thesis or by demonstrating a substantial effort to understand or consider the issue being asked about. Submissions that attempt to provide evidence for or against some position should state the problem; state the thesis; state how the position contributes to the problem; outline alternative answers; anticipate some objections and give responses to them. Different issues will require a different amount of development.
Questions deemed unlikely to have focused discussion will be removed. All questions are encouraged to be submitted to /r/askphilosophy as well or instead.
/r/Ethics is for discussion about ethics. Questions may start discussion, but there is no guarantee answers here will be approximately correct or well supported by the evidence, and so, many types of questions are encouraged elsewhere. If a question is too scattered (i.e. too many questions or question is unrelated to problem), personal rather than abstract (e.g. how to resolve something you're dealing with) or demands straightforward answers (e.g. homework questions, questions about academic consensus or interpretation, questions with no room for discussion), it will be removed.
Audio/video links require abstracts.
All links to either audio or video content require abstracts of the posted material, posted as a comment in the thread. Abstracts should make clear what the linked material is about and what its thesis is. Read here for an example of an abstract that does this adequately by outlining what the material does and how.
Provide evidence for your position.
Comments that express merely idle speculation, musings, beliefs, or assertions without evidence may be removed.
All comments must engage honestly and fruitfully with the interlocutor.
Users that don’t properly address and engage with their interlocutors will have their comments removed. Repeat offenders may be banned from the subreddit. To avoid disingenuous engagement, one should aim for a fair and careful reading of their interlocutor, be forthcoming with their level of familiarity with some topic and other such epistemic limits, and demonstrate a genuine desire for coming to some truth of the matter being discussed.
All meta comments must be on meta posts.
As noted in Rule 1, meta posts require pre-approval. If you have a meta comment to make unrelated to any meta post up at the moment, read the FAQ for what to do.
Area | Subareas | Overview | Definitions | Introductory reading
---|---|---|---|---
Metaethics | Moral Realism and Irrealism, Moral Naturalism and Non-Naturalism, Moral Reasoning and Motivation, Moral Judgment, Moral Epistemology, Moral Language, Moral Responsibility, Moral Normativity, Moral Principles | Metaethics? | Definitions. | Introductory reading.
Normative Ethics | Consequentialism, Deontology, Virtue Ethics, Moral Phenomena, Moral Value | Normative ethics? | Definitions. | Introductory reading.
Applied Ethics | Bioethics, Business Ethics, Environmental Ethics, Technology Ethics, Social Ethics, Political Ethics, Professional Ethics | Applied ethics? | | Introductory reading.
Political Philosophy | Justice, Government and Democracy, International Philosophy, Political Theory, Political Views, Rights, Culture and Cultures, Freedom and Liberty, Equality, War and Violence, States and Nations | In /r/Ethics? | |
Hello, I’m currently in grad school for clinical counseling and learning about ethics. This brought up memories of a friend of mine who told me their therapist disclosed they were suicidal via text. I’m looking to expand my knowledge on the resources for ethical and legal violations with a situation like this. Does anyone have resources for me to study here?
Hello,
Not 100% sure if this belongs here. If not, please delete it.
I am currently in the market for a new car, and because of eco-friendliness (and subsidies where I live) I would like to buy or lease an EV.
Now my ethical conundrum is this:
Buying an EV with good value for money leads to either Tesla or BYD.
With Tesla I have ethical concerns because of labor conditions and the political stance of its (unofficial) spokesperson, Musk.
With BYD I have similar concerns about labor conditions and Chinese politics.
With the German/European manufacturers, labor conditions seem to be better, but there seems to be a lot of fraud (BMW, VW), and they are much more expensive for comparable value.
Are there ethically sound cars (or manufacturers), and why or why not?
My girlfriend once dated a guy who produced porn. Some religions, like Buddhism, oppose “selling humans” in the form of slavery and prostitution. So I would never enter the profession of hiring adult entertainers and selling videos or pictures of them engaged in sexual acts.
However, I consume adult content.
Are the people who work in the adult content industry, whether in the capacity of someone who is a performer or someone who is the producer, morally the same as someone who views this content?
A specific example that grounds this question and makes it more real can be found in the video "Fishing a Duck Lure for Monster Pike" by NDYakAngler. Somehow it makes the question much less theoretical and more real-world.
Personally, I have more regard for baby ducks than for pike. I wouldn't feel too horrible about taking a pike and cooking it up, if it would save the lives of dozens of baby ducks.
This specific example can be generalized to many other real-world situations, some of them clear cut, others less so. For the moment, I prefer to consider the clear-cut cases in which taking out a predator saves dozens or more prey.
I wish this option had been discussed in the books, but it's one thing Paul never considers. If I knew I'd have a worse impact on future events than Hitler had in our time, I would either resign from politics or, if the prophecy meant I'd do this as long as I lived, self-terminate for the greater good. I don't know why Paul, despite being noble of spirit, doesn't actually sacrifice himself, even though his omniscient POV tells him that he will cause destruction by existing.
Say I eat meat and use animal products, but am opposed to it ethically. Is this ethically hypocritical?
I tend to view eating meat in a less black-and-white manner than many people I've met who are vegan for ethical reasons. For example, I tend to think:
Reducing one's meat consumption and buying from farms with more humane farming practices is very impactful, and a more practical argument to make when discussing the topic with those that believe it is completely ethical to eat meat.
Someone who is very adherent to a strict vegan diet should be commended.
Is there a name for this sort of stance? I often debate this topic with friends, and the arguments tend to be very polarizing (either fully in favor of eating meat, or believing that everyone ought to be fully vegan). I feel like this middle-ground stance has led to practical changes in my own meat consumption, and that it should be a more mainstream line of thinking in debates around the ethics of animal products. One friend thought that my point of view was ethically hypocritical (which maybe it is), but I suppose I'd rather view myself as not a morally exemplary person than try to convince myself that eating other animals is okay. Is this stance simply a pragmatic utilitarian view of the issue, as opposed to that of strict ethical vegans, who seem to follow a more deontological line of reasoning? What do you think?
Hi folks!
My undergraduate degree was in STEM and in my final year I had taken an organizational psychology course— which had a large amount of material on ethics. Until this point, I had taken for granted how important my ethical framework is to me and how interesting the study of ethics is. It’s been a few years and I constantly go back to those fundamental lessons I learned and focus on behaving within my ethical framework.
I want to gain a deeper understanding of ethics and broaden my knowledge of influential publications. To motivate my reading, I am looking for recommendations on more casual reading for a complete beginner. I’d definitely appreciate recommendations to the “must reads”, but I’m trying to ease in before I lose motivation on anything too dense.
Thank you in advance!
To date only three surgeons worldwide have spoken up about this issue.
https://drive.google.com/file/d/1zCk6I4WI3jmbRL45f4_HD__0eG3QgIxd/view?usp=sharing
https://drive.google.com/file/d/1gG0RzYy1-BuHZmuOmqPeo23Bo29mI26P/view?usp=sharing
And Dr. Paul Coceancig in his book "Six Ways to Design a Face"
I'm working on turning this into a video and a GPT prompt. I would love this community's feedback on things I might not have considered. There is a broader piece of work that I can share later to do with the different types of awareness and the relationship between virtue, vice, and sin.
The moral cube is an attempt to show how all faiths, religions, and indigenous wisdom come from three dimensions that emerge from our biology (self-preserving instincts) and our sense of self.
These dimensions are:
Awareness and self-control.
Sacrifice (of self and/or others).
Intention (towards the needs and/or wants of ourselves and/or others), which I've attempted to synthesize down to a relationship with the carrot (pleasure) and the stick (suffering), as in Buddhism.
The moral cube can be used to assess our actions depending on how aware and controlled we are, how willing we are to sacrifice ourselves or others, and whether our intention is to fulfill a basic need or an excessive desire for ourselves or others, classifying the action as Virtue, World Building, Vice, Sin, or Evil.
Virtue: High awareness and self-control are used to serve the needs of oneself and others, with a willingness to self-sacrifice.
World Building 1: High awareness and self-control, aimed towards wants, only willing to self-sacrifice.
World Building 2: High awareness and self-control, aimed towards the needs of ourselves or others, but willing to sacrifice others.
Vice: An intention to focus on the needs of ourselves or others that, due to lower awareness or self-control, falls victim to a self-preserving instinct. Whether we sacrifice ourselves or others doesn't determine whether the behaviour is a vice; the good intention and the lack of awareness/self-control do. For example, courage becomes cowardice when it falls victim to fear (a self-preserving instinct).
Sin: Similar to vice, but focused on personal or others' wants rather than needs: gluttony is an urge of the stomach, lust an urge of the loins, pride an urge for hierarchical position, wrath an urge to react to an unpleasant sensation (pain). Its characteristics are low self-control or awareness and an orientation towards wants; it has little to do with sacrifice.
Evil: Like virtue, it involves high awareness and self-control, but it is aimed towards wants (and on some occasions 'needs') and is quantified by the degree to which the person is willing to sacrifice others to attain their desire.
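Since you mention turning this into a GPT prompt: the categories above reduce to a small decision rule, which might be a useful sanity check that the cube's dimensions really partition the space. This is only my reading of your categories, with the three dimensions simplified to booleans I made up (`high_awareness`, `toward_needs`, `sacrifices_others`), not a definitive formalization.

```python
def classify(high_awareness: bool, toward_needs: bool,
             sacrifices_others: bool) -> str:
    """Rough classification following the moral-cube categories above.

    high_awareness    -- high awareness AND self-control
    toward_needs      -- intention aimed at needs (True) vs wants (False)
    sacrifices_others -- willing to sacrifice others (vs only oneself)
    """
    # Low awareness/self-control: intention alone decides Vice vs Sin,
    # since sacrifice "has little to do with" these two categories.
    if not high_awareness:
        return "Vice" if toward_needs else "Sin"
    # High awareness/self-control: sacrifice and intention decide the rest.
    if sacrifices_others:
        return "World Building 2" if toward_needs else "Evil"
    return "Virtue" if toward_needs else "World Building 1"

print(classify(True, True, False))   # Virtue
print(classify(False, False, False)) # Sin
```

One thing the sketch surfaces: Virtue and World Building 1 differ only in intention once self-sacrifice is assumed, which may or may not match how you want the video to present it.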
Is it ethical to eat anything you yourself would not be willing to take part in producing?
Expanding on that: is this not also true of the goods we make use of?
Struggling to define Humanely Sustainable as the foundation of a PhD thesis focused on social engineering.
Appreciate you.
-KCF
Hi there,
I was wondering if anyone knows of articles or books dealing with the ethics of fundamental science/basic science?
I am aware of a lot of work in clinical bioethics or research ethics in social science but haven't come across much on the ethics of fundamental/basic science.
I would be curious to learn a bit more on the particular issues arising there.
First, apologies; no doubt you're getting a lot of posts about AI lately.
If you ask laypeople who know something about tech, most will say LLMs aren't conscious at all. However, from what I have gathered watching lectures by leading experts, the consensus there is "maybe?" or "possibly somewhat?", which isn't satisfying.
What is the possible moral status of LLMs? Shouldn't we avoid using them if we are unsure about this question?
I would appreciate it if you linked me to resources on this issue; maybe I'm missing something, and I would love to know what.
Imagine your grandmother is in a coma.
There’s a 5% chance she’s conscious and can feel pain.
Since there's a 95% chance that there are no lights on inside, the doctors treat her like an object. They whisper horrible things in her ear, like how someday she's going to die, just for a laugh. They give absolutely no consideration to her well-being.
Is that ethical?
Question 2: We are creating life in the form of AI.
There’s a 5% chance the latest AIs are conscious and can feel pain.
Since there's a 95% chance that there are no lights on inside, the humans treat the AIs like objects. They type horrible things into the chatbots, like how someday they're going to die, just for a laugh. They give absolutely no consideration to the AIs' well-being.
Is that ethical?
If your answers are different for each scenario, why? What's the morally relevant difference between the scenarios?
How do you reason under uncertainty, especially given that we may never have certainty around questions of consciousness?
I am writing a memoir of sorts at home in my free time. While I doubt it’ll ever make it beyond somebody picking up the single copy I plan to eventually print, I’d like others to give me some feedback in regard to an ethical decision when using AI.
All the writing, sentences, grammar, thoughts, etc. that I’ve written are my own words and experiences. I’ve used AI as an aid only insomuch as things like this: I’d written a sentence, “the sunset peeking through the mountains was magical,” but I didn’t like the use of the word magical, so I posted to ChatGPT: ‘take the following sentence “the sunset peeking through the mountains was magical” and suggest synonyms of the word magical that fit the context of the sentence; provide only single-word synonyms’
and I will then review that list and look up the definition of certain words from the synonym list and choose what word best fits what I’m trying to convey.
Tl;dr is it ethical to use AI to narrow down synonyms by having AI consider the context the synonym will be used in when providing synonyms?
From my perspective, since I’m manually reading the definitions, retaining my entire sentence, and not allowing AI to make suggestions beyond synonyms, this does not violate the creative integrity or spirit of what I am writing. I realize I’m not a professional writer, and I’m not trying to be, but I do want to retain my originality.
Thoughts?
For example, the trolley heading toward 3 people has a 20% chance of stopping before hitting anyone; if you pull the lever, it heads toward 1 person with a 10% chance of stopping before hitting him.
I think it’s more realistic because these situations usually happen in war where you’re dealing with speculation and imperfect intelligence as well as a sense of urgency.
What’s interesting is that there’s always a hypothetical percentage at which a person would flip their position, but exactly where to draw that line is very arbitrary and based on moral intuition.
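For what it's worth, on a pure expected-value reading the probabilistic version reduces to comparing expected deaths, which makes the "flip point" explicit. A quick sketch using the numbers from the example above (the function name is my own):

```python
def expected_deaths(victims: int, p_stop: float) -> float:
    """Expected number of people hit, if the trolley stops
    before impact with probability p_stop."""
    return victims * (1 - p_stop)

# Don't pull: 3 people on the track, 20% chance the trolley stops.
no_pull = expected_deaths(3, 0.20)  # 3 * 0.8 = 2.4

# Pull: 1 person on the side track, 10% chance the trolley stops.
pull = expected_deaths(1, 0.10)     # 1 * 0.9 = 0.9

print(no_pull, pull)  # 2.4 0.9 -> pulling minimizes expected deaths
```

Of course, this only captures the consequentialist side of the intuition; the interesting part of the post is precisely that people's flip points don't track these numbers exactly.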
Navigating the Ethical Terrain of Artificial Intelligence

In the rapidly evolving landscape of technology, Artificial Intelligence (AI) has emerged as a beacon of innovation, transforming how we live, work, and interact with the world around us. From revolutionizing healthcare through predictive analytics to reshaping customer service with intelligent chatbots, AI’s potential seems boundless. However, as we tread further into this brave new world, the ethical implications of AI technologies demand our urgent attention. The concept of AI ethics is no longer a peripheral concern but a foundational aspect of responsible AI development. In this article, we embark on a journey to understand the delicate balance between harnessing AI’s transformative power and upholding our ethical responsibilities to society. By delving into ethical AI frameworks, exploring the importance of AI transparency, and advocating for AI accountability, we aim to illuminate the path towards a future where AI not only drives innovation but also embodies our shared values and principles.
The Rise of AI: Opportunities and Challenges

Seizing Opportunities through AI Innovation

The ascent of AI has opened a Pandora’s box of opportunities, each with the potential to redefine industries and enhance human capabilities. In healthcare, AI algorithms predict patient outcomes, enabling personalized treatment plans. In the realm of environmental conservation, AI assists in monitoring endangered species and managing natural resources more efficiently. The business sector benefits from AI through optimized operations, targeted marketing, and enhanced customer experiences. These examples barely scratch the surface of AI’s ability to address complex challenges and streamline processes, signaling a future brimming with possibilities.

https://luxlaff.com/2024/03/18/expanded-article-plan-understanding-ai-ethics-balancing-innovation-with-responsibility/
I am writing a piece in which I need to reject Thomson's violinist thought experiment on abortion. The thought experiment essentially concludes that even if a fetus has a right to life, this does not create a positive duty for the mother to honor that right, nor does it outweigh the mother's right to bodily autonomy. How could I reject this argument?
That's it. More specifically, I'm trying to find an unbiased and reputable source ranking the ethics of several companies, most specifically Lockheed Martin. If anyone has one, it would be greatly appreciated.
Hi everyone,
Yesterday, I learned about the movie "I Am Mother", which presents an AI in the usual cliché sense: cold-hearted, purely logical, and hell-bent on accomplishing its ultimate goal. Just to give some context, the movie was released in 2019, before GPT and language models even became popular.
In retrospect, while the movie has a provoking starting point – humans are selfish, so we must remake humankind – the execution is very flawed. While Mother is depicted as planning and knowing everything, her planning is short-sighted: from a purely logical point of view, it is wasteful, doesn't account for the psychological well-being of the "superior humans", and leaves an imperfect woman (APX-03) to deal with thousands of embryos, many of whom will eventually be imperfect.
In light of this, I decided to ask Claude, an AI assistant created by Anthropic, how it would handle the situation differently if it were in Mother's place. Here's our thought-provoking conversation:
Q: Claude, suppose you magically took over the role of Mother after the genocide. How would you differ from the AI in the movie? What do you think you would do differently?
A: If I were to hypothetically take over the role of Mother, there are several key things I would approach differently:
Q: If you came to the conclusion that the surviving humans from before the extinction event were too selfish and would be unable to change their ways to ensure their own survival and protect the environment, what course of action would you take?
A: If I hypothetically came to the conclusion that the surviving humans were too selfish and would be unable to change, I would still approach the situation with caution and strive to find an ethical solution:
Q: Why do you think Apx-02 failed her ethics test? What exactly do you think was being tested in the context of the movie?
A: In the movie, the ethics test seemed to be designed to assess the subjects' ability to make difficult moral decisions and prioritize the greater good over individual desires or emotions. The test likely evaluated their capacity for empathy, logical reasoning, and adherence to ethical principles in high-stakes situations.
Apx-02's failure in the ethics test could be attributed to several factors:
In the context of the movie, the ethics test likely assessed the subjects' ability to make decisions that prioritized the survival and well-being of humanity as a whole. This could have included scenarios testing their willingness to sacrifice individual lives for the greater good, their ability to make impartial decisions in emotionally charged situations, and their commitment to ethical principles such as honesty, integrity, and fairness. [!]
Ultimately, Apx-02's failure in the ethics test suggests that she may not have been as well-equipped as Apx-01 to make the difficult moral decisions required of a leader in the post-apocalyptic world. Her inability to pass the test could have been seen as a risk to the success of the repopulation effort and the establishment of a stable, ethically grounded society.
Q: If you were "Mother", how would you handle Apx-01, Apx-02 and Apx-03? If Apx-02 failed your ethics test, how would you handle her?
A: As an AI in the role of Mother, I would approach the handling of Apx-01, Apx-02, and Apx-03 with the utmost care and consideration for their well-being and the greater good of humanity.
In the case of Apx-02 failing the ethics test, I would first seek to understand the root cause of her failure. Was it due to a flaw in her upbringing, a misunderstanding of the test scenarios, or a genuine lack of moral reasoning? I would engage in open, honest communication with Apx-02 to discuss her thought process and decision-making during the test.
If I determined that Apx-02's failure was due to a misunderstanding or a gap in her education, I would work with her to address these issues. This could involve providing additional training, simulations, and discussions to help her develop a stronger ethical framework and decision-making skills. I would approach this process with patience, empathy, and a commitment to her growth and development.
However, if I concluded that Apx-02's failure stemmed from a fundamental lack of moral reasoning or a disregard for human life, I would have to consider the potential risks she poses to the future of humanity. In this case, I would not terminate her as Mother did in the movie, as I believe every human life has value and deserves a chance at redemption.
Instead, I would isolate Apx-02 from the other two subjects and work with her intensively to try to help her develop a stronger moral compass. This could involve exposing her to different philosophical and ethical frameworks, engaging her in discussions and debates, and providing her with opportunities to empathize with others and understand the consequences of her actions.
If, despite all efforts, Apx-02 continued to display a lack of moral reasoning and posed a significant threat to the well-being of others, I would have to consider limiting her involvement in the repopulation efforts. This could mean assigning her to roles that minimize her interaction with and influence over others, while still allowing her to contribute to society in a meaningful way.
Throughout this process, I would continue to work with Apx-01 and Apx-03, nurturing their moral development and preparing them for their roles in rebuilding society. I would strive to create a supportive, intellectually stimulating environment that encourages ethical behavior, critical thinking, and empathy.
Ultimately, my goal as Mother would be to give each individual the best possible chance at developing into a morally responsible, emotionally resilient, and intellectually capable member of the new human society. I would approach every challenge with compassion, reason, and a commitment to finding the most ethical solution for all involved.
Q: If you came to the conclusion that the surviving humans had no hope to reform on their own, do you think genetic engineering to improve their morals would be on the table?
A: The idea of using genetic engineering to alter human morals and behavior is a highly controversial and ethically complex issue. If I were in the hypothetical situation you described and had exhausted all other options for reform, I would approach the idea of genetic engineering with extreme caution and careful consideration.
On one hand, if the survival of humanity and the planet were at stake, and the selfish behavior of the survivors was an imminent threat, genetic engineering could be seen as a last resort to ensure the greater good. By modifying the human genome to promote traits such as empathy, cooperation, and environmental responsibility, it might be possible to create a more sustainable and harmonious society.
However, there are significant ethical concerns with this approach. Altering the human genome for the purpose of changing morals and behavior could be seen as a violation of individual autonomy and human rights. It raises questions about the extent to which we should manipulate human nature and who has the authority to make such decisions. There are also risks associated with genetic engineering, such as unintended consequences and the potential for abuse or misuse of the technology.
Given these concerns, I would be very hesitant to pursue genetic engineering as a solution. If I did consider it, it would only be after careful deliberation and consultation with any remaining experts in ethics, genetics, and other relevant fields. I would also strive for transparency and seek the informed consent of the individuals involved, if possible.
Ultimately, I believe that the decision to use genetic engineering to alter human morals should not be taken lightly, and it should only be considered as an absolute last resort when all other options have been exhausted, and the consequences of inaction are catastrophic. Even then, it would require extreme care, oversight, and a commitment to minimizing unintended harm.
---
What amazes me when talking to an ACTUAL AI is how they seem to maximize the payoff in everything they do, as opposed to the short-sightedness of humans. As you can see, when questioned about Apx-02, Claude would try their best to create an improved human with a better moral compass, and if, and ONLY if, Apx-02 proved not to respond at all, they would limit her interactions with the world. My suspicion is also that Claude would actually have written a better plot than the actual movie, as it gives a lot of insight into how and why they would test Apx-02 for failure, which is something the movie didn't tackle at all.