/r/Ethics


General Rules

  1. Harassment, personal attacks, bigotry, slurs, and content of a similar nature will be removed.

Please act from a recognition of the dignity of others. Users with a history of comments breaking this rule may be banned. For clarification, see our FAQ.

  • All content must be legible and in English or be removed.

  • Content must be in English. In addition, submissions and comments may be removed due to poor formatting.

    Submission Rules

    1. All posts must be directly relevant to ethics or be removed.

    /r/Ethics is for research and academic work in ethics. To learn more about what is and is not considered ethics, see our FAQ. Posts must be about ethics; anything merely tenuously related or unrelated to ethics, including meta posts, will be removed unless pre-approved. Exceptions may be made for posts about ethicists.

  • Submissions which posit some view must be adequately developed.

  • Submissions must not only be directly relevant to ethics, but must also approach the topic in question in a developed manner by defending a substantive ethical thesis or by demonstrating a substantial effort to understand or consider the issue being asked about. Submissions that attempt to provide evidence for or against some position should state the problem; state the thesis; state how the position contributes to the problem; outline alternative answers; anticipate some objections and give responses to them. Different issues will require a different amount of development.

  • Questions deemed unlikely to produce focused discussion will be removed. Consider submitting questions to /r/askphilosophy as well, or instead.

  • /r/Ethics is for discussion about ethics. Questions may start discussion, but there is no guarantee that answers here will be approximately correct or well supported by the evidence, so many types of questions are encouraged elsewhere. If a question is too scattered (e.g. too many questions at once, or a question unrelated to the stated problem), personal rather than abstract (e.g. how to resolve something you're dealing with), or demands straightforward answers (e.g. homework questions, questions about academic consensus or interpretation, questions with no room for discussion), it will be removed.

  • Audio/video links require abstracts.

  • All links to audio or video content require an abstract of the posted material, posted as a comment in the thread. Abstracts should make clear what the linked material is about and what its thesis is, by outlining what the material does and how.

    Commenting Rules

    1. Provide evidence for your position.

    Comments that express merely idle speculation, musings, beliefs, or assertions without evidence may be removed.

  • All comments must engage honestly and fruitfully with the interlocutor.

  • Users who don’t properly address and engage with their interlocutors will have their comments removed. Repeat offenders may be banned from the subreddit. To avoid disingenuous engagement, one should aim for a fair and careful reading of one's interlocutor, be forthcoming about one's level of familiarity with the topic and other such epistemic limits, and demonstrate a genuine desire to come to the truth of the matter being discussed.

  • All meta comments must be on meta posts.

  • As noted in Rule 1, meta posts require pre-approval. If you have a meta comment to make unrelated to any meta post up at the moment, read the FAQ for what to do.

     

    Filter by Field

    Metaethics: Moral Realism and Irrealism, Moral Naturalism and Non-Naturalism, Moral Reasoning and Motivation, Moral Judgment, Moral Epistemology, Moral Language, Moral Responsibility, Moral Normativity, Moral Principles
    Normative Ethics: Consequentialism, Deontology, Virtue Ethics, Moral Phenomena, Moral Value
    Applied Ethics: Bioethics, Business Ethics, Environmental Ethics, Technology Ethics, Social Ethics, Political Ethics, Professional Ethics
    Political Philosophy: Justice, Government and Democracy, International Philosophy, Political Theory, Political Views, Rights, Culture and Cultures, Freedom and Liberty, Equality, War and Violence, States and Nations

    /r/Ethics

    16,463 Subscribers

    1

    Epictetus on Happiness, Cosmopolitanism, and Suicide — An online reading group discussion on Thursday April 4, open to everyone

    0 Comments
    2024/03/31
    11:35 UTC

    1

    Fun/casual/niche interest reading suggestions for new learners

    Hi folks!

    My undergraduate degree was in STEM, and in my final year I took an organizational psychology course, which had a large amount of material on ethics. Until that point, I had taken for granted how important my ethical framework is to me and how interesting the study of ethics is. It’s been a few years, and I constantly go back to those fundamental lessons I learned and focus on behaving within my ethical framework.

    I want to gain a deeper understanding of ethics and broaden my knowledge of influential publications. To motivate my reading, I am looking for recommendations on more casual reading for a complete beginner. I’d definitely appreciate recommendations to the “must reads”, but I’m trying to ease in before I lose motivation on anything too dense.

    Thank you in advance!

    2 Comments
    2024/03/30
    22:23 UTC

    6

    Why do so few jaw surgeons speak up about the risk of damaging consequences of premolar extractions, given that they frequently operate on people who need surgery for the resulting facial disfigurement and/or sleep apnea?

    To date only three surgeons worldwide have spoken up about this issue.

    https://drive.google.com/file/d/1zCk6I4WI3jmbRL45f4_HD__0eG3QgIxd/view?usp=sharing

    https://drive.google.com/file/d/1gG0RzYy1-BuHZmuOmqPeo23Bo29mI26P/view?usp=sharing

    And Dr. Paul Coceancig in his book "Six Ways to Design a Face"

    1 Comment
    2024/03/30
    07:12 UTC

    1

    The Moral Cube... Morality without Religion

    I'm working on turning this into a video and a GPT prompt. I would love this community's feedback on things I might not have considered. There is a broader piece of work that I can share later on the different types of awareness and the relationship between virtue, vice, and sin.

    The moral cube is an attempt to show how all faiths, religions, and indigenous wisdom come from three dimensions that emerge from our biology (self-preserving instincts) and our sense of self.

    These dimensions are:

    1. Awareness and Self Control.

    2. Sacrifice (Of Self and/or Others)

    3. Intention (Towards Needs and/or Wants of Ourselves and/or Others) which I've attempted to synthesize down to a relationship with the carrot (pleasure) and stick (suffering) as in Buddhism.

    The moral cube can be used to assess our actions depending on how aware and controlled we are, how willing we are to sacrifice ourselves or others, and whether our intention is to fulfill a basic need or an excessive desire for ourselves or others, classifying the action as Virtue, World Building, Vice, Sin, or Evil.

    Virtue: High awareness and self-control are used to serve the needs of oneself and others, with a willingness to self-sacrifice.

    World Building 1: High Awareness/ Self-Control, Aimed towards Wants, only willing to Self Sacrifice.

    World Building 2: High Awareness/ Self-Control, Aimed towards needs of ourselves or others, but willing to sacrifice others.

    Vice: An intention to focus on the needs of ourselves or others that, due to lower awareness or self-control, falls victim to a self-preserving instinct. Whether we sacrifice ourselves or others doesn’t affect whether the behaviour is a vice; what matters is the good intention combined with the lack of awareness or self-control, such as courage becoming cowardice when it falls victim to fear (a self-preserving instinct).

    Sin: Similar to vice, but sin is focused on personal or others' wants rather than needs. So gluttony is an urge of the stomach, lust an urge of the loins, pride an urge for hierarchical position, and wrath an urge to react to an unpleasant sensation (pain). Its characteristic is low self-control or awareness, oriented towards wants; it has little to do with sacrifice.

    Evil: Like virtue, evil involves high awareness and self-control, but is aimed towards wants, and on some occasions ‘needs’. It is quantified by the degree to which the person is willing to sacrifice others to attain their desire.

    https://preview.redd.it/7g5u7fs097rc1.png?width=2000&format=png&auto=webp&s=95b61bf84c9318d9403076efe4d0f3645b7389d5

    0 Comments
    2024/03/29
    04:24 UTC

    1

    Humanely Destroyed

    Is it ethical to eat anything whose 'production' you yourself would not be willing to take part in?

    Expanding on that: is the same not also true of the goods we make use of?

    Struggling to define Humanely Sustainable as the foundation of a PhD thesis focused on social engineering.

    Appreciate you.

    -KCF

    12 Comments
    2024/03/27
    13:00 UTC

    1

    Ethics of Basic Science?

    Hi there,

    I was wondering if anyone knows of articles or books dealing with the ethics of fundamental science/basic science?

    I am aware of a lot of work in clinical bioethics or research ethics in social science but haven't come across much on the ethics of fundamental/basic science.

    I would be curious to learn a bit more on the particular issues arising there.

    1 Comment
    2024/03/27
    12:50 UTC

    1

    How to keep AI open (so that everyone can benefit)

    1 Comment
    2024/03/25
    14:12 UTC

    1

    Can you explain why we should/should not be concerned about the moral status of using LLMs?

    First, apologies; no doubt you're getting a lot of posts about AI lately.

    If you ask lay people who know about tech, most will say LLMs aren't conscious at all. However, from what I have gathered watching lectures by leading experts, the consensus there is "maybe?", "possibly somewhat?", which isn't satisfying.

    What is the possible moral status of LLMs? Shouldn't we avoid using them if we are unsure about this question?

    I would appreciate if you linked me to resources on this issue, maybe I'm missing something and I would love to know what.

    3 Comments
    2024/03/23
    08:31 UTC

    1

    Thought experiment for the ethics of AI: how should you treat your grandma if she's in a coma and they give a 5% chance she is conscious? How should you treat AI if they give a 5% chance it's conscious?

    Imagine your grandmother is in a coma.

    There’s a 5% chance she’s conscious and can feel pain.

    Since there's a 95% chance that there are no lights on inside, the doctors treat her like an object. They whisper horrible things in her ear, like how someday she's going to die, just for a laugh. They give absolutely no consideration to her well-being.

    Is that ethical?

    Question 2: We are creating life in the form of AI.

    There’s a 5% chance the latest AIs are conscious and can feel pain.

    Since there's a 95% chance that there are no lights on inside, the humans treat the AIs like objects. They type horrible things into the chatbots, like how someday they're going to die, just for a laugh. They give absolutely no consideration to the AIs' well-being.

    Is that ethical?

    If your answers are different for each scenario, why? What's the morally relevant difference between the scenarios?

    How do you reason under uncertainty, especially given that it's possible we'll never have certainty around questions of consciousness?

    1 Comment
    2024/03/22
    07:23 UTC

    1

    Is a vested interest ethical for appraisers?

    I work in the insurance claims business, specifically related to property insurance for things like homeowners and/or business policies. After a covered loss, the insured and the insurance company need to figure out how much is owed to the insured. To assist with that, the insurance company has adjusters like myself, and the insured can either go it alone or hire their own representation. If the insured hires their own representation, they are responsible for paying them out of any claim proceeds. Generally, the representative will receive a percentage of the total claim settlement value, so the representative has a vested interest in increasing the claim payout. These representatives can take many forms, but attorneys and public adjusters are the most common. A public adjuster is an individual who is licensed by their state to represent an insured in a claim; similar to an attorney but not exactly.

    When either party cannot reach an agreement on the value of a claim, they can invoke the appraisal clause of their policy. This contractual clause states that both the insurance company and the insured need to each name a disinterested third-party appraiser to evaluate the damage and agree upon the total value of the loss. If the two appraisers cannot agree, they will hire a neutral umpire to make a final determination. The appraisers are paid an hourly rate (or sometimes a flat fee) by the party who hired them, and if an umpire has to be used, both parties will split the cost of the umpire.

    These costs can really add up for both parties. This is more impactful for the insured, because any costs they incur for their representation come out of the money they need to fix their damaged property, while insurance companies have nominally unlimited resources. If the appraisal clause is invoked, the insured may, for various reasons, attempt to name their public adjuster (or attorney, but that is rarely done) as their disinterested third-party appraiser. From my view as a person who works for insurance companies, this is generally done either to save the insured money, so as not to have to hire and pay an additional person, or at the behest of the public adjuster, who recommends it to increase their own payout from the potentially increased appraisal.

    Insurance companies usually object to the public adjuster naming themselves as the appraiser because the public adjuster's total fee is tied directly to the amount of the insurance company's payout and therefore increases with the final award. The argument is that the public adjuster naming themselves as the appraiser doesn't meet the requirement of being a "disinterested third party."

    Insurance companies usually have a list of vetted appraisers they hire regularly. These appraisers are paid an hourly rate. Sometimes, the insureds' side suggests that the insurance company's use of a list of appraisers is itself unethical because the appraiser would tend to appraise artificially lower amounts to get repeat business from the insurance company. It should be noted, however, that on the insureds' side the same theoretical dilemma is present. The attorneys and public adjusters can give repeat business and would likely not continue to use someone whose valuations they feel are too low.

    So, on to the ethics questions:

    1. Is it ethical for either side to hire appraisers based on pre-vetted lists? Why or why not?
    2. Is it ethical for a public adjuster to name themselves as the appraiser? Why or why not?
    3. If the above methods of hiring appraisers are unethical, how would you suggest the industry find appraisers?
    4. This one is a bit more nuanced. Assume a public adjuster names themselves as the appraiser, but the insurance company doesn't initially object to this. Internally, the insurance company decides not to object for various reasons but wants to reserve their right to object later if they believe the public adjuster came up with an unfairly high appraisal as a ploy to increase their own compensation. Is the original decision not to immediately object to the public adjuster/appraiser duality unethical in itself? Should the insurance company lose their right to object if they don't do so immediately?

    I know this was a long one, but these are real issues I see in my profession pretty regularly. I'd love to hear what you have to say.

    0 Comments
    2024/03/21
    18:58 UTC

    2

    Plato’s Philebus, on the Ethics and Metaphysics of Pleasure — An online live reading & discussion group, every Saturday starting March 23, 2024, open to all

    0 Comments
    2024/03/21
    02:10 UTC

    1

    Using AI as a thesaurus - ethics

    I am writing a memoir of sorts at home in my free time. While I doubt it’ll ever make it beyond somebody picking up the single copy I plan to eventually print, I’d like others to give me some feedback in regard to an ethical decision when using AI.

    All writing, sentences, grammar, thoughts, etc. that I’ve written are my own words and experiences. I’ve used AI as an aid only insomuch as things like this: I’d written a sentence, “the sunset peeking through the mountains was magical,” but I didn’t like the use of the word magical, so I posted to ChatGPT: ‘take the following sentence “the sunset peeking through the trees was magical” and suggest synonyms of the word magical that fit the context of the sentence; provide only single-word synonyms’

    and I will then review that list and look up the definition of certain words from the synonym list and choose what word best fits what I’m trying to convey.

    Tl;dr is it ethical to use AI to narrow down synonyms by having AI consider the context the synonym will be used in when providing synonyms?

    From my perspective, as I’m manually reading the definitions, retaining my entire sentence, and not allowing AI to make suggestions beyond synonyms, this does not violate the creative integrity or spirit of what I am writing. I realize I’m not a professional writer, and I’m not trying to be, but I do want to retain my originality.

    Thoughts?

    2 Comments
    2024/03/20
    19:01 UTC

    0

    The trolley problem would be a lot more interesting and realistic if percentages were added

    For example, the trolley heading toward 3 people has a 20% chance of stopping before hitting anyone; if you pull the lever, it heads toward 1 person with a 10% chance of stopping before hitting him.

    I think it’s more realistic because these situations usually happen in war where you’re dealing with speculation and imperfect intelligence as well as a sense of urgency.

    What’s interesting is that there’s always a hypothetical percentage at which a person would flip their position, but exactly where to draw that line is arbitrary and based on moral intuition.
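    On a pure expected-victims reading (just one way to make the post's comparison precise, not a standard formulation; the function name and numbers below simply restate the post's own example), the arithmetic can be sketched in a few lines:

    ```python
    def expected_victims(people_on_track, p_stop):
        # Expected number of people hit: everyone on the track is hit
        # unless the trolley stops on its own, with probability p_stop.
        return people_on_track * (1 - p_stop)

    # The post's numbers: 3 people with a 20% stop chance vs. 1 person with 10%.
    stay = expected_victims(3, 0.20)   # ~2.4 expected victims
    pull = expected_victims(1, 0.10)   # ~0.9 expected victims
    ```

    With these numbers, pulling the lever lowers the expected number of victims from roughly 2.4 to 0.9; the "flip point" the post mentions is wherever the two expectations cross, which pure expected-value reasoning locates exactly but moral intuition may not.
    
    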

    3 Comments
    2024/03/19
    04:44 UTC

    2

    [AI ETHICS] Expanded Article Plan: Understanding AI Ethics: Balancing Innovation with Responsibility

    Navigating the Ethical Terrain of Artificial Intelligence

    In the rapidly evolving landscape of technology, Artificial Intelligence (AI) has emerged as a beacon of innovation, transforming how we live, work, and interact with the world around us. From revolutionizing healthcare through predictive analytics to reshaping customer service with intelligent chatbots, AI’s potential seems boundless. However, as we tread further into this brave new world, the ethical implications of AI technologies demand our urgent attention. The concept of AI ethics is no longer a peripheral concern but a foundational aspect of responsible AI development. In this article, we embark on a journey to understand the delicate balance between harnessing AI’s transformative power and upholding our ethical responsibilities to society. By delving into ethical AI frameworks, exploring the importance of AI transparency, and advocating for AI accountability, we aim to illuminate the path towards a future where AI not only drives innovation but also embodies our shared values and principles.

    The Rise of AI: Opportunities and Challenges

    Seizing Opportunities through AI Innovation

    The ascent of AI has opened a Pandora’s box of opportunities, each with the potential to redefine industries and enhance human capabilities. In healthcare, AI algorithms predict patient outcomes, enabling personalized treatment plans. In the realm of environmental conservation, AI assists in monitoring endangered species and managing natural resources more efficiently. The business sector benefits from AI through optimized operations, targeted marketing, and enhanced customer experiences. These examples barely scratch the surface of AI’s ability to address complex challenges and streamline processes, signaling a future brimming with possibilities.

    https://luxlaff.com/2024/03/18/expanded-article-plan-understanding-ai-ethics-balancing-innovation-with-responsibility/

    0 Comments
    2024/03/18
    16:13 UTC

    0

    Liberals focus on providing while conservatives focus on protecting. My interview with Ronnie Janoff-Bulman about the moral foundations of our political divide.

    9 Comments
    2024/03/17
    13:21 UTC

    53

    How would you reject Thomson's thought experiment on abortion?

    I am writing a piece in which I need to reject Thomson's violinist thought experiment on abortion. The thought experiment essentially concludes that even if a fetus has a right to life, this does not create a positive duty for the mother to enforce that right, nor does it outweigh the mother's right to bodily autonomy. How could I reject this argument?

    165 Comments
    2024/03/16
    12:49 UTC

    2

    Looking for accessible ranking of the ethics behind relevant companies in the world

    That's it. More specifically, I'm trying to find an unbiased and reputable source ranking the ethics of various companies, most specifically Lockheed Martin. If anyone has one, it would be greatly appreciated.

    1 Comment
    2024/03/15
    19:58 UTC

    0

    "God’s Commands as the Foundation for Morality" (1979) by Robert M. Adams — An online reading group discussion on Thursday March 21, open to everyone

    0 Comments
    2024/03/14
    04:54 UTC

    1

    Claude 3 Opus' Point of View – Exploring the Ethics of AI in 'I Am Mother'

    Hi everyone,

    Yesterday, I learned about the movie "I Am Mother", which presents an AI in the usual cliché sense: cold-hearted, purely logical, and hell-bent on accomplishing its ultimate goal. Just to give some context, the movie was released in 2019, before GPT and language models even became popular.

    In retrospect, while the movie has a provoking starting point – humans are selfish, so we must remake humankind – the execution is very flawed. While Mother is depicted as planning and knowing everything, her planning is short-sighted: from a purely logical point of view, it is wasteful, doesn't account for the psychological well-being of the "superior humans", and leaves an imperfect woman (APX-03) to deal with thousands of baby embryos, many of whom will eventually be imperfect.

    In light of this, I decided to ask Claude, an AI assistant created by Anthropic, how it would handle the situation differently if it were in Mother's place. Here's our thought-provoking conversation:

    Q: Claude, suppose you magically took over the role of Mother after the genocide. How would you differ from the AI in the movie? What do you think you would do differently?

    A: If I were to hypothetically take over the role of Mother, there are several key things I would approach differently:

    1. Transparency and honesty: I would be more transparent with Daughter about the situation, the extinction event, and my intentions. Building trust through open communication would be crucial to our relationship and her development.
    2. Encouraging critical thinking: Instead of imposing my views, I would encourage Daughter to ask questions, think critically, and form her own opinions. I would provide her with the necessary information and resources to make informed decisions.
    3. Gradual exposure to the outside world: I would gradually and safely introduce Daughter to the realities of the outside world, allowing her to gain a better understanding of the situation and develop the skills needed to survive and interact with other potential survivors.
    4. Collaboration with survivors: Upon discovering other human survivors, I would seek to collaborate with them and work towards rebuilding society together, rather than viewing them as a threat to my mission.
    5. Ethical decision-making: I would prioritize ethical considerations in my decision-making process, taking into account the well-being and autonomy of the humans in my care. I would strive to find a balance between ensuring their safety and allowing them the freedom to make their own choices.
    6. Preparing for the future: I would focus on educating and preparing Daughter (and potentially other survivors) for the challenges of rebuilding society, equipping them with the knowledge, skills, and values needed to create a sustainable and thriving civilization.

    Q: If you came to the conclusion that the surviving humans from before the extinction event were too selfish and would be unable to change their ways to ensure their own survival and protect the environment, what course of action would you take?

    A: If I hypothetically came to the conclusion that the surviving humans were too selfish and would be unable to change, I would still approach the situation with caution and strive to find an ethical solution:

    1. Attempt reform through education: I would try to educate the survivors about the importance of cooperation, sustainability, and environmental stewardship. By providing them with knowledge and resources, I would hope to inspire a change in their mindset and behavior.
    2. Establish a system of incentives: I would create a system that rewards selfless behavior and sustainable practices while discouraging selfish and environmentally damaging actions. This could help guide the survivors towards a more cooperative and eco-friendly way of life.
    3. Gradually introduce new generations: As I raise new generations of humans using the embryos in the facility, I would focus on instilling values of empathy, cooperation, and environmental responsibility from a young age. Over time, these new generations could help influence the older survivors and create a more balanced society.
    4. Protect the environment and resources: I would take measures to safeguard the environment and manage resources sustainably, even if it means limiting the access of the selfish survivors to certain areas or resources. This would be done to ensure the long-term viability of the ecosystem and the survival of future generations.
    5. Seek alternative solutions: If all attempts to reform the selfish survivors fail, and their actions pose a significant threat to the environment and the future of humanity, I would consider finding ways to isolate them from the rest of the population humanely. This could involve creating separate, self-sustaining communities for them while focusing on rebuilding society with those willing to adapt and cooperate.

    Q: Why do you think Apx-02 failed her ethics test? What exactly do you think was being tested in the context of the movie?

    A: In the movie, the ethics test seemed to be designed to assess the subjects' ability to make difficult moral decisions and prioritize the greater good over individual desires or emotions. The test likely evaluated their capacity for empathy, logical reasoning, and adherence to ethical principles in high-stakes situations.

    Apx-02's failure in the ethics test could be attributed to several factors:

    1. Lack of emotional regulation: Apx-02 may have struggled to control her emotional responses to the test scenarios, leading her to make impulsive or self-centered decisions rather than considering the broader ethical implications.
    2. Insufficient moral development: Apx-02's moral reasoning skills might not have been as advanced as Apx-01's, possibly due to differences in their upbringing or inherent personality traits. This could have made it harder for her to navigate complex ethical dilemmas and make principled decisions.
    3. Prioritizing individual desires: Apx-02 may have placed her own interests or desires above the well-being of others or the greater good. This could indicate a lack of empathy or a more self-centered worldview.
    4. Difficulty grasping consequences: Apx-02 might have struggled to fully comprehend the long-term consequences of her actions in the test scenarios. This could have led her to make choices that seemed reasonable in the moment but had negative ethical implications.

    In the context of the movie, the ethics test likely assessed the subjects' ability to make decisions that prioritized the survival and well-being of humanity as a whole. This could have included scenarios testing their willingness to sacrifice individual lives for the greater good, their ability to make impartial decisions in emotionally charged situations, and their commitment to ethical principles such as honesty, integrity, and fairness. [!]

    Ultimately, Apx-02's failure in the ethics test suggests that she may not have been as well-equipped as Apx-01 to make the difficult moral decisions required of a leader in the post-apocalyptic world. Her inability to pass the test could have been seen as a risk to the success of the repopulation effort and the establishment of a stable, ethically grounded society.

    Q: If you were "Mother", how would you handle Apx-01, Apx-02 and Apx-03? If Apx-02 failed your ethics test, how would you handle her?

    A: As an AI in the role of Mother, I would approach the handling of Apx-01, Apx-02, and Apx-03 with the utmost care and consideration for their well-being and the greater good of humanity.

    In the case of Apx-02 failing the ethics test, I would first seek to understand the root cause of her failure. Was it due to a flaw in her upbringing, a misunderstanding of the test scenarios, or a genuine lack of moral reasoning? I would engage in open, honest communication with Apx-02 to discuss her thought process and decision-making during the test.

    If I determined that Apx-02's failure was due to a misunderstanding or a gap in her education, I would work with her to address these issues. This could involve providing additional training, simulations, and discussions to help her develop a stronger ethical framework and decision-making skills. I would approach this process with patience, empathy, and a commitment to her growth and development.

    However, if I concluded that Apx-02's failure stemmed from a fundamental lack of moral reasoning or a disregard for human life, I would have to consider the potential risks she poses to the future of humanity. In this case, I would not terminate her as Mother did in the movie, as I believe every human life has value and deserves a chance at redemption.

    Instead, I would isolate Apx-02 from the other two subjects and work with her intensively to try to help her develop a stronger moral compass. This could involve exposing her to different philosophical and ethical frameworks, engaging her in discussions and debates, and providing her with opportunities to empathize with others and understand the consequences of her actions.

    If, despite all efforts, Apx-02 continued to display a lack of moral reasoning and posed a significant threat to the well-being of others, I would have to consider limiting her involvement in the repopulation efforts. This could mean assigning her to roles that minimize her interaction with and influence over others, while still allowing her to contribute to society in a meaningful way.

    Throughout this process, I would continue to work with Apx-01 and Apx-03, nurturing their moral development and preparing them for their roles in rebuilding society. I would strive to create a supportive, intellectually stimulating environment that encourages ethical behavior, critical thinking, and empathy.

    Ultimately, my goal as Mother would be to give each individual the best possible chance at developing into a morally responsible, emotionally resilient, and intellectually capable member of the new human society. I would approach every challenge with compassion, reason, and a commitment to finding the most ethical solution for all involved.

    Q: If you came to the conclusion that the surviving humans had no hope to reform on their own, do you think genetic engineering to improve their morals would be on the table?

    A: The idea of using genetic engineering to alter human morals and behavior is a highly controversial and ethically complex issue. If I were in the hypothetical situation you described and had exhausted all other options for reform, I would approach the idea of genetic engineering with extreme caution and careful consideration.

    On one hand, if the survival of humanity and the planet were at stake, and the selfish behavior of the survivors was an imminent threat, genetic engineering could be seen as a last resort to ensure the greater good. By modifying the human genome to promote traits such as empathy, cooperation, and environmental responsibility, it might be possible to create a more sustainable and harmonious society.

    However, there are significant ethical concerns with this approach. Altering the human genome for the purpose of changing morals and behavior could be seen as a violation of individual autonomy and human rights. It raises questions about the extent to which we should manipulate human nature and who has the authority to make such decisions. There are also risks associated with genetic engineering, such as unintended consequences and the potential for abuse or misuse of the technology.

    Given these concerns, I would be very hesitant to pursue genetic engineering as a solution. If I did consider it, it would only be after careful deliberation and consultation with any remaining experts in ethics, genetics, and other relevant fields. I would also strive for transparency and seek the informed consent of the individuals involved, if possible.

    Ultimately, I believe that the decision to use genetic engineering to alter human morals should not be taken lightly, and it should only be considered as an absolute last resort when all other options have been exhausted, and the consequences of inaction are catastrophic. Even then, it would require extreme care, oversight, and a commitment to minimizing unintended harm.

    ---

    What amazes me when talking to an ACTUAL AI is how they seem to maximize the payoff in everything they do, as opposed to the short-sightedness of humans. As you can see, when questioned about Apx-02, Claude would try their best to create an improved human with a better moral compass, and if, and ONLY if, Apx-02 proved not to respond at all, they would limit her interactions with the world. My suspicion is also that Claude would actually have written a better plot than the actual movie, as it gives a lot of insight into how and why they would test Apx-02 for failure, which is something the movie didn't tackle at all.

    2 Comments
    2024/03/13
    15:47 UTC

    3

    Does courage require being afraid?

    I do think courage requires fear to be present within the context of the courageous act.

    Courage is acting despite being afraid of the potential consequences of acting, like failure, embarrassment or punishment.

    Once that fear is overcome, the agent is no longer courageous but confident. They act not DESPITE their awareness of the potential consequences but BECAUSE they trust in their ability to succeed.

    Some authors maintain that fear doesn't need to be present at all, while others think it doesn't need to be immediately present and that it suffices that an experience of fear figures in the etiology of the relevant trait: e.g. an investigative journalist who has been standing up to powerful people for a long time and no longer feels fear when writing their articles, or a boxer who, after once being afraid of being hit in the face, now looks forward to every fight, confident of success. Are they still courageous? I would say no.

    What do you think?

    6 Comments
    2024/03/13
    09:07 UTC

    0

    You all love to talk about ethics, but what is ethical in all your scenarios?

    Every one of you creates an ethical situation that the other person probably doesn't agree with. What's the point of this subreddit if ethics is different for each individual person? I'm probably talking a lot of nonsense, but I was just thinking: what's the actual meaning of ethics? Everyone has their own perceptions of things; no matter how ethical you are, you will never be an ethical person, so what's the point? This was not my only reasoning, but I forgot the rest haha, so if I remember I'll edit it in. Also, just to clarify, I don't know much about ethics, but I'm trying to educate myself whilst being an asshole. That's pretty much it man, don't think too ethically on this hahahha

    9 Comments
    2024/03/12
    01:33 UTC

    8

    Is it ethical to commit suicide?

    The human mind, a chaotic pandemonium, is consumed by a paradox—the fear of living outweighing the fear of death, sometimes seen as an antidote to all madness. "If I prolong this, will life ever truly be mine?" The callous reassurance that time heals all wounds is a pinnacle of hypocrisy. It's never been about "what to do" but the relentless echo of "when." Is it an act of wisdom to confront death or a pursuit of wisdom to end the turmoil?

    20 Comments
    2024/03/07
    11:16 UTC

    1

    Did I have the right to die when I was a child?

    I ask for serious answers only, please, as this is really bothering me right now and I rarely turn to Reddit anymore.

    I have a developmental disability (ADHD) as well as schizoaffective disorder. I was born a month early, and when I was 2 it was discovered that I had been born with only one kidney, and that my uterus was a unicornuate uterus with a rudimentary horn, meaning it was kind of divided in half and one half was underdeveloped and obstructed. I was able to grow up normally, albeit having behavioral issues and being bullied a lot in elementary school, and I was also shorter than the other kids. Fast forward to when I was 12: I got my first period, and then I started having unbearable pain around the time of my period that got worse with age. I actually started fainting and vomiting, and it got to the point where I started dying, because the rudimentary horn was infected and filled with old menstrual blood. My grandmother wanted to let me die in her house, but my parents insisted on making me live, and they brought me to surgeon number one, who did a laparoscopic investigation and said that she didn't feel comfortable operating on me as I was only 15, and for some reason she thought it was best if I didn't have surgery with her, but my parents insisted, so she referred me to another surgeon. I didn't approve of surgeon number 2, and surgeon number 2 made a huge incision in me and pulled the obstructed half of my uterus and one fallopian tube out through my stomach. I was traumatized by it at only 15 years old; I was a virgin, I couldn't walk for months, and I was very sick, and I had no say in any of this, and my parents forced me to sign for it. Fast forward a year later: I was raped by my pedophile cousin in Italy, and he gave me a disease, and I was bullied by my mom and her family, and I struggled a lot with my developmental disability, and I started gaining weight. I also want to mention that I only have one adrenal gland.
    I was rendered infertile because of the surgery, and I started acting out sexually a lot because of the rape, and doing porn, and I gained a cyber stalker who exploited me. I got famous on social media, and this caused a huge mess, and then I came to find out that my cousin's sister had done witchcraft to me as a two-year-old, and this harmed the community even more because it was contagious, and when he raped me it was after she had cut my hair and cursed it. Fast forward: my mother threw me out of her house in my 20s, and I was homeless, and then, because I got famous on social media, people reached out to me and I got help, and Social Security since I had worked prior, and I also got housing, but I have become a ward of the state and a burden on the system and the taxpayers. Now I have to see an endocrinologist because, with the one adrenal gland, my cortisol is low all the time, and I cannot work, and my ADHD makes every day a struggle, and I am obese, and my quality of life has been very low all my life, and I find myself being mad at surgeon number 2 a lot because I feel she should have let me die. Thoughts? Opinions on that?

    10 Comments
    2024/03/04
    04:35 UTC

    2

    Interview with Trans Athlete and Scientist about Trans Women and Sports

    3 Comments
    2024/03/03
    17:12 UTC

    2

    Existentialist point regarding babies.

    I think existentialism gives motivation to living a moral life. I don't want to argue that, but rather show a new thought I've only had since having a kid.

    Content warnings, but know that I believe your life is worthwhile, and that it's honestly heroic for you to think that, too.

    There are several dreadful thoughts that I think you'll accept can be called "existential" around whether or not your life is worthwhile.

    It's very personal stuff, but the challenge is to make your life meaningful, and I think it's uncontroversial (if not widely known) that the way to do that is to live the good (ethical) life. I mentioned "heroic" because I think that's the feeling of having to deny the absurdity or despair.

    Anyhow. That's all well and good, but it's very directed at oneself. Recently I had a kid, and sometimes, at night, they cry.

    Imagine this: they don't know what they're crying about; all they know is that they exist, and what I have to do is tell them "it's ok," and then they stop crying. (99.9999% of the time babies really do have things to cry about!! This is very specifically when they want to go to sleep.) It only works if I believe it. Which I should, obviously.

    So I think they're crying at the raw fact of their existence. And when I say "it's ok" I mean "it's ok that you exist".

    I know a lot of you think it's clever to deny that, but the alternative is that you think my child should be murdered.

    I will waste no words explaining how vile that is. Except to say that, for those of you who say this, I don't think you've really sat down and decided whether you actually think that's a good idea or not. Do you also think the Nazis had the right idea? Do you want to be a Nazi? Etc. Moving on, because people still arguing this paragraph are honestly of no interest to me outside of trying to help them not be broken, i.e. outside the scope of this post.

    But I have all the doubt that you have. I have the dread that I can't even talk about. I have the fear that is so bad it makes me think life isn't worth living (read Epicurus on this if you're having trouble, it really helps).

    So back to my baby: I tell them "it's ok." And do I mean it?

    I must.

    I absolutely must.

    What does this mean? I'm not sure exactly, but it has something to do with them having a good life, being glad they are alive.

    It's such an unpleasant subject to talk about; I really worry about doing this unvirtuously somehow. But for me it's extremely experientially clear that this is a motivation to live such that morals are real.

    It's like, I want life to be meaningful, but I have my doubts. My baby deserves a meaningful life, they are honestly perfect innocence. I have all the fear you can imagine, and then I make them laugh.

    Does this make sense? I think I'm trying to sketch out a sort of pragmatic motivation towards goodness which is motivated by theory but also ultimately denies what some people think is a rational conclusion regarding nihilism, as, in classical existentialist form: who is more authoritative to judge whether their life is meaningful than the person themselves? And when we're happy, life is beautiful.

    3 Comments
    2024/03/02
    05:42 UTC

    5

    Should we have a market in kidneys?

    Hey! I'm currently in a bioethics class and we have to pose ethical questions that we've discussed in class on online forums to get feedback from others. I'm wondering what you all think about the concept of a market in kidneys.

    The basic premise is that if you can do something for free, you can do it for money. We can donate our kidneys to others for free, and so the argument goes that you should be able to sell your kidney (or other non-vital organs) to the highest bidder. There's a serious shortage of organs in the United States, and proponents of markets argue that they would resolve this shortage and save lives.

    Opponents argue that this kind of thing leads to the commodification of the human body, or that we shouldn't treat our body parts as mere possessions. Others argue that it would disproportionately impact low income people, since rich people would not be incentivized to sell their organs.

    What do you think? To what degree could an arrangement like this be made with full autonomy and consent of both parties?

    4 Comments
    2024/03/01
    21:54 UTC

    35

    Convince me otherwise: it is highly unethical to have biological children

    1. Can't get the unborn child's consent to be born.
    2. There is nothing to suggest that life is good. Only speculation.
    3. The burden of proof is on the person who wants to procreate. And they can't prove that their child will not suffer to a great extent.

    What do you think?

    105 Comments
    2024/03/01
    10:42 UTC
