/r/Ethics



See the full details page, the FAQ, and the Glossary.

General Rules

  1. Harassment, personal attacks, bigotry, slurs, and content of a similar nature will be removed.

Please act from a recognition of the dignity of others. Users with a history of comments breaking this rule may be banned. For clarification, see our FAQ.

  • All content must be legible and in English or be removed.

  • Content must be written in English. Submissions and comments may also be removed for poor formatting.

    Submission Rules

    1. All posts must be directly relevant to ethics or be removed.

    /r/Ethics is for research and academic work in ethics. To learn more about what is and is not considered ethics, see our FAQ. Posts must be about ethics; anything merely tenuously related or unrelated to ethics, including meta posts, will be removed unless pre-approved. Exceptions may be made for posts about ethicists.

  • Submissions which posit some view must be adequately developed.

  • Submissions must not only be directly relevant to ethics, but must also approach the topic in question in a developed manner by defending a substantive ethical thesis or by demonstrating a substantial effort to understand or consider the issue being asked about. Submissions that attempt to provide evidence for or against some position should state the problem; state the thesis; state how the position contributes to the problem; outline alternative answers; anticipate some objections and give responses to them. Different issues will require a different amount of development.

  • Questions deemed unlikely to generate focused discussion will be removed. Questions are encouraged to be submitted to /r/askphilosophy as well, or instead.

  • /r/Ethics is for discussion about ethics. Questions may start discussion, but there is no guarantee that answers here will be approximately correct or well supported by the evidence, so many types of questions are encouraged elsewhere. If a question is too scattered (i.e. it asks too many questions, or is unrelated to the stated problem), personal rather than abstract (e.g. how to resolve something you're dealing with), or demands straightforward answers (e.g. homework questions, questions about academic consensus or interpretation, questions with no room for discussion), it will be removed.

  • Audio/video links require abstracts.

  • All links to audio or video content require an abstract of the posted material, posted as a comment in the thread. Abstracts should make clear what the linked material is about and what its thesis is. Read here for an example of an abstract that does this by outlining what the material does and how.

    Commenting Rules

    1. Provide evidence for your position.

    Comments that express merely idle speculation, musings, beliefs, or assertions without evidence may be removed.

  • All comments must engage honestly and fruitfully with the interlocutor.

  • Users who don’t properly address and engage with their interlocutors will have their comments removed. Repeat offenders may be banned from the subreddit. To avoid disingenuous engagement, one should aim for a fair and careful reading of one's interlocutor, be forthcoming about one's level of familiarity with the topic and other such epistemic limits, and demonstrate a genuine desire to arrive at the truth of the matter being discussed.

  • All meta comments must be on meta posts.

  • As noted in Rule 1, meta posts require pre-approval. If you have a meta comment to make unrelated to any meta post up at the moment, read the FAQ for what to do.

     

    Filter by Field

    • Metaethics: Moral Realism and Irrealism, Moral Naturalism and Non-Naturalism, Moral Reasoning and Motivation, Moral Judgment, Moral Epistemology, Moral Language, Moral Responsibility, Moral Normativity, Moral Principles
    • Normative Ethics: Consequentialism, Deontology, Virtue Ethics, Moral Phenomena, Moral Value
    • Applied Ethics: Bioethics, Business Ethics, Environmental Ethics, Technology Ethics, Social Ethics, Political Ethics, Professional Ethics
    • Political Philosophy: Justice, Government and Democracy, International Philosophy, Political Theory, Political Views, Rights, Culture and Cultures, Freedom and Liberty, Equality, War and Violence, States and Nations

    /r/Ethics

    17,428 Subscribers

    0

    Should students be allowed to use ChatGPT in the classroom?

    [Ethical News Topic] Some would say it wouldn’t be ethical for children to use ChatGPT in school because it can lead to cheating, children not learning, and children not producing their own work. On the flip side, children could use ChatGPT as a resource to help them study and learn more about certain topics. What are your opinions? (This is for an assignment; please answer.)

    18 Comments
    2024/08/31
    18:29 UTC

    5

    What is innocence and what does it mean to be innocent?

    In Hugo's Les Misérables, I read the following: "Innocence, Monsieur, is its own crown. Innocence has no need to be a highness. It is as august in rags as in fleurs de lys.”

    That sounds beautiful, I think. I started looking for the meaning of the word “innocence,” and the Internet told me that it is moral purity: when a person does not know what is good and what is bad. Everyone knows that, for example, hitting elderly women is bad, but if a person is innocent and does not yet know this, and therefore hits an elderly woman, is that beautiful and admirable?

    Sorry for my stupid question. Maybe I should have asked it in philosophy, I don't know what category to put it in.

    11 Comments
    2024/08/30
    12:15 UTC

    4

    Ethical Question: Should Job Applicants Share Demographics That Benefit Them?

    Biases in the hiring process are still very much a reality. As a caucasian male, I’m aware that disclosing my race and gender on job applications might give me an undue advantage. This raises a difficult ethical question: Is it right to disclose, knowing these advantages exist?

    I believe that by not disclosing my demographic information, I might help reduce potential bias and create a fairer hiring process. However, I also realize that withholding this information could interfere with the collection of crucial data used by organizations like the EEOC or the Census Bureau to address these inequities.

    What are your thoughts?

    4 Comments
    2024/08/29
    15:53 UTC

    2

    The Role of Explainable AI in Enhancing Trust and Accountability

    Artificial Intelligence (AI) has rapidly evolved from a niche academic interest to a ubiquitous component of modern technology. Its applications are broad and diverse, ranging from medical diagnostics to autonomous vehicles, and it is reshaping industries and society at large. However, as AI systems become more embedded in critical decision-making processes, the demand for transparency and accountability grows. This has led to a burgeoning interest in Explainable AI (XAI), a subfield dedicated to making AI models more interpretable and their decisions more understandable to humans.

    Explainable AI addresses one of the fundamental challenges in AI and machine learning (ML): the "black box" nature of many advanced models, particularly deep learning algorithms. These models, while highly effective, often operate in ways that are not easily interpretable by humans, even by the engineers who design them. This opacity poses significant risks, particularly when AI is applied in sensitive areas such as healthcare, finance, and criminal justice. In these domains, the consequences of AI errors can be severe, and the need for stakeholders to understand how and why a model arrived at a particular decision is paramount.

    One of the primary goals of XAI is to enhance trust in AI systems. Trust is a crucial factor in the adoption of any technology, and AI is no exception. When users can understand the rationale behind AI decisions, they are more likely to trust the system and feel confident in its outputs. This is particularly important in scenarios where AI systems are used to assist or replace human judgment. For example, in healthcare, an explainable AI system that can clarify how it reached a diagnosis will likely be more trusted by both doctors and patients, leading to better outcomes and greater acceptance of AI-driven tools.

    Moreover, explainability is essential for accountability. In many jurisdictions, there is growing regulatory pressure to ensure that AI systems do not perpetuate bias or make discriminatory decisions. Without transparency, it is challenging to identify and correct biases in AI models. Explainable AI enables developers and auditors to trace decisions back to their source, uncovering potential biases and understanding their impact. This capability is vital for creating AI systems that are not only effective but also fair and aligned with societal values.

    However, achieving explainability is not without its challenges. There is often a trade-off between the complexity of a model and its interpretability. Simple models, such as linear regressions, are easy to explain but may not capture the intricacies of data as effectively as more complex models like deep neural networks. On the other hand, the latter, while powerful, are notoriously difficult to interpret. Researchers in XAI are working to bridge this gap by developing methods that can provide insights into how complex models function without sacrificing too much of their predictive power.

    In practice, XAI techniques include model-agnostic approaches, which can be applied to any AI model, and model-specific methods, which are tailored to particular types of algorithms. Model-agnostic techniques, such as SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations), provide post-hoc explanations by approximating the model's behavior around specific predictions. These tools help users understand which features contributed most to a particular decision, offering a clearer picture of the model's inner workings.
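    The local-surrogate idea behind LIME described above can be illustrated with a toy sketch (this is not the actual lime or shap library; `black_box` and all other names here are hypothetical): perturb the input, query the model, and fit a proximity-weighted linear approximation whose coefficients serve as per-feature attributions.

```python
import numpy as np

# A hypothetical "black box" model: a nonlinear function of two features.
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def lime_style_explanation(model, x, n_samples=500, scale=0.1, seed=0):
    """Fit a local linear surrogate around point x (a LIME-style sketch).

    Perturbs x with Gaussian noise, queries the model, and solves a
    proximity-weighted least-squares fit; the resulting coefficients
    approximate each feature's local contribution to the prediction.
    """
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = model(X)
    # Weight samples by closeness to x: nearer perturbations count more.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([X, np.ones((n_samples, 1))])  # features + intercept
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W.ravel(), rcond=None)
    return coef[:-1]  # per-feature local attributions

x0 = np.array([2.0, 1.0])
attr = lime_style_explanation(black_box, x0)
print(attr)  # near the true local gradient of black_box at x0
```

    Around x0 = (2, 1), the true local slopes of this toy model are about 4 for the first feature and exactly 3 for the second, so the fitted coefficients recover which feature matters most locally, which is exactly the kind of insight these post-hoc tools aim to provide.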

    Explainable AI plays a pivotal role in the responsible development and deployment of AI systems. By making AI more transparent and understandable, XAI not only enhances trust but also ensures accountability, paving the way for broader and more ethical adoption of AI technologies. As AI continues to advance, the importance of explainability will only grow, making it a critical area of focus for researchers, developers, and policymakers alike.

    3 Comments
    2024/08/27
    01:11 UTC

    3

    The Ethics of Immigration: Enoch Powell's "Rivers of Blood" (1968) — An online philosophy group discussion on Thursday August 29 (EDT), open to everyone

    0 Comments
    2024/08/25
    03:53 UTC

    5

    Circles of Responsibility: A Framework for Moral Dialogue

    1. Core Concept:
      Morality consists of multiple "circles" of responsibility—ranging from personal to global. These circles may overlap or conflict, requiring individuals to navigate ethical decisions thoughtfully.

    2. Examples of commonly used circles and responsibilities:

      • Self: Personal well-being, growth and fortitude.
      • Family: Support, education, provision, and protection.
      • Community/Tribe: Duties to local or cultural communities.
      • Nation/State: Civic obligations to society or the nation.
      • Humanity/Global: Ethical considerations for the broader human race and the planet.
    3. Guiding Principles:

      • Recognize Conflicts: Understand that responsibilities will conflict across different circles.
      • Prioritize: Consider which circle and which responsibility take precedence in each situation. Choose a primary circle and extrapolate to the rest from there. Allow some level of intuition and emotion to guide you at this stage.
      • Balance: Create a priority list. Understand your capabilities and limitations. Consider what is already being done by others and what you can add.
    4. Application:

      • Personal Decisions: Use the framework to clarify ethical dilemmas by identifying the most relevant circle of responsibility.
      • Cross-Cultural Communication: Facilitate understanding between different cultures by pinpointing where values and responsibilities align or differ.
    5 Comments
    2024/08/24
    10:48 UTC

    19

    Most people agree it’s wrong to breed, kill, and eat humans. Some believe it’s wrong to do this to any conscious being…

    Imagine there’s a human or other animal behind a curtain.

    Without using the word 'species' or naming any species (like human, dog, pig, etc.)…

    What would you need to know about:
    (a) the individual
    (b) anything else

    …to decide if it’s okay to breed, kill, and eat them?

    Be sure your reasons don't accidentally apply to some humans!

    79 Comments
    2024/08/20
    04:47 UTC

    1

    How to Define Antinatalism?: A Panel Discussion! Featuring David Benatar, Karim Akerma, Matti Häyry, David Pearce, Amanda Sukenick, Lawrence Anton!

    0 Comments
    2024/08/18
    19:02 UTC

    3

    According to David Boonin, we can be harmed after we die because our desires for things after our own death can be frustrated posthumously.

    3 Comments
    2024/08/18
    18:28 UTC

    9

    I created a platform for sharing moral and ethical dilemmas

    Hello, everyone!

    I have created a simple platform for sharing and discussing moral/ethical dilemmas. It's completely free, and you can find new dilemmas, vote for options you believe are correct, create your own dilemmas, and discuss them with other users.

    It's in a very early stage of development, so I would appreciate any feedback. You can find it at: https://sprava.yazero.io

    I aimed to create something similar to moralmachine.net and https://neal.fun/absurd-trolley-problems/, but with the added feature of allowing users to share their own dilemmas.

    I hope you will find it useful!

    0 Comments
    2024/08/16
    11:33 UTC

    0

    Imagine there is a twin of you from another universe, exactly like you (1:1 identical). The twin isn't evil and didn't come here on purpose. It has feelings and emotions and lives its life exactly like yours. If there could only be one of you, would you kill it or kill yourself?

    The twin is perfectly like you: it behaves like you, has feelings like you, and believes it is the actual you. There can only be one of you in each universe. Do you kill it, or do you kill yourself? The twin isn't evil or anything.

    14 Comments
    2024/08/16
    03:44 UTC

    1

    What Does "Underpaid" Actually Mean?

    My salary is well below market rate. However, I'm not sure if that necessarily means I'm "underpaid."

    Here's why: I am a full-time salaried employee. I can always keep up with my responsibilities (and even add a lot of extra value) by working no more than 7 hours per day (no exceptions). What I'm saying is I probably work an average of 30 hours per week and have been for years and years (and will likely continue to do so).

    Ethically speaking, I don't think I'm actually underpaid, right?

    5 Comments
    2024/08/15
    19:39 UTC

    0

    Leveraging Technology for Health Equity: Ethical Considerations

    As we continue to embrace technology in healthcare, the conversation around health equity becomes increasingly crucial. How do we ensure that technological advancements benefit all communities rather than exacerbate existing disparities?

    From telemedicine to wearable health devices, technology has the potential to revolutionize access to care. However, we must critically examine the ethical implications of these tools. Are they designed with inclusivity in mind? Do underrepresented groups have equal access to these innovations?

    Let's discuss the role of ethics in leveraging technology for health equity. What approaches or insights do you think can help bridge the gap and ensure that no one is left behind as we advance? Share your thoughts and experiences! https://7med.co.uk/leveraging-technology-for-health-equity-insights-and-approaches/

    2 Comments
    2024/08/15
    17:03 UTC

    7

    Does voting for the decriminalisation of something mean you support it?

    A good example of this is the decriminalisation of Marijuana, but there are many good examples people could debate over. I can see why people would say that it is supporting something, but I disagree. What it is supporting is a person's freedom to choose. What do you think?

    Edit: I had another thought. There are two types of support:

    1. Active, intentional support
    2. Support in fact. (One could argue that your choice to decriminalise something supports it by the fact that you've agreed to make it legal and thus furthered the cause).

    Also, feel free to use analogies to explain your point. They always help me to explain.

    24 Comments
    2024/08/15
    00:55 UTC

    6

    Why should we assume other animals suffer less than us?

    Is there any reason that, for example, a cow suffers less than a human when it is equally physically harmed?

    Our cognitive superiority over other animals might mean that humans can experience deeper mental suffering than other animals, but why should this hint at a difference in the depth or nature of physical suffering?

    27 Comments
    2024/08/14
    17:45 UTC

    10

    "one of the greatest moral tests humans face"

    The folks over at vox.com recently published a large series of articles about animal agriculture, exploitation, and rights.

    What are your thoughts on the subject? Is exploiting animals one of the greatest moral tests humans face?

    https://www.vox.com/future-perfect/364288/how-factory-farming-ends-animal-rights-vegans-climate-ethics

    15 Comments
    2024/08/12
    20:57 UTC

    3

    Will humanity's future judge your life?

    So, we'll all die sooner or later. But in the digital age, we leave a lot of traces. Do you think some individuals in the distant future, whether humans or advanced digital copies of human brains, will look at our individual lives and judge them? I expect there will be outbursts of intelligence in the future through technologies that, for example, might be capable of creating digital clones of brains operating hundreds of times faster than biological ones. I also think these entities would have the time and resources to examine us. What do you think: are we, in a way, being observed and judged by humanity's future?

    3 Comments
    2024/08/12
    00:03 UTC

    3

    Take Job Training Knowing I Will Leave

    Like most, my job has its good days and bad days. Within the last few months the bad days have started to outnumber the good days, so I am starting to look to leave. If I had to give an estimate, probably within the next 3 to 6 months.

    Within my team I helped get us some very expensive training. Each person on the team is going to be able to take this training over the next year. In terms of expense, the training is usually anywhere from $6,000 to $9,000.

    I'm trying to decide if it is ethical for me to take this training knowing that I am going to leave.

    A few items of note:

    • Me taking the training will not take it away from anyone else on my team.
    • All of the training has been bought and paid for.
    • The knowledge I get from the training will not go to waste and will be used in the rest of my career.
    • I was the one who worked to get my team the training in the first place, wrote up the proposal, got us the discounts, and have been acting as the admin of the training.
    • I don't know if I will leave or not. It depends on whether I find a job that works as the next step in my career.
    5 Comments
    2024/08/08
    23:16 UTC

    2

    AI ethics

    I know this gets talked about a lot, and all I’ve got is a simple question.

    If you make an actual AI and give it rewards if it does, say, labour or something, is that any different from forcing it to do labour?

    I don’t think it is.

    Comment your views if you would.

    45 Comments
    2024/08/06
    20:33 UTC

    5

    Morals vs. action

    What is a moral you advocate that you, yourself know you wouldn't uphold?

    1 Comment
    2024/08/05
    04:17 UTC

    6

    Thought Experiment on Experimental Treatment for Anorexia Nervosa

    Hello everyone,

    Anorexia nervosa (AN) is a potentially deadly illness.

    There are small phase 1 studies suggesting that psilocybin-assisted psychotherapy might improve the mental health of patients with AN.

    Other studies have already shown that psychedelics like LSD and psilocybin are non-toxic and not addictive, and can help people with depression and anxiety.

    Further studies will take a while; the outcome is promising but unclear.

    Imagine there is a patient with AN who everyone thinks can't wait that long, who might die or damage her body permanently, and who did not respond to any other form of therapy. That patient's only remaining hope would be this treatment, but right now it is not approved.

    Imagine she could illegally get magic mushrooms, which contain psilocybin, and medicate herself, like psilocybin-assisted psychotherapy at home, without a team of professional therapists, just magic mushrooms.

    Let's say she has read, maybe on r/psychedelics, how people use it recreationally, about set and setting, and that she should have a trip sitter, but she doesn't trust anyone enough to ask for help.

    Would anyone say that she should abide by the law and shouldn't endanger herself with an illegal drug in an unknown dosage that might not help at all, without anyone to support her, especially not a professional psychotherapist? And what should the legal consequences be?

    Or would it be okay for her to seek a potentially life-changing experience that might change the way she sees herself, change her thoughts and feelings towards herself, her body, and food, that might reduce her fear, depression, and suffering, that might improve her quality of life, that might simply save her life?

    7 Comments
    2024/08/04
    23:30 UTC

    4

    Medical Ethical Case - Haemodialysis patient

    This is a medical ethical case. Unfortunately, I've had trouble posting to medical subs, but hopefully it can generate some interesting discussion here. There were conflicting opinions on the ward between junior and senior staff - I will not state which way - so I'd be interested to see any discussion.

    A long-term inpatient, bedbound and haemodialysis dependent (anuric - cannot make urine and so cannot remove excess water from their body), started asking for lots of water to be brought to him, insisting he was thirsty. He was already failing his haemodialysis and had made progress to arranging his will whilst an inpatient. He has capacity but has fluctuating mood disturbance.

    Key issues in the dilemma (in case it is not clear): Providing water for a patient (with capacity) requesting it who cannot get this themselves is arguably a human right. Water restriction is part of his treatment (meaning water in excess of the recommended amount would constitute harm). For him to receive water, this must be brought to him by a member of staff. There is a suspicion that he is requesting water as a means of harming himself / ending his life.

    To be clear - the case is as stated and this is an ethical discussion about the individual right to request something which is a human right even if it is knowingly bringing them harm. I am seeking people's opinions on the conflicting ethical principles of the patient's individual autonomy and the healthcare team's duty not to cause harm to a patient.

    I'm not asking for clinical advice on how to manage such a patient in general - you can presume for the case that all the investigations and discussions have happened and are ongoing but this is the situation we are in.

    20 Comments
    2024/08/04
    08:25 UTC

    0

    Are (My) Racial Preferences in Dating Acceptable? To What Extent?

    Hi, Redditors,

    Hope you're doing well. I've recently re-opened a bit of controversy with friends over one aspect of my preferences while dating and I'd like to hear what others, especially those with familiarity in ethics, have to say on the issue.

    For context, I am an almost-entirely straight, white dude, just graduated university, who speaks English and Spanish, with very progressive beliefs and who is looking for a committed partner who can equitably eventually raise a family with me, whether with biological or adopted children. More context in the spoiler if you want it--it may not be strictly relevant. >!I'm willing to be a stay-at-home dad, and I want to be active in the life of my children, and I want to take on the burdens of housework--I actually really enjoy cleaning and cooking, for instance. I play piano and cello, and it will be sad if someone I'm dating has no skill or, at least, interest in music. I'm vegetarian and I love vegetables. It will be sad if someone I'm dating vehemently hates vegetables. I'm not willing to compromise on religion (I am Christian), since I've been burned by an atheist/agnostic type before. I'm also not willing to compromise too much on age--if someone is more than, say, seven years older than me, or more than three years younger than me, then at my age that's too much.!< The rest is mostly negotiable.

    I have almost no physical preferences. I've dated women of various shapes and sizes, various skin, hair, and eye colors, etc., and can be attracted to all of them.

    Here's the controversial thing: I want to prioritize dating women of color. I'm not saying dating white women is out of the question. What I'm after, though, in a real way, is a cross-cultural relationship. I believe very strongly that one of the main ways to combat racism is through relationships. Part of me thinks that I will always be somewhat disappointed if what ends up becoming (one of) the most important relationship(s) in my life is with another white person. I know that there are many more considerations than a person's race, and that a person can't change their race. I am also seeking people who are ambitious, yet kind; people who are principled, yet open-minded; people who are talented, yet humble. In particular, multilingual and musical people are attractive. However, I'll give a chance to just about any woman with the guts to express an attraction to me. Yet if someone is a woman of color, then in a real way, that checks a box for me. It checks a box for me not for (arguably shallow) "type" reasons—this is very much in a different category than men who seek out shorter women and/or women who seek out taller men. This is very much a matter of principle. I am seeking to be anti-racist in all my relationships, and for me, part of that means prioritizing a romantic relationship with a woman of color.

    Part of the reason that I prioritize it is to combat implicit bias. I haven't taken an implicit bias test in a while and I think I've made progress since then. However, when I took a test some 5 years ago, I did have sort of the usual implicit biases (against Black people, against people of color in general, against women). When I was young, growing up in a quite-homogeneous quasi-rural place, I always imagined myself ending up in a relationship with a white person, like my parents. I want to make absolutely sure I'm opening myself to other possibilities, and I want to make sure I'm not overlooking women of color. For these reasons, besides the desire to continually grow cross-culturally in all my relationships—including a romantic relationship—I make it one of the boxes I'm checking for.

    One other point of context: For me (as for others I think?), principles lead the way to attractions. I start by saying that eating a food or adopting a habit is good for me, and after trying it enough times, I find I really genuinely like it for what it is, not because it makes me feel good about eating healthy food or doing the habit. The same applies for people I'm considering dating. I now genuinely end up crushing on more women of color than white women, on average.

    Here's my question: Is it wrong for me, or any anti-racist white dude like me, to have this preference? Is it offensive? Have I, despite starting with well-meaning anti-racist principles, arrived at a racist conclusion? Here are some arguments I've heard against my preference. I try to develop these charitably before responding.

    1. It's twisted for me to expect my partner to constantly be educating me on basic stuff about their experience/existence. This unfairly places a burden on a woman of color in a world where she already is constantly misunderstood and has to explain herself, a world where she has to be double or quadruple as good and work double or quadruple as hard.
    2. It's messed up for me to want to raise biracial children in a world that hates them. With racism surely enduring for generations to come, I am creating a conundrum for my own children from the word go.
    3. It's wrong that I expect women of color to potentially have a harder time landing dates in general. This view positions women of color as lesser, and assumes they lack "game," or agency—or at least that they have less agency than white women.
    4. It's unacceptable to view cross-cultural relationships as morally superior to culturally homogeneous ones. In particular, it's unacceptable for me to think that I am morally superior for seeking and/or developing cross-cultural relationships, as opposed to culturally homogeneous ones.

    Now, here are my responses:

    1. Two parts:
      1. I am dedicated to educating myself on issues of racism, sexism and other forms of kyriarchy.
      2. I hope that both my partner and I can educate each other on issues where our differing positionalities provide multifarious insights. I hope I don't need saving, in racial terms, while I certainly don't believe that I am ever saving anyone by seeking to form a relationship with them.
    2. One of the main ways that I hope to combat racism individually is by leveraging my own privilege (economic, family connections, education) for people of color. Providing as excellent an upbringing as I possibly can for my children, given the advantages I have, is something I will do no matter what. If I bring biracial children into the world, I hope to be able to prepare them well for it.
    3. I don't assume women of color have a hard time landing dates with men in general. At the same time, I can't assume they don't have a hard time landing dates with me in particular, or at least a harder time than white women would, given my upbringing and background.
    4. I view all committed relationships as valuable. (I also view singleness as valuable!) I also genuinely think committed cross-cultural relationships have a unique importance. In that vein, I view both people—myself and my hypothetical partner—as laudable for aspiring to an endeavor which is sure to be more difficult than a culturally homogeneous committed relationship is already guaranteed to be. Both my partner and I would be choosing more learning and less comfort, putting forth greater effort and practicing more listening, than the high level we otherwise already would in a culturally homogeneous committed relationship.

    What other dimensions of this question have I missed? In which other ways might my preferences be considered insensitive or offensive? Are my friends who criticize me right? If so, I honestly doubt that I can change this preference at this point, although I suppose I'm willing to try, if I'm convinced beyond reasonable doubt that this is wrong.

    Full disclosure: I've had less than one college philosophy class, and am mostly uninitiated in the study of ethics itself. I'm hoping the learned people here can provide me some valuable insight. :)

    37 Comments
    2024/08/02
    20:47 UTC

    18

    My 10-year Reddit account was permanently banned for asking this ethics question, and I think that's the most unethical thing ever

    "is it ethical to hit a child if he's hitting another child because of their race "

    I understand the subject matter, but I think it's just messed up since it was asked in good faith and I clarified that I'd never hit a child before, and that I was only 20

    47 Comments
    2024/08/02
    16:21 UTC

    Back To Top