/r/EffectiveAltruism
Effective altruism is a growing social movement founded on the imperative to make the world as good a place as it can be, the use of evidence and reason to find out how to do so, and the audacity to actually try.
We invite people of all backgrounds and viewpoints to join our discussions and our efforts.
New to EA? Learn about the effective altruism movement.
Read through some related subreddits.
Socialize with fellow EAs on the EA Corner Discord server.
For more in-depth discussion, follow the EA Forum.
Rules:
Respect your fellow Effective Altruist. Do not insult each other. Do not respond to each other's arguments with low-effort snark or dismissiveness. Do not engage in shaming or artificial consensus-building to suppress each other's views.
No promotion without argument. If you are posting to promote your project, app, charity, survey or cause, you must provide a clear argument for its effectiveness.
No job ads. Career opportunities go in r/EAjobs.
I am totally on board with the idea that we should spend money to save the most lives possible given our limited resources. That usually means donating mostly to things like malaria nets.
But after going through a bout of depression, I’m wondering if mental health treatment might be more important to reduce suffering, even if it is hard to quantify based on the number of lives saved.
It might seem like relatively well-off people living with mental health issues are still suffering less than people in poor places, but I don’t think that’s the case. Even though they have so much more, depression makes them so much less happy. When I went through depression, I knew I shouldn’t have been so sad about my life, since so many people have it worse off, but the bad feelings didn’t go away. That probably made me worse off than others, even though I “shouldn’t be”.
I think the suffering people with depression endure is orders of magnitude worse than unmet material needs. My life was probably worse than an average person’s 400 years ago, even though I had so much more.
So how “effective” is mental health treatment compared to other charities?
Hi, I was wondering what the landscape of opinions within EA is: whether to 1) try as hard as possible to align AGI but ultimately build it, because the potential benefits outweigh the risk; or 2) oppose AGI in general, because it seems impossible to build a completely safe AGI.
Who is thinking what, and is this debate happening?
The term "suffering" is rarely used in neuroscience literature. Which neuroscientific terms describe "suffering" best? Here are some examples:
Which term do you think fits best?
I want to identify the neural correlates of suffering in order to minimize it in severely suffering individuals.
It's the one we always see everywhere, right? But I can say that I personally felt/feel far more motivated by other numbers, and I'm pretty sure I know why.
I can understand the benefit of emphasizing the difference in value between an expensive vacation and someone's entire life, but a few things cut against it: (1) the estimated cost to save a life with the most effective interventions is usually far HIGHER than people expect, while the estimated cost of other desirable outcomes is often far LOWER; (2) in nominal terms, you can help more people achieve many of those other desirable outcomes than you can save lives with the same $$ amount; and (3) depending on the other person's moral philosophy, they might view reducing suffering as actually more important than extending lifespan (the two aren't mutually exclusive, of course, but in terms of emphasis).
For example, instead of saying "with just 5% of your salary this year, you could literally save someone's life," you could say "with just 5% of your salary this year, you could literally restore sight to 600 blind people." I'm not sure what the best example WOULD be, but I'm wondering whether the example that IS always used is the best one.
I recently watched a video about Veganuary 2023 claiming it was very ineffective because only a tiny fraction of participants remained vegan afterwards. But when you look at the stats as raw numbers, the impact seems very effective even though it's proportionally small. This post is a simple analysis to see how effective it really is. I think it's also important to know that this bias is called denominator neglect, and here's a short video about it: Denominator neglect
To start, let's take roughly $5000 as the cost to save one life by buying malaria nets and preventing deaths from malaria. Not only is a life saved, but quality of life improves greatly for those who would have been affected non-lethally. This number is similar for other effective charities too. GiveWell
Next, since we're measuring how many vegans are made: what is the impact of becoming vegan? It seems that a vegan spares roughly one animal per day, which over the course of 40 years comes to roughly 14000 lives. Humane League - Omni Calculator
Now on to the money. Their most recent report shows they received roughly 2 million GBP (about 2.6 million USD) in funding for 2023.
Finally, on to the vegans. 706,965 people participated. Only 2.4% of them responded to the follow-up survey, and 18% of those were already vegan, which leaves roughly 2% of participants as non-vegan respondents. Of those, 25% said they would continue with a vegan diet, which is only 0.5% of the whole campaign. A number like 0.5% looks really small and not that effective.
However, rephrase the whole thing in absolute numbers instead of proportions: about 16,829 people responded, 13,851 of those were non-vegans, and 3,463 of those are new vegans created by the campaign.
So 2.6 million USD created 3,463 vegans, or about $750 per new vegan. Even if each of them spared just one animal, that would be $750 per life saved; factoring in one animal a day for 40 years, it comes to about 5 cents per animal life saved. It appears that Veganuary was actually very effective in its campaign.
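Since the whole estimate is just a chain of multiplications, here's a minimal sketch of the same arithmetic in Python, using the rounded percentages quoted above (which is why the outputs differ slightly from the post's exact figures):

```python
# Rough sketch of the cost-effectiveness arithmetic above.
# All inputs are the figures quoted in this post, not official Veganuary numbers.

participants = 706_965      # 2023 sign-ups
response_rate = 0.024       # fraction who answered the follow-up survey
already_vegan = 0.18        # fraction of respondents who were vegan before
retention = 0.25            # fraction of non-vegan respondents who stayed vegan
funding_usd = 2_600_000     # ~2M GBP converted to USD
animals_per_year = 365      # ~1 animal spared per vegan per day
years_vegan = 40            # assumed duration of the diet change

respondents = participants * response_rate
non_vegans = respondents * (1 - already_vegan)
new_vegans = non_vegans * retention

cost_per_vegan = funding_usd / new_vegans
cost_per_animal = cost_per_vegan / (animals_per_year * years_vegan)

print(f"respondents:     {respondents:,.0f}")      # ~16,967
print(f"new vegans:      {new_vegans:,.0f}")       # ~3,478
print(f"$ per new vegan: {cost_per_vegan:,.0f}")   # ~$750
print(f"$ per animal:    {cost_per_animal:.3f}")   # ~$0.05
```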
I'm new to EA, and if I understand correctly, EA is about maximizing good for the largest number of individuals. The number of individuals is easy to measure and compare.
But how do you measure how much good an individual experiences from something? If it's about alleviating poverty by maximizing the money an individual earns, then it's easy to measure, because a quantity of money is objectively assessable.
But isn't it more about the actual psychological well-being someone experiences? If so, it's much harder to measure, because psychological well-being is a subjective experience. For example, giving $1000 each to individuals A and B results in equal monetary "good", but their psychological well-being might differ, since there are interindividual differences in how much well-being people derive from the same monetary value.
I have a neuroscience background and would approach this problem by measuring activity in reward-related networks of the brain, because it is highly correlated with the magnitude of subjective well-being one experiences.
Any ideas?
How can we make sure that we are warned in time when astronomical suffering (e.g. through misaligned ASI) is imminent and inevitable, so that we can escape before it’s too late?
By astronomical suffering I mean, for example, that the ASI tortures us for eternity.
By escape I mean ending your life and making sure you cannot be revived by the ASI.
Watching the news all day is impractical and time-consuming, and most disaster-alert apps focus on natural disasters, not AI.
One idea that came to mind: develop an app that checks the subreddit r/singularity every 5 minutes and feeds the latest posts into an LLM, which then decides whether an existential catastrophe is imminent. If it is, the app triggers the phone alarm.
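For concreteness, here's a minimal sketch of what that polling loop could look like, using Reddit's public JSON feed plus hypothetical `looks_catastrophic` (the LLM call) and `sound_alarm` (the phone-notification hook) functions that you'd still have to implement:

```python
# Sketch of the proposed monitor. `looks_catastrophic` and `sound_alarm`
# are placeholders, not real APIs: wire in an actual LLM call and a real
# notification service before using this for anything.
import time
import requests

FEED = "https://www.reddit.com/r/singularity/new.json?limit=25"

def fetch_latest_titles() -> list[str]:
    """Pull the newest post titles from the subreddit's public JSON feed."""
    resp = requests.get(FEED, headers={"User-Agent": "ea-srisk-monitor/0.1"})
    resp.raise_for_status()
    return [child["data"]["title"] for child in resp.json()["data"]["children"]]

def looks_catastrophic(titles: list[str]) -> bool:
    """Placeholder: send the titles to an LLM and parse a yes/no verdict."""
    raise NotImplementedError("plug in your LLM call here")

def sound_alarm() -> None:
    """Placeholder: trigger the phone alarm / a push notification."""
    print("ALERT: imminent-catastrophe classifier fired")

while True:
    if looks_catastrophic(fetch_latest_titles()):
        sound_alarm()
    time.sleep(300)  # poll every 5 minutes, as proposed
```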
Any additional ideas?
Hi all, I'm fairly new to the EA movement and I'd highly appreciate some insight into how you manage your contributions.
I donate regularly, but I'm also big on giving to loved ones whenever I can. I don't want to live completely frugally, as I have multiple hobbies, but I don't mind making some lifestyle changes. I'm also in the early stages of investing to broaden my income sources.
My current game plan:
Looking forward to hearing your thoughts on this!
Given my experience in scientific research, I think that I may want to donate to scientific research under a hits-based giving model, but I am not sure how to go about this. Are there organizations that apply EA principles for basic or translational research dollars? Can this be ethically justified over giving money to GiveWell?
This group is intended to be all-inclusive and modern, in the sense of creating a new kind of space where every person can have a voice and a kind of ownership within the group. Traditionally it’s known that every sentient being is ultimately a Buddha, so in that sense we can empower one another with minimal use of hierarchy while still preserving lineage and transmission. A grassroots, very human, and accessible approach, presented in harmony with modern science and traditional methodology.
I.e., the government could try to have everyone's basic needs met and eliminate as many inefficiencies in the economy as possible, then encourage people to donate their surplus income overseas. Or there could be a program where people are guaranteed housing, food, water, electricity, etc. as long as they work a minimum number of hours per fortnight generating value that can be used in the name of EA.
Of course this would never happen (at least under capitalism), but it's still a nifty idea, I think.
Obviously not including active war zones, etc.
Let's say you make $50k USD after taxes. Living in the US, you'd have around $10k left at the end of the year after living expenses (there are lots of variables of course, but roll with me).
But let's say you move to Pakistan, where the annual cost of living is around $3k. Then you'd have a whole $47k left over every year, or $2.35 million extrapolated over 50 years, as opposed to $500k; or, if you invest it in the S&P 500 for 30 years, $8.9 million compared to $1.9 million. Of course taxes and flights would complicate things, and this assumes you keep the same job and salary for that whole period, which is unrealistic, but still, roll with me.
So the difference we're talking about here is $7 million after 30 years, or enough to save 1,750 lives, assuming it costs around $4k to save one.
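For what it's worth, here's a minimal sketch of that compounding arithmetic in Python. The post doesn't state its return assumption; a flat 10% annual return on end-of-year contributions is assumed below, which lands in the same ballpark as, but not exactly on, the $8.9M/$1.9M figures above:

```python
# Sketch of the savings arithmetic above. The 10% return is an assumption;
# the post's exact compounding model isn't stated.

def future_value(annual_savings: float, rate: float, years: int) -> float:
    """Future value of equal end-of-year contributions at a fixed return."""
    return annual_savings * ((1 + rate) ** years - 1) / rate

us_savings = 10_000    # $50k after tax minus ~$40k US living costs
pk_savings = 47_000    # same salary minus ~$3k Pakistan living costs
rate, years = 0.10, 30
cost_per_life = 4_000  # assumed cost to save one life

fv_us = future_value(us_savings, rate, years)
fv_pk = future_value(pk_savings, rate, years)
extra = fv_pk - fv_us

print(f"US:       ${fv_us:,.0f}")                  # ~$1.6M
print(f"Pakistan: ${fv_pk:,.0f}")                  # ~$7.7M
print(f"extra:    ${extra:,.0f}")                  # ~$6.1M
print(f"lives:    {extra / cost_per_life:,.0f}")   # ~1,500
```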
With this in mind, maybe we've been focusing way too much on salary when looking for jobs that will help us maximise giving, when we should be focusing on the ability to work remotely
Now for the caveats:
-you would have to leave behind your friends, family and culture for most of your life, which is no small burden. I think it would be acceptable to go back home for a few months a year. A Lahore-to-NYC round trip costs around $900, meaning you could go home 10 times a year and still have an extra $28k a year compared to staying in America, although this doesn't account for the extra living expenses. It's also worth noting that tickets shoot up to $2-3k around Christmas time. This would also contribute to climate change, which is kinda an L
-you could be laid off or forced to work in-office at any moment. If this happens, you could simply move back home while you search for another job
-you may be taking up a house that would otherwise be home to a local, which may increase property prices in the area, especially if many people start doing this. On the other hand, by not living in your home country, you're freeing up a home for someone else there
Apart from those negatives, I think it could actually be pretty fun to live in different countries for a few months of the year, all while saving more. You don't even have to stay put in the single cheapest country; you could spend time in Latin America, South and Southeast Asia, Africa, even Eastern Europe
Anyway thanks for reading! Thoughts?
I'm curious to hear thoughts on impartial altruism & evil.
Overall, all else equal, I think it's fair to say that less suffering is better than more suffering, no matter which living being is suffering. Even a sadist who enjoys the suffering of others (i.e. the type of person who invented the brazen bull) should not suffer for no reason.
However, when I think about trade-offs between welfare, something feels wrong with treating a person like the brazen bull's inventor the same as the average person. If it were a choice between 10 minutes of brazen-bull torture for the inventor and 5 minutes for an average civilian, would I be impartial? My intuition would lean towards deprioritizing the "evil" person over the average person if it came to it. How much? I'm not sure.
At the same time, this intuition might be flawed because it opens up an uncomfortable path. If there are differences between the most evil and the innocent, does that mean there are differences across any two given people based on how "good/evil" they are? This also seems quite flawed.
If we bring the animal kingdom into this, it gets even more problematic, because animals suffer but are often indifferent to the harm they cause one another. If we assigned moral worth based on human constructs of "good/evil", would animals get any worth at all? And ultimately, evil humans are also just creatures acting on the impulses their brains reward them for, just like altruistic humans. Perhaps we should disregard intuitions around "good/evil" and focus mainly on reducing suffering, regardless of who is suffering.
Still, I'm not satisfied with any of the thoughts I have on this matter. Luckily, I'm not sure it matters much in practice for most altruistic deeds.