/r/LessWrong


Raising the sanity waterline

This subreddit is for the discussion of Less Wrong and associated topics.


Rules:

  1. Read the Sequences.

  2. Your reasoning on this subreddit must be ironclad and have no logical flaws at all, or you are banned.

  3. Thou shalt not take the name of Eliezer Yudkowsky in vain

  4. Discussing that incident with the initials RB? No thank you.

  5. To be unbanned, prove that you made a recent donation of $100 or more to MIRI. Please provide evidence that the donation was counterfactual.

  6. The rules may or may not be (post-)ironic. Up to you to decide, based on your priors.

/r/LessWrong

7,786 Subscribers

4

Why is one-boxing deemed irrational?

I read this article https://www.greaterwrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality and at the beginning I was confused by its repeated statement that Omega rewards irrational behaviour; I wasn't sure how that was meant.

I find one-boxing the truly rational choice (and I am not saying that just for Omega, who is surely watching). There is something to gain by two-boxing, but it also greatly increases your costs. Success is not guaranteed, you need to do hard mental gymnastics, and you cannot even discuss it on the internet :) But I mean that seriously. One-boxing is a walk in the park: you precommit, and then you just take one box.

Isn't two-boxing actually that "Hollywood rationality"? Like maximizing The Number without caring about anything else?

Please share your thoughts; I find this very enticing and want to learn more.
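
A quick way to see the one-boxer's case is to write out the evidential expected values. Below is a minimal sketch in Python, assuming the standard illustrative payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and a predictor accuracy parameter p; the numbers and parameterization are my own assumptions, not taken from the linked article.

# Sketch: expected value of one-boxing vs. two-boxing in Newcomb's problem,
# conditioning on predictor accuracy p (an evidential-style calculation).
# Payoffs are the standard illustrative ones, not from the linked post.

def ev_one_box(p: float) -> float:
    # With probability p the predictor foresaw one-boxing: the opaque box holds $1M.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # With probability p the predictor foresaw two-boxing: you get only the $1,000.
    # With probability 1 - p it wrongly predicted one-boxing: you get $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.51, 0.9, 0.99):
    print(f"p={p}: one-box EV={ev_one_box(p):>12,.0f}  two-box EV={ev_two_box(p):>12,.0f}")

On this way of counting, any predictor more accurate than about 50.05% makes one-boxing the better bet. Two-boxers get a different answer because causal decision theory computes the expectation differently, which is where the "rewarding irrationality" framing comes from.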

15 Comments
2024/11/18
20:25 UTC

6

Writing Doom – Award-Winning Short Film on Superintelligence (2024)

1 Comment
2024/11/10
08:47 UTC

11

Any on-site LessWrong activities in Germany?

Hello everyone, my name is Ihor, my website is https://linktr.ee/kendiukhov, and I live in Germany between Nuremberg and Tuebingen. I am very much into rationality/LessWrong stuff, with a special focus on AI safety/alignment. I would be glad to organize and host local events related to these topics in Germany, like reading clubs, workshops, discussions, etc. (ideally in the cities I mentioned or near them), but I do not know any local community or how to approach one. Are there any people from Germany on this subreddit, or do you perhaps know how I can get in touch with them? I went to some ACX meetings in Stuttgart and Munich, but they were something a bit different.

3 Comments
2024/11/07
12:32 UTC

1

Mind Hacked by AI: A Cautionary Tale, A Reading of a LessWrong User's Confession

1 Comment
2024/10/28
10:24 UTC

1

Questioning Foundations of Science

There seems to be nothing more fundamental than belief. Here's a thought. What do you think?

https://x.com/10_zin_/status/1850253960612860296

21 Comments
2024/10/26
19:30 UTC

2

Questions about precommitment.

Hey, I'm new to this, but:

I was wondering: if a precommitment is broken and then maintained again, is it still a precommitment? (In decision/game theory.)

Or is precommitment a one-time thing that, once broken, cannot be fixed?

Also, can an ACAUSAL TRADE happen between an agent who CANNOT reliably precommit (like a human) and another agent who CAN reliably precommit?

Or does it fall apart if one agent does not, or cannot, precommit?

Also, can humans EVEN precommit in the game-theory or decision-theory sense (like ironclad), or not? (Please answer this one especially.)

1 Comment
2024/10/21
09:03 UTC

8

How do you read LessWrong?

I've been a lurker for a little while, but I always struggle with the meta-task of deciding what to read. Any recs?

6 Comments
2024/09/30
16:33 UTC

6

What happened to the print versions of the sequences?

I've been planning to read the Sequences, and I saw that the first two books were published in print some time ago (https://rationalitybook.com).

Map and Territory and How to Actually Change Your Mind are the first of six books in the Rationality: From AI to Zombies series. As of December 2018, these volumes are available as physical books for the first time, and are substantially revised, updated, and polished. The next four volumes will be coming out over the coming months.

It seems like nothing has happened since then. Was the project cancelled? I was looking forward to reading it all in print, because I already stare at screens long enough every day and enjoy reading on paper much more.

2 Comments
2024/09/16
09:05 UTC

16

Rationality: From AI to Zombies

Hey everyone,

I recently finished reading Harry Potter and the Methods of Rationality and loved it! Since then, I've been hearing a lot about Rationality: From AI to Zombies. I know it's a pretty lengthy book, which I'm okay with, but I came across a post saying it's just a collection of blog posts and lacks coherence.

Is this true? If so, has anyone tried to organize it into a more traditional book format?

5 Comments
2024/07/31
09:13 UTC

7

Any love for simulations?

I recently read "Rationality: From AI To Zombies" by Eliezer Yudkowsky. The love for Bayesian methodologies really shines through.

I was wondering if anyone has ever used a simulation to explore different outcomes before making a decision? I recently ran a Monte Carlo simulation before buying an apartment, and it worked quite well.

Even though it is hard to capture the complexity of reality in one simulation, it at least gave me a baseline.
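
For readers curious what such a simulation might look like, here is a minimal buy-vs-rent Monte Carlo sketch in Python. Every parameter and distribution is a made-up placeholder, not the model from the post linked below.

# Minimal Monte Carlo sketch: net outcome of buying vs. renting over 10 years.
# All numbers are hypothetical placeholders.
import random

def buy_outcome() -> float:
    price = 300_000
    appreciation = random.gauss(0.02, 0.03)   # uncertain yearly price growth
    upkeep = 3_000                            # yearly maintenance, fees, taxes
    value_after = price * (1 + appreciation) ** 10
    return value_after - price - 10 * upkeep  # net gain (negative = net cost)

def rent_outcome() -> float:
    yearly_rent = 1_200 * 12
    rent_growth = random.gauss(0.03, 0.01)    # uncertain yearly rent increases
    return -sum(yearly_rent * (1 + rent_growth) ** y for y in range(10))

n = 100_000
buy = sorted(buy_outcome() for _ in range(n))
rent = sorted(rent_outcome() for _ in range(n))
print("buy  median:", round(buy[n // 2]), " 5th percentile:", round(buy[n // 20]))
print("rent median:", round(rent[n // 2]), " 5th percentile:", round(rent[n // 20]))

Comparing medians gives a central estimate, and the 5th percentiles give a sense of downside risk; roughly the kind of baseline the author describes.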

I wrote a post about it here: From Monte Carlo to Stockholm.

Would you consider using simulations in your everyday life?

3 Comments
2024/07/17
10:45 UTC

6

What are the essential pieces of LW?

Where should I start reading? I've read HPMOR, but nothing else by Eliezer and nothing on LW, because it seems very intimidating to me and FOMO attacks whenever I start reading something there.

2 Comments
2024/07/12
19:51 UTC

6

What would you like to see in a new Internet forum that "raises the sanity waterline"?

I am thinking of starting a new custom website that focuses on allowing people with unconventional or contrarian beliefs to discuss anything they like. I am hoping that people from across political divides will be able to discuss anything without the discourse becoming polemical or poisoned.

Are there any "original" features you think this forum should include? I am open to any and all ideas.

(For an example of the kind/quality of forum design ideas I am talking about, see this essay; whether or not you can abide Mencius Moldbug, I'm not here to push his agenda in general. Inspired by that, I was thinking that perhaps there could be a choice of different types of karma to apply to a post, rather than just mass upvoting and downvoting. You would choose your alignment/karma flavour, and your upvotes or downvotes would be cast according to that faction...)
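
To make the faction-karma idea concrete, here is one hypothetical sketch in Python of how flavoured votes could be stored and tallied; all names and fields are invented for illustration.

# Hypothetical data model for "flavoured" karma: each vote carries the voter's
# self-chosen faction, so a post's score can be broken down per faction
# instead of collapsing into a single global up/down tally.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    voter: str
    faction: str      # the voter's declared alignment/karma flavour
    direction: int    # +1 or -1

def tally(votes: list[Vote]) -> dict[str, int]:
    """Per-faction score for one post."""
    scores: Counter = Counter()
    for v in votes:
        scores[v.faction] += v.direction
    return dict(scores)

votes = [Vote("alice", "empiricist", +1), Vote("bob", "contrarian", -1),
         Vote("carol", "empiricist", +1)]
print(tally(votes))  # {'empiricist': 2, 'contrarian': -1}

A per-faction breakdown like this would let readers see who liked a post rather than just how much it was liked, which is the gist of the proposal.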

5 Comments
2024/06/17
02:45 UTC

5

LessWrong Community Weekend 2024

Applications are now open for the LessWrong Community Weekend 2024!

Join the world’s largest rationalist social gathering, which brings together 250 aspiring rationalists from across Europe and beyond for 4 days of socializing, fun and intellectual exploration. We are taking over the whole hostel this year and thus have more space available. We are delighted to have Anna Riedl as our keynote speaker - a cognitive scientist conducting research on rationality under radical uncertainty.

As usual we will be running an unconference style gathering where participants create the sessions. Six wall-sized daily planners are filled by the attendees with 100+ workshops, talks and activities of their own devising. Most are prepared upfront, but some are just made up on the spot when inspiration hits.

Find more details in the official announcement: https://www.lesswrong.com/events/tBYRFJNgvKWLeE9ih/lesswrong-community-weekend-2024-applications-open-1?utm_campaign=post_share&utm_source=link

Or jump directly to the application form: https://airtable.com/appdYMNuMQvKWC8mv/pagiUldderZqbuBaP/form

Inclusiveness: The community weekend is family & LGBTQIA+ friendly, and after last year's amazing experience we are increasing our efforts to create a diverse event where people of all ages, genders, backgrounds and experiences feel at home.

Price: Regular ticket: €250 | Supporter ticket: €300/400/500+
(The ticket includes accommodation Fri-Mon, meals, and snacks. Nobody makes any money from this event, and the organizer team is unpaid.)

This event has a special place in our hearts, and we truly think there’s nothing else quite like it. It’s where so many of us made friends with whom we have more in common than we would have thought possible. It’s where new ideas have altered our opinions or even changed the course of our lives, in the best possible way.

Note: You need to apply and be accepted via the application form above. RSVPs via Facebook don't count.

Looking forward to seeing you there!

0 Comments
2024/06/06
08:13 UTC

3

Question about the statistical pathing of the subjective future (Related to big world immortality)

There's a class of thought experiments, including quantum immortality, that has been bothering me, and I'm writing to this subreddit because Less Wrong is the site where I've found the most insightful articles on this topic.

I've noticed that some people have philosophical intuitions about the subjective future that differ from mine, and the point of this post is to hopefully get some responses that either confirm my intuitions or offer a different approach.

This thought experiment involves magically sudden and complete annihilations of your body, and magically sudden and exact duplications of it. The question will be whether it matters to you in advance which version of the process happens.

First, 1001 exact copies of you come into being, and your original body is annihilated. 1000 of those copies each immediately appear in one of 1000 identical rooms, where you will live for the next minute. The remaining copy immediately appears in a room that looks different from the inside, and you will live there for the next minute.

As a default version of the thought experiment, let's assume that exactly the same thing happens in each of the 1000 identical rooms, which remain deterministically identical up to the end of the one-minute period.

Once the minute is up, a single exact copy of the still-identical 1000 instances of you is created and given a preferable future. At the same time, the 1000 copies in the 1000 rooms are annihilated. The same happens with your version in the single different room, but that copy is given a less preferable future.

The main question is whether it would matter to you in advance whether the preferable future goes to the version that was in the 1000 identical rooms, or to the single copy that spent the minute in the different room. In the end, there's only a single instance of each version of you. Does the temporary multiplication make one of the possible subjective futures ultimately more probable for you, subjectively?

(The second question is whether it matters that the events in the 1000 identical rooms are exactly the same, rather than only subjectively indistinguishable from the perspective of your subjective experience. What if normal quantum randomness does apply, but the time period is only a few seconds, so that your subjective experience is basically the same in each of the 1000 rooms, and then a random room is selected as the basis for your surviving copy? Would that make a difference in terms of the probability of the subjective futures?)
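
One way to sharpen the question: the two intuitions correspond to two different counting rules. The Python sketch below formalizes them (the labels "copy-counting" and "branch-counting" are mine); it does not settle which rule is right.

# Two candidate rules for "subjective probability" in the 1001-copies setup.

rooms = {"identical": 1000, "different": 1}   # copies alive during the minute
survivors = {"identical": 1, "different": 1}  # copies of each branch afterwards

# Rule A: weight each momentary copy equally ("copy-counting").
total_rooms = sum(rooms.values())
p_copy = {k: n / total_rooms for k, n in rooms.items()}

# Rule B: weight each surviving branch equally ("branch-counting").
total_branches = sum(survivors.values())
p_branch = {k: n / total_branches for k, n in survivors.items()}

print(p_copy)    # {'identical': ~0.999, 'different': ~0.001}
print(p_branch)  # {'identical': 0.5, 'different': 0.5}

Under rule A the temporary multiplication matters and you should anticipate the 1000-room continuation at 1000:1 odds; under rule B it is a coin flip. The open question is which rule, if either, tracks subjective experience.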

20 Comments
2024/05/28
09:56 UTC

5

Please help me find the source on this unhackable software Yudkowsky mentioned

I vaguely remember that in one of his posts Yudkowsky mentioned some mathematically proven unhackable software that was nevertheless hacked by exploiting the physical circuitry of the chips. I can't seem to find the source; can anyone help, please?

3 Comments
2024/05/19
15:07 UTC

1

What do you people think of Franklin Veaux?

I always thought they and Yudkowsky were quite similar on a fundamental level.

0 Comments
2024/05/19
04:07 UTC

0

Another basilisk anxiety post. I know, I know. I would so appreciate someone giving me a little bit of their time. Thank you in advance

Hello all! This will be a typical story. I discovered this in 2018 and had a major mental breakdown where I didn’t eat or sleep for two weeks. I got on medication, realized I had OCD, and things were perfect after that.

This year I am having a flare-up of OCD, and it is cycling through so many different themes; unfortunately this theme has come up again.

So I understand that "precommitting to never accept blackmail" seems to be the best strategy for not worrying about this. However, when I was not in a period of anxiety I would make jokes to myself like "oh, the basilisk will like that I'm using ChatGPT right now" and things like that. When I'm not in an anxious period I am able to see the silliness of this. I am also nice to the AIs in case they become real, not even for my safety but because I think it would suck to become sentient and have everyone be rude to me, so it's more of a "treat others how you'd like to be treated" lol. I keep seeing movies where everyone's mean to the AIs and it makes me sad lol. Anyways, that makes me feel I broke the commitment not to give in to blackmail. Also, as an artist, I avoid AI art (I'm sorry if that's offensive to anyone who uses it, I'm sorry) and now I'm worried that is me "betraying the AI". Like I am an AI infidel.

I have told my therapists about this and I have told my friends (who bullied me lovingly for it lol), but now I also think that was breaking the commitment not to accept blackmail, because it is "attempting to spread the word". Should I donate money? I remember seeing one thing that said to buy a lottery ticket while committing to donate the winnings to AI, because "you will win it in one of the multiverses". But I don't trust the version of me that wins not to be like "okay, well, there are real humans I can help with this money and I want to donate it to hunger instead".

I would also like to say I simply do not understand any of the concepts on LessWrong; I don't understand any of the acausal whatever or the timeless-decision whatever. My eyes glaze over when I try lol. To my understanding, if you don't fully understand and live by these topics, it shouldn't work on you?

Additionally, I am a little religious, or religious-curious. And I understand that all this goes out the window when we start talking about immortal souls; the basilisk wouldn't bother to torture people who believe in souls, as there would be no point. But I have gone back and forth from atheist to religious as I explore things, so I am worried that makes me vulnerable.

Logically I know the best OCD treatment is to allow myself to sit in the anxiety and not engage in researching these things, and the anxiety will go away. However, I feel I need a little reassurance before I can let go and work on the OCD.

Should I continue to commit to no blackmail even though I feel I haven't done this perfectly? Or should I donate a bit? What scares me is the whole "dedicate your life to it" thing. That isn't possible for me; I would just go fully mentally ill and non-functional at that point.

I understand you all get these posts a lot and they must be annoying. Would any of you have a little mercy on me? I would really appreciate some help from my fellow humans today. I hope everyone is having a wonderful day.

21 Comments
2024/05/18
16:37 UTC
