/r/LessWrong
Raising the sanity waterline
This subreddit is for the discussion of Less Wrong and associated topics.
Rules:
Read the Sequences.
Your reasoning on this subreddit must be ironclad and have no logical flaws at all, or you are banned.
Thou shalt not take the name of Eliezer Yudkowsky in vain
Discussing that incident with the initials RB? No thank you.
To be unbanned, prove that you made a recent donation of $100 or more to MIRI. Please provide evidence that the donation was counterfactual.
The rules may or may not be (post-)ironic. Up to you to decide, based on your priors.
Hello,
I'm a journalist at the Guardian working on a piece about the Zizians. If you have encountered members of the group or had interactions with them, or know people who have, please contact me: oliver.conroy@theguardian.com.
I'm also interested in chatting with people who can talk about the Zizians' beliefs and where they fit (or did not fit) in the rationalist/EA/risk community.
I prefer to talk to people on the record but if you prefer to be anonymous/speak on background/etc. that can possibly be arranged.
Thanks very much.
There are certain thoughts that are considered acausal information hazards to the ones thinking them, or to humanity in general: thoughts where the mere act of thinking them now could put one into a logical bind that deterministically causes the threat to come into existence in the future.
Conversely, are there any thoughts that have the opposite effect? Thoughts that act as a kind of poison pill against future threats, preventing them from coming into existence, possibly by introducing a logic bomb or infinite loop of some sort? Has there been any research or discussion of this anywhere? If so, references appreciated.
I read this article https://www.greaterwrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality and at first I was confused by the repeated claim that Omega rewards irrational behaviour; I wasn't sure what that was supposed to mean.
I find one-boxing to be the truly rational choice (and I am not saying that just for Omega, who is surely watching). There is something to gain by two-boxing, but it also greatly increases your costs. It is not certain that you will succeed, you need to do hard mental gymnastics, and you cannot even discuss it on the internet :) But I mean that seriously. One-boxing is a walk in the park: you precommit, and then you just take one box.
Isn't two-boxing actually that "Hollywood rationality"? Like maximizing The Number without caring about anything else?
Please share your thoughts; I find this very enticing and want to learn more.
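One way to make the one-boxing intuition concrete is a quick expected-value calculation. This is a minimal sketch using the standard illustrative payoffs ($1M in the opaque box, $1k in the transparent one) and an assumed predictor accuracy of 99%; none of these numbers are from the post itself.

```python
def expected_values(p: float) -> tuple[float, float]:
    """Return (EV of one-boxing, EV of two-boxing) given predictor accuracy p.

    Assumed payoffs: $1,000,000 in the opaque box, $1,000 in the
    transparent box. The predictor fills the opaque box iff it
    predicted one-boxing, and is correct with probability p.
    """
    # One-boxer: with probability p the predictor foresaw it and filled the box.
    one_box = p * 1_000_000
    # Two-boxer: usually caught (gets only $1k), rarely gets both boxes.
    two_box = p * 1_000 + (1 - p) * 1_001_000
    return one_box, two_box

one, two = expected_values(0.99)
print(f"one-box EV: {one:.0f}, two-box EV: {two:.0f}")
```

Even at much lower predictor accuracies the one-box expectation dominates, which is the sense in which precommitting to one-box "wins" here.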
Hello everyone, my name is Ihor, my website is https://linktr.ee/kendiukhov, and I live in Germany between Nuremberg and Tuebingen. I am very much into rationality/LessWrong stuff, with a special focus on AI safety/alignment. I would be glad to organize and host local events related to these topics in Germany, like reading clubs, workshops, discussions, etc. (ideally in the cities I mentioned or near them), but I do not know any local community or how to approach one. Are there any people from Germany on this subreddit, or do you know how I can get in touch with them? I went to some ACX meetings in Stuttgart and Munich, but they were something a bit different.
There seems to be nothing more fundamental than belief. Here's a thought. What do you think?
I've been a lurker for a little while, but I always struggle with the meta-task of deciding what to read. Any recommendations?
I've been planning on reading the sequences, and saw that the first two books were published as print versions some time ago (https://rationalitybook.com).
Map and Territory and How to Actually Change Your Mind are the first of six books in the Rationality: From AI to Zombies series. As of December 2018, these volumes are available as physical books for the first time, and are substantially revised, updated, and polished. The next four volumes will be coming out over the coming months.
Seems like nothing has happened since then. Was the project cancelled? I was looking forward to reading it all in print, because I already stare at screens long enough every day and enjoy reading on paper much more.
Hey everyone,
I recently finished reading Harry Potter and the Methods of Rationality and loved it! Since then, I've been hearing a lot about Rationality: From AI to Zombies. I know it's a pretty lengthy book, which I'm okay with, but I came across a post saying it's just a collection of blog posts and lacks coherence.
Is this true? If so, has anyone tried to organize it into a more traditional book format?
I recently read "Rationality: From AI To Zombies" by Eliezer Yudkowsky. The love for Bayesian methodologies really shines through.
I was wondering if anyone has ever used a simulation to explore different outcomes before making a decision? I recently ran a Monte Carlo simulation before buying an apartment, and it worked quite well.
Even though it is hard to capture the complexity of reality in one simulation, it at least gave me a baseline.
I wrote a post about it here: From Monte Carlo to Stockholm.
Would you consider using simulations in your everyday life?
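For readers curious what such a simulation can look like, here is a minimal sketch of a Monte Carlo run over an apartment's future value. All parameters (purchase price, mean appreciation, volatility, horizon) are invented for illustration; they are not the author's actual model from the linked post.

```python
import random

def simulate_once(rng: random.Random, price: float = 300_000, years: int = 10) -> float:
    """One sampled trajectory of the apartment's value after `years` years.

    Assumes annual appreciation drawn from a normal distribution
    (mean 2%, stdev 5%) purely for illustration.
    """
    value = price
    for _ in range(years):
        value *= 1 + rng.gauss(0.02, 0.05)
    return value

def monte_carlo(n: int = 10_000, seed: int = 42) -> dict:
    """Run n trajectories and summarize the outcome distribution."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    outcomes = sorted(simulate_once(rng) for _ in range(n))
    return {
        "p5": outcomes[int(n * 0.05)],   # pessimistic scenario
        "median": outcomes[n // 2],
        "p95": outcomes[int(n * 0.95)],  # optimistic scenario
    }

print(monte_carlo())
```

Even a toy model like this gives you the kind of baseline the post describes: not a point prediction, but a spread of plausible outcomes to sanity-check a decision against.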
Where should I start reading? I've read HPMOR, but nothing else by Eliezer and nothing on LessWrong, because it seems very intimidating and FOMO attacks me whenever I start reading something there.
I am thinking of starting a new custom website that focuses on allowing people with unconventional or contrarian beliefs to discuss anything they like. I am hoping that people from across political divides will be able to discuss anything without the discourse becoming polemical or poisoned.
Are there any "original" features you think this forum should include? I am open to any and all ideas.
(For an example of the kind and quality of forum design ideas I am talking about, see this essay; whether or not you can abide Mencius Moldbug, I'm not here to push his agenda. Inspired by that, I was thinking there could be a choice of different types of karma to apply to a post, rather than just mass upvoting and downvoting. You choose your alignment/karma flavour, and your upvotes or downvotes are cast under that faction...)
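The faction-karma idea above can be sketched as a small data model. This is a hypothetical illustration of one possible design (class and field names are invented), not a proposal for a specific implementation:

```python
from collections import defaultdict

class Post:
    """A post whose karma is tracked per voting faction, not as one number."""

    def __init__(self, title: str):
        self.title = title
        # faction name -> net karma contributed by voters of that faction
        self.karma: defaultdict[str, int] = defaultdict(int)

    def vote(self, faction: str, delta: int) -> None:
        """Cast an upvote (+1) or downvote (-1) under a chosen faction."""
        if delta not in (-1, 1):
            raise ValueError("delta must be +1 or -1")
        self.karma[faction] += delta

    def summary(self) -> dict[str, int]:
        """Karma broken down by faction, e.g. {'empiricist': 2, 'contrarian': -1}."""
        return dict(self.karma)

post = Post("Example thread")
post.vote("empiricist", +1)
post.vote("empiricist", +1)
post.vote("contrarian", -1)
print(post.summary())  # {'empiricist': 2, 'contrarian': -1}
```

Readers could then sort or filter by the faction whose judgment they trust, instead of seeing a single aggregate score.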
Applications are now open for the LessWrong Community Weekend 2024!
Join the world’s largest rationalist social gathering, which brings together 250 aspiring rationalists from across Europe and beyond for 4 days of socializing, fun and intellectual exploration. We are taking over the whole hostel this year and thus have more space available. We are delighted to have Anna Riedl as our keynote speaker: a cognitive scientist conducting research on rationality under radical uncertainty.
As usual we will be running an unconference style gathering where participants create the sessions. Six wall-sized daily planners are filled by the attendees with 100+ workshops, talks and activities of their own devising. Most are prepared upfront, but some are just made up on the spot when inspiration hits.
Find more details in the official announcement: https://www.lesswrong.com/events/tBYRFJNgvKWLeE9ih/lesswrong-community-weekend-2024-applications-open-1?utm_campaign=post_share&utm_source=link
Or jump directly to the application form: https://airtable.com/appdYMNuMQvKWC8mv/pagiUldderZqbuBaP/form
Inclusiveness: The community weekend is family & LGBTQIA+ friendly, and after last year's amazing experience we are increasing our efforts to create a diverse event where people of all ages, genders, backgrounds and experiences feel at home.
Price: Regular ticket: €250 | Supporter ticket: €300/400/500+
(The ticket includes accommodation Fr-Mo, meals, snacks. Nobody makes any money from this event and the organizer team is unpaid.)
This event has a special place in our hearts, and we truly think there’s nothing else quite like it. It’s where so many of us made friends with whom we have more in common than any of us would have thought possible. It’s where new ideas have altered our opinions or even changed the course of our lives, in the best possible way.
Note: You need to apply and be accepted via the application form above. RSVPs via Facebook don't count.
Looking forward to seeing you there!