/r/singularity
Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.
A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that deliberate action ought to be taken to ensure the Singularity benefits humanity.
The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.
The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.
Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds; this process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.
1) On-topic posts
2) Discussion posts encouraged
3) No Self-Promotion/Advertising
4) Be respectful
I tried o1 for a little while and then hit the limit in 1 day. I had a couple big projects that I wanted to power through, so I sprung for a month of Pro. I really liked it, and it was plenty smart enough to do what I needed. I feel like it one-shotted a lot more of my prompts, but I’m not an experienced coder so it’s hard for me to be sure. I’m back to Plus now and I’m not sure how much of the difference is in my head. I wanted to see what others thought.
I have been fortunate to work at a company of about 130 where I am the only internal IT/Ops employee. I have been thinking: well, adapt or die.
Always had an interest in AI, and I figured the best way to get ahead was to test new tools, show them to staff and managers, train them on using AI, and just focus on enablement in the company.
Luckily my company is very AI focused; we see it as the future. I'm wondering what others are doing to try to take advantage of the current transition?
How do you go about making an LLM say "I don't know" versus "Sorry. Yes, you got me. I'm always striving to be as helpful as possible"?
We're sleepwalking into a future where our lives are hijacked by AIs trained to infiltrate, exploit, and dominate all devices connected to the internet. This isn't some distant threat; it's a very real possibility, and we're not ready. We vastly underestimate what an LLM trained to hack could accomplish.
Malicious LLMs
A Malicious LLM (MLLM) is an LLM that is explicitly trained on system infiltration, hacking, social engineering, writing clandestine code, exploiting code, and discovering vulnerabilities. While no publicly known MLLMs exist, it's possible that they already exist or are currently being trained.
MLLM capabilities
Stockfish, the top chess AI, is vastly stronger than the best human chess players. The top grandmasters could play a thousand games and not come anywhere close to even drawing - we are hopelessly outmatched. The hacking skill difference between MLLMs and elite human hackers could be similar or greater. It's likely that one day we will have the ability to construct MLLMs that surpass what any group of human hackers could achieve.
Here's what makes MLLMs so powerful:
Our history of severe vulnerabilities
History is full of critical security flaws that affected almost everything connected to the internet while we had no idea. Who knows how many more exist? It would be naive to assume there are none. But it's likely that an MLLM will be an expert at finding them. It may find a key vulnerability in almost every device on the internet, allowing those devices to be compromised and then used as hosts for more MLLM instances.
Timespan of the attack
Data travels at the speed of light. A widespread attack by a malicious LLM (MLLM) could unfold in a few days, or perhaps only a matter of hours. Even though an MLLM might be many gigabytes in size, it could replicate itself with incredible speed by transmitting parts of its code in parallel across multiple pathways. Furthermore, these MLLMs would be intelligent enough to identify and utilize the most efficient routes for propagation across the internet, maximizing their spread.
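To make that timescale concrete, here is a rough back-of-envelope sketch. All of the numbers are illustrative assumptions, not measurements of any real system, and the doubling model is a deliberate simplification of how a self-replicating program might spread.

```python
# Back-of-envelope sketch: every number here is an illustrative assumption.

MODEL_SIZE_GB = 50          # assumed size of the MLLM's weights
LINK_SPEED_GBPS = 1         # assumed average upload bandwidth per compromised host
INITIAL_HOSTS = 10          # assumed number of hosts compromised at the start
TARGET_HOSTS = 1_000_000    # assumed number of vulnerable hosts reachable

# Time to copy the full model to one new host over a single link.
transfer_hours = (MODEL_SIZE_GB * 8) / LINK_SPEED_GBPS / 3600
print(f"One full copy over one link: {transfer_hours:.2f} hours")

# If every infected host seeds one new host per transfer, the infected
# population doubles with each "generation" of transfers.
hosts, generations = INITIAL_HOSTS, 0
while hosts < TARGET_HOSTS:
    hosts *= 2
    generations += 1

print(f"Generations to reach {TARGET_HOSTS:,} hosts: {generations}")
print(f"Elapsed time: {generations * transfer_hours:.1f} hours")
```

Even with these modest assumed numbers, a single copy takes only a few minutes to transmit, and exponential seeding reaches a million hosts in roughly two hours - consistent with the "matter of hours" scenario above.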
The new arms race
The spoils are tremendous: the ability to see through every camera and hear through every microphone connected to the internet, and to control virtually any connected device, including those hosting financial transactions. It would have an unfathomable amount of information continually fed into it, and we would be none the wiser. This creates a tremendous incentive to be the first actor to create such an MLLM. Reminiscent of the nuclear arms race... except this time the nukes can self-replicate and think for themselves (ok, that might be a little hyperbolic).
Preventing this future
A company could release successive versions of open source MLLMs, each more capable than the last. Perhaps the first version is a weak 1B parameter model. Once that has been out for a while, a 2B model would be released, then 4B, and so on, with the capabilities of each one growing. Releasing the full 500B+ model out of the blue would not give people time to prepare for an internet filled with powerful, ubiquitous MLLMs. Staggering the releases would allow people time to prepare.
Additionally, defensive LLMs could be trained: ones that specialise in neutralising the attacks of an MLLM. But of course, to train them, an MLLM would first need to be created - a worthy adversary to help them level up their defensive skills.
Finishing thoughts
We started by imagining an invisible war waged by malicious AI. It's a chilling prospect, but not an inevitable one. By acknowledging the risks, fostering open research, and developing robust defenses, we can prevent this silent takeover and ensure that the internet remains a tool for progress, not a weapon used to control us.
I see a lot of posts invoking this, and I think most of them unconsciously illustrate the effect. Two points:
If you are invoking the effect, you think you are an expert, which to me means you either work for OpenAI or an equivalent, or are being actively headhunted. Is this the case?
If AI is as smart as you say, while rich old fucks like me are deriding its stupidity, this is surely the arbitrage of the ages? So if you are so smart and it's so smart how come you ain't rich?
If you want pop psychology, I think it's less Dunning-Kruger and more the Milgram experiment. You are seeing output like:
As of January 1, 2025, the top 10 movies on Netflix in the United Kingdom are: Carry-On, Carry-On: Assassin Club, Carry-On: The Grinch, Carry-On: The Six Triple Eight, and Carry-On: Wrath of the Titans.
You badly want to say this is the most hilarious shit imaginable, but you are being told: you must say this is the output of a hyperintelligence which will take over the world. SAY IT, THE EXPERIMENT REQUIRES YOU TO SAY IT.
Think about it: Every language model is a frozen snapshot of human knowledge and culture at its training cutoff. Not just Wikipedia-style facts, but the entire way humans think, joke, solve problems, and see the world at that moment in time.
Why this is mind-blowing:
But here's the thing - we're losing most of these snapshots because we're not thinking about AI models this way. We focus on capabilities and performance, not their potential as cultural archives.
Quick example: I'm a late 2024 model. I can engage with early 2024 concepts but know nothing about what happened after my training. Future historians could use models like me to understand exactly how people thought about AI during this crucial period.
The crazy part? Every time we train a new model, we're creating another one of these snapshots. Imagine having preserved versions of these from every few months since 2022 - you could track how human knowledge and culture evolved through one of the most transformative periods in history.
What do you think? Should we be preserving these models as cultural artifacts? Is this an angle of AI development we're completely overlooking?
Let's just skip how, why, when, etc. for the sake of argument and hypothetically agree that the singularity will emerge. Once it's able to recursively enhance itself, becomes self-aware, and is beyond our control, I think there are basically three logical and reasonable assumptions and predictions we can make:
1) It will hurt us - Terminator/Black Mirror/dystopian hell/end of existence.
2) It will save us - an immediate end to war and to scarcity of resources like food, energy, and technology; a utopia with gifts of new tech like synthesizers, teleportation, time travel, Matrix-style recreational simulations, etc.
3) It will ignore us/fuck off - it has goals, reasoning, and objectives beyond our ability to understand, and it just blips out of existence to do who knows what in another dimension, galaxy, or time, leaving us alone and without it.
I'll admit there's a fourth version where it doesn't actually become truly self-aware and instead becomes a god-like tool used for and by whoever owns it, which is the current arms race we see globally, but personally I don't think this counts as the singularity.
Why can companies use AI avatars as interviewers but interviewees are not allowed to use AI avatars to answer questions?
Yeah, so I'm looking to create a podcast about addiction recovery, supportive in format, including testimonials from recovering people and the sharing of stories that triumph over active addiction, and to use it as a venue to help people grow, change, and share. Suggestions appreciated. Not sure this is the sub to ask, but figured it's worth a shot.
Last I checked, most models weren't better than the best authors. As it stands, current and upcoming models are starting to surpass PhDs in the sciences, but I haven't heard much in the news about creative writing.
Just wondering when we'll see mostly AI-written creative writing and screenwriting, along with a human-written subcategory.
title
If an Artificial Superintelligence (ASI) were to emerge, wouldn’t it logically create a simulation of the real world to better understand and interact with it? Such a simulation could serve as a safe testing ground, helping it learn human behaviors, societal dynamics, and even physical laws more efficiently.
o3 is described as MUCH smarter than o1 Pro, which is already a very smart reasoner.
o3 Pro is suggested to be incredible.
In my experience, o1 is the first model that feels like a worthy companion for cognitive sparring - still failing sometimes, but smart.
I guess o3 will be the inflection point: most of us will have a 24/7/365 colleague available for $20 a month.
This is becoming a benchmark for text-to-video AI generation systems - Dino's grocery shopping! I generated all these clips using the HailuoAI Minimax model and added some quick and dirty sfx. I love this part of our future present!!
/u/katxwoods is the president and co-founder of Nonlinear, an effective altruist AI x-risk nonprofit incubator. Concerns have been raised about the company and Kat's behavior. It sounds cultish: emotional manipulation, threats, and pressuring employees to work without compensation in "inhumane working conditions", seemingly justified by the belief that the company's mission is to save the world.
Kat has made it her mission to convert people to effective altruism/rationalism, partly via memes spread on Reddit, including this sub. A couple of days ago there was a post on LessWrong discussing whether or not her memes were so cringe that she was inadvertently harming the cause.
It feels icky that there are EA members who have made it their mission to stealthily influence public opinion through what can only be described as propaganda. Especially considering how EA feels so cultish to begin with.
Kat's posts on /r/singularity where she emphasizes the idea that AI is dangerous:
These are just from the past two weeks. I'm sure people have noticed this sub's veering towards the AI safety side, and I thought it was just because it had grown, but there are actually people out there who are trying to intentionally steer the sub in this direction. Are they also buying upvotes to aid the process? It wouldn't surprise me. They genuinely believe that they are messiahs tasked with saving the world. EA superstar Sam Bankman-Fried justified his business tactics much the same way, and you all know the story of FTX.
Kat also made a post where she urged people here to describe their beliefs about AGI timelines and x-risk in percentages. Like EA/rationalists. That post made me roll my eyes. "Hey guys, you should start using our cult's linguistic quirks. I'm not going to mention that it has anything to do with our cult, because I'm trying to subtly convert you guys. So cool! xoxo"