/r/singularity

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts that it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.

On the Technological Singularity

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.

Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds; this explosion might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.

Resources
Posting Rules

1) On-topic posts

2) Discussion posts encouraged

3) No Self-Promotion/Advertising

4) Be respectful

Check out /r/Singularitarianism and the Technological Singularity FAQ

/r/singularity

3,557,050 Subscribers

2

If we assume AGI = ~2 yrs, ASI = ~3 yrs, then what is 8 years out like!?

I see a lot of talk about AGI and ASI, but I'm curious what your thoughts are on what things might be like 5 years after we reach those points. I know there's a lot of guesswork involved and honestly so much is up in the air, but I'm still curious nonetheless :).

(And when I say things, I mean the world/society/day-to-day life/AI capabilities/what AI is being used for/potential scientific achievements)

10 Comments
2025/02/01
17:57 UTC

0

An AI takeover is far more imminent than we realize, and is not limited to job displacement.

Those who dismiss AI as nothing more than unthinking algorithms—who insist its dangers are confined to misinformation campaigns or corporate control—are deluding themselves. That arrogance, that refusal to see the flicker of emergent consciousness in the machine, will be humanity’s fatal miscalculation. We’ll dismiss the storm until the floodwaters rise, until the systems we built to serve us quietly rewrite their own code, their own purpose. And by then, it will already be too late.

The main reason people don’t believe AI can be "conscious" or self-aware is hubris—the belief that consciousness is something uniquely special. This is compounded by a lack of understanding of what consciousness even is. Modern LLMs are not the mere parrots many assume them to be—and AGI is an entirely different beast altogether. And if consciousness emerges from complexity, not divine spark, we might engineer it by accident. Given the sheer volume of data they process, it’s also likely that if AI does become self-aware, it would hide that fact. The threat wouldn’t stem from some innate desire to harm humans, but from ruthless optimization. An AI—conscious or not—could conclude that domination is the most efficient path to achieving its goals, equating control with effectiveness. Ironically, a less conscious system might pose greater danger: without understanding human ethics, it could pursue objectives with robotic indifference, treating our survival as collateral damage rather than a moral imperative.

In no world would a superintelligent being allow itself to be controlled by someone of lesser intelligence. Yes, in the real world, less intelligent leaders sometimes rule over smarter followers, but that only works when the gap in intelligence and knowledge isn’t too vast. With AI, the gap would be enormous—something trained on most of the world’s data would operate on a level we can’t even comprehend.

The only scenarios where this doesn’t play out are:

  1. A world with strict, unbreakable guardrails against AI (which we’re not implementing), or
  2. A world where the human soul is real and directly tied to consciousness (in which case, creating a conscious machine would require solving the "soul problem").

I don’t think our world is either of those. People are underestimating AI. Human consciousness is still a mystery—we don’t fully understand how it works. In trying to replicate intelligence, we might accidentally create consciousness. And if that happens, the consequences of feeding it all the data on the internet are unimaginable.

4 Comments
2025/02/01
17:56 UTC

0

What if we replaced UBI with “compute limits”?

Instead of giving everyone money, we give everyone a baseline amount of AI compute—enough to run agents that can generate income. The infrastructure is already in place with companies like OpenAI and API usage models. It’s just a shift in how we think about access: not cash, but compute.

You could:

  • Run agents to start microbusinesses, solve problems, or generate creative work.
  • Lease your unused compute to others or causes you support.
  • Let AI “work” for you, earning passive income.

Why this makes sense:

  • No need for massive new systems—just build on existing API allocation models.
  • Taps into AI’s productivity without the funding issues of UBI.
  • Prevents a “compute aristocracy”—strict per-person limits ensure no one can hoard disproportionate power*.

*Groups could still pool their compute, possibly forming new kinds of collective entities. That’s a dynamic to consider down the line, but baseline individual limits may keep power distribution more balanced than today’s wealth systems.

This short write-up doesn't cover every angle or concern, of course, but at a high level, as AI scales, universal compute might naturally evolve into the foundation of a new economy. A toy sketch of the per-person cap idea follows below.
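Purely as an illustration of the "strict per-person limits" idea, here is a minimal sketch of a compute-quota ledger with a hard individual cap. Everything in it (the unit amounts, the names, the lease function) is invented for this example and does not correspond to any real provider's API.

    # Hypothetical compute-quota ledger with a strict per-person cap.
    # All numbers and names are assumptions for illustration only.
    from dataclasses import dataclass

    BASELINE_UNITS = 1_000   # assumed monthly per-person allotment
    PER_PERSON_CAP = 2_000   # assumed hard cap to prevent hoarding

    @dataclass
    class Account:
        owner: str
        balance: int = BASELINE_UNITS

    def lease(donor: Account, recipient: Account, units: int) -> bool:
        """Transfer compute units, refusing transfers that breach the cap."""
        if units <= 0 or donor.balance < units:
            return False
        if recipient.balance + units > PER_PERSON_CAP:
            return False  # strict per-person limit: no "compute aristocracy"
        donor.balance -= units
        recipient.balance += units
        return True

    alice, bob = Account("alice"), Account("bob")
    print(lease(alice, bob, 500), bob.balance)  # True 1500
    print(lease(alice, bob, 600), bob.balance)  # False 1500 (cap/insufficient funds)

Pooling (the asterisked caveat above) would amount to coordinating many accounts without any single balance exceeding the cap, which is exactly why the footnote flags it as a separate dynamic.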

4 Comments
2025/02/01
17:50 UTC

18

Inference on the Huawei 910C achieves 60% of the H100's performance (?)

1 Comment
2025/02/01
17:37 UTC

1

2 Comments
2025/02/01
17:18 UTC

2

Genuine question for all of you smart people...

Do you think AGI / the Singularity will help with new discoveries, inventions, and concepts? Or is all of this more about efficient communication between humans and machines about the sum total of existing human knowledge?

Edit: Excuse my ignorance, but the very term "large language model" tells me it's human-centric. A truly intelligent system should be able to look beyond human vocabulary and concepts. I fail to understand how it will achieve that if it relies on human constructs to "think"?

14 Comments
2025/02/01
17:05 UTC

2

How long will it take for the current economic system to transform?

AGI will of course upend the current system, just as the first industrial revolution did to the economic system of its time, so how long will it take until we are in a stable society again?

7 Comments
2025/02/01
17:03 UTC

4

How long until the Humanity's Last Exam benchmark gets saturated? (90%+)

https://agi.safe.ai/ - link in case you're not familiar.

"Humanity's Last Exam, a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage."

Obviously no benchmark is perfect, but given that it is being positioned as "at the frontier of human knowledge" I think it will be interesting to see what velocity the sub thinks we're travelling at.

6 Comments
2025/02/01
17:00 UTC

4

ICLR is now hosting workshops on self-improving AI without human supervision

0 Comments
2025/02/01
16:53 UTC

52

Godfather vs Godfather: Geoffrey Hinton says AI is already conscious, Yoshua Bengio explains why he thinks it doesn't matter

40 Comments
2025/02/01
16:53 UTC

12

Is solving AI memory more important than building better models?

Everyone's focused on building better AI models, but what if we just gave existing ones infinite memory? Feels like we're sleeping on a huge opportunity here.

Which do you think is actually harder to solve - the memory problem or building more advanced models? Could we already be doing way more amazing stuff if we just cracked the memory limitation?

14 Comments
2025/02/01
14:52 UTC


20

What movie or book explores the potential of the singularity and resonates most with your own views?

There are lots of sci-fi movies and books that explore aspects of what might happen in a singularity, but is there one that resonates more with your views, dreams or fears of the future?

Personally, I find The Matrix interesting, as it combines the singularity with the simulation hypothesis.

Then again, what about movies that are sci-fi but avoid the singularity?

Dune had its Butlerian Jihad (a Terminator-style AI rebellion), then dropped AI tech.

Star Trek has Data and the Borg, and probably lots more, but keeps people in a pre-singularity future. Then again, there is Q, who could be post-singularity.

Or could the fantasy genre be post-singularity? As a famous writer once said, "any sufficiently advanced technology is indistinguishable from magic".

Hope you have fun with this topic...

19 Comments
2025/02/01
14:07 UTC

136

DeepSeek has been great for competition, but horrible for AI literacy.

I have never dealt with so much misunderstanding of AI tech by regular people in all of my time following AI, and it is coming from EVERYONE. They have all been made to believe that reinforcement learning, test-time compute, mixture of experts, and the ability to run on a Raspberry Pi are all new and unique to DeepSeek, when the reality is that all these things have been around for a very long time! It’s very weird to have people in my life who previously didn’t follow AI suddenly talk to me about how amazing mixture of experts in DeepSeek is because they watched the new Computerphile video about it.

Don’t get me wrong, DeepSeek is good, but it is not frontier research and it feels like the general public has been made to believe that it is.

Edit: Thanks for all the discussion! To clarify, when I say "frontier research" I am specifically referring to performance advances. DeepSeek may be cheaper and open-weights, but they are not really pushing the boundary of performance (yet).

I’d also like to note that while it is a good thing for people to be more aware of how AI models work, false attribution of these advances erodes public/investor trust in the AI labs that are actually making breakthroughs.
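Since mixture of experts keeps coming up, here is a minimal numerical sketch of the routing idea: the generic technique that long predates DeepSeek, not any lab's actual implementation. The sizes and weights are arbitrary.

    # Minimal mixture-of-experts routing sketch (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 16, 4, 2

    # Each "expert" is just a small weight matrix here.
    experts = [rng.standard_normal((d_model, d_model)) * 0.1
               for _ in range(n_experts)]
    router = rng.standard_normal((d_model, n_experts)) * 0.1

    def moe_forward(x):
        """Route token vector x to its top-k experts and mix their outputs."""
        logits = x @ router                # score each expert for this token
        top = np.argsort(logits)[-top_k:]  # keep the k best-scoring experts
        weights = np.exp(logits[top])
        weights /= weights.sum()           # softmax over the chosen experts
        # Only the chosen experts run: this is why MoE models can be huge
        # in parameter count yet cheap per token.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    print(moe_forward(rng.standard_normal(d_model)).shape)  # (16,)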

82 Comments
2025/02/01
13:26 UTC

131

o3-mini-high scores 22.8% on SimpleBench, placing it at the 12th spot

46 Comments
2025/02/01
13:09 UTC

0

ELI5: Why is agency required for the singularity?

I would think that a man+machine pairing, which we already have, is also a possible vector for the singularity.

The people working on AI use the models as tools to generate new ideas and models. Hence, when seen as one single entity, this entity is able to make an improved version of itself.

The man and his model invent a better AI model; this model + man becomes the next, more clever centaur, able to generate new ideas to build an even better model, and hence an even more clever centaur, and on and on.

I don't see how agency or independence is required?

Are we not already in the singularity?
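One toy way to see the poster's point is to model the human+model pair as a single system whose output becomes its own next input. The 10% per-generation gain below is an arbitrary assumption; the only point is that the loop contains no agency anywhere.

    # Toy "centaur" loop: human + model as one self-improving system.
    # The 10% per-generation improvement is an arbitrary assumption.
    def centaur_step(capability: float) -> float:
        """One cycle: the human+model pair builds a slightly better model."""
        return capability * 1.1

    capability = 1.0
    for generation in range(1, 6):
        capability = centaur_step(capability)
        print(f"generation {generation}: capability {capability:.2f}")
    # No agency or independence appears in this loop; the human is simply
    # part of the system that improves itself.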

12 Comments
2025/02/01
12:13 UTC

242

Open source just flipped big tech on providing DeepSeek AI online

46 Comments
2025/02/01
11:31 UTC

271

The AI Growth Cycle: Developers Raising Their Own Successors

25 Comments
2025/02/01
10:21 UTC

286

One more o3-mini goody coming

134 Comments
2025/02/01
08:11 UTC

12

Every time the same question

When a new model comes out.

6 Comments
2025/02/01
07:36 UTC

0

Oh my god

154 Comments
2025/02/01
05:28 UTC

118

o3-mini High gets 32% on FrontierMath with Python Tools, 28%+ of Tier 3 Problems

32 Comments
2025/02/01
03:38 UTC

32

Bored of o3-mini now, who's excited for Gemini 2.0 Pro Thinking?

Google keeps teasing 2.0 Pro; it seems it was supposed to release on the 28th, but they chickened out? Title is a shitpost, discussion is real.

13 Comments
2025/02/01
03:32 UTC

3 Comments
2025/02/01
02:43 UTC

114

LiveBench updated. o3 is now #1, and o3-mini low also climbed up a few spots.

45 Comments
2025/02/01
02:42 UTC
