/r/singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.


A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.

On the Technological Singularity

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.

Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds. This process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.

Resources
Posting Rules

1) On-topic posts

2) Discussion posts encouraged

3) No Self-Promotion/Advertising

4) Be respectful

Check out /r/Singularitarianism and the Technological Singularity FAQ


3,419,558 Subscribers

3

"You're overreacting, AI isn't going to replace you." (Posted by MsMisseeks on r/shadowrun)

1 Comment
2024/12/12
15:05 UTC

2

This was posted just two days ago. From worst to best. Google was always the chosen one huh?

0 Comments
2024/12/12
15:04 UTC

3

Any Atoms for Peace fans?

1 Comment
2024/12/12
15:00 UTC

2

What is your favorite AI bot to talk to?

The question stands.

0 Comments
2024/12/12
14:45 UTC

23

How good does AI need to be for people to start getting made redundant?

19 Comments
2024/12/12
14:30 UTC

13

We’re likely much closer to continuous-learning models than people think

I’ve been thinking about what would truly cross the threshold into AGI, and something that came up is the concept of continuous, self-directed learning. Not just an LLM responding to prompts or expanding its dataset when fine-tuned by a human, but an AI that could agentically recognize its own limitations, create reinforcement finetunes for itself, and iterate on its capabilities without external intervention.

Imagine this scenario:

A model is trained on its initial dataset—nothing groundbreaking, just the usual supervised learning stuff. But then, it hits a roadblock. Maybe it encounters tasks outside its training scope (e.g., advanced physics problems, creating original art styles, or even understanding a niche cultural context). Instead of stopping there or needing a human to intervene, this hypothetical system could:

1.	Recognize it’s failing or underperforming on the task.
2.	Hypothesize what kind of training or experience it needs to improve.
3.	Agentically search for existing datasets and create reinforcement simulations tailored to its specific deficits.
4.	Run those simulations, fine-tune itself, and try again—without any hand-holding from developers.
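The four steps above can be sketched as a loop. This is a hypothetical toy illustration only: the class name, the skill-score dictionary, and the fixed improvement increment are all invented stand-ins, not a real training method.

```python
# Toy sketch of the self-directed learning loop described above.
# All names and numbers here are illustrative assumptions: "skills" stands in
# for measured capabilities, and fine-tuning is faked as a small score bump.

class SelfImprovingModel:
    def __init__(self, skills):
        # Map of task -> proficiency score in [0, 1].
        self.skills = dict(skills)

    def evaluate(self, task):
        """Step 1: measure performance to recognize underperformance."""
        return self.skills.get(task, 0.0)

    def diagnose(self, task, threshold=0.8):
        """Step 2: hypothesize what needs work (here: any below-threshold task)."""
        return None if self.evaluate(task) >= threshold else task

    def build_curriculum(self, deficit):
        """Step 3: assemble data/simulations targeting the deficit (stubbed)."""
        return [deficit] * 3  # pretend these are three training examples

    def finetune(self, curriculum):
        """Step 4: run the training and update itself, no human in the loop."""
        for task in curriculum:
            self.skills[task] = min(1.0, round(self.skills.get(task, 0.0) + 0.1, 2))

    def improve(self, task, max_iters=20):
        """Repeat steps 1-4 until the task is mastered or the budget runs out."""
        for _ in range(max_iters):
            deficit = self.diagnose(task)
            if deficit is None:
                return True
            self.finetune(self.build_curriculum(deficit))
        return False

model = SelfImprovingModel({"physics": 0.2})
print(model.improve("physics"))  # loops until proficiency reaches the threshold
```

The interesting question in the post is whether a real system can implement steps 2 and 3 reliably; the loop itself is the easy part.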

This would go beyond even the most sophisticated models today because it means the system isn’t just passively learning but actively deciding what to learn and how. It would be solving its own bottlenecks. And if this cycle repeats indefinitely, wouldn’t that essentially mean the model is building its own version of intelligence on top of the scaffolding we gave it?

I know this is speculative, but the pieces are kind of there already, aren’t they? There’s work on self-play in reinforcement learning, dynamic fine-tuning, and models that can optimize themselves to some extent. Combine that with advanced agents that have memory and task autonomy, and we’re not far off from a system that could bootstrap itself into general problem-solving.

If we reached this point, would this count as AGI? Or does true AGI still require something fundamentally “human” (e.g., consciousness, emotions, etc.)? Curious to hear what you all think!

2 Comments
2024/12/12
14:27 UTC

0

Antis react to the news that a model is being trained only on public domain images. It was NEVER about copyright or the dataset.

17 Comments
2024/12/12
14:07 UTC

115

In Light of Recent Events

41 Comments
2024/12/12
12:00 UTC

12

ChatGPT made this pic for me

0 Comments
2024/12/12
11:38 UTC

10

8 Comments
2024/12/12
11:23 UTC

205

It's crazy how the public essentially doesn't care about Gemini. This video has not even 30k views after a day. I wonder why Google won't advertise these models better? Looking at Google Trends, Gemini and ChatGPT searches are back to where they were a week ago.

168 Comments
2024/12/12
10:17 UTC

1

Looks like Gemini thinks in audio tokens... It got the "menhir" notion right in audio, but transcribed it incorrectly in text ("Men here's"). Why is this?

4 Comments
2024/12/12
10:12 UTC

154

This is the first thing that feels like agi

To be clear, I'm sure o1 is smarter on PhD questions, but Gemini actually feels general. I'm a plasterer who specialises in historic buildings, so I work with some extremely obscure items and materials; it can tell me what these items are (not necessarily know everything), and its 3D spatial awareness is good but not flawless. Wow, this is the first thing that feels generally smart.

I'm not a computer scientist, just a general member of the public with an interest in AI, so I really do mean it's generally intelligent. I can't wait to see the simple benchmark results.

74 Comments
2024/12/12
08:31 UTC

100

This is insane, more examples in the link

40 Comments
2024/12/12
07:40 UTC

132

Rumors of something afoot, and OAI being induced into taking action thanks to Google's releases... Just rumors for the time being though...

96 Comments
2024/12/12
04:35 UTC

574

Image editing possibilities are gonna be insane

66 Comments
2024/12/12
03:41 UTC

6

Other subs for talking about a conceptual tech singularity?

Would LOVE to see this not be downvoted :) please give it a read!

In theory, this sub should be about talking about "the singularity" as a concept - in terms of its implications, what it would look like, how human society could change, all that. Ofc AGI/ASI is a huge part of that. But there's also huge exploration to be done in terms of culture, social implications, impacts on fields like agriculture and infrastructure, human welfare, geopolitics... all sorts.

Exploring non-AI technology aspects is huge too - self-healing robotics, compliant mechanisms, materials science, fusion or renewable energy, things like eco-friendly (esp in a long-term sustainable mining sense) ways of acquiring elements or alternatives for rare metals used in components. It's not hard to imagine why a software-driven world, a post-singularity world, would be extremely dependent on these things.

I encourage you to scroll through this sub. By a wide, WIDE margin, the posts are about LLMs. Often just noting incremental progress in them, or reposting social posts of LLM folks along with "discuss".

I want to reiterate: yes, software development and AI are a huge part of "the singularity", and you could reasonably argue that AI is its core. But 1) it's definitely not the only part, and 2) LLMs are not the only aspect of AI. Reinforcement learning is totally different. Spatial computing is totally different. There's even plenty of valid debate about whether LLMs constitute AI at all (a debate I'm uninterested in getting into; IMO, whether something "counts" as "AI" is like asking whether something "counts" as "art". It's all subjective, semantic, pedantic, and boring lol)

So... are there other subs or spaces yall can recommend to explore the singularity in its whole?

Ideally without having to separately follow a ton like /renewables, /machinelearning, /fusionenergy or whatever. Again, /singularity would be perfect for that, except that in practice it's dominated by LLM discussion.

(In a more cynical world I'd suggest rebranding this sub to something LLM-specific, but I can't reasonably see that happening so yeah, suggestions?)

Thanks!

2 Comments
2024/12/12
02:34 UTC

410

Project Astra is the coolest thing I've seen since the original release of ChatGPT two years ago

If you don't know, Project Astra is basically like OpenAI's Advanced Voice Mode, but you can share live video with the model too.

If you haven't tried it yet, YOU HAVE TO TRY IT

https://aistudio.google.com/live

Works best on Mobile imo. I recreated basically this video and it worked flawlessly the first time.

https://youtu.be/nXVvvRhiGjI

89 Comments
2024/12/12
01:37 UTC

6

Wasn't a few weeks, but Google granted my wish.

https://www.reddit.com/r/singularity/comments/1crm7jq/using_chatgpt4o_to_commentate_my_gameplay/

Honestly, it was faster than I expected. I said it's possible in a few weeks, but I thought it would be more like a few years realistically.

3 Comments
2024/12/12
01:31 UTC

6

Voice recognition story

A friend asked me "Why does Alexa always understand you?" I found out the reason through a weird series of events.

My lab in grad school moved, and I spent about six months unpacking a box or two a week as our stuff got moved. At some point, the guy in the lab next to us asked if we wanted to be in an MRI brain study about speech recognition, and we'd get like 75 bucks.

A week later, we headed down to the MRI lab, and my lab mate and I said hi to the guy, another grad student. He got, like, unreasonably excited.

"No way! You've got it!"

After I slowed him down, I tried to ask him what he was talking about. He explained that his thing is studying accents, and in the field, to measure something, you have to set a "zero" point. And for accents, you're it. You have the official "neutral" accent. The guy knew where I grew up within like 50 miles.

Turns out, there's a lot of data on the neutral accent, and companies like to train their voice recognition models on that to start.

So when the robots take over, they'll understand what I say very clearly as I welcome our new overlords.

So I got that going for me.

4 Comments
2024/12/12
01:22 UTC

36

It's not just you

15 Comments
2024/12/12
01:19 UTC

3

Are there any games, even like a pixel game, with AI NPCs you can actually talk to, like by texting? ...

...

3 Comments
2024/12/12
00:58 UTC
