/r/singularity
Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.
A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.
The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.
The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.
Proponents of the singularity typically postulate an "intelligence explosion", where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass that of any human.
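The "might occur very quickly" part is easiest to see with numbers. Below is a toy simulation of that feedback loop, where the rate of improvement itself improves each generation; every constant in it (the starting rate, the feedback multiplier, the 1000x cutoff) is an arbitrary illustration, not a forecast.

```python
# Toy numerical model of an "intelligence explosion". All constants are
# arbitrary illustrations, not predictions.

HUMAN_LEVEL = 1.0

def intelligence_explosion(capability=1.0, improvement=0.05, feedback=1.5):
    """Yield (generation, capability), assuming each generation designs a
    successor and also gets better at designing successors as it does so."""
    gen = 0
    while True:
        capability *= 1 + improvement   # this generation builds a smarter successor
        improvement *= feedback         # smarter minds improve minds faster
        yield gen, capability
        gen += 1

for gen, cap in intelligence_explosion():
    print(f"generation {gen:2d}: {cap / HUMAN_LEVEL:8.1f}x human level")
    if cap > 1000 * HUMAN_LEVEL:        # stand-in for "greatly surpasses any human"
        break
```

With these made-up constants the curve crawls for the first half-dozen generations, then jumps from a few times human level to 1000x in about five more, which is the shape proponents have in mind.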
1) On-topic posts
2) Discussion posts encouraged
3) No Self-Promotion/Advertising
4) Be respectful
I’ve been thinking about what would truly cross the threshold into AGI, and something that came up is the concept of continuous, self-directed learning. Not just an LLM responding to prompts or expanding its dataset when fine-tuned by a human, but an AI that could agentically recognize its own limitations, design reinforcement fine-tuning runs for itself, and iterate on its capabilities without external intervention.
Imagine this scenario:
A model is trained on its initial dataset—nothing groundbreaking, just the usual supervised learning stuff. But then it hits a roadblock. Maybe it encounters tasks outside its training scope (e.g., advanced physics problems, creating original art styles, or even understanding a niche cultural context). Instead of stopping there or needing a human to intervene, this hypothetical system could (a toy sketch of the loop follows the list):
1. Recognize it’s failing or underperforming on the task.
2. Hypothesize what kind of training or experience it needs to improve.
3. Agentically search for existing datasets and create reinforcement simulations tailored to its specific deficits.
4. Run those simulations, fine-tune itself, and try again—without any hand-holding from developers.
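To make the loop concrete, here's a minimal runnable sketch of steps 1-4. Everything in it is a made-up placeholder: the function names, the skill dictionary, and the numbers are invented for illustration, not a real training stack; only the control flow is the point.

```python
import random

# Toy version of the self-directed learning loop described above.
# All names and numbers are invented placeholders; a real system would
# swap in an eval harness, an introspection step, a data pipeline,
# and an actual fine-tuning job.

TARGET_SCORE = 0.9
MAX_ITERATIONS = 20

def evaluate(skills, task):
    """Step 1: score the model on the task (stand-in for a benchmark)."""
    return skills.get(task, 0.0)

def diagnose_weakness(skills, task):
    """Step 2: hypothesize what training is missing (stand-in for introspection)."""
    return {"task": task, "gap": TARGET_SCORE - skills.get(task, 0.0)}

def find_or_build_dataset(weakness):
    """Step 3: stand-in for searching datasets or building an RL simulation."""
    return {"task": weakness["task"], "quality": random.uniform(0.05, 0.2)}

def fine_tune(skills, dataset):
    """Step 4: stand-in for a fine-tuning run; nudges the skill up a bit."""
    new = dict(skills)
    new[dataset["task"]] = min(1.0, new.get(dataset["task"], 0.0) + dataset["quality"])
    return new

def self_directed_learning(skills, task):
    for step in range(MAX_ITERATIONS):
        score = evaluate(skills, task)
        print(f"iteration {step}: score on {task!r} = {score:.2f}")
        if score >= TARGET_SCORE:
            return skills                       # deficit closed, no human in the loop
        weakness = diagnose_weakness(skills, task)
        dataset = find_or_build_dataset(weakness)
        skills = fine_tune(skills, dataset)
    return skills                               # plateaued within the iteration budget

self_directed_learning({"physics": 0.1}, "physics")
```

The hard, unsolved part is making steps 2 and 3 work on an open-ended model rather than a dictionary of scores: accurate self-diagnosis and targeted data generation are exactly where current systems still need a human.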
This would go beyond even the most sophisticated models today because it means the system isn’t just passively learning but actively deciding what to learn and how. It would be solving its own bottlenecks. And if this cycle repeats indefinitely, wouldn’t that essentially mean the model is building its own version of intelligence on top of the scaffolding we gave it?
I know this is speculative, but the pieces are kind of there already, aren’t they? There’s work on self-play in reinforcement learning, dynamic fine-tuning, and models that can optimize themselves to some extent. Combine that with advanced agents that have memory and task autonomy, and we’re not far off from a system that could bootstrap itself into general problem-solving.
If we reached this point, would this count as AGI? Or does true AGI still require something fundamentally “human” (e.g., consciousness, emotions, etc.)? Curious to hear what you all think!
To be clear, I'm sure o1 is smarter on PhD-level questions, but Gemini actually feels general. I'm a plasterer who specialises in historic buildings, so I deal with some extremely obscure items and materials. It can tell me what these items are (it doesn't necessarily know everything), and its 3D spatial awareness is good but not flawless. Wow, this is the first thing that feels generally smart.
I'm not a computer scientist, just a general member of the public with an interest in AI, so I really do mean it's generally intelligent. I can't wait to see the simple benchmark results.
Would LOVE to see this not be downvoted :) please give it a read!
In theory, this sub should be about talking about "the singularity" as a concept - in terms of its implications, what it would look like, how human society could change, all that. Ofc AGI/ASI is a huge part of that. But there's also huge exploration to be done in terms of culture, social implications, impacts on fields like agriculture and infrastructure, human welfare, geopolitics... all sorts.
Exploring non-AI technology aspects is huge too - self-healing robotics, compliant mechanisms, materials science, fusion or renewable energy, things like eco-friendly (esp in a long-term sustainable mining sense) ways of acquiring elements or alternatives for rare metals used in components. It's not hard to imagine why a software-driven world, a post-singularity world, would be extremely dependent on these things.
I encourage you to scroll through this sub. By a wide, WIDE margin, the posts are about LLMs. Often just noting incremental progress in them, or reposting social posts of LLM folks along with "discuss".
I want to reiterate - yes, software development and AI are a huge part of "the singularity", and you could reasonably argue that AI is its core. But 1) it's definitely not the only part, and 2) LLMs are not the only aspect of AI. Reinforcement learning is totally different. Spatial computing is totally different. There's even plenty of valid debate about whether LLMs constitute AI at all (a debate I'm uninterested in getting into; IMO, whether something "counts" as "AI" is like asking whether something "counts" as "art". It's all subjective, semantic, pedantic, and boring lol)
So... are there other subs or spaces yall can recommend to explore the singularity in its whole?
Ideally without having to separately follow a ton of subs like /renewables, /machinelearning, /fusionenergy or whatever. Again, /singularity would be perfect for that, except that in practice it's dominated by LLM discussion.
(In a more cynical world I'd suggest rebranding this sub to something LLM-specific, but I can't reasonably see that happening so yeah, suggestions?)
Thanks!
If you don't know, Project Astra is basically like OpenAI's Advanced Voice Mode, but you can share live video with the model too.
If you haven't tried it yet, YOU HAVE TO TRY IT
https://aistudio.google.com/live
Works best on Mobile imo. I recreated basically this video and it worked flawlessly the first time.
https://www.reddit.com/r/singularity/comments/1crm7jq/using_chatgpt4o_to_commentate_my_gameplay/
Honestly, it was faster than I expected. I said it's possible in a few weeks, but I thought it would be more like a few years realistically.
A friend asked me "Why does Alexa always understand you?" I found out the reason through a weird series of events.
My lab in grad school moved, and I spent about 6 months unpacking a box or two a week as our stuff got moved. At some point, the guy in the lab next to us asks if we want to be in an MRI brain study about speech recognition, and we'd get like 75 bucks.
A week later, we head down to the MRI lab, and my lab mate and I say hi to the guy, another grad student. He gets like unreasonably excited.
"No way! You've got it!"
After I slow him down, I try to ask him what he's talking about. So he explains that his thing is studying accents, and in the field, to measure something, you have to set a "zero" point. And for accents, you're it: you have the official "neutral" accent. Guy knew where I grew up within like 50 miles.
Turns out, there's a lot of data on the neutral accent, and companies like to train their voice recognition models on that to start.
So when the robots take over, they'll understand what I say very clearly as I welcome our new overlords.
So I got that going for me.
...