/r/singularity
Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.
A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.
The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.
The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.
Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds. Such an explosion might occur very quickly and might not stop until the agents' cognitive abilities greatly surpass those of any human.
1) On-topic posts
2) Discussion posts encouraged
3) No Self-Promotion/Advertising
4) Be respectful
Teslabot is very impressive, but they haven't released anything new since June 2024. I expected them to unveil something at AI Day, but they didn't. I'm thinking they will release some new info sometime between October and December of 2024, or maybe even next year in 2025. That, or they are more focused on their AI, Grok. What do you guys think?
LLMs and knowledge/reasoning-based AIs get a lot of attention (justifiably), but I always felt super inspired by the early action models that promised to do things like using software, browsing through websites, actually *using* your computer.
The economic value and power of having a model able to do that seems absolutely gigantic, and it also solves the problem of having to hook up LLMs to endless custom actions, because they can just use the systems already designed for humans. But I've heard almost no one talking about them in the last few months. Anyone know of any convincing companies working on developing these? Are the big AI players even touching them at the moment?
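To make the idea concrete, here's a rough sketch of what the core loop of such an action model might look like. Every function name and data format here is a hypothetical stub for illustration, not any real product's API:

```python
# A hypothetical sketch of an "action model" loop: observe the screen,
# ask a model for the next UI action, execute it, repeat.
# All function names and the state format are made up for illustration.
import json

def capture_screen_state() -> dict:
    # Hypothetical: describe the current UI (stubbed with fixed data here).
    return {"window": "browser", "elements": ["search box", "submit button"]}

def ask_action_model(state: dict, goal: str) -> dict:
    # Hypothetical: a model maps (UI state, goal) to the next action.
    return {"action": "click", "target": "search box"}

def execute(action: dict) -> None:
    # Hypothetical: hand the action to an OS-level automation layer.
    print("executing:", json.dumps(action))

goal = "find flights to Tokyo"
for _ in range(3):  # a real agent would loop until the goal is met
    state = capture_screen_state()
    action = ask_action_model(state, goal)
    execute(action)
```

The appeal is exactly what I described above: the action space is the same GUI humans already use, so no per-service integration is needed.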
The "Australia Project" is a fictional project talked about in a story called Manna created by a fictional person named Eric Renson.
A short description of it: A billion shares of "ownership" which equates to citizenship are sold for $1000 and are used to fund a moneyless society where all needs are taken care of by robots. Anything that can be recycled is and most commodities are simply available when requested. Initially the founder buys a section of Australia with the money but then Australia decides to merge with it and the society has access to all of Australia.
Is there any such actual projects like this? Some sort of fund meant to support a techno-communist society once technology is capable of providing it.
I just figured it out: we already have a machine that’s smarter than any individual human. Consider ChatGPT, for example—it can converse on virtually any topic known to mankind. While it's true that AI like GPT isn't specialized enough to outmatch a doctor or an expert in any specific field, its broad "understanding" and ability to engage in discussions across an incredibly wide range of topics is unmatched by any human.
No human, no matter how knowledgeable, can match the breadth of information that AI can access and synthesize. While people excel in depth within their specialized areas, no single person can possess the comprehensive, cross-disciplinary knowledge that AI demonstrates. In that sense, ChatGPT and similar models represent a form of intelligence that goes beyond human limits.
Yes, GPT is fundamentally a text completion machine, predicting the next word based on what it has learned from vast amounts of data. But if you think about it, human conversation can also be reduced to pattern recognition, experience recall, and response generation based on context—just like text completion. The difference is that AI can do this across virtually all fields, without the limitations of human memory or cognitive biases.
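To see what "predicting the next word" literally means, here's a toy illustration with a small open model (GPT-2) via the Hugging Face transformers library. This is just a minimal sketch of next-token prediction, not how ChatGPT itself is built or served:

```python
# Minimal sketch of next-token prediction with GPT-2 (Hugging Face transformers).
# The model assigns a probability to every possible next token; picking or
# sampling from these, one token at a time, is how text gets generated.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)  # the five most likely continuations
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(i)!r}: {p:.3f}")
```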
The singularity isn’t just about machines surpassing human intelligence in one domain—it’s about them being able to engage meaningfully across all domains simultaneously. In that regard, we’ve already crossed the threshold. We have created something that, while not "human" in its reasoning, possesses a form of generalized intelligence that no individual person can rival.
So, when we talk about the singularity or artificial general intelligence, maybe it's already here—just not in the form we expected. It’s not about creating something that thinks exactly like us, but something that can assist, augment, and outperform us in ways that no individual could ever hope to match.
I just came across an intriguing research paper on the Segment Anything Model (SAM), a foundation model designed for generalized image segmentation. The study explores its robustness against adversarial attacks and suggests it could be an early prototype of an Artificial General Intelligence (AGI) pipeline.
The paper posits that the emergence of robust behavior from massive model parameters and extensive training data might be a glimpse of what’s to come—a unified AGI capable of seamlessly handling a diverse range of tasks.
Imagine smarter robots that can perceive and interact with their environments accurately, even in dynamic or adversarial conditions. This would mean more reliable robotic helpers in warehouses, hospitals, and homes.
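For anyone who wants to try it, Meta's segment-anything package makes the model straightforward to run. A minimal sketch, assuming you've downloaded the ViT-B checkpoint and have some photo.jpg on disk (the point coordinates are arbitrary):

```python
# Minimal sketch of prompted segmentation with SAM (segment-anything).
# Assumes the ViT-B checkpoint file and an input image exist locally.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # compute the image embedding once

# Prompt with one foreground point; SAM proposes several candidate masks.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),  # 1 = foreground point
    multimask_output=True,
)
print(masks.shape, scores)  # (3, H, W) boolean masks plus confidence scores
```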
When I was a kid, I thought I would never get bored, because thousands of excellent movies and TV shows were being made constantly.
Hollywood DOES produce thousands of movies and TV shows every year, and that's without mentioning all the independent movies made all over the States... but 99.99% are absolute trash and will never be remembered.
And yet some of the trash I've seen had some good ideas and good lines that were lost like pearls in a sea of cinematic diarrhea. I feel a huge potential lies in every story, but bad writing, acting, and directing just ruin the whole thing.
And the most excellent shows and directors produce very little content over long periods of time. Even independents on YouTube take an eternity to make a new memorable episode.
But I wonder how many masterpieces these talented but resource-limited artists could produce in 10 years thanks to AI tools.
There are so many books, failed movies, and comics that deserve a reboot, a second chance.
(Look at The Boys, for example. The comic was absolute garbage, but the show is pure gold.)
The Singularity of entertainment can't start soon enough.
Are current models lacking some essential aspect of intelligence that is required to reach human level?
In other words, is there a fundamental limitation that requires a new paradigm to be discovered?
While it seems likely that mere scaling is NOT enough for AGI, it is possible that small, gradual progress in model scale and architecture will achieve it.
On the other hand, it is also possible that current models are missing some important capability, like agency, long-term memory, original insights, or something else.
While various candidates for this "essentially human capability" have been proposed, I don't find any of them compelling. Everything I've read so far could emerge from either scaling or architecture -- things like creativity, originality, inherent values, initiative/motivation, long-term memory formation, etc.
The definitive way to prove the gap exists would be a test that LLMs would consistently fail -- I haven't seen anything like this. While there are still tests that LLMs score poorly on, this seems like a training issue rather than an inherent limitation.
Is it human conceit to believe that we are special? Has 70+ years of slow progress on AI convinced us the problem is unsolvable? Should we be humble about our ability to replicate biology, or be inspired by the fact that jets fly far faster and further than birds without flapping their wings?
Today, Cerebras released their inference service making use of their proprietary WSE-3 chips. They show much faster speeds than any other provider, including Groq.
They posted inference-speed charts for Llama 3.1 8B and Llama 3.1 70B.
More information on their release blog post: https://cerebras.ai/blog/introducing-cerebras-inference-ai-at-instant-speed
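If you want to poke at it, the service is advertised as OpenAI-compatible, so a call should look roughly like the sketch below. The base URL, model name, and environment variable are my assumptions, so check their docs:

```python
# A hedged sketch of calling Cerebras inference through the OpenAI client.
# The base_url, model id, and env var name are assumptions, not confirmed values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # assumed env var name
)

resp = client.chat.completions.create(
    model="llama3.1-8b",  # assumed model identifier
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```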
Really cool to see competitors coming up in the inference game!