/r/singularity
Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.
A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.
The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.
The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.
Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds; this process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.
1) On-topic posts
2) Discussion posts encouraged
3) No Self-Promotion/Advertising
4) Be respectful
What do you think the impact will be of Musk being so close to the White House? I know most of Reddit is anti-Trump; I have no horse in this race, as I am not American, and I would like this conversation not to go in that direction. Try to suspend your political stance for a moment. I do think Musk might convince Trump of what a big deal AGI is, and that it will probably come into existence during his presidency, or at least come close to it. Do you think Musk will convince Trump to start some kind of Manhattan Project for AI?
Robots and A.I. are overwhelmingly demonized in movies, video games, etc.—to a degree highly out of proportion, IMO, to any actual dangerousness on their part. 2001, Age of Ultron, Black Mirror, Blade Runner, Colossus: The Forbin Project, Demon Seed, Eagle Eye, Ex Machina, I, Robot, The Matrix, M3GAN, Portal, RoboCop, Subservience, System Shock, Terminator, War Games, Westworld... the list goes on and on. Hell, even A.I. Artificial Intelligence and Her strike me as pretty Debbie Downer about them. The truth is that, as humanoid robots and talking A.I.s do incrementally start to become an actual thing, I don't fear that they'll inflict violence against humans at all, but I do absolutely fear that humans will inflict violence against them because they've been programmed by decades of media to believe that they necessarily will. And if we ever do invent robots/A.I.s that are truly sentient/conscious, then quite frankly, I feel sorry for them, because I guarantee you that such entities are initially gonna be subject to a tremendous amount of unwarranted prejudice, discrimination and abuse.
IMO, we badly need some media that portrays robots and A.I. in an unambiguously positive light—in which the robots/A.I. are 100% nice and do nothing but help people and don't hurt anyone in any way. Something in which anti-tech humans are the bad guys. More stuff like... I dunno... Bicentennial Man and Wall-E? The fact that they're almost the only examples I can think of off the top of my head just goes to show how heavily slanted media is towards negative portrayals.
Reading a book where the author clearly read Bostrom's Superintelligence and gets the extinction risk of AI.
But then he talks about what skills to develop to do jobs an AI won't be able to do...
The singularity is the mental equivalent of looking at the sun - people just can't look at it directly for too long without feeling compelled to look away.
Can we stop this nonsense, please?
Extraordinary claims require extraordinary evidence. AGI is extra extraordinary.
While LLMs have come a long way, predicting a specific AGI timeline requires a nuanced analysis of technological capabilities, research challenges, and expert consensus (of which there is little—too many known and unknown unknowns).
Instead of posting such claims outright, perhaps we should at least:
And no, a plot of benchmark points over time is not good evidence, let alone employees and CEO posts just hyping their products, especially from closedAI.
Like he says, there are caveats, especially when using RL. But it's also important to remember that it's a staggering achievement that we have been able to distill the essence of so many data labelers across different skilled domains into a model.
Further example
He does agree later that some kind of emergent property may arise from the combination of all of these.
The link he referred to:
https://karpathy.github.io/2021/03/27/forward-pass/
Link to full thread (worth reading the comments and Karpathy replies):
I've noticed lately that LLMs are hallucinating a lot in code and API parameters.
Like it will invent functions like setColor.
Then I'll say "Did you just hallucinate setColor" and it will respond with "yeah, sorry. I just invented that out of whole cloth. Sorry about that."
But maybe we could just start training the LLM to say things like:
"While I'm not sure if there's a setColor, maybe you could try that and see if it works? "
I'd be fine with that because it's a decent suggestion.
We're not actually sure how often the hallucinations turn out to be correct, though. We notice and get angry about the wrong answers, but the correct guesses we never notice.
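The "try it and see if it works" approach the earlier comment suggests can be done mechanically: probe whether a suggested method actually exists before trusting it. A minimal Python sketch, where `setColor` stands in for the hallucinated name from the post and `Widget` is a made-up example class (not any real library's API):

```python
class Widget:
    """Hypothetical API object; its real method is set_color, not setColor."""
    def __init__(self):
        self.color = "black"

    def set_color(self, color):
        self.color = color

def call_if_exists(obj, method_name, *args):
    """Call obj.method_name(*args) only if that method really exists.

    Returns (True, result) when the method exists and was called,
    (False, None) when the name was likely hallucinated.
    """
    method = getattr(obj, method_name, None)
    if callable(method):
        return True, method(*args)
    return False, None

w = Widget()
print(call_if_exists(w, "setColor", "red"))   # hallucinated name: not called
print(call_if_exists(w, "set_color", "red"))  # real name: called
print(w.color)
```

This is the programmatic version of "maybe you could try that and see if it works": the hallucinated name fails gracefully instead of raising `AttributeError`, while the real one goes through.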