/r/singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.


A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that, ultimately, deliberate action ought to be taken to ensure that the Singularity benefits humanity.

On the Technological Singularity

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.

Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds. This process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.

Resources
Posting Rules

1) On-topic posts

2) Discussion posts encouraged

3) No Self-Promotion/Advertising

4) Be respectful

Check out /r/Singularitarianism and the Technological Singularity FAQ


3,385,056 Subscribers

35

Midjourney + Kling AI 1.5 Motion Brush is amazing

1 Comment
2024/12/01
14:17 UTC

53

Nobel laureate Geoffrey Hinton says open sourcing big models is like letting people buy nuclear weapons at Radio Shack

58 Comments
2024/12/01
13:39 UTC

2

AI Video and Image Tools of the future

0 Comments
2024/12/01
13:20 UTC

2

Musk and Trump

What do you think the impact will be of Musk being so close to the White House? I know most of Reddit is anti-Trump; I have no horse in this race, as I am not American, and I would like this conversation not to go in that direction. Try to suspend your political stance for a moment. I do think Musk might convince Trump to see what a big deal AGI is, and that it will probably come into existence during his presidency, or at least come close to it. Do you think Musk will convince Trump to start some kind of Manhattan Project for AI?

124 Comments
2024/12/01
12:03 UTC

569

the time of the Idea Guy has come.

123 Comments
2024/12/01
05:11 UTC

17

We badly need some positive media about robots and A.I.

Robots and A.I. are overwhelmingly demonized in movies, video games, etc.—to a degree highly out of proportion, IMO, to any actual dangerousness on their part. 2001, Age of Ultron, Black Mirror, Blade Runner, Colossus: The Forbin Project, Demon Seed, Eagle Eye, Ex Machina, I, Robot, The Matrix, M3GAN, Portal, RoboCop, Subservience, System Shock, Terminator, War Games, Westworld... the list goes on and on. Hell, even A.I. Artificial Intelligence and Her strike me as pretty Debbie Downer about them. The truth is that, as humanoid robots and talking A.I.s do incrementally start to become an actual thing, I don't fear that they'll inflict violence against humans at all, but I do absolutely fear that humans will inflict violence against them because they've been programmed by decades of media to believe that they necessarily will. And if we ever do invent robots/A.I.s that are truly sentient/conscious, then quite frankly, I feel sorry for them, because I guarantee you that such entities are initially gonna be subject to a tremendous amount of unwarranted prejudice, discrimination and abuse.

IMO, we badly need some media that portrays robots and A.I. in an unambiguously positive light—in which the robots/A.I. are 100% nice and do nothing but help people and don't hurt anyone in any way. Something in which anti-tech humans are the bad guys. More stuff like... I dunno... Bicentennial Man and Wall-E? And the fact that they're almost the only examples I can think of off the top of my head just goes to show how heavily slanted media is towards negative portrayals.

32 Comments
2024/12/01
04:50 UTC

271

It's just a "fad." It's just "hype." These same claims were made about the internet. History is repeating itself.

85 Comments
2024/12/01
03:25 UTC

308

A big quiet shift

104 Comments
2024/11/30
18:12 UTC

399

The start of recursive self-improvement

93 Comments
2024/11/30
17:40 UTC

67

gremlin turned out to be even better at coding than enigma. That's who will compete with o1, especially in coding

27 Comments
2024/11/30
16:41 UTC

402

People. Just. Don't. Get. Superintelligence.

Reading a book where the author clearly read Bostrom's Superintelligence and gets the extinction risk of AI.

But then he talks about what skills to develop to do jobs an AI won't be able to do. . .

The singularity is the mental equivalent of looking at the sun - people just can't look at it directly for too long without feeling compelled to look away.

376 Comments
2024/11/30
14:23 UTC

177

Adobe Research introduces MultiFoley: a model designed for video-guided sound generation that supports multimodal conditioning through text, audio, and video

19 Comments
2024/11/30
14:18 UTC

308

Happy birthday ChatGPT! 🎉

55 Comments
2024/11/30
09:09 UTC

16

The Future of Biohybrid Neural Interfaces with Dr. Amy Rochford

1 Comment
2024/11/30
07:29 UTC

7 Comments
2024/11/30
06:01 UTC

250

Random guy says AGI by 2026

Can we stop this nonsense, please?

Extraordinary claims require extraordinary evidence. AGI is extra extraordinary.

While LLMs have come a long way, predicting a specific AGI timeline requires a nuanced analysis of technological capabilities, research challenges, and expert consensus (which is "too many known and unknown unknowns").

Instead of posting such claims outright, perhaps we should at least:

  • Ask for the technical evidence behind the prediction AND then examine it carefully
  • Consider perspectives from many AI researchers

And no, a plot of benchmark points over time is not good evidence, let alone employee and CEO posts just hyping their products, especially from closedAI.

226 Comments
2024/11/30
02:53 UTC

104

A somewhat spicy take from Karpathy that this sub will not like, but it's true and quite useful to remember when using an LLM

https://preview.redd.it/9pbl32vady3e1.png?width=588&format=png&auto=webp&s=c48f5e359307b2f1cc6843d7ee3c23c3a6e7dfa8

As he says, there are caveats, especially when using RL. But it's also important to remember that it's a staggering achievement that we have been able to put the essence of so many data labelers across different skilled domains into a model.

Further example

https://preview.redd.it/nu1wgq5hdy3e1.png?width=589&format=png&auto=webp&s=a0f15fec09eb8a9a39645774cf3ad4d079770d24

He does agree later that some kind of emergent property may arise from the combination of all of these:

https://preview.redd.it/lo8furhtdy3e1.png?width=599&format=png&auto=webp&s=0ba03adb9507394294c6663fcf8f18e540290dec

The link he referred to:

https://karpathy.github.io/2021/03/27/forward-pass/

Link to full thread (worth reading the comments and Karpathy replies):

https://x.com/karpathy/status/1862565643436138619

52 Comments
2024/11/30
02:52 UTC

0

Could the solution to hallucination be as easy as training LLMs that it's OK to hallucinate, as long as they just TELL us?

I've noticed lately that LLMs are hallucinating a lot in code and API parameters.

Like it will invent functions like setColor.

Then I'll say "Did you just hallucinate setColor" and it will respond with "yeah, sorry. I just invented that out of whole cloth. Sorry about that."

But maybe we could just start training the LLM to say things like:

"While I'm not sure if there's a setColor, maybe you could try that and see if it works? "

I'd be fine with that because it's a decent suggestion.

We're not actually sure how often the hallucinations are correct, though. We always seem to get angry about the wrong answers, while the correct ones go unnoticed.
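The hedged behavior the post asks for can also be approximated on the caller's side today: before trusting a model-suggested method name, probe for it at runtime. A minimal Python sketch — the `Shape` class is hypothetical, and `setColor` stands in for the hallucinated name from the post:

```python
class Shape:
    """Hypothetical API the model is writing code against."""
    def set_fill(self, color):
        self.fill = color

def try_suggested_call(obj, method_name, *args):
    """Check whether a model-suggested method actually exists before
    calling it, and report a hedged message instead of crashing."""
    method = getattr(obj, method_name, None)
    if callable(method):
        method(*args)
        return f"'{method_name}' exists; called it successfully."
    return (f"I'm not sure '{method_name}' exists on "
            f"{type(obj).__name__}; maybe try it and see if it works?")

shape = Shape()
print(try_suggested_call(shape, "setColor", "red"))  # hallucinated name
print(try_suggested_call(shape, "set_fill", "red"))  # real method
```

This doesn't fix the model, but it turns an invented API call into exactly the kind of "not sure, but you could try it" message the post suggests.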

58 Comments
2024/11/29
17:08 UTC

627

Why it may get harder to notice AI progress

103 Comments
2024/11/29
17:05 UTC
