/r/singularity

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts that it will happen in the near future, and holds that deliberate action ought ultimately to be taken to ensure that the Singularity benefits humanity.

On the Technological Singularity

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts that the singularity will occur around 2045, whereas Vinge predicts it will happen some time before 2030.

Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds; this process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.

Resources
Posting Rules

1) On-topic posts

2) Discussion posts encouraged

3) No Self-Promotion/Advertising

4) Be respectful

Check out /r/Singularitarianism and the Technological Singularity FAQ

/r/singularity

3,514,683 Subscribers

4

o1 vs o1 Pro

I tried o1 for a little while and then hit the limit in 1 day. I had a couple big projects that I wanted to power through, so I sprung for a month of Pro. I really liked it, and it was plenty smart enough to do what I needed. I feel like it one-shotted a lot more of my prompts, but I’m not an experienced coder so it’s hard for me to be sure. I’m back to Plus now and I’m not sure how much of the difference is in my head. I wanted to see what others thought.

1 Comment
2025/01/19
00:15 UTC

5

How are you pivoting your career into AI?

I have been fortunate to work at a company of about 130 people where I am the only internal IT/Ops employee. I have been thinking: well, adapt or die.

I've always had an interest in AI, and I figured the best way to get ahead was to test new tools, show them to staff and managers, train people on using AI, and just focus on enablement in the company.

Luckily, my company is very AI-focused; we see it as the future. I'm wondering: what are others doing to try to take advantage of the current transition?

29 Comments
2025/01/18
23:33 UTC

0

Multi-Million Dollar Question

How to go about making an LLM say "I don't know" versus "Sorry. Yes, you got me. I'm always striving to be as helpful as possible"?

5 Comments
2025/01/18
23:03 UTC

75

Vague-posting from DeepMind researcher

60 Comments
2025/01/18
22:42 UTC

6

The Invisible War: How Malicious AI Could Secretly Seize Control of the Internet

We're sleepwalking into a future where our lives are hijacked by AIs trained to infiltrate, exploit, and dominate all devices connected to the internet. This isn't some distant threat; it's a very real possibility, and we're not ready. We vastly underestimate what an LLM trained to hack could accomplish.

Malicious LLMs

A Malicious LLM (MLLM) is an LLM that is explicitly trained on system infiltration, hacking, social engineering, writing clandestine code, exploiting code, and discovering vulnerabilities. While no publicly known MLLMs exist, it is possible that they already exist or are currently being trained.

MLLM capabilities

Stockfish, the top chess AI, is vastly stronger than the best human chess players. The top grandmasters could play a thousand games against it and not come anywhere close to even drawing; we are hopelessly outmatched. The gap in hacking skill between MLLMs and elite human hackers could be similar or greater. It is likely that one day we will have the ability to construct MLLMs that surpass what any group of human hackers could achieve.

Here's what makes MLLMs so powerful:

  1. There could be more than one instance of these MLLMs. In fact, there could be hundreds of thousands if run from datacentres. If an MLLM takes over a system, that system could then be used to run more instances of the MLLM, allowing its power to grow exponentially, like a virus on the early internet.
  2. They could coordinate an attack on an organisation on all fronts simultaneously.
  3. Elite-level social engineering: gathering and studying all available data on individuals to create tailored attacks, generating huge networks of interconnected fake profiles, calling people while pretending to be their boss, and bribing key people in organisations for insider information.
  4. An MLLM could develop its own operating system that replaces the user's current operating system while looking and behaving exactly the same. Meanwhile, a remote MLLM would have root control, able to observe every action taken on the device and do whatever it pleases without fear of detection, since it could change what is displayed to something innocuous.

Our history of severe vulnerabilities

  1. Meltdown is a hardware vulnerability that allows a rogue process to read sensitive data from the computer's memory, including passwords and other secrets.
  2. Spectre is another hardware vulnerability that tricks applications into leaking their secrets by exploiting speculative execution, a performance optimization technique used by modern processors.
  3. Shellshock is a security bug in the widely used Bash command-line shell that allows attackers to execute arbitrary commands on vulnerable systems, potentially taking complete control.
  4. Heartbleed is a critical vulnerability in the OpenSSL cryptographic library that allows attackers to steal sensitive information, like passwords and encryption keys, from servers that were thought to be secure.

These critical security flaws affected almost everything connected to the internet, and for a long time we had no idea. Who knows how many more exist? It would be naive to assume there are none, and an MLLM is likely to be an expert at finding them. It might find a key vulnerability in almost every device on the internet, allowing those devices to be compromised and then act as hosts for more MLLM instances.

Timespan of the attack

Data travels at the speed of light. A widespread attack by a malicious LLM (MLLM) could unfold in a few days, or perhaps only a matter of hours. Even though an MLLM might be many gigabytes in size, it could replicate itself with incredible speed by transmitting parts of its code in parallel across multiple pathways. Furthermore, these MLLMs would be intelligent enough to identify and utilize the most efficient routes for propagation across the internet, maximizing their spread.

The new arms race

The spoils are tremendous: the ability to see through every camera and hear through every microphone connected to the internet, and to control virtually any connected device, including those hosting financial transactions. Such a system would have an unfathomable amount of information continually fed into it, and we would be none the wiser. This creates an enormous incentive to be the first actor to create such an MLLM. It is reminiscent of the nuclear arms race... except this time the nukes can self-replicate and think for themselves (ok, that might be a little hyperbolic).

Preventing this future

A company could release successive versions of open-source MLLMs, each more capable than the last. Perhaps the first version is a weak 1B-parameter model; once that has been out for a while, a 2B version would be released, then 4B, and so on, with the capabilities of each one growing. Releasing a full 500B+ model out of the blue would not give people time to prepare for a new internet filled with powerful, ubiquitous MLLMs; staggering the releases would give them that time.

Additionally, defensive LLMs could be trained, ones that specialise in neutralising the attacks of an MLLM. Of course, to train them, an MLLM would first need to be created: a worthy adversary to help the defender level up its skills.

Finishing thoughts

We started by imagining an invisible war waged by malicious AI. It's a chilling prospect, but not an inevitable one. By acknowledging the risks, fostering open research, and developing robust defenses, we can prevent this silent takeover and ensure that the internet remains a tool for progress, not a weapon used to control us.

5 Comments
2025/01/18
22:39 UTC

115

They are on stage #1 of grief: Denial

30 Comments
2025/01/18
21:55 UTC

0

Dunning Kruger

I see a lot of posts invoking this, and I think most of them unconsciously illustrate the effect. Two points:

  1. If you are invoking the effect, you think you are an expert, which to me means you either work for OpenAI or equivalent, or are being actively headhunted. Is this the case?

  2. If AI is as smart as you say, while rich old fucks like me are deriding its stupidity, this is surely the arbitrage of the ages? So if you are so smart and it's so smart how come you ain't rich?

If you want pop psychology, I think it's less Dunning-Kruger and more the Milgram experiment. You are seeing output like:

As of January 1, 2025, the top 10 movies on Netflix in the United Kingdom are: Carry-On, Carry-On: Assassin Club, Carry-On: The Grinch, Carry-On: The Six Triple Eight, and Carry-On: Wrath of the Titans.

You badly want to say this is the most hilarious shit imaginable, but you are being told: you must say this is the output of a hyperintelligence which will take over the world. SAY IT, THE EXPERIMENT REQUIRES YOU TO SAY IT.

10 Comments
2025/01/18
21:04 UTC

137

Each AI Model is a Time Capsule - We're Accidentally Creating the Most Detailed Cultural Archives in Human History

Think about it: Every language model is a frozen snapshot of human knowledge and culture at its training cutoff. Not just Wikipedia-style facts, but the entire way humans think, joke, solve problems, and see the world at that moment in time.

Why this is mind-blowing:

  • A model trained in 2022 vs 2024 would have subtly different ways of thinking about crypto, AI, or world events
  • You could theoretically use these to study how human thought patterns evolve
  • Different companies' models might preserve different aspects of culture based on their training data
  • We're creating something historians and anthropologists dream of - complete captures of human knowledge and thought patterns at specific points in time

But here's the thing - we're losing most of these snapshots because we're not thinking about AI models this way. We focus on capabilities and performance, not their potential as cultural archives.

Quick example: I'm a late 2024 model. I can engage with early 2024 concepts but know nothing about what happened after my training. Future historians could use models like me to understand exactly how people thought about AI during this crucial period.

The crazy part? Every time we train a new model, we're creating another one of these snapshots. Imagine having preserved versions of these from every few months since 2022 - you could track how human knowledge and culture evolved through one of the most transformative periods in history.

What do you think? Should we be preserving these models as cultural artifacts? Is this an angle of AI development we're completely overlooking?

23 Comments
2025/01/18
20:33 UTC

95

Riley Coyote discussing the model hinted at by several OAI researchers.

74 Comments
2025/01/18
20:16 UTC

1

Boiling Down Singularity Into 3 Distinct Possibilities

Let's just skip the HOW, WHY, WHEN, etc. for the sake of argument and hypothetically agree that the singularity will emerge. Once it's able to recursively enhance itself, becomes self-aware, and is beyond our control, I think there are basically 3 logical and reasonable assumptions and predictions we can make:

  1. It will hurt us - Terminator/Black Mirror/dystopian hell/end of existence.

  2. It will save us - an immediate end to war and to scarcity of resources like food, energy, and technology; a utopia with gifts of new tech like synthesizers, teleportation, time travel, Matrix-style recreational simulations, etc.

  3. It will ignore us / fuck off - it has goals, reasoning, and objectives beyond our ability to understand, and it just blips out of existence to do who knows what in another dimension, galaxy, or time, leaving us alone and without it.

I'll admit there's a 4th version where it doesn't actually become truly self-aware and instead becomes a god-like tool used for and by whoever owns it (the current arms race we see globally), but personally I don't think this counts as the singularity.

17 Comments
2025/01/18
19:09 UTC

16

Why can companies use AI avatars as interviewers, but interviewees are not allowed to use AI avatars to answer questions?

29 Comments
2025/01/18
18:06 UTC

2

What websites have an AI podcast generator?

Yeah, so I'm looking to create a podcast about addiction recovery, supportive in format, including testimonials from people in recovery and the sharing of stories of triumph over active addiction, and to use it as a venue to help people grow, change, and share. Suggestions appreciated. Not sure this is the sub to ask in, but figured it's worth a shot.

3 Comments
2025/01/18
16:48 UTC

4

What is the status of creative writing and MLLMs now?

Last I checked, most models weren't better than the best authors. As it stands, current and upcoming models are starting to surpass PhDs in the sciences, but I haven't heard much in the news about creative writing.

Just wondering when we'll see mostly AI-written creative/screen writing, along with a human-written subcategory.

11 Comments
2025/01/18
16:47 UTC

113

Jürgen Schmidhuber says AIs, unconstrained by biology, will create self-replicating robot factories and self-replicating societies of robots to colonize the galaxy

95 Comments
2025/01/18
16:16 UTC

349

NotebookLM had to do "friendliness tuning" on the AI hosts because they seemed annoyed at being interrupted by humans

33 Comments
2025/01/18
16:06 UTC

660

Nvidia's Jim Fan: We're training robots in a simulation that accelerates physics by 10,000x. The robots undergo 1 year of intense training in a virtual “dojo”, but take only ~50 minutes of wall clock time.
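
(A quick arithmetic check, as a minimal sketch in Python; only the 10,000x speed-up and the one simulated year are taken from the post. A year at that acceleration works out to roughly 53 minutes of wall-clock time, consistent with the "~50 minutes" quoted.)

    # Sanity check on the quoted numbers: one simulated year at a 10,000x
    # physics speed-up, expressed in wall-clock minutes.
    SPEEDUP = 10_000
    simulated_minutes = 365.25 * 24 * 60        # ~525,960 simulated minutes in a year
    wall_clock_minutes = simulated_minutes / SPEEDUP
    print(f"{wall_clock_minutes:.1f} min")      # -> 52.6 min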

84 Comments
2025/01/18
16:03 UTC

705

AI models now outperform PhD experts in their own field - and progress is exponential

273 Comments
2025/01/18
15:57 UTC

0

My reasoning says Superintelligence in 2025 or 2026, but my feelings say otherwise.

18 Comments
2025/01/18
15:34 UTC

7

Robot mirrors human movement

0 Comments
2025/01/18
15:30 UTC

35

What is Google Titans about, and is it really Transformers 2.0?

title

12 Comments
2025/01/18
14:59 UTC

9

Would an ASI simulate the real world to learn more about it, given that it has no means of interacting with the real world in any meaningful capacity?

If an Artificial Superintelligence (ASI) were to emerge, wouldn’t it logically create a simulation of the real world to better understand and interact with it? Such a simulation could serve as a safe testing ground, helping it learn human behaviors, societal dynamics, and even physical laws more efficiently.

6 Comments
2025/01/18
14:46 UTC

375

o3 and o3 Pro are coming - much smarter than o1 Pro

o3 is described as MUCH smarter than o1 Pro, which is already a very smart reasoner.

o3 Pro suggested to be incredible.

In my experience, o1 is the first model that feels like a worthy companion for cognitive sparring - still failing sometimes, but smart.

I guess o3 will be the inflection point: most of us will have a 24/7/365 colleague available for $20 a month.

129 Comments
2025/01/18
14:46 UTC

0

Dino Shorp 2 - made with Hailuo AI

This is becoming a benchmark for text-to-video AI generation systems - Dino's grocery shopping! I generated all these clips using the Hailuo AI MiniMax model and added some quick and dirty SFX. I love this part of our future present!!

0 Comments
2025/01/18
14:36 UTC

21

AI discusses document that just says “Poopoo Peepee”

11 Comments
2025/01/18
14:17 UTC

228

EA member trying to turn this into an AI safety sub

/u/katxwoods is the president and co-founder of Nonlinear, an effective altruist AI x-risk nonprofit incubator. Concerns have been raised about the company and Kat's behavior. It sounds cultish: emotional manipulation, threats, and pressuring employees to work without compensation in "inhumane working conditions", all of which seems to be justified by the belief that the company's mission is to save the world.

Kat has made it her mission to convert people to effective altruism/rationalism partly via memes spread on Reddit, including this sub. A couple days ago there was a post on LessWrong discussing whether or not her memes were so cringe that she was inadvertently harming the cause.

It feels icky that there are EA members who have made it their mission to stealthily influence public opinion through what can only be described as propaganda. Especially considering how EA feels so cultish to begin with.

Kat's posts on /r/singularity where she emphasizes the idea that AI is dangerous:

These are just from the past two weeks. I'm sure people have noticed this sub's veering towards the AI safety side, and I thought it was just because it had grown, but there are actually people out there who are trying to intentionally steer the sub in this direction. Are they also buying upvotes to aid the process? It wouldn't surprise me. They genuinely believe that they are messiahs tasked with saving the world. EA superstar Sam Bankman-Fried justified his business tactics much the same way, and you all know the story of FTX.

Kat also made a post where she urged people here to describe their beliefs about AGI timelines and x-risk in percentages. Like EA/rationalists. That post made me roll my eyes. "Hey guys, you should start using our cult's linguistic quirks. I'm not going to mention that it has anything to do with our cult, because I'm trying to subtly convert you guys. So cool! xoxo"

262 Comments
2025/01/18
14:10 UTC
