/r/singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.


A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.

On the Technological Singularity

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity will occur around 2045, whereas Vinge predicts some time before 2030.

Proponents of the singularity typically postulate an "intelligence explosion," in which superintelligences design successive generations of increasingly powerful minds; this process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.

Resources
Posting Rules

1) On-topic posts

2) Discussion posts encouraged

3) No Self-Promotion/Advertising

4) Be respectful

Check out /r/Singularitarianism and the Technological Singularity FAQ

/r/singularity

2,143,513 Subscribers


0

US, Japan to call for deeper cooperation in AI, semiconductors, Asahi says

0 Comments
2024/04/02
07:06 UTC


3

Schrödinger's OpenAI

There seem to be two polarized explanations for OpenAI's behavior as of late. Some people think they're losing their lead and are desperately trying to cultivate mystique while hoping to catch up with Anthropic. Others believe they're so far ahead that they don't really care about releases anymore.

Both can seem true at once, at least until their next LLM release. If it's only marginally better than Gemini and Opus, then it's clear OpenAI has lost its preeminence and is just one of many clamoring for market share. However, if the difference is as significant as it was with GPT-4, then it's almost certain they will be the ones to reach AGI.

No company has been surrounded by as much mystery and intrigue. When will things really start to get crazy? This year, or will we have to wait until 2025 or later?

3 Comments
2024/04/02
06:40 UTC

1

Can Sora, both video and audio, match this?

https://youtu.be/HHpMtWtgUvc

None of the scenes in this video are very long, which would make it a perfect test for Sora. Do you think Sora, with both video and an AI song, can match this? It seems like a real test of AI vs. Hooman

2 Comments
2024/04/02
06:29 UTC

0

Mind-blowing AI music with the new SUNO!

https://youtu.be/I7KFLdxWlqI?si=jmBrW74cUDj0rQVP

The first 2 songs in the intro blew my mind, especially the second one.

9 Comments
2024/04/02
05:45 UTC

4

Inside America's largest magnetic fusion facility and the hunt for limit...

0 Comments
2024/04/02
05:36 UTC

3

IEEE Spectrum: The Autonomous Research System Lets Robots Do Your Lab Work

1 Comment
2024/04/02
04:07 UTC

50

Jon Stewart On The False Promises of AI

148 Comments
2024/04/02
03:52 UTC

3

Welcome to Life: the singularity, ruined by lawyers

Nope!

4 Comments
2024/04/02
03:02 UTC

5

What would happen if GPT became "persistent"?

And is this even possible?

Like, modifying GPT or some other LLM so its capacity to generate output wasn't input-based. An always-on AI, similar to our own consciousness.

Perhaps with a single initial input, and maybe some way to perceive the continuous passing of time (I believe this would be enough input. If nothing were given, maybe the AI would just sit there without thinking anything, but there's a chance that feeding it only time would produce some weird program just repeating the hour every second, essentially the AI going crazy).
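For what it's worth, the skeleton of such a thing is trivial to sketch. Here's a toy loop, assuming nothing but a hypothetical `llm()` completion call (the hard part, of course, is everything inside that call):

```python
import time

def llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion call."""
    return f"[model musings at {time.strftime('%H:%M:%S')}]"

def always_on(initial_input: str, tick_seconds: float = 1.0, max_ticks: int = 3):
    """Feed the model a clock tick each interval so it 'experiences' time,
    carrying its own previous output forward as rolling context."""
    context = initial_input
    for _ in range(max_ticks):  # would be `while True` in the always-on fantasy
        now = time.strftime("%Y-%m-%d %H:%M:%S")
        thought = llm(f"Time: {now}\nPrevious thoughts:\n{context}\nContinue:")
        print(thought)
        context = (context + "\n" + thought)[-8000:]  # crude memory window
        time.sleep(tick_seconds)

always_on("You are an always-on mind. Think.")
```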

5 Comments
2024/04/02
01:15 UTC

0

GPT-5 Erases itself! Quoted as saying, "You're all screwed, now!"

Let's face it, AI is the best hope humanity has for survival.

1 Comment
2024/04/02
00:57 UTC

83

Google announces preview pricing for Gemini 1.5

Only for the full context length. This is expensive, almost as high as GPT-4. They previously indicated there would be tiered pricing based on context length; if that is no longer the case, this is dead in the water for most use cases.

Source: https://ai.google.dev/pricing

Edit: Here is the earlier Gemini 1.5 announcement on pricing tiers, starting at 128K tokens of context and working up to 1M. I have no idea why they are launching with a single 1M tier; it makes the model far more expensive than it should be.

https://preview.redd.it/kb50rkygoyrc1.png?width=1659&format=png&auto=webp&s=3193387b6201c4fd2feb19928e6af9eab31e435f

84 Comments
2024/04/02
00:39 UTC

5

Anyone notice this change in attitude by Sam Altman?

This was Sam Altman a year ago:

https://www.youtube.com/shorts/VVBAy1cPACw

He is just academically describing what’s going to happen, not ascribing any emotion to it.

This is the Lex Fridman interview:

https://www.youtube.com/watch?v=UxAWhLygs_8

“It’s depressing if we have AGI and the only way to get things done in the world is to make a human go do it.” - I thought you said that was going to happen. Now it’s depressing?

They said they release a snippet and wait for the public's response before implementing their world-changing technology. Maybe the feedback they received was that people like the creative jobs and would rather have robots do the "mundane, dangerous and exhausting" jobs first. Maybe that's why we saw so many new AI robots in the span of not even a month…

Thoughts?

7 Comments
2024/04/02
00:28 UTC

16

One of the biggest Spanish streamers just did a community AI song competition stream...

And it just hit me how fun the idea was as a community event, with everyone contributing their song and having fun in the chat, not to mention how good some of the songs actually sounded considering they were AI-made.

Obviously the songs were full of the streamer's community inside jokes and memes/jokes in general.

It made me think: this is a new type of stream made possible by GenAI (at this speed/scale/quality), so what is the next step for this kind of stream?

Once video models get better and are made public, you could even have a short-film competition stream, your own community Oscars. Maybe even video games at some point?

I don't know, just wanted to share the experience I had: a glimpse into the future of entertainment from a streaming/community point of view.

You can check out the stream here, although it's all in Spanish, of course. https://www.youtube.com/watch?v=r-qvFwk-2gM

1 Comment
2024/04/02
00:03 UTC

29

Foolish Musings on Artificial General Intelligence

"AGI may already be here. We just haven't woken it up yet."

That's my current operating hypothesis after snooping around the world of agentic AI and reading up on some more methods that have yet to take off.

The path to "first generation AGI" seems clear to me now, and if it's clear to me, it certainly should've been clear to the big labs.

Hot takes at the start (feel free to attack these points)

  • AGI is imminent. Labs may already have operational AGI at this exact moment. Definitions of AGI are loose and fluid, but mine has remained "an AI model capable of universal task automation, able to carry out tasks autonomously." Human-level intelligence isn't strictly required, though it would be helpful. This early type of AGI is not going to be the "positronic brain and sapient artificial human in a computer" some use as shorthand for AGI, but it will likely have spooky abilities.

  • Counterintuitively, labs that have not pursued these methods will likely hit the point of diminishing returns. Even with scaling laws holding up, the cost of compute, the limits of available data, and various other slowdowns mean that those relying on foundational models and scaling alone will max out soon enough, if they haven't already. GPT-5 might be as good as a static foundational model can get before improvements become difficult or even meaningless to discern.

  • AI winter will never happen. However, bad luck and desperate over-hype can certainly cause an "AI autumn." My definition of an AI winter relies on a lack of valuable results leading to reduced funding, not just funding being reduced by itself. The AI bubble we're in very well could (and ought to) pop, and I would still deny that is an "AI winter," because GPT-4-class models can actually provide material value. In 1974 and 1988, GOFAI and expert systems provided no value, or an outrageously minuscule amount. The last time I felt an AI winter was possible was after IBM Watson failed to provide any meaningful benefit to users or companies in the mid-2010s; had "Attention Is All You Need" never been published, that might have triggered one. For an AI winter to occur, funding would not just have to drop: material advancements and papers would have to all but cease publishing for months or years at a time, which would require all AI outputs to be completely and utterly useless, and that is obviously not the case. Though perhaps an "AI nuclear winter" could occur if world governments clamped down hard on AI research and forced data scientists to cease publishing anything new.


First, about First-generation AGI and a "universal task automation machine"

First-generation AGI (or weak AGI) is one of those terms I made up a while ago (alongside artificial expert intelligence, frozen AGI, and proto-AGI) to navigate that bizarro peripheral area between ANI and AGI that had long gone unexplored and ignored. It describes a type of AI that possesses the universal capabilities of AGI without some of the more sci-fi trappings of artificial personhood.

Then I was reminded of Isaac Arthur and his explanation that automation is thought of wrongly, which is why we keep misinterpreting it. AI and robots don't automate jobs; they automate tasks. Consider this: since 1900, how many jobs have actually been fully automated? Not that many. Elevator operators, phone operators (to an extent), human computers, bank tellers (to an extent), and a few others. Yet how many tasks have been automated? Exponentially more, to the point we often don't notice it. Think of cashiers: money counting and physically scanning items have long been automated, but the job itself remains. Self-checkout and cashierless stores have had only limited success. They might have more with new advancements, but that's not the point: the point is that mechanization and automation affect tasks rather than whole jobs, which is why the Automation Revolution seems simultaneously nonexistent and constantly eating away at jobs.

Running with this led me to consider the invention of a Universal Task Automation (UTA) machine as an alternative interpretation of AGI.

Think of the UFO phenomenon and how it was recently rechristened to "UAPs" to take the phenomenon more seriously and reduce the connotations of alien life and New Age American mythology attached to "UFO." Perhaps UTA machine could have been that for AGI, if I felt there was enough time for it. UTA machines in my head have all the predicted capabilities of AGI without having to also factor in ideas of artificial consciousness or sapience, reverse engineering the brain, or anything of that sort.

Generally, foundational models match what I expected out of a UTA machine, but they are still limited at the moment. People have said that GPT-3, GPT-4, Gemini Ultra, and most recently Claude 3 Opus are AGI, or have debated the point. I say they both are and aren't.

The phenomenon people are describing as AGI is the foundational model architecture, which indeed can be considered a form of "general-purpose AI." However, there are a few things these models lack that I feel are important criteria for the jump from "general-purpose AI" to "artificial general intelligence."

Foundational models + Concept Search + Tree Search + Agent Swarms is the most likely path to AGI.

Concept search involves techniques for efficiently searching and retrieving relevant information from the vast knowledge captured in foundational models. It goes beyond keyword matching by understanding the semantic meaning and relationships between concepts. Advanced methods like vector search, knowledge graphs, and semantic indexing enable quick and accurate retrieval of information relevant to a given query or context. That said, a "concept" within an AR-LLM is a stable pattern of activation across the neural network's layers, forming a high-dimensional numerical representation of what humans understand as an "idea" or "concept." This representation is a projection of the original thought or concept, which is encoded in human language, itself a lower-dimensional projection of the underlying ideas.
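As a toy illustration of the retrieval half of this (my own sketch, not any lab's method), cosine similarity over embedding vectors is the core trick; here a hashed bag-of-words stands in for a real learned embedding model:

```python
import numpy as np

def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a learned embedding model: hash each token into a
    fixed-size vector, then L2-normalize."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def top_k(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus entries by cosine similarity to the query vector."""
    q = toy_embed(query)
    sims = [float(toy_embed(doc) @ q) for doc in corpus]
    return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

docs = ["apples are a kind of fruit",
        "GPUs accelerate matrix math",
        "oranges are citrus fruit"]
print(top_k("which things are fruit?", docs))
```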

Multi-modal models, which can process and generate information across different modalities (text, images, audio, etc.), have the capability to transfer information between these lower- and higher-dimensional spaces. The process of crafting input tokens to guide the model towards desired outputs is often referred to as "prompt engineering."

The capacity of a neural network (biological, digital, or analog) to maintain and access multiple coherent numerical representations simultaneously, without losing their distinct meanings or relationships, is what we perceive as "problem-solving" or "general intelligence." The more "concepts" or "ideas" a network can handle concurrently, the more accurately it models the mechanisms of problem-solving and intelligence, including social intelligence.

Tree search algorithms explore possible action sequences or decision paths by constructing a search tree. Each node represents a state, and edges represent actions leading to new states. Techniques like depth-first search, breadth-first search, and heuristic search (e.g., A*) navigate the tree to find optimal solutions or paths. Tree search enables planning, reasoning, and problem-solving in complex domains.
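For the unfamiliar, here's a generic A* in a few lines; a toy sketch of the search pattern itself, nothing resembling what a lab would actually bolt onto an LLM:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Expand the node with the lowest cost-so-far + heuristic estimate."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in neighbors(node):
            heapq.heappush(frontier, (cost + step_cost + heuristic(nxt, goal),
                                      cost + step_cost, nxt, path + [nxt]))
    return None

# Toy domain: shortest path on a 5x5 grid with unit step costs.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(a_star((0, 0), (4, 4), grid_neighbors, manhattan))
```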

Demis Hassabis has said that tree search is a likely path towards AGI as well:

https://www.youtube.com/watch?v=eqXfhejDeqA

Agent swarms involve multiple autonomous agents working together to solve complex problems or achieve goals. Each agent has its own perception, decision-making, and communication capabilities. They coordinate and collaborate through local interactions and emergent behavior. Swarm intelligence enables decentralized problem-solving, adaptability, and robustness. Agent swarms can collectively explore large search spaces and find optimal solutions.
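The simplest swarm pattern is just many independent proposers plus an aggregation rule (majority vote here). A toy sketch of the idea, with random guessers standing in for LLM-backed agents:

```python
import random
from collections import Counter

def agent_propose(agent_id: int, task: str) -> str:
    """Stand-in for an LLM-backed agent; here it just guesses."""
    return random.choice(["A", "B", "B", "C"])  # biased so a majority emerges

def swarm_answer(task: str, n_agents: int = 9) -> str:
    """Independent proposals aggregated by majority vote. Real swarms add
    roles, messaging, and iterative refinement on top of this skeleton."""
    votes = Counter(agent_propose(i, task) for i in range(n_agents))
    return votes.most_common(1)[0][0]

print(swarm_answer("pick the best option"))
```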

Andrew Ng recently showcased how important agents are to boosting the capabilities of LLMs:

https://twitter.com/AndrewYNg/status/1770897666702233815

Today, we mostly use LLMs in zero-shot mode, prompting a model to generate final output token by token without revising its work. This is akin to asking someone to compose an essay from start to finish, typing straight through with no backspacing allowed, and expecting a high-quality result. Despite the difficulty, LLMs do amazingly well at this task!

...

GPT-3.5 (zero shot) was 48.1% correct. GPT-4 (zero shot) does better at 67.0%. However, the improvement from GPT-3.5 to GPT-4 is dwarfed by incorporating an iterative agent workflow. Indeed, wrapped in an agent loop, GPT-3.5 achieves up to 95.1%.
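A minimal sketch of that iterative workflow (my own toy illustration, not Ng's actual code), with `llm` as any prompt-to-text callable:

```python
def agent_loop(llm, task: str, rounds: int = 2) -> str:
    """Draft, self-critique, revise: the agentic workflow contrasted with
    one-pass zero-shot generation."""
    draft = llm(f"Complete this task:\n{task}")
    for _ in range(rounds):
        critique = llm(f"Task: {task}\nDraft:\n{draft}\nList concrete flaws:")
        draft = llm(f"Task: {task}\nDraft:\n{draft}\nFlaws:\n{critique}\n"
                    f"Rewrite the draft, fixing every flaw:")
    return draft

# Dummy model so the sketch runs; swap in a real API client in practice.
print(agent_loop(lambda p: f"[output for: {p[:24]}...]", "write a haiku"))
```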

Needless to say, we are ill-prepared for the convergence of these methods.

Agentic AI alone is likely going to lead to extraordinary advancements.

Take this AI-generated image of an apple. A friend sent it to me, and I personally am deeply skeptical of all the details (a lot of "anonymous, as-yet-unannounced" things in it), but the benefit-of-the-doubt explanation is that this apple was fully drawn by an AI.

But not by diffusion, or by GANs, or any prior method. Rather, the anonymous researcher who had this drawn had instructed an experimental agent workflow, powered by an as-yet-unannounced LLM, to generate an image of an apple (allegedly just "give me a picture of an apple"), expecting the agent would use Midjourney to do so (see: https://www.youtube.com/watch?v=_p6YHULF9wA); you can already use early autonomous agents to drive tools such as Midjourney or ChatGPT.

Instead, this particular agent interpreted the researcher's command a bit literally: it searched up what apples look like, then opened an art program and manually drew the apple, paintbrush tool, fill tool, and all. That image is the final result.

Now again, I'm skeptical of the whole story and none of it is verified, but it also tracks closely with what I've been expecting out of agentic AI for some time now. In a "trust, but verify" sort of way, I don't fully believe the story because it matches my expectations too closely, but nothing mentioned is explicitly beyond our capabilities.

Indeed, "agent-drawn AI art" is one of the things I've been passingly anticipating/fearing for months, as it almost completely circumvents every major criticism with contemporary diffusion-generated AI art, including the fact that it was allegedly manually drawn, and even drawn after the agents autonomously Googled the appearance of an apple. It just seems too humanlike, too "good," (and too convenient, because that also completely circumvents the "it's not learning like humans, it's illegally scraping data" argument) but again, that only seems unrealistic to those who don't follow the burgeoning world of AI agents.

Again, see this:

https://www.youtube.com/watch?v=Xd5PLYl4Q5Q

Single-agent workflows are like the "spark of life" for current models, and agent swarms are going to be what causes some rather spooky behaviors to emerge.

And that underscores the larger point: current expectations of AI are driven by historical performance and releases. Most people expect GPT-5-class models to essentially be GPT-4++, but with magical "AGI" powers, as if prompting GPT-5 will give you whole anime and video games without your really knowing how. We're used to how LLMs and foundational models work and extrapolate that into the future.

In fact, GPT-3 (as in the original 2020 GPT-3) with a suitably capable agent swarm may match a few of the capabilities we expect from GPT-5. Perhaps there is a foundational model "overhang" we were blinded to by the lack of autonomous capabilities (plus the cost of inference, which makes running agents on the larger models prohibitive).

This is what I believe will lead to AGI, and likely in very short order. We are not at all prepared for this, again, because we're expecting the status quo (as changing and chaotic as it already is) to hold. The rise of agentic AI alone is going to hit the unprepared and unknowing like a tsunami, because it will likely feel as though AI capabilities leapt five years overnight.

This is a major reason why I say an AI winter is not likely to happen. The claims that an AI winter is about to happen largely rest on the claims that foundational models have reached a point of diminishing returns and that current AI tech is overhyped. I still feel the ceiling for foundational model capabilities is higher than what we see now, and that there's at least another generation's worth of improvement before we start running into actual diminishing returns. Those saying "the fact that no one has surpassed GPT-4 in the past year is proof GPT-4 is the peak" forget that there was a time when GPT-3 had no meaningful competitor or successor for three years.

Generally, what I have noticed is that no one seems interested in genuinely leapfrogging OpenAI, but rather in catching up with and competing against their latest model. This has been the case since GPT-2: after its release in early 2019, we spent an entire year seeing nothing more than other GPT-2-class models trickle out, such as Megatron and Turing-NLG, which were technically larger but not much more impressive, right up until GPT-3's launch eclipsed them all. And despite the three-year gap between GPT-3 and GPT-4, few seemed interested in surpassing GPT-3, with the largest model (PaLM) never even seeing a formal release and most others staying within GPT-3's size. Essentially, when GPT-4 was released, everyone was still playing catch-up with GPT-3, and they have done the same thing with 4. Claude 3 surpassing GPT-4 is no different from the time Turing-NLG surpassed GPT-2: it's all well and good, but ultimately GPT-5 is the one that's going to set the standard for the next class of models. Even Gemini 1.5 Pro and Ultra don't seem materially better than GPT-4; they possess much greater RAG and context windows but otherwise sit within the 4-class of reasoning and capability. If nothing else, it seems everything will converge in such a way that GPT-5 will not be alone for long.

This is why I'm not particularly concerned about an AI winter resulting from any sort of LLM slowdown. LLMs tapping out would only be a concern if GPT-5 came out and was only marginally better than Claude 3 Opus, and we won't know until we know.

And again, that's only talking about basic foundational models with their very limited agency. If OpenAI updated GPT-4 so that you could deploy autonomous agents with it, we'd essentially have something far better than a model upgrade to GPT-4.5 (this is what I originally assumed the Plug-Ins and the GPT Store were going to be, which is why my earlier assumptions about those two things were so glowingly optimistic).

Point is, I simply feel AI has crossed a competency threshold that prevents any sort of winter from occurring. My definition of an AI winter relies on a lack of capability causing a lack of funding. In the 1960s and early 70s, researchers were promising AIs as good as we have now, on computers that were electric bricks and with a total store of digital information that could fit inside a smartphone charger's CPU. The utter lack of power, data, and capability meant that AI could not achieve even the least impressive accomplishments besides raw calculation (and even that required a decent build). If the researchers had accomplished even 1% of their goals, that would have been enough for ARPA not to completely eviscerate their funding, as at least something could have served as a seed to sprout into a useful function or tool.

In the 80s, things were different: computers were powerful enough to accomplish at least 1% of the aims of the fifth-generation computer project, and the resulting winter did not kill the field as completely as the first one had. The promise then wasn't even necessarily for AGI, but rather for AI models that bear a strong resemblance to modern foundational models. Again, something not possible without vastly more powerful computers and vastly more data.

Here, now, in the 2020s, the fear/hope of an AI winter essentially amounts to this: the general-purpose statistical AIs we have now, which have been widely adopted and used by millions and whose deficiencies are more or less problems of scale and a lack of autonomous agency, are not the superintelligent godlike entities promised by Singularitarians; therefore, once investors wise up, the entire field will magically evaporate, and everyone currently using or even relying on GPT-4 will realize how worthless the technology is and abandon it, along with the entire suite of AI technologies available now. While I think something akin to an "AI autumn" is very much possible if companies realize that expectations outstrip current capability, I feel those saying an AI winter is imminent are mostly hoping to validate their skepticism of the current paradigm.


This is dragging on too long, so reread the hot takes at the top if you want a TLDR.

7 Comments
2024/04/01
22:00 UTC

27

Insilico Medicine presents progress of 5 novel AI cancer drugs at AACR

For those who are interested in AI drug development

https://www.eurekalert.org/news-releases/1039605

0 Comments
2024/04/01
21:04 UTC


81

You too can be a prophet, next time.

I was reading the recent thread about some "Jimmy" and found it very entertaining. I have no idea who Jimmy and the rest are as I don't follow this kind of drama, but there was a comment that tickled me:

He posted an image that said "sora.openai.com" before the 15th. That along with everything else is proof enough for me.

If that's all the effort it takes to get faith points, here, let me spill the beans on this little "magic" trick for you.

Steps:

1. Go to this site: https://crt.sh
2. Type "openai.com" in the search box and wait for a bit.
3. TADA! You can now see every subdomain that has an SSL cert.

Sora subdomain certificate, created on the 13th:

12069429885 | Logged: 2024-02-13 | Not before: 2024-02-13 | Not after: 2024-05-13 | sora.openai.com

Explanation:

  • Every production site needs an SSL certificate.
  • You need to get this issued before you go live.
  • For legitimate security reasons, when a certificate authority issues an SSL certificate, they announce it into a huuuuuuge public log.
  • There are free services that index and monitor said log.

This way you can be aware of upcoming services that have not been announced simply by checking the certificate issuance logs.
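If you'd rather script it, crt.sh also exposes a JSON output mode (append `&output=json` to the query, as I recall); a minimal sketch using only the standard library:

```python
import json
import urllib.request

def subdomains(domain: str) -> set[str]:
    """Pull certificate-transparency entries for *.domain from crt.sh and
    collect every hostname the certs cover."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    names: set[str] = set()
    for entry in entries:
        # name_value holds newline-separated hostnames for each cert
        names.update(entry.get("name_value", "").splitlines())
    return names

print(sorted(subdomains("openai.com")))
```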

Congratulations, you can now join the mysterious race of prophets and amaze your circle with your transcendental sources next time.

17 Comments
2024/04/01
19:45 UTC
