/r/singularity

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts that it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.

On the Technological Singularity

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts that the singularity will occur around 2045, whereas Vinge predicts some time before 2030.

Proponents of the singularity typically postulate an "intelligence explosion," in which superintelligences design successive generations of increasingly powerful minds, a process that might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.

Resources
Posting Rules

1) On-topic posts

2) Discussion posts encouraged

3) No Self-Promotion/Advertising

4) Be respectful

Check out /r/Singularitarianism and the Technological Singularity FAQ

/r/singularity

2,256,369 Subscribers

0

People are posting this guy as if he’s reputable… but he posted this in Nov 2023, so for some reason he thought GPT-5 would come just 8 months after GPT-4…

12 Comments
2024/04/23
20:22 UTC

0

"The probability of Civilization Collapse by 2100 is ZERO, according to the scientific consensus" [Minute 14:08]

11 Comments
2024/04/23
19:51 UTC

8

Autonomous browser agents: Where are we?

What is the current state of affairs for having autonomous browser agents? There was a lot of buzz about a year ago from AutoGPT and BabyAGI but they didn't seem to pan out. Is anyone even close to having computers do our tedious/repetitive tasks for us yet?
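For anyone curious what these agents actually do under the hood, here is a minimal sketch of the loop most of them (AutoGPT-style) run, assuming Playwright for browser control; `ask_llm` is a hypothetical placeholder for whatever model API you would actually call, not a real library function:

```python
# Minimal sketch of an autonomous browser-agent loop (illustrative only).
# Assumes Playwright for browser control; ask_llm is a hypothetical stub
# standing in for a real LLM call that would choose the next action.
from playwright.sync_api import sync_playwright

def ask_llm(goal: str, page_text: str) -> dict:
    # Placeholder: a real agent would send the goal plus the page contents to
    # an LLM and parse its reply into an action. Here we just stop immediately.
    return {"action": "done"}

def run_agent(goal: str, start_url: str, max_steps: int = 10) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(start_url)
        for _ in range(max_steps):
            action = ask_llm(goal, page.inner_text("body"))
            if action["action"] == "click":
                page.click(action["selector"])        # CSS selector chosen by the model
            elif action["action"] == "type":
                page.fill(action["selector"], action["text"])
            elif action["action"] == "goto":
                page.goto(action["url"])
            else:                                      # "done" or anything unrecognized
                break
        browser.close()

run_agent("find the cheapest flight to Lisbon", "https://example.com")
```

The loop itself is trivial; the hard part in practice is getting the model to pick reliable selectors and to recover from unexpected page states, which is largely why the early projects didn't pan out.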

12 Comments
2024/04/23
19:45 UTC

0

Gas for suspended animation?

Could gas be used for suspended animation?

5 Comments
2024/04/23
18:36 UTC

0

My thoughts on why many people here should pump the brakes on the “imminent” arrival of AGI.

I know this sub is intended to be more fun than some other more serious ones, so the wilder predictions and fantasies are often all in good fun. I’m not being really critical here or anything, but just stating my opinion on why so many here think AGI is coming soon, and why that’s probably not the case.

1.) If we’re not careful, we tend to greatly lower our evidential bar to believe something that we want to be true. If I were to claim “There is good reason to believe AGI is coming within 2 years” here, I would not be grilled or asked to provide a solid justification for that view to the degree I would if I said “There is good reason to believe AGI is not coming in our lifetimes.”

I maintain that this is mostly just because people want AGI and so they are quick to believe positive predictions and slow to believe negative predictions about it.

2.) Conspiratorial/hype based thinking:

This sub is notorious for being almost on the level of places like /r/conspiracy in the way that uncertainty about something is almost always taken as a license to dream up fantastic scenarios. A good example is Sam Altman being briefly fired from OpenAI. It seems very clear, at this point, that he was fired because he was making business deals without board approval. But that is just relatively normal executive-level business stuff, and not too exciting. This sub, instead of guessing something like that, ran wild with ideas of huge breakthroughs that scared the board so much they had to fire Altman as a way to slow down progress, or to allow time to put safety measures in place against this terrifying new advancement. Ilya saw something that terrified him so much he turned on Sam. Q* is AGI that can teach itself basic math, etc.

None of these things had any decent evidential basis to be believed. But this sub operates more on “how cool would it be if…” style logic than on “is there any evidence that…” style logic. This sort of thinking also extends to the interpretation of cryptic tweets from Altman and others in the field, who are clearly aware of how their words are received and are more than willing to stoke the flames of wild speculation because it drives interest and hype for their products.

3.) LLMs are likely not a pathway to AGI, and thus new breakthroughs are needed.

Most of the people working on AGI right now (even the hypebeasts like Altman) will state that transformer-based models like our modern LLMs are not going to be scaled up to AGI, because they just fundamentally aren’t that sort of thing. The leaders of most of the major players (Google, Meta, OpenAI, Anthropic, etc.) have all said that we still need novel breakthroughs to get from our current point to AGI.

But here is the key thing: the timing of major breakthroughs is unpredictable. Unless you already have a really good theoretical basis for one (and there is no indication we do), you just can’t know whether one is coming soon or not. So people who say “AGI within 5 years” are just assuming one (or multiple) major breakthroughs will occur. But, again, there is no good reason to assume that. It’s just a blind assumption.

TLDR:

1.) Users on this sub tend to lower their evidential bar for things they want to be true.

2.) People here tend to allow their imaginations to run wild anytime they are met with uncertainty.

3.) People make baseless assumptions that progress will continue in a linear (or exponential) manner and that novel major breakthroughs will occur quickly.

These things all coincide to produce a sub in which hype is running wild and expectations are pretty untethered from the reality on the ground.

17 Comments
2024/04/23
18:11 UTC

5

Do AI tools have the ability to correct speech difficulties?

I'm referring to some of the new voice-cloning AIs, and more specifically Microsoft's VASA-1 paper, titled "Lifelike Audio-Driven Talking Faces Generated in Real Time," which includes short video clip examples.

VASA-1 can take a single photo of an individual and a short voice sample and generate, in real time, video of that person speaking in their own voice and saying anything you want them to.

It also has controls for distance from the camera and which way the person is looking, as well as for adjusting their tone - for example, angry, happy, serious, etc.

My question is whether AI tools like this would be able to correct the speech of a person who has a stutter or lisp.

I'm a big fan of youtuber Isaac Arthur, and was thinking about this while listening to him, because he has a speech condition known as a rhotacism, which affects his ability to pronounce the letter "r".

I'm not suggesting he needs to, or should, do this. I like his voice and over the years his show and his voice have given me comfort and knowledge. Of course that's his choice either way.

But I believe other people with various speech difficulties might find something like this useful, and I'd like to know if it could work.

I'm hoping someone with a much better understanding of how things like VASA-1 and other voice cloning tools do what they do, could explain whether or not this would be an easy or challenging thing for them to do.

I should add that I have a personal interest in this, beyond simple curiosity.

0 Comments
2024/04/23
17:56 UTC

58

Fire-breathing robot dog that can torch anything in its path.

32 Comments
2024/04/23
17:32 UTC

29

Apple said to be developing its own AI server processor using TSMC's 3nm process

10 Comments
2024/04/23
17:06 UTC

67

Generative A.I. Arrives in the Gene Editing World of CRISPR

Much as ChatGPT generates poetry, a new A.I. system devises blueprints for microscopic mechanisms that can edit your DNA.

Generative A.I. technologies can write poetry and computer programs or create images of teddy bears and videos of cartoon characters that look like something from a Hollywood movie.

Now, new A.I. technology is generating blueprints for microscopic biological mechanisms that can edit your DNA, pointing to a future when scientists can battle illness and diseases with even greater precision and speed than they can today.

Described in a research paper published on Monday by a Berkeley, Calif., startup called Profluent, the technology is based on the same methods that drive ChatGPT, the online chatbot that launched the A.I. boom after its release in 2022. The company is expected to present the paper next month at the annual meeting of the American Society of Gene and Cell Therapy.

Much as ChatGPT learns to generate language by analyzing Wikipedia articles, books and chat logs, Profluent’s technology creates new gene editors after analyzing enormous amounts of biological data, including microscopic mechanisms that scientists already use to edit human DNA.

These gene editors are based on Nobel Prize-winning methods involving biological mechanisms called CRISPR. Technology based on CRISPR is already changing how scientists study and fight illness and disease, providing a way of altering genes that cause hereditary conditions, such as sickle cell anemia and blindness.

Previously, CRISPR methods used mechanisms found in nature — biological material gleaned from bacteria that allows these microscopic organisms to fight off germs. Profluent's editors, by contrast, were generated by its A.I. rather than found in any organism.

“They have never existed on Earth,” said James Fraser, a professor and chair of the department of bioengineering and therapeutic sciences at the University of California, San Francisco, who has read Profluent’s research paper. “The system has learned from nature to create them, but they are new.”

The hope is that the technology will eventually produce gene editors that are more nimble and more powerful than those that have been honed over billions of years of evolution.

On Monday, Profluent also said that it had used one of these A.I.-generated gene editors to edit human DNA and that it was “open sourcing” this editor, called OpenCRISPR-1. That means it is allowing individuals, academic labs and companies to experiment with the technology for free.

A.I. researchers often open source the underlying software that drives their A.I. systems, because it allows others to build on their work and accelerate the development of new technologies. But it is less common for biological labs and pharmaceutical companies to open source inventions like OpenCRISPR-1.

Though Profluent is open sourcing the gene editors generated by its A.I. technology, it is not open sourcing the A.I. technology itself.

The project is part of a wider effort to build A.I. technologies that can improve medical care. Scientists at the University of Washington, for instance, are using the methods behind chatbots like OpenAI’s ChatGPT and image generators like Midjourney to create entirely new proteins — the microscopic molecules that drive all human life — as they work to accelerate the development of new vaccines and medicines.

(The New York Times has sued OpenAI and its partner, Microsoft, on claims of copyright infringement involving artificial intelligence systems that generate text.)

Generative A.I. technologies are driven by what scientists call a neural network, a mathematical system that learns skills by analyzing vast amounts of data. The image creator Midjourney, for example, is underpinned by a neural network that has analyzed millions of digital images and the captions that describe each of those images. The system learned to recognize the links between the images and the words. So when you ask it for an image of a rhinoceros leaping off the Golden Gate Bridge, it knows what to do.

Profluent’s technology is driven by a similar A.I. model that learns from sequences of amino acids and nucleic acids — the chemical compounds that define the microscopic biological mechanisms that scientists use to edit genes. Essentially, it analyzes the behavior of CRISPR gene editors pulled from nature and learns how to generate entirely new gene editors.

“These A.I. models learn from sequences — whether those are sequences of characters or words or computer code or amino acids,” said Profluent’s chief executive, Ali Madani, a researcher who previously worked in the A.I. lab at the software giant Salesforce.
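To make the "learning from sequences" idea concrete, here is a toy, purely illustrative sketch: a tiny autoregressive next-token model over the 20-letter amino-acid alphabet. It uses a small LSTM rather than the large transformer a real protein language model would use, and it is "trained" on a single made-up peptide rather than real CRISPR data; none of the names below come from Profluent.

```python
# Toy illustration (not Profluent's actual model): an autoregressive
# next-token model over protein sequences written in the standard
# 20-letter amino-acid alphabet. Real protein language models are large
# transformers trained on millions of sequences; this only shows the idea.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
stoi = {ch: i for i, ch in enumerate(AMINO_ACIDS)}

class TinyProteinLM(nn.Module):
    def __init__(self, vocab=len(AMINO_ACIDS), dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):                 # tokens: (batch, length)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)                    # logits for the next residue

# One hypothetical training sequence (a made-up peptide, not a real editor).
seq = torch.tensor([[stoi[c] for c in "MKTAYIAKQR"]])
model = TinyProteinLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

logits = model(seq[:, :-1])                    # predict residue i+1 from residues <= i
loss = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   seq[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
print(float(loss))
```

The real system is vastly larger and differs in architecture, but the underlying idea the article describes, predicting the next element of a sequence after training on huge amounts of sequence data, is the same.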

Profluent has not yet put these synthetic gene editors through clinical trials, so it is not clear if they can match or exceed the performance of CRISPR. But this proof of concept shows that A.I. models can produce something capable of editing the human genome.

Still, it is unlikely to affect health care in the short term. Fyodor Urnov, a gene editing pioneer and scientific director at the Innovative Genomics Institute at the University of California, Berkeley, said scientists had no shortage of naturally occurring gene editors that they could use to fight illness and disease. The bottleneck, he said, is the cost of pushing these editors through preclinical studies, such as safety, manufacturing and regulatory reviews, before they can be used on patients.

But generative A.I. systems often hold enormous potential because they tend to improve quickly as they learn from increasingly large amounts of data. If technology like Profluent’s continues to improve, it could eventually allow scientists to edit genes in far more precise ways. The hope, Dr. Urnov said, is that this could, in the long term, lead to a world where medicines and treatments are tailored to individual people even faster than we can manage today.

“I dream of a world where we have CRISPR on demand within weeks,” he said.

Scientists have long cautioned against using CRISPR for human enhancement because it is a relatively new technology that could potentially have undesired side effects, such as triggering cancer, and have warned against unethical uses, such as genetically modifying human embryos.

This is also a concern with synthetic gene editors. But scientists already have access to everything they need to edit embryos.

“A bad actor, someone who is unethical, is not worried about whether they use an A.I.-created editor or not,” Dr. Fraser said. “They are just going to go ahead and use what’s available.”

25 Comments
2024/04/23
14:55 UTC

149

Gemini 1.5 Pro is now 2nd on the Arena leaderboard, a huge win for Google.

94 Comments
2024/04/23
08:28 UTC

0

OpenAI releases new Sora video

12 Comments
2024/04/23
06:29 UTC

101

Phi-3, a small 3.8B model, nears GPT-3.5 on major benchmarks. There are also 7B and 14B Phi-3 models.

20 Comments
2024/04/23
06:25 UTC

892

"Today, we announce the successful editing of DNA in human cells with gene editors fully designed with AI. Not only that, we've decided to freely release the molecules under the @ProfluentBio OpenCRISPR initiative."

203 Comments
2024/04/23
06:10 UTC

38

This Chip Could Change Computing Forever - ColdFusion

5 Comments
2024/04/23
03:55 UTC

207

Phi-3 Technical Report, Impressive!

We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide some initial parameter-scaling results with 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini (e.g., respectively 75% and 78% on MMLU, and 8.7 and 8.9 on MT-bench).

The Phi-3 technical report was just released, along with some benchmarks, and it looks really solid. The Phi-3 Medium model gets 78 on MMLU, which is quite good for a model with only 14B params. It pretty much outperforms GPT-3.5 1106 on all of these benchmarks. Find out more here:

https://arxiv.org/pdf/2404.14219.pdf

https://preview.redd.it/hlnyhgjk35wc1.png?width=1266&format=png&auto=webp&s=9bf7517ae7a5df4a72812a41350624a5d3041fe6
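For anyone who wants to poke at it, here is a minimal sketch (not an official example) of loading a Phi-3-mini-class checkpoint with Hugging Face transformers; the model id below is the name used at release, so treat it as an assumption and check the model card before relying on it:

```python
# Minimal sketch, not an official example: loading a Phi-3-mini-class model
# with Hugging Face transformers. The checkpoint name is assumed to be the
# release-time id ("microsoft/Phi-3-mini-4k-instruct"); verify on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"   # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",       # 3.8B params fit comfortably on a single consumer GPU
    device_map="auto",
    trust_remote_code=True,   # the release-time checkpoint shipped custom modeling code
)

messages = [{"role": "user", "content": "Summarize what MMLU measures in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt",
                                       add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that "small enough to be deployed on a phone" refers to quantized on-device deployment, which would typically go through a runtime such as llama.cpp or ONNX rather than the transformers path shown here.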

55 Comments
2024/04/23
02:25 UTC
