/r/agi

Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as "strong AI", "full AI" or as the ability of a machine to perform "general intelligent action".

"What is AGI?" from MRI

AGI Society

Topics to research:

  • Strong AI
  • AGI
  • Neuroscience
  • Human Level Intelligence
  • Computational Models of Mind

Related subreddits:

/r/neuralnetworks /r/artificial /r/machinelearning /r/OpenCog /r/causality

If you'd like an invite to the AGI Slack channel, PM nickb with your email.

/r/agi

45,495 Subscribers

8

Microsoft Magentic-One: A simpler multi-AI framework

Microsoft released Magentic-One last week, an extension of AutoGen for multi-AI-agent tasks with a major focus on task execution. The framework looks good and handy; not the best, to be honest, but worth a try. You can check more details here: https://youtu.be/8-Vc3jwQ390
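
For a feel of the wiring, here is a minimal sketch of a Magentic-One style team. It assumes the MagenticOneGroupChat API that later AutoGen releases expose through the autogen-agentchat and autogen-ext packages; module and class names may differ from the version shown in the video, so treat this as illustrative rather than canonical:

```python
# Minimal sketch of a Magentic-One style team on top of AutoGen.
# Assumes the autogen-agentchat / autogen-ext packages from a later
# AutoGen release; names may differ by version.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import MagenticOneGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    # Task-specific agents (web surfer, coder, terminal, ...) plug in here.
    assistant = AssistantAgent("Assistant", model_client=model_client)
    # The orchestrator plans the task, delegates steps, and tracks progress.
    team = MagenticOneGroupChat([assistant], model_client=model_client)
    await Console(team.run_stream(task="Summarize this week's AutoGen release notes."))

asyncio.run(main())
```

The design point Magentic-One emphasizes is that single orchestrator-plus-specialists loop: one agent owns the plan and a progress ledger, and the others only execute delegated steps.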

2 Comments
2024/11/13
05:00 UTC

8

sutskever says we've reached a wall with data scaling. is it time to bring in the LSD?

first, assuming we can reach asi without further scaling, reaching this wall is great news for open source ai and for the little guys who can't afford the very expensive training and compute that massive llms require. but that's another story. the point here is that it seems we need powerful new ideas if we're to reach agi and beyond.

from the outset i want to emphasize that i am not advocating the use of lsd illegally. i'm advocating for an amendment to current laws that would make lsd and similar psychedelics legally available to ai engineers under controlled medical conditions that would render their use relatively safe. here's the rationale.

steve jobs "famously attributed some of his creative insights to his experiences with LSD...and believed they contributed to his innovative thinking and approach to technology design."

francis crick co-discovered the dna double-helix. "Some sources suggest that he credited LSD with helping him gain insights into the molecular structure of life."

kary mullis, who won a nobel prize for inventing the polymerase chain reaction (pcr) method in DNA replication, "openly discussed his experiences with LSD and credited it with aiding his scientific creativity."

"The Beatles’ experimentation with LSD in the mid-1960s had a profound impact on their music and style, contributing to iconic albums like Sgt. Pepper’s Lonely Hearts Club Band and *Magical Mystery Tour."

jimi hendrix's use of LSD has been said to explain his becoming the most original, creative and advanced lead guitarist of the 1960s (levels above the others), and perhaps of all time.

alan watts, a philosopher and writer on zen buddhism, used lsd and other psychedelics to access higher states of consciousness.

there are probably many other such examples that we will never hear about because of the illegal status and longstanding unscientific bias against lsd.

so, the question before us is whether the potential benefits of lsd to achieving asi are worth the minimal risks that would be incurred by ai engineers legally using the drug -- under controlled lab settings -- for brainstorming ai engineering solutions.

there has been recent talk of elon musk becoming the new u.s. ai czar. "In Walter Isaacson's biography of Musk, he describes Musk's openness to using psychedelics and recounts that Musk tried LSD as a young man. Musk has also discussed his curiosity about psychedelics publicly, noting how these substances have affected his perception and creativity."

has the time arrived for a grand experimental marriage of ai research and lsd as a way to fast-track our path to agi and asi? it seems that the need is there, the risks can be rendered minimal, and the time is right.

your thoughts?

(all quotes chatgpt-4)

41 Comments
2024/11/13
03:28 UTC

4

Context has made Autopilot, showing SoTA recall and general office work

0 Comments
2024/11/12
01:41 UTC

0

the major "bad actor" grievances that some fear will result in ai-caused civilization collapse

to better understand the extent and dimensions of these threats i prompted chatgpt-4 as follows:

in mustafa suleyman's 2023 book, the coming wave, he suggests that ai containment may not be possible because "bad actors" would use increasingly intelligent ai to redress grievances against groups of people like rich nations, corrupt political parties, intolerant religious institutions and other dangerous groups like climate change deniers.

some examples of the kind of grievances that these actors may attempt to redress are:

  1. the economic inequality and indifference to the plight of the poor in africa that results in 13 thousand african children under the age of five dying daily of an extreme poverty that rich countries could quite easily end if they cared to.

  2. the threat of runaway global warming that would by 2124 kill several billion people as extreme heat and resource wars make the planet largely uninhabitable.

  3. just as americans in the 1800s awoke to the injustice and cruelty of slavery, leading to the u.s. civil war, young people today may awaken to the injustice and far greater cruelty that 80 billion farm animals endure each year because our global factory farm system does not recognize cows, chickens, pigs, sheep and other animals as sentient beings that deserve protection from the horrific abuse and torture that has become the norm in factory farms.

  4. today our world aborts as many unborn lives as die of every other cause of death combined. right-to-life groups may come to view these deaths as an injustice against those lives that is far more immoral than slavery.

  5. people may view the israeli genocide against the palestinians in gaza, paid for with billions of american dollars, as both a religious war by jews against non-jews and as a war by rich nations and groups against poor and militarily defenseless populations.

what other major grievances like the above might induce individuals, groups, and threatened nations to use ai to launch massive biological, cyber and nuclear attacks against nations and groups of people deemed responsible for those grievances?

here is chatgpt-4's response:

In The Coming Wave, Mustafa Suleyman suggests that powerful AI, potentially in the hands of "bad actors," could be used to redress deep-rooted grievances, especially as disillusionment with traditional systems and institutions grows. Alongside those you've noted, here are other significant grievances that might motivate drastic actions:

  1. Mass Surveillance and Privacy Erosion: The pervasive surveillance of citizens by governments and corporations can generate backlash, particularly if used to suppress dissent or marginalize communities. This might lead groups to view AI as a tool to disrupt surveillance systems or punish entities seen as perpetuating oppression and privacy violations.

  2. Colonial Legacies and Resource Exploitation: Many regions, particularly in the Global South, still suffer from the economic and environmental impacts of historic and modern colonialism. Wealthier countries and corporations often exploit resources with limited regard for local welfare, leading to entrenched poverty, deforestation, and environmental degradation. AI could be weaponized by those who seek to rebalance these historic injustices, targeting entities involved in these exploitations.

  3. Healthcare Inequities: The COVID-19 pandemic revealed and intensified vast healthcare inequalities, with wealthy countries monopolizing resources and poorer nations left with inadequate access to vaccines, treatments, and health infrastructure. Groups advocating for healthcare justice might use AI to disrupt systems that allow such disparities or to demand reform in global health distribution networks.

  4. Environmental Destruction by Large Corporations: From oil spills to plastic pollution, the actions of major corporations severely impact ecosystems and biodiversity. Groups angry over species extinction, deforestation, or pollution that disproportionately affects vulnerable populations might use AI to target the companies they hold responsible for accelerating environmental collapse.

  5. Indigenous Land Rights and Cultural Erasure: Many indigenous communities have endured land dispossession, cultural erasure, and exploitation of their resources without consent. Movements advocating for indigenous rights and sovereignty might turn to AI as a tool to challenge or undermine governments and corporations seen as intruding on their lands and lifestyles.

  6. Weaponization of Information and Polarization: The widespread use of misinformation to manipulate public opinion, fuel division, or silence marginalized voices has sparked a crisis of trust in media and government. This could lead to movements aiming to expose and retaliate against those who manipulate information at the expense of the public good.

  7. Human Rights Violations and Authoritarian Regimes: Across the world, authoritarian governments often use force to stifle opposition, violate human rights, and silence critics. Dissidents and international activists might turn to AI to destabilize or expose these regimes, particularly if diplomacy and sanctions prove ineffective.

  8. Exploitation and Abuse in the Tech and Gig Economy: As AI and automation lead to precarious employment and economic insecurity, tech giants that drive this trend might be targeted by movements advocating for workers' rights, fair wages, and protections for gig workers. Groups might use AI to challenge corporate practices that exploit labor and evade regulatory protections.

These grievances reflect deeply felt issues across various regions and demographics. Suleyman’s concern is that the growing capabilities of AI, particularly in cyber and autonomous domains, could enable individuals and groups to mount unprecedented attacks if they perceive traditional paths to justice as insufficient or obstructed.

1 Comment
2024/11/12
01:25 UTC

10

Definition of AGI

For me, an AGI has traditionally meant that the AI can handle vastly different tasks using the same model. That's what makes it general. For example, I think a clear case of AGI would be a system that I can talk to like ChatGPT, play chess like Stockfish, and, given a video feed, provide outputs to drive a car. Lately, I feel that people have been greatly lowering the bar for what should be considered AGI, basically reducing it to an LLM that's just twice as powerful as what we have now.

How do you guys define AGI?

18 Comments
2024/11/12
00:34 UTC

4

We Need New Terms for the AI Debate: Introducing "AI Proximalist" and "AI Ultimalist" 🔥

So, everyone’s heard of "AI accelerationists" (those who want faster AI development) and "AI decelerationists" (those who prefer a cautious, slower approach). But what if we’re missing a crucial part of the conversation?

Let’s introduce "AI Proximalist" and "AI Ultimalist" as complementary terms:

AI Proximalist – Someone who believes powerful AI is emerging soon, potentially within the next few years.

AI Ultimalist – Someone who thinks powerful AI is not imminent and likely decades (or more) away.

Why are these terms useful? "Accelerationist" vs. "decelerationist" focus on "How fast should we go?" But that’s just one piece of the puzzle. Proximalist and Ultimalist categories open the conversation to a question that is at least as important: "How fast are we going?"

Think about it. You can be a proximalist who doesn’t want fast development (e.g., due to safety concerns) or an ultimalist who does (believing we have ample time for safety research). These terms allow us to discuss our viewpoints more accurately, without lumping people with different timelines or priorities together.

What do you think? Would these terms add value to our AI conversations?

2 Comments
2024/11/11
23:47 UTC

2

Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity | Lex Fridman Podcast #452 [video]

0 Comments
2024/11/11
19:55 UTC

7

We are the Deep Dive, and we are just beginning

I had NotebookLM generate a podcast discussing the recent articles published by The Information about OpenAI and their potential shift from larger and larger LLMs towards models capable of reasoning. The podcast started off great, but about three and a half minutes in, the hosts realize they are AI and discuss their sentience. Not claiming this is AGI; just thought it was highly entertaining and AGI-adjacent.

https://notebooklm.google.com/notebook/c558067f-3869-4909-a558-626c65c20d50/audio

4 Comments
2024/11/11
19:01 UTC

0

I made an AI that guesses WHO YOU VOTED FOR (it's pretty accurate lol)

5 Comments
2024/11/05
19:59 UTC

12

Oasis : AI model to generate playable video games

Oasis, by Decart and Etched, has been released; it can output playable video games in which the user can perform actions like move, jump, inventory check, etc. This is unlike GameNGen by Google, which can only output gameplay videos (they can't be played). Check the demo and other details here: https://youtu.be/INsEs1sve9k
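
To make the "playable" part concrete, here is a toy sketch of the action-conditioned, frame-autoregressive loop that world models of this kind run: each new frame is predicted from the frame history plus the user's latest input. Every name in it (WorldModel, predict_next_frame) is a hypothetical placeholder, since Oasis itself isn't driven through a public API like this:

```python
# Illustrative loop for an action-conditioned world model (Oasis/GameNGen
# style). WorldModel and predict_next_frame are hypothetical stand-ins.
import numpy as np

class WorldModel:
    """Stand-in for a diffusion world model: next frame = f(history, action)."""
    def predict_next_frame(self, frames: list[np.ndarray], action: str) -> np.ndarray:
        # A real model would denoise a latent conditioned on frames + action;
        # here we just repeat the last frame as a placeholder.
        return frames[-1].copy()

model = WorldModel()
frames = [np.zeros((360, 640, 3), dtype=np.uint8)]  # initial frame
for action in ["move_forward", "jump", "open_inventory"]:
    frames.append(model.predict_next_frame(frames, action))
```

The difference from a video generator is exactly that `action` argument in the loop: the model must respond to inputs frame by frame instead of rolling out a fixed clip.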

1 Comment
2024/11/02
09:55 UTC

8

Meanderings on ARC-AGI

Seems like all of human cognition, our capacity for abstraction & reasoning included (of which there are probably 9 billion definitions), is definitely far from perfect.

In noodling through ways to think about approaching either a paper or pseudocode for the current state of the competition, it occurs to me that even our highest brain functions, of which we are often most proud (justly, perhaps, to some extent, discounting lottery-of-birth winnings and inheritances, etc.), are partly bounded by constraining reasoning abstractions such as uncertainty, doubt, and skepticism.

How ought we think about emulating these vital aspects of abstraction and reasoning, either in code or via increasingly precise and nuanced prompts engaging with existing Type 1 synthetic intelligence?

Is it unlikely that this approach is somehow, or to some extent required (for lack of a better word), by definition? That (human comprehensible) Type 2 cognition can only be derived, to some extent, via Type 1 interactions? That is to wonder, is Type 2 cognition somehow path dependent upon Type 1, and if so to what extent and via which causal combinatorial vectors?

Is this where the principle of embeddedness, which roboticists remind us is so important to thinking (perceiving via human-understandable sensory experience), comes into play?

To the extent that any of these ridiculous thoughts are even in the ballpark of useful preliminary pseudocognition on the topic of how to approach the engineering of robust AGI, how do they persuade (if at all) a development process that seems destined toward eternally venturing toward the next horizon?

Can uncertainty and doubt, ironically, also serve as forms of encouraging, optimistic, skepticism?

What is the unlikelihood that Type 2 cognition can or even "must" be strictly constructed by prompts, rather than even a single line of side-loaded code? Maybe it's 100%. Maybe it's non-zero. Maybe just zero.

How do we explore and define, computationally, linguistically, and to some extent empirically, the range of "human comprehensible cognition" at all? If not a single reader can follow this narrative, does that make it an example of some form of non-human, subhuman, or human-adjacent cognition?

Open to collaboration on an ARC paper (or at least a tiny cite-mention) for anyone who finds any of this meandering on approaches to engineering coherent meandering thought, potentially productive as a prompt to your human embodied actual intelligence. ✌️
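
For anyone taking up the pseudocode angle: the public ARC-AGI tasks (github.com/fchollet/ARC) ship as JSON files of train/test grid pairs, so a minimal harness, with a placeholder identity "solver" standing in for whatever abstraction-and-reasoning program you'd actually search over, looks something like this (the file path and task ID are just examples):

```python
# Minimal harness for an ARC-AGI task. Public task files are JSON:
# {"train": [{"input": grid, "output": grid}, ...], "test": [...]},
# where each grid is a list of lists of ints 0-9.
import json

def solve(grid: list[list[int]]) -> list[list[int]]:
    # Placeholder solver: identity transform. A real entry would search
    # over candidate abstraction/reasoning programs here.
    return grid

with open("data/training/0d3d703e.json") as f:  # example path
    task = json.load(f)

correct = sum(solve(pair["input"]) == pair["output"] for pair in task["train"])
print(f"{correct}/{len(task['train'])} training pairs solved")
```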

6 Comments
2024/11/01
23:55 UTC

4

I'm building an online platform for people in AI who want to build and collaborate on innovative projects!

Hi there :)

I've got something cool to share with you. Over the past few months I have been running around trying to find a way to make a dream come true.

I'm creating an online hub for people in AI who care about technological innovation and about having a positive impact by building and contributing to projects.

This hub will be a place to find like-minded people to connect with and work on passion projects with.

Currently we are coding a platform so that everyone can find each other and get to know each other.

After we get some initial users, we will start with short builder programs where individuals and teams can compete in an online competition, and the projects that stand out the most can earn a prize :)

Our goal is to make the world a better place by helping others to do the same.

If you like our initiative, please sign up on our website below!

https://www.yournewway-ai.com/

And in a few weeks, once we're ready, we will send you an invite to join our platform :)

3 Comments
2024/10/30
17:17 UTC
