/r/agi


Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as "strong AI", "full AI" or as the ability of a machine to perform "general intelligent action".

/r/neuralnetworks /r/artificial /r/machinelearning /r/OpenCog /r/causality


"What is AGI?" from MRI

AGI Society

Topics to research:

  • Strong AI
  • AGI
  • Neuroscience
  • Human Level Intelligence
  • Computational Models of Mind

Related subreddits:

If you'd like an invite to AGI Slack chat channel, PM nickb with your email to receive an invite.

/r/agi

45,348 Subscribers

4

I built the simplest AI game town

I built a simple AI game town using the RPGGO API, and any game on the platform can inject 2D assets through it: https://github.com/codingtmd/gamify-ai-town/ In this AI town, I can romance pretty girls. The API lets me skip all the AI complexity and just focus on making the game with free assets.

0 Comments
2024/11/06
06:23 UTC

0

I made an AI that guesses WHO YOU VOTED FOR (it's pretty accurate lol)

5 Comments
2024/11/05
19:59 UTC

11

Oasis : AI model to generate playable video games

Oasis, by Decart and Etched, has been released; it can generate playable video games in which the user can perform actions like move, jump, and inventory checks. This is unlike GameNGen by Google, which can only output gameplay videos (that can't be played). Check the demo and other details here: https://youtu.be/INsEs1sve9k

1 Comment
2024/11/02
09:55 UTC

8

Meanderings on ARC-AGI

It seems that all of human cognition, including our capacity for abstraction and reasoning (of which there are probably 9 billion definitions), is definitely far from perfect.

In noodling through ways to think about approaching either a paper or pseudocode for the current state of the competition, it occurs that even our highest brain functions, of which we are often most proud (justly, perhaps, to some extent, discounting lottery of birth winnings and inheritances, etc), are partly bounded by constraining reasoning abstractions such as uncertainty, doubt, skepticism, etc.

How ought we think about emulating these vital aspects of abstraction and reasoning, either in code or via increasingly precise and nuanced prompts engaging with existing Type 1 synthetic intelligence?

Is it unlikely that this approach is somehow, or to some extent required (for lack of a better word), by definition? That (human comprehensible) Type 2 cognition can only be derived, to some extent, via Type 1 interactions? That is to wonder, is Type 2 cognition somehow path dependent upon Type 1, and if so to what extent and via which causal combinatorial vectors?

Is this where the principle of embeddedness, of which roboticists remind us is so important to thinking (perceiving via human understandable sensory experience), comes into play?

To the extent that any of these ridiculous thoughts are even in the ballpark of useful preliminary pseudocognition on the topic of how to approach the engineering of robust AGI, how do they persuade (if at all) a development process that seems destined toward eternally venturing toward the next horizon?

Can uncertainty and doubt, ironically, also serve as forms of encouraging, optimistic, skepticism?

What is the unlikelihood that Type 2 cognition can or even "must" be strictly constructed by prompts, rather than even a single line of side-loaded code? Maybe it's 100%. Maybe it's non-zero. Maybe just zero.

How do we explore and define (computationally, linguistically, and to some extent empirically) the range of "human comprehensible cognition," at all? If not a single reader can follow this narrative, does that make it an example of some form of non-human, subhuman, or human-adjacent cognition?

Open to collaboration on an ARC paper (or at least a tiny cite-mention) for anyone who finds any of this meandering on approaches to engineering coherent meandering thought, potentially productive as a prompt to your human embodied actual intelligence. ✌️

6 Comments
2024/11/01
23:55 UTC

6

I'm building an online platform for people in AI who want to build and collaborate on innovative projects!

Hi there :)

I've got something cool to share with you. Over the past few months I have been running around trying to find a way to make a dream come true.

I'm creating an online hub for people in AI who care about technological innovation and having a positive impact by building and contributing to projects.

This hub will be a place to find like-minded people to connect with and work on passion projects with.

Currently we are coding a platform so that everyone can find each other and get to know each other.

After we get some initial users, we will start with short builder programs where individuals and teams can compete in an online competition, and the projects that stand out the most can earn a prize :)

Our goal is to make the world a better place by helping others to do the same.

If you like our initiative, please sign up below on our website!

https://www.yournewway-ai.com/

And in a few weeks, once we're ready, we will send you an invite to join our platform :)

3 Comments
2024/10/30
17:17 UTC

0

Will AGI or ASI be able to collapse and bend reality as human observers do?

What if our reality wasn't fully stable, but most of it was in a low-energy probability state that only becomes 'real' through our collective attention and willpower? 

I'd be happy if ASI could figure this out in the near future. I also wonder if AI would have observer power and would be able to collapse and bend reality as humans consciously or unconsciously do.

These are just some thoughts, one of the many possibilities that intrigue me.

A short summary.

- Reality exists in a low-energy probability state until collapsed by observers.

- Collapsing reality segments to fixed states requires significant energy, with each observer playing a role.

- The energy comes from the attention, intention, and beliefs of observers.

- Observers influence reality via two main mechanisms: stabilizing it through attention, scope and strength of their mental worldview, and changing it through intention energy.

- The probability layer of reality is more fluid; manipulating it requires willpower rather than physical effort.

- Reality segments are not strictly bound by space/time; this is an aftereffect of how observers structure their attention, worldview and information flow between them.

- Grand-scale attention, like mass propaganda, can collapse 'unreal' concepts into observable reality (e.g., magic or alien encounters).

- Individual reality segments can be influenced through focused belief, as in the real-life experience of walking on hot coals that I share below.

The explanation.

Our reality mostly exists in a low-energy probability state. This is true for the past, present, and future. To collapse segments of reality into a fixed state (like a 0 or 1 in computing) requires significant energy to resolve all dependencies. These segments are collapsed by observers, and each observer's impact depends on multiple factors:

  1. Attention Level: Acts as an anchor to their mental view on reality, keeping the current state of reality stable.
  2. Intention Energy: Acts as a force of change, shifting the segment of reality to a different state. This energy is influenced by each observer's focus and intent.
  3. Observer Network: Reality is influenced not just by primary observers but also by a web of other observers whose attention overlaps through indirect flows of information. This "graph" of attention creates interconnected influences.
  4. Configuration and Strength of Beliefs: Our mental models of reality add another layer—each observer's perception stabilizes reality through the scope and strength of their beliefs and conceptual frameworks.

From my perspective, reality "doesn't like" to collapse into these fixed states. Most likely, it's not about some innate will but a function of energy preservation. Collapsing a segment requires a lot of energy, and our shared reality prefers to stay probabilistic unless forced otherwise.

To collapse an event means finding a path—from Point A to Point B—and collapsing every step along the way. The energy for this comes from us, the observers. I like to think of this energy as "intention," and it has several properties: strength, shape, charge (positive or negative), and applied duration.

Strength and applied duration help to overcome the incompatible mental worldviews of the connected observers.

The shape of intention is the subset of a reality segment's aspects that an observer intends to shift.

A positive charge leads to the intended state, while a negative charge prevents a certain state from occurring.

I feel like a negative charge may bring greater results with the same amount of energy, perhaps because it would be enough to "destroy" one "must have" step toward the undesired state to prevent it from happening, versus spending a lot of energy on multiple steps leading to the intended state.

This might explain why people say, "I have to make it happen"—there's a sense of the effort needed to collapse steps in reality to reach the desired state. 

There are two layers to manipulating reality:

- Physical Layer: Manipulating an already collapsed state requires only physical effort.

- Probability Layer: Manipulating the layer before it collapses requires intention and willpower—a type of energy many people might refer to as mental strength.

When people feel exhausted despite resting physically, what they may lack is this energy—the energy of willpower or intention.

Reality segments aren't neatly bound by space and time. This kind of bounding is an aftereffect of how our attention is structured, which includes concepts like space and time.  The difference between macro and micro-world physical laws stems from the difference in the number of observers and the strength of their beliefs. If there were an equal number of observers and equivalent strength in their beliefs at both the macro and micro levels, reality on those levels would be equally stable. Even if the fundamental laws of physics differed, both levels would exhibit the same degree of stability due to the collective observer effect.

Consequences of this worldview, were it true:


- Significant Achievements: If one wants to achieve something significant, they should pay close attention to the people they share their plans and results with. The belief, attention, or doubt from these contacts can have a substantial impact on the outcome.

- Influence on Personal Reality: Since our reality is only semi-shared, we can also alter our immediate surroundings, achieving desired states that can, of course, only be collapsed within the restrictions of our probability space. To make this less energy-intensive, the idea is to reduce the number of non-believers and increase the believers in an isolated setting.

- Real-Life Example: In my own life, I've experienced this. Surrounded by about 30 believers, I walked barefoot over a 10-meter strip of red-hot coals. I was told not to look down, so as not to destroy my belief, and I didn't until the last step. Because I looked down on my last step and saw the really hot coals, I got a small burn on the part of my bare sole that was touching them, but it was nothing significant. What made it possible in the first place was seeing others do it harmlessly and allowing myself to believe. It felt like being in a different segment of reality, one less influenced by the limitations that normally bind us.

What do you think?

Do you have experiences or perspectives that might align with or challenge this view?

8 Comments
2024/10/30
09:42 UTC

0

Why autonomous reasoning and not following existing workflows?

Currently agents are all the buzz, and people for some reason try to make them devise a complex sequence of steps and follow it to achieve a goal. AutoGPT, for example, does that.

Why? Efficient and established companies are all about SOPs - standard operating procedures. Those procedures were developed over years, sometimes decades, at the cost of millions upon millions of dollars in mistakes.

So why is no one trying to just teach LLMs to follow those existing SOPs that are proven to work? Why do people try to make LLMs dream them up from scratch in a matter of seconds, hoping to rebuild decades of human experience?
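To make the contrast concrete, here is a minimal sketch of the SOP-following approach: the procedure is authored by humans as an explicit step list, and the model only fills in each step rather than planning freely. The SOP contents and the `call_llm` function are illustrative assumptions; `call_llm` is a stub standing in for any real LLM API.

```python
# Sketch: an agent that follows a fixed, human-authored SOP instead of
# planning autonomously. call_llm is a stub for a real LLM API call.

def call_llm(prompt: str) -> str:
    # In practice this would call an LLM; here we just echo the task.
    return f"[completed] {prompt}"

# A hypothetical SOP, written by humans, not dreamed up by the model.
REFUND_SOP = [
    "Verify the customer's order number against the order database.",
    "Check that the order is within the 30-day refund window.",
    "Issue the refund and send a confirmation email.",
]

def run_sop(sop: list[str], context: str) -> list[str]:
    """Execute each SOP step in order; the model fills in steps, never reorders them."""
    results = []
    for i, step in enumerate(sop, start=1):
        prompt = f"Context: {context}\nStep {i}: {step}"
        results.append(call_llm(prompt))
    return results

results = run_sop(REFUND_SOP, "Customer #4211 requests a refund.")
for r in results:
    print(r)
```

The design choice is that the control flow lives in the SOP, not in the model: the model cannot skip, reorder, or invent steps, which is exactly the reliability property established procedures were built to provide.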

15 Comments
2024/10/29
14:52 UTC

1

Looking for sources on training robots in virtual environments

Hi Community :)

I was looking at new AI videos on YouTube and came across this one:

https://youtu.be/q71d1Fed_os?si=uMKOs7mb0td1c-j2&t=165

I was wondering how, or with what tools, one creates such a virtual environment for AI, with physics, to teach it a specific task like unscrewing a screw or jumping over obstacles.

This is fascinating to learn about, and maybe I'll find a PhD topic in it for my next professional career milestone.

I would love to hear your input about the tools used and brainstorming ideas on research topics.

0 Comments
2024/10/29
08:11 UTC

12

AI Agents explained

Right now, a lot of buzz surrounds AI agents, and Claude 3.5 Sonnet was recently said to be trained on agentic flows. This video explains what agents are, how they differ from LLMs, how agents access tools and execute tasks, and the potential threats: https://youtu.be/LzAKjKe6Dp0?si=dPVJSenGJwO8M9W6
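The tool-access loop that distinguishes agents from plain LLMs can be sketched roughly as follows. Everything here is illustrative: `pick_action` is a stub standing in for a real LLM decision, and `get_weather` is a hypothetical tool.

```python
# Sketch of a tool-using agent loop: the model picks an action, the runtime
# executes the chosen tool, and the observation is fed back until the model
# can answer directly. pick_action stands in for a real LLM call.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub tool; a real one would hit a weather API

TOOLS = {"get_weather": get_weather}

def pick_action(question: str, observations: list[str]):
    # A real agent would ask an LLM to choose the next action; this stub
    # uses a fixed policy: call the weather tool once, then answer.
    if not observations:
        return ("tool", "get_weather", "Paris")
    return ("answer", f"Based on the tool result: {observations[-1]}")

def run_agent(question: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        action = pick_action(question, observations)
        if action[0] == "answer":
            return action[1]
        _, tool_name, arg = action
        observations.append(TOOLS[tool_name](arg))  # execute the chosen tool
    return "No answer within step budget."

print(run_agent("What's the weather in Paris?"))
```

The key difference from a bare LLM call is the loop: the model's output is interpreted as an action, executed outside the model, and the result is fed back in, which is also where the potential threats come from.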

6 Comments
2024/10/29
04:12 UTC

1

OpenAI’s Journey and the Challenges Ahead: Reflections from an AGI Entrepreneur


It's also a reason why Sam Altman was regarded as an opportunist.

"Just one month later (after GPT-2 was published), Sam Altman left his role as president of Y Combinator to become OpenAI’s full-time CEO, coinciding with the company’s transition to a for-profit subsidiary."

0 Comments
2024/10/29
03:32 UTC

0

Twitch streamer kyootbot says she's already attached to her co-streaming AI

0 Comments
2024/10/28
07:58 UTC

5

OpenAI Swarm tutorial playlist

OpenAI recently released Swarm, a framework for multi-AI-agent systems. The following playlist covers:

  1. What is OpenAI Swarm?
  2. How it differs from Autogen, CrewAI, and LangGraph
  3. Swarm basics tutorial
  4. Triage agent demo
  5. OpenAI Swarm with local LLMs via Ollama

Playlist : https://youtube.com/playlist?list=PLnH2pfPCPZsIVveU2YeC-Z8la7l4AwRhC&si=DZ1TrrEnp6Xir971

0 Comments
2024/10/28
07:55 UTC

1

Anthropic Claude 3.5 Sonnet

Curious to know what everyone's experience has been with Claude 3.5 Sonnet's ability to navigate user interfaces and execute commands through natural language.

0 Comments
2024/10/25
15:05 UTC

4

Is it possible that "effort" plays a role in human cognition?

When we ask a human to perform a complex intellectual task, we might notice a certain "resistance" to carrying it out, due to the anticipation of the "effort" required to concentrate on it. Naturally, a machine doesn't understand this concept of effort. But we might ask: does effort, or the anticipation of it, play a role in human cognition that isn't considered in artificial intelligence?

21 Comments
2024/10/25
14:59 UTC
