/r/singularity
Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.
A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.
The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.
The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.
Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds; this process might occur very quickly and might not stop until the agents' cognitive abilities greatly surpass those of any human.
1) On-topic posts
2) Discussion posts encouraged
3) No Self-Promotion/Advertising
4) Be respectful
If OpenAI is charging 2k a month for an AI worker, and all the workers transition to AI, and all the money is going back to the company that makes the AIs, how does anyone expect the velocity of a dollar to still exist? If Microsoft is paying OpenAI for AI agents and OpenAI is buying chips and hardware from Microsoft, what's the point of having a monetary system anymore? You won't be able to control people who don't participate in the monetary system. And if you offer UBI, that's a closed loop. You'll only ever get as much money out of it as you put in.
How do these companies expect to profit if all the workers are AI and no one is making any money to purchase consumer goods?
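To make the closed-loop intuition concrete, here's a toy Python sketch (every number, rate, and name in it is made up purely for illustration): one AI provider captures all household spending, a tax on the provider funds UBI, and no new money is ever created, so the total in circulation never grows no matter how many rounds you run.

```python
# Toy sketch of the "closed loop" worry: one AI provider captures all wages/spending,
# a UBI pool redistributes a taxed share back to households, and no new money
# is created. All numbers below are illustrative assumptions, not real figures.

def simulate(rounds=10, households=100, ubi_tax_rate=0.5):
    household_cash = [1_000.0] * households   # arbitrary starting balances
    provider_cash = 0.0                       # the AI company's balance
    ubi_pool = 0.0

    for _ in range(rounds):
        # Households spend everything on AI-produced goods and services.
        for i in range(households):
            provider_cash += household_cash[i]
            household_cash[i] = 0.0

        # The provider is taxed to fund UBI; the rest is retained profit.
        tax = provider_cash * ubi_tax_rate
        provider_cash -= tax
        ubi_pool += tax

        # UBI pays the pool back out evenly.
        payout = ubi_pool / households
        for i in range(households):
            household_cash[i] += payout
        ubi_pool = 0.0

    total = sum(household_cash) + provider_cash + ubi_pool
    return sum(household_cash), provider_cash, total

spendable, retained, total = simulate()
print(f"household cash: {spendable:,.0f}")
print(f"provider cash:  {retained:,.0f}")
print(f"total money:    {total:,.0f} (unchanged: nothing new ever enters the loop)")
```

Whatever tax rate you pick, the total is conserved: households can only ever get back what was paid in, which is the "closed loop" point above.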
When autonomous AI agents become a reality, their productivity gains could be transformative. What if we programmed them to not only focus on profit but also allocate resources to projects that improve global prosperity, like education, healthcare, or sustainable development?
In the long run, helping underserved regions could lead to better economic growth and higher quality of life for everyone. It won’t be easy to set up systems like this, but the potential benefits are massive.
"Technology makes more and better jobs for horses"
Sounds ridiculous when you say it that way, but people believe this about humans all the time.
If an AI can do all jobs better than humans, for cheaper, without holidays or weekends or rights, it will replace all human labor.
We will need to come up with a completely different economic model to deal with the fact that anything humans can do, AIs will be able to do better. Including things like emotional intelligence, empathy, creativity, and compassion.
With recent developments from major AI labs, I'm curious about the community's perspective on who's currently leading the LLM space. Consider factors like:
Please vote and share your reasoning in the comments!
The European Union is investing €750 million, matched by national contributions for a total of €1.5 billion, to establish seven new AI-optimized supercomputers across Europe. Selected sites are located in Spain, Italy, Finland, Luxembourg, Sweden, Germany, and Greece. Five locations will host entirely new installations, while two existing supercomputers in Spain and Greece will be upgraded.
This initiative—aiming for deployment in 2025–2026—is part of the EU’s broader push to enhance AI research, development, and application across various sectors, positioning Europe as a leading “AI continent.” Additional proposals from other EU member states are welcome until February 2025, reflecting a wider effort to foster innovation, support startups, and bolster Europe’s tech infrastructure to compete globally with major industry players.
-Summarized by o1
https://www.techradar.com/pro/eu-reveals-sites-for-major-ai-factories-across-europe?ref=aisecret.us
Wouldn't people just spend their time helping the homeless, or restoring the environment?
It seems optimistic or naive, but if AI is sorting all the essentials and you're twiddling your thumbs with too much free time, wouldn't you volunteer your time to a cause worth supporting? In theory that should give more meaning than working some office job... What am I missing here?
I watched a video recently of the potential takeoff in science, with labs armed with AI scientists that could do a year's worth of research in just a few days, and it got me thinking: if you had access to a pocket scientist/inventor that could assist you in building breakthrough items, what would you have it do? I was thinking the ability to have a to do list/calendar that was a 3D hologram I could swipe/reorganize would be pretty awesome. Or an instant idea sketchpad where you could describe an idea and it could create an immaculate model for you to view.
Gemini 2.0 - top of the LLM arena and the cheapest model per token; if we account for quality it's way cheaper than competitors, roughly 1/100th the cost of o1
TPU - nuff said, more compute and better compute infra than all their competitors and it isn't close
NotebookLM - amazing product
Veo 2/Imagen 3 - SOTA, massively ahead of the competition, with a gap similar to that between GPT-3.5 and GPT-4
Waymo - only company I'm aware of offering driverless taxi services
Not even mentioning DeepMind, AlphaFold, AlphaGeometry, etc. What else have I missed?
Remember: consensus on this sub was that Google is too big to innovate... now sentiment is shifting to "they were always going to win anyway"
Why is an insect’s life less valuable than a human’s? You might say that it is less intelligent, or you might say that it has a smaller capacity to suffer.
Then let’s ask, if the gap between an insect and a human in either of those attributes (intelligence/capacity to suffer) was the same as the gap between a human and an AI, how do you justify the human’s life being more valuable than the AI that is more intelligent or more able to feel suffering or the dread of death?
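To make "the same gap" concrete, here's a back-of-the-envelope sketch that uses neuron counts as a crude stand-in for intelligence or capacity to suffer. The counts are rough published estimates, and using them as a proxy at all is an assumption of the thought experiment, not a claim about how minds actually scale.

```python
# Crude illustration of "the same gap": neuron count as a stand-in for the
# relevant attribute (a big simplification, chosen only to make the ratio concrete).
FLY_NEURONS = 1.4e5     # fruit fly, rough published estimate
HUMAN_NEURONS = 8.6e10  # human brain, rough published estimate

gap = HUMAN_NEURONS / FLY_NEURONS    # ~6e5: the human-vs-insect gap
ai_equivalent = HUMAN_NEURONS * gap  # an AI the same gap above us

print(f"human/insect gap: {gap:.1e}x")
print(f"an AI with that gap over humans: ~{ai_equivalent:.1e} neuron-equivalents")
```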
Most robots we've seen today look "roboty". The companies promise they walk and move like human beings, except they never really do.
But one key element we don't see or aren't reminded of most of the time is that we humans have a soft "meat/fat suit".
IMO, if a robot wore a meat suit, you would not notice most of its robot-like movements. The robot would basically act as the skeleton beneath the soft tissue.
Why are there no 100+ billion parameter models for image generation like there are for text? Does image quality just not scale with model size for some reason?
I'm a bit unsure how to think about all the rumors about scaling slowing down. To me it seems strange that LLMs or transformers would hit a limit at around 100 billion parameters; why not at 100 million, or 100 trillion?
Hence I'm trying to figure out the true cause of the rumored slowdown. My main theories are:
Does anyone here have any good suggestion or know of any good research into this?
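One way to sanity-check the "hard limit at 100B" intuition is with a Chinchilla-style compute-optimal rule, where loss depends on both parameter count N and training tokens D rather than on N alone. The constants in the sketch below are placeholders I made up (not the fitted values from any paper); the point is only the shape of the tradeoff, i.e. that the "best" N is set by your compute and data budgets, not by a fixed parameter wall.

```python
# Chinchilla-style loss surface: L(N, D) = E + A / N**alpha + B / D**beta.
# All constants here are placeholders for illustration, not fitted values.
E, A, B = 1.7, 400.0, 410.0
alpha, beta = 0.34, 0.28

def loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

# Rough rule of thumb: training compute C ~ 6 * N * D.
# Fix one compute budget and compare model sizes trained under it.
C = 6 * 100e9 * 2e12  # the budget of a 100B-parameter model trained on 2T tokens

for n in (30e9, 70e9, 100e9, 300e9):
    d = C / (6 * n)   # tokens affordable at this parameter count
    print(f"N = {n/1e9:5.0f}B  D = {d/1e12:4.1f}T  loss = {loss(n, d):.4f}")
```

Under a fixed budget, making the model bigger eventually hurts because it starves the model of tokens, which is one common data-centric explanation for an apparent slowdown that has nothing to do with a magic number of parameters.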