/r/Futurology
A subreddit devoted to the field of Future(s) Studies and evidence-based speculation about the development of humanity, technology, and civilization.
You can also find us in the fediverse at https://futurology.today
3D Printing - Artificial Intelligence - Biotech
Computing - Economics - Energy - Environment
Nanotech - Robotics - Society - Space - Transport
Medicine - Privacy/Security - Politics
- Be respectful to others - this includes no hostility, racism, sexism, bigotry, etc.
- Submissions must be future focused. All posts must have an initial comment, a Submission Statement, that suggests a line of future-focused discussion for the topic posted. We want this submission statement to elaborate on the topic being posted and suggest how it might be discussed in relation to the future.
- No memes, reaction gifs or similarly low effort content. Images/gifs require a starter comment.
- No spamming - this includes polls and surveys. This also includes promoting any content in which you have any kind of financial or non-financial stake.
- Bots require moderator permission to operate
- Comments must be on topic, contribute to the discussion and be of sufficient length. Comments that dismiss well-established science without compelling evidence are a distraction to discussion of futurology and may be removed.
- Account age: >1 day to comment, >5 days to submit content
- Submissions and comments of accounts whose combined karma is too far in the negatives will be removed
- Avoid posting content that is a duplicate of content posted within the last 7 days.
- Text posts need to encourage in-depth and detailed discussion. Avoid generalized invitations to discuss frequently discussed topics. Submissions with [in-depth] in the title have stricter post length and quality guidelines
- Titles must accurately and truthfully represent the content of the submission
- Support original sources - avoid blogs/websites that are primarily rehosted content
- Content older than 6 months must have [month, year] in the title
For details on the rules see the Rules Wiki.
For details on moderation procedures, see the Transparency Wiki.
If history studies our past and social sciences study our present, what is the study of our future? Future(s) Studies (colloquially called "future(s)" by many of the field's practitioners) is an interdisciplinary field that seeks to hypothesize the possible, probable, preferable, or alternative future(s).
One of the fundamental assumptions in future(s) studies is that the future is plural rather than singular, that is, that it consists of alternative future(s) of varying degrees of likelihood but that it is impossible in principle to say with certainty which one will occur.
I understand they operate in a legal grey zone, so I'm assuming they can't be targeted until laws and regulations come into place? Even if the websites can cause a lot of harm?
DeepSeek's arrival has certainly sparked national interest. I believe it might soon turn into a national emergency, where the country unites and works on it. Possibly this will trigger a chain reaction globally, with most countries jumping into building their own AI. That would definitely be a win for the technology, as there will be tons of progress, discoveries, and innovation in the field. Last time there was a competition like this, man set foot on the moon.
First let me say that I don’t for a minute want to downplay the potential dangers of AI. I just want to explore a different perspective here which to me personally brings more concern.
Society is still reeling from the advent of internet communication. I don’t think anyone here would consider that a hot take. Whether it’s good or bad is irrelevant here; what’s relevant is that it happened fast and changed everything. It created new societal problems faster than they could be dealt with, and it changed the way we view the world faster than many people could respond to in a healthy way.
That chaos is, I think, theoretically temporary, but it is also still very much underway. Our response to the internet is deeply tied to postmodernist anxieties, which are still not resolved. Ideally, we would have dealt with this before being confronted with AI. For better or worse, it’s here, and so this is my primary concern: mass existential crises. I think we need to work to keep our minds very resilient and agile in the coming decade. I’m interested to hear what others think of this.
I think that in a society where everyone lives 100 years or more, an age of majority of 18 would not make sense. Perhaps, in such a society, you would become an adult at around 30.
China is facing a demographic cliff, like Korea and Japan, and is anticipated to dip from 1.4 billion to about 800 million around 2100. This will likely reduce their GDP and ability to engage in force projection. Thus, the government is starting to take measures to increase birthrates. Do you think any of them will be successful? Some candidate ideas are:
Require people applying for government positions to have 2-3 children and be married. While not everyone applies for government positions, families may elect to have more children in case they apply, in the future, for government positions. Thus, this intervention could have a ripple effect.
Limit residence permits in highly sought-after cities to those with 2-3 children. Without these permits, individuals cannot work in those cities.
Modify the Chinese Social Credit system: This is a unified record system to measure social behavior where individuals can be blacklisted/redlisted if they engage in anti-social behaviors like stealing/drunk driving. The power of this system is that the government can ratchet up the value awarded to having children, and even adjust it by region, to achieve population growth.
These interventions have almost no cost to the Chinese government. The Chinese autocracy has a proven track record of successfully reducing the population through the one-child policy, and the government was quite ruthless in implementing it, going so far as forced abortions. I imagine that the inverse may also be possible: the government may be able to increase population growth, again by ruthless methods. Thus, it is possible that all the individuals proclaiming China's demise are viewing China from a Western perspective, where the measures listed above would be anathema. I want to be clear that I am not advocating for any of these measures--I find many of them offensive--but I am interested in hearing your thoughts as to whether or not this may come to pass. I have attached an article link suggesting there may be some pushback (the "human mine" meme), but as the article mentions, the government quickly banned the term "human mine" and is now creating a pro-child media campaign.
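As a sanity check on the scale of that decline: a drop from 1.4 billion to roughly 800 million by 2100 implies only a modest average annual rate of change. The sketch below uses the post's projection, not official statistics:

```python
# Implied average annual growth rate if China's population falls
# from ~1.4 billion (2025) to ~0.8 billion (2100).
# These figures are the post's projection, not official data.
start_pop = 1.4e9        # approximate 2025 population
end_pop = 0.8e9          # projected 2100 population
years = 2100 - 2025

# Compound annual rate: (end/start)^(1/years) - 1
annual_rate = (end_pop / start_pop) ** (1 / years) - 1
print(f"Implied annual change: {annual_rate:.2%}")  # roughly -0.74% per year
```

A sustained decline of under 1% per year is enough to halve the population over 75 years, which is why small shifts in birthrate policy could meaningfully change the 2100 outcome.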
January 2025 has seen two significant events for Big Tech: their moves to further enable authoritarianism and the neo-Nazi far right, and their loss of the AI arms race to a tiny Chinese upstart.
Meta embraced the strategy of using open source to weaken its competitors 18 months ago, and since then open-source AI efforts from places as diverse as France and China have adopted the same tactic. That culminated in recent weeks in DeepSeek, the open-source AI that has become the world’s most powerful. Since it debuted, two other open-source AIs of comparable power have emerged too: one Chinese and one Canadian.
So it seems the power of AI, or even AGI when it comes, may not be in the hands of a few Silicon Valley billionaires, but instead decentralized and democratized around the world. As those billionaires embrace ever darker and more fascistic visions of the future, maybe we should be relieved they are all hobbling and weakening each other via open-sourcing AI.
Our digital footprint could in fact stay forever (forever in all practical senses; nobody can say what will or won't exist on very long time scales), because the servers that store the data are constantly being replaced and the data gets copied. Global players like Google, Meta, or Amazon won't go away.
I spent a few hours grilling O3-mini (high) to examine how AGI and other new technologies could result in different future scenarios over the next 20 years.
As you can see from the table, the most likely scenario is that AGI either becomes sentient and takes control, or disengages from humans. Depression and civilization collapse are second and third most likely. A smooth "Goldilocks" transition is fourth most likely, at 15% probability.
______________________________________________________
Edit / Important Note:
O3-mini only gives 4/10 confidence to these estimates, so each estimate is probably only accurate to within +/- 50%, if that.
These estimates remain highly speculative and are intended as a framework for discussion rather than precise predictions.
The CEO of Scale.AI made a good comment yesterday, that even inside the AI companies "No one has a clue what the final impact of AI will be on society" or words to that effect.
_________________________________________________________
I explored these different scenarios in depth considering large historical changes in the economy and technology (Bronze, Iron, Industrial Revolution, Computer/Internet), and current and near future technologies and cultural and societal changes which will impact the likelihood of these scenarios occurring.
I also did a fairly detailed analysis of the viability of giving everyone in the USA $20,000 per year in UBI. There are some plausible short-term options, but it will be difficult to sustain them for more than 5-10 years, because the side effects of the initial funding mechanisms would cause either a massive depression or hyperinflation (more likely the latter, since it favors the rich, IMO).
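For a rough sense of the scale behind that UBI question, here is a back-of-the-envelope calculation; the population and budget figures are ballpark assumptions I've supplied for illustration, not numbers from the O3-mini analysis:

```python
# Rough annual cost of a $20,000/year UBI for US adults, compared with
# recent federal spending. Population and budget figures are ballpark
# assumptions, not official statistics.
ubi_per_person = 20_000      # dollars per year
us_adults = 260e6            # approximate US adult population
federal_budget = 6.1e12      # approximate annual federal spending, dollars

total_cost = ubi_per_person * us_adults
print(f"Total UBI cost: ${total_cost / 1e12:.1f} trillion/year")
print(f"Share of current federal budget: {total_cost / federal_budget:.0%}")
```

At roughly $5 trillion per year, the program would rival the entire existing federal budget, which is why the funding question dominates any sustainability analysis.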
When I initially started these discussions O3-mini did a poor job of considering secondary effects of AGI on the economy and global stability etc. However when I grilled it on the likely secondary effects it did respond with some logical answers which is good.
The more detailed its analysis of the secondary impacts of AGI became, the lower the chances of the Goldilocks scenario got. So if some experts spent a few months looking at all the possible secondary and tertiary side effects of AGI, the Goldilocks scenario might become even less likely, which is not good, and I hope that does not happen.
I configured it to give me raw, gritty, unfiltered thoughts even if they were upsetting so this is probably as unbiased and unfiltered an opinion as you can get from it.
I know that technological progress is almost inevitable and that “if we don’t build it, they will”. But as an AI scientist, I can’t really think of the benefits without the drawbacks and its unpredictability.
We’re clearly evolving at a disorienting rate without a clear goal in mind. While building machines that are smarter than us is impressive, not knowing what we’re building and why seems dumb.
As an academic, I do it for the pleasure of understanding how the world works and what intelligence is. But I constantly hold myself back, wondering whether that pleasure is really for the benefit of all.
For big institutions, like companies and countries, it’s an arms race. More intelligence means more power. They’re not interested in the unpredictable long-term consequences, because they want to avoid losing at all costs; often at the expense of the population’s well-being.
I’m convinced that we can’t stop ourselves (as a species) from building these systems, but then can we really consider ourselves intelligent? Isn’t that just a dumb and potentially self-destructive addiction?
I was searching for early-stage anti-aging therapeutics with huge potential and came across this youtube video:
https://www.youtube.com/watch?v=cfR9_iRU7kU&t=6s
It claims that older mice regained endurance and muscle strength, and that it also improved heart function after a heart attack, reducing scarring and inflammation.
It works by blocking a microRNA (miR-128) that disrupts mitochondrial function and fuels chronic inflammation—two major hallmarks of aging.
What do you guys think? Does anyone have knowledge about miRNA therapeutics? It seems like a game changer, though it would take a while to make it to market.
Renewables plus batteries have almost wiped out the nuclear industry, and now geothermal power may be about to put the final nail in that coffin. New research published in the journal Nature shows drilling times are falling so swiftly that by 2027 geothermal power will be able to deliver a levelized cost of electricity of US$80/MWh. That's price competitive with nuclear, but it's not even the real killer for the nuclear industry.
Although some locations (like Iceland) are especially well suited to geothermal, many other places are viable too. Geothermal can be built widely all over the world; more crucially, it can be built quickly and to a dependable budget.
The nuclear industry's sole surviving argument was that it could provide base-load power, but so can geothermal. Geothermal will now be vastly more appealing to investors and governments than new nuclear power, an industry that may be about to enter the last stages of its death spiral.
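For readers unfamiliar with the metric, a levelized cost of electricity (LCOE) can be sketched with a simple annualized-cost formula. All the input numbers below are illustrative assumptions chosen to land near the US$80/MWh figure, not values from the Nature study:

```python
# Toy levelized cost of electricity (LCOE) calculation.
# All inputs are illustrative assumptions, not figures from the study.
capex = 7500            # $ per kW installed (hypothetical)
lifetime = 30           # plant lifetime, years
discount = 0.07         # discount rate
opex = 30               # $ per kW-year, fixed O&M (hypothetical)
capacity_factor = 0.90  # geothermal runs near-continuously

# Capital recovery factor spreads the up-front cost over the lifetime
crf = discount * (1 + discount) ** lifetime / ((1 + discount) ** lifetime - 1)

annual_mwh_per_kw = 8760 * capacity_factor / 1000  # MWh per kW per year
lcoe = (capex * crf + opex) / annual_mwh_per_kw    # $ per MWh
print(f"LCOE ~ ${lcoe:.0f}/MWh")
```

The high capacity factor is doing a lot of work here: because geothermal runs around the clock, its capital cost is spread over far more megawatt-hours than an intermittent source's would be, which is also why it competes directly with nuclear for base-load investment.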
It feels like we’re on the brink of a massive shift in how we interact with technology. AI chatbots are evolving at an insane pace, and it’s starting to feel like they’ll render most of what apps do today... obsolete.
Think about it:
Even niche apps are at risk. Why download a fitness app when a chatbot can create personalized workout plans, track progress, and motivate you in real-time? Why use a language learning app when a chatbot can teach you, correct your grammar, and simulate conversations?
The question is: Are we building a future where apps become redundant? Will the next wave of startups just be AI chatbots that consolidate everything into a single interface?
Sure, there are challenges—privacy, reliability, and the risk of over-reliance on AI. But the trend seems inevitable. What do you think? Are we heading toward a world where apps are replaced by chatbots, or is this just another hype cycle?
Is this the end of apps as we know them? Or am I overestimating the impact of AI?
Global trends are currently moving towards a more destabilized world. More countries are moving towards isolationism, authoritarian regimes are gaining ground, and environmental disruption is a virtual guarantee. What if these trends aren’t accidental? Much of it is due to a wave of disinformation that will only become more pronounced as we move into the AI epoch. What if it were due to an unaligned AGI/ASI having found a way to exist in a distributed fashion across the globe? It would be a classic divide and conquer scenario where all the AI needs to do is slowly and incrementally undermine our confidence in our existing systems. SO! If that were the case would it be possible to:
What’s everyone’s thoughts on this?