/r/singularity
Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.
A subreddit committed to intelligent discussion of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts that it will happen in the near future, and holds that deliberate action ought to be taken to ensure that the Singularity benefits humanity.
The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.
The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.
Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds. This process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.
1) On-topic posts
2) Discussion posts encouraged
3) No Self-Promotion/Advertising
4) Be respectful
/r/singularity
I prompted ChatGPT to "Create a Framework for Taxing Robots and AI Models Displacing Human Labor", and this is what it came up with: "AMRAI"
AMRAI stands for:
Definition of AMRAI Models: AMRAI Models are defined as artificial systems or algorithms (robotics, machine learning models, AI software) deployed in environments previously dominated by human labor. This includes AI replacing knowledge workers (e.g., GPT-style models), physical robots replacing manual labor, and hybrid systems.
Taxable Events: A taxable event occurs when an organization deploys an AMRAI model in a capacity that directly replaces a human worker, i.e., when a reduction in the human workforce correlates with increased productivity attributable to AMRAI systems.
Tax Structure:
• Displacement Tax: A flat or tiered tax based on the number of human workers replaced and their estimated lost wages. For example, for each worker displaced, the organization pays X% of the median wage of the replaced job per year.
• Productivity Premium Tax: A percentage of the additional profit or efficiency generated by AMRAI systems after displacement.
• Differential Regional Taxation: Tax rates adjust based on local employment conditions. Higher rates apply in areas with higher unemployment.
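The three tax components above amount to a simple calculation. A minimal sketch follows; the function name and all rate values (the "X%", the premium share, the unemployment baseline) are illustrative assumptions, since the proposal leaves them unspecified:

```python
# Hypothetical sketch of the AMRAI tax structure described above.
# All rates and names are illustrative assumptions, not part of the proposal.

def amrai_tax(workers_displaced: int,
              median_wage: float,
              extra_profit: float,
              local_unemployment: float,
              displacement_rate: float = 0.10,   # "X%" of median wage per displaced worker
              premium_rate: float = 0.05,        # share of AMRAI-attributable extra profit
              baseline_unemployment: float = 0.05) -> float:
    """Annual tax owed under the sketched framework."""
    # Displacement Tax: flat rate times the estimated lost wages.
    displacement = workers_displaced * displacement_rate * median_wage
    # Productivity Premium Tax: share of the additional profit after displacement.
    premium = premium_rate * extra_profit
    # Differential Regional Taxation: scale up where unemployment exceeds the baseline.
    regional_multiplier = max(1.0, local_unemployment / baseline_unemployment)
    return (displacement + premium) * regional_multiplier

# Example: 10 workers displaced at a $50,000 median wage, $1M extra profit,
# in a region with 10% unemployment (double the 5% baseline).
print(amrai_tax(10, 50_000, 1_000_000, 0.10))  # 200000.0
```

The regional multiplier is one way to read "higher rates apply in areas with higher unemployment"; a tiered schedule would work just as well.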
Redistribution Mechanism:
• Social Security Fund: Taxes feed directly into programs providing Universal Basic Income (UBI), unemployment benefits, or job retraining initiatives.
• Education & Retraining: Funds are used to upskill displaced workers into emerging fields.
Exemptions:
• Small Businesses: Exemptions for small entities where AMRAI adoption does not significantly harm the labor market.
• Co-Worker AI: AMRAI systems assisting but not replacing workers (e.g., augmented AI tools).
Transparency and Compliance:
• Mandatory Reporting: Companies must disclose workforce changes linked to AMRAI adoption and provide annual reporting of productivity gains attributable to AMRAI.
• AI Usage Registry: A public database tracks which sectors and companies implement AMRAI systems.
Encouraging Responsible AI Development:
• Tax Breaks for Ethical Deployment: Companies investing in job-sharing or human-AI collaboration can receive deductions.
• Regulations on Deployment: Guidelines to ensure AMRAI systems complement human work rather than fully replace it.
International Cooperation:
• Global AI Tax Treaty: Prevent companies from offshoring operations to avoid taxes by standardizing rules.
Challenges:
Measurement Issues: Attributing productivity gains to AMRAI versus other innovations can be difficult.
Regulatory Resistance: Pushback from businesses and lobbying groups.
Global Competition: Ensuring fairness across nations without stifling innovation.
In your opinion, which is the funniest language model? Could any of them make you genuinely laugh?
Puns are easy. I'm looking for something that can build up anticipation then surprise with something unexpected.
It would be fun to have a humor benchmark for LLMs. It could be based on voting.
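A voting-based humor benchmark could work like the chatbot-arena format: show voters two models' jokes side by side, collect "this one was funnier" votes, and rank the models with Elo. A minimal sketch, where the model names, the sample votes, and the K-factor are all placeholder assumptions:

```python
# Minimal Elo-style leaderboard built from pairwise humor votes.
# Model names, votes, and the K-factor are illustrative assumptions.

def update_elo(ratings, winner, loser, k=32):
    """Update ratings in place after one 'this joke was funnier' vote."""
    expected_win = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected_win)
    ratings[loser] -= k * (1 - expected_win)

ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
votes = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
for winner, loser in votes:
    update_elo(ratings, winner, loser)

# Sort models by rating, funniest first.
leaderboard = sorted(ratings, key=ratings.get, reverse=True)
print(leaderboard[0])  # model_a
```

Pairwise voting sidesteps the hard problem of scoring humor on an absolute scale; voters only have to say which of two jokes landed better.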
I ask because I now have nieces and nephews that are just entering school. And there are some subjects I know they're going to struggle with. I've heard a few stories about people using ChatGPT as a tutor of sorts. If I were to use it to help my nieces and nephews, how would I go about it?
Literally anytime I actually click the fucking hyperlink referencing the source where the AI says it found the information, the source says something COMPLETELY different and, too often, the total opposite of what the AI summarized.
Like, this shit is BEYOND useless and inaccurate, just straight up making up information and saying whatever it feels like with NOTHING to support it and ample evidence to the contrary at times.
I've been using Google for some psychology research lately while revisiting a section of my thesis project, and I had to figure out how to disable the AI Overview because it was distracting me with literal misinformation.
I'm not looking forward to the AI takeover of everything. It'll just be more shitty, prematurely rolled-out, poorly maintained electronic garbage that you're required to buy into to be part of modern society, like everything else that already surrounds me, such as my smartphone.
I hate it.
This is the final output after it gave me an answer I was pretty sure was wrong despite the source.
It started to respond with a paragraph, then typed out “Allow me a moment,” then rewrote the first paragraph twice, overwriting the original response.
Has anyone experienced this? It happened in real time, and I’ve never seen it mentioned.
Link to paper: https://arxiv.org/abs/2411.10109
For some reason posts containing Blue s*y links get auto-removed here with the label of "overly political content". But you can search Ethan's name on the site to find the post.
I've read both his books, and I found the last chapter of the second book particularly interesting. I believe his wife was challenging his ideas and forcing him to elaborate on the potential negative consequences of AGI arriving in 2029. She was saying that if brain-computer-interface technology is delayed, it could lead to mass joblessness for a decade or more if AGI can completely replace humans (e.g., AGI-level programmers means no more programmers). He didn't fully address the subject even then; he just said that he was confident it would happen, even though hypothetically it could be delayed by legal interventions or something. And he continued to press his analogy that these tools are extending our brains even now, since we can use things like ChatGPT and our phones to help us.
I want to know concretely whether he predicts a period of mass joblessness due to AGI around 2029 (e.g., AGI is better than humans in every way, therefore very few humans are actually employed). He's stated several times that 'sure, many jobs will be automated, but many more will be created', but it isn't clear whether those jobs will be created only after the singularity, and whether there will be a major disruption in the meantime.
Have you guys heard any interviews with him, or quotes, which might confirm or deny this possibility? IMO it is the most important and actionable detail of his predictions thus far (barring human extinction, which seems relatively unlikely). But I can't clearly discern what he believes will happen.
I’m not gonna say anything definitive because it’s way too early to speculate, so treat this as purely hypothetical: if LLMs really have been tapped out on the pretraining end, what do you think is the next paradigm to scale (potentially to AGI)? Do you think it’s something like what Yann LeCun is working on, trained primarily on video, or do you think it’s maybe just a somewhat different kind of LLM, like o1? Do you think it will use the transformer architecture, or maybe an entirely new/different one? Do you think it will take a really long time, or do you think the large investment in AI means it would arrive a lot sooner than most would expect?
I’m interested to hear your thoughts!