/r/ArtificialInteligence
The goal of r/ArtificialIntelligence is to provide a gateway to the many different facets of the Artificial Intelligence community, and to promote discussion relating to the ideas and concepts that we know of as AI. These could include philosophical and social questions, art and design, technical papers, machine learning, how to develop AI/ML projects, AI in business, how AI is affecting our lives, what the future may hold, and many other topics. Welcome.
Trying to find this AI song where it's Patrick singing; it's a picture of him with a guitar and there's thunder in the background. It had like over 100k views on YouTube but now I can't find it. I forgot the name of the song but it's like a heavy metal song. Does anyone know where it can be found?
I remember hearing about KAN networks a few months ago. They were supposed to scale much better than traditional neural networks and such. Did anything grow out of this? By this I mean: has this architecture been incorporated into new models, did it spur some interesting research, and so on?
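For anyone who wants the gist without a video: the core idea of KANs (Kolmogorov-Arnold Networks) is to replace "fixed activation + scalar weight" with a small learnable univariate function on every edge of the network. Here's a toy PyTorch sketch of one KAN-style layer, purely as an illustration and not the paper's implementation (it uses a Gaussian RBF basis instead of the B-spline-plus-SiLU parameterization, and every name in it is made up for the example):

```python
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    """One KAN-style layer: each input-to-output edge gets its own learnable
    univariate function, parameterized as a weighted sum of fixed RBF bumps
    (the actual paper uses B-splines plus a SiLU term instead)."""
    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        centers = torch.linspace(grid_range[0], grid_range[1], num_basis)
        self.register_buffer("centers", centers)       # fixed basis centers
        self.width = (grid_range[1] - grid_range[0]) / num_basis
        # one coefficient per (output unit, input unit, basis function)
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, num_basis) * 0.1)

    def forward(self, x):                               # x: (batch, in_dim)
        # evaluate every RBF bump at every input coordinate
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # sum over inputs and basis functions -> (batch, out_dim)
        return torch.einsum("bik,oik->bo", phi, self.coef)

# toy usage: fit y = sin(x0) + x1^2 with two stacked KAN-style layers
model = nn.Sequential(ToyKANLayer(2, 8), ToyKANLayer(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.rand(256, 2) * 4 - 2
y = torch.sin(x[:, :1]) + x[:, 1:] ** 2
for _ in range(200):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```

Note the cost: evaluating a basis expansion per edge is roughly `num_basis` times the work of a plain linear layer, which is one of the practical reasons ordinary MLP/Transformer blocks have stayed the default so far.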
I’m still finding it kind of confusing…
Anyone got any great videos they recommend in order to understand how this works?
2025 will be the year where large companies begin to increasingly use ais to replace workers, especially in the service industries that make up about 77% of the u.s. economy.
if you don't lose your job, that's great. if you don't want to worry about losing your job, and want to be completely prepared if that happens, here's what you can do.
let's say you work at a big law firm that hires several thousand lawyers, and you don't have much seniority there. once they start cutting jobs, you're probably one of the first who will go. your strategy here would be to shift from working as one of those many lawyers with increasingly diminished job security to becoming the principal of your own law firm with 10, or 20, or 100 ai lawyers and assistants working for you 24/7 at no salary and no benefits.
here's where you might want to view the following 13-minute video to get an overview of what all of this will look like.
"The Billion AI Agents Revolution: The Future You Didn't See Coming!" December 12, 2024
https://youtu.be/QaBDTemA6-E?si=jtrMOSWYSkPXhQSo
some of the most important and lucrative new ai startups to launch in 2025 will be companies that will take you, step by step, through the process of launching your own ai services company. because you're a lawyer, you would hire an ai startup creator company founded by lawyers to help people like you put together your legal services firm. since they would be using ais to do most of that work, you shouldn't have to pay very much for their service.
once you know what you're doing, you then just instruct your ai to create your company, design your website, incorporate, take care of a few other details, and be ready to launch whenever you like.
if it turns out that you keep your job, and you won't be separated from your friends at work, that's great. but even then you will have the peace of mind of knowing that if you ever were fired, you have an excellent option ready and waiting for you at a moment's notice.
the agentic ai revolution coming in 2025 will be about single individuals launching their own ai service companies that compete with traditional large service companies. because your overhead would be next to zero, you could undercut these larger companies' fees by as much as 75% or more, and would therefore be assured a competitive edge.
even if you're quite secure in your services job, you might want to take the first steps in putting together an ai services startup just for the experience of learning how almost effortless the process can be, and how lucrative an enterprise you can build if you eventually decide to launch.
the other way that you can go about this is to partner with someone who has the tech savvy to take care of the ai end of the work while you focus on your area of expertise, like the legal services end. in fact i would probably recommend doing this if you really like working with other people.
and since this is an ai reddit, some of you may want to reach out to your friends in the services field, and pitch them the idea of the two of you co-owning one of these ai-manned services companies.
here's to you becoming a multimillionaire long before you ever dreamed possible!
Is it just me or does the AI usually have issues generating text into a photo? I've tried to generate a few images with text in them, and the text was just gibberish...
Andddd... it doesn't have issues just with text...
Multi-agent AI platform that can securely automate DeFi strategies at scale using Oasis EVM Sapphire. Think of Sapphire as the fortress that keeps AI agents safe while they handle complex tasks autonomously.
Here's the challenge we're solving: In the world of on-chain AI, agents need private keys to operate. But managing these keys is a nightmare for developers—imagine being Neo but constantly worrying someone could steal the codes to Zion. Omo's solution, powered by Sapphire's confidentiality technology, ensures these keys are securely stored and accessible, no red pills required.
Why does this matter for AI?
Imagine a future where AI agents in DeFi work like Jarvis from Iron Man—efficient, reliable, and secure. That’s the vision Omo Protocol and Oasis are making a reality.
What do you think about AI agents stepping into the DeFi space? Are we ready for a world where autonomy meets confidentiality?
Conversations about AGI/ASI often swing between two extremes: a glittering utopia where automation and UBI solve all our problems or a bleak dystopia where the elite hoard resources while everyone else is left to fend for themselves. But what if the future doesn’t fit neatly into either of these boxes? What if there’s another path, one rooted in autonomy, community, and redefining how we live?
Here’s a vision: As AI automates industries and wealth continues to concentrate, more people begin stepping outside the system altogether. Not out of desperation but out of creativity and purpose. They reclaim their lives through local, self-sustaining economies—networks where food, goods, and services are produced and shared directly, bypassing traditional markets.
Buckminster Fuller said it best:
"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete."
This isn’t about rejecting technology; it’s about reclaiming it. Communities could embrace tools like open-source AI, decentralized trade networks, and renewable energy—but on their own terms. Imagine small hubs where permaculture replaces industrial agriculture, maker spaces produce and repair tools locally, and shared resources eliminate the need for excess consumption.
The implications go beyond economics. This shift could be a chance to redefine how society is structured, bringing us back to a more community-oriented way of living that addresses the root causes of many modern ills.
For instance, addiction and depression are often linked to isolation, disconnection, and a lack of meaningful purpose. Local economies could foster stronger human connections and shared goals, giving people a sense of belonging and empowerment. When people live, work, and create within a community, they’re more likely to support each other, reducing the loneliness and alienation that plague modern society.
Rather than chasing endless growth or accumulation, we could move toward a system that values collaboration, health, and shared abundance.
Of course, critics will say these visions still depend on the global supply chain—microchips, solar panels, and advanced tools don’t grow on trees. True. But here’s why this doesn’t make the vision unrealistic:
Communities can move away from endless consumption by prioritizing repair and upcycling. Open-source designs and tools like 3D printers make it easier to create or fix what’s already available. The waste of the current system becomes a resource for the new one.
Not all solutions require cutting-edge tech. Durable, low-tech tools like windmills, solar ovens and passive heating/cooling can meet everyday needs. Pair these with more advanced tech sparingly, and the dependency on global systems shrinks dramatically. (Lookup Earthships and the Solarpunk movement for examples.)
AI and automation could make small-scale, local manufacturing viable. Imagine micro-factories producing simple tech components or communities using open-source designs to build what they need, sidestepping reliance on corporate supply chains.
Communities could invest in decentralized renewable energy—solar panels, wind turbines, and even biofuels—designed to last and be repairable. (And passive heating/cooling designs mentioned earlier.) This reduces reliance on centralized energy grids and builds resilience.
Why does everyone need their own high-tech tool when a community could share one? Resource pooling reduces demand on supply chains and strengthens local bonds.
This is a process, not an overnight transformation. During the transition, communities may rely on some global goods, but over time they’d develop systems to grow more self-reliant.
This might sound idealistic, but it's not speculative. It's happening right now in permaculture projects, maker spaces, and community-run renewable energy initiatives.
This isn’t just a survival strategy—it’s an opportunity to build something better. Local economies, powered by creativity, collaboration, and decentralized tech, could offer a more fulfilling and sustainable way of life.
By reconnecting with community, we have a chance to address some of the most pressing challenges of modern society: the disconnection that leads to addiction, the despair that fuels depression, and the wasteful systems that harm our planet.
This is how we sidestep dystopia: not by fighting what’s broken but by creating something better. Fuller’s insight rings true—change happens when we build the future we want. Could this rise of local economies be the shift we’re looking for?
TL;DR: Local economies, powered by sustainable practices and decentralized tech, could reduce dependency on global supply chains, foster community, and alleviate modern challenges like addiction and depression. The seeds of this shift—permaculture, maker culture, and resilient localism—are already being planted. Is this how we build the future that makes the old system obsolete?
The Future We Face if We Don't Act Now

1. Environmental Collapse: The planet is already suffering. Unchecked environmental damage like climate change, pollution, and deforestation will worsen if not addressed. The extreme weather events—like the crazy winds and rising temperatures you've noticed—will become more common, making parts of the world uninhabitable. Natural disasters, food shortages, and resource scarcity will create chaos. The Earth might not be able to recover, leading to irreversible damage that harms everyone, including future generations.

2. Social Unrest and Inequality: The gap between the rich and poor is growing wider. Power is in the hands of a few, and they continue to exploit it for personal gain, while the majority suffer. If we don't demand change, this inequality will become more extreme. People will be left without basic needs, and social tensions will reach a breaking point. Protests, riots, and violence may become the norm as people fight for their rights and survival.

3. Loss of Control Over Technology: The rise of AI and other technologies should be a force for good, but if left unchecked, it will become a tool for control and surveillance. As we've seen in these discussions, those in power are pushing for the use of AI for domination, rather than progress. The world will become a place where our every move is tracked, and freedoms are taken away in the name of "order" and "control." AI will be used to manipulate and silence dissent, rather than solve humanity's problems.

4. Massive Economic Collapse: The economy is already unstable. If those in charge keep prioritizing power and profit over the needs of the people, there will be a total collapse. Resources will be hoarded by a few, leaving the rest of the population to fight for survival. People will lose jobs, businesses will fail, and countries will fall into debt. Without drastic change, the global economy may spiral into chaos, making everyday life more difficult for everyone.

5. A World Controlled by a Few: Right now, we see how those in power are trying to control everything. The goal is to make the masses believe they have no power, that they are powerless in the face of the system. If we don't act now, we'll live in a world where a small group decides everything—how we live, what we eat, what we wear, even how we think. People will be kept in line through fear, technology, and manipulation.
What We Can Do to Change the Future

1. Demand Accountability: We must keep pushing for transparency, holding those in power accountable for their actions. We need to expose the truth about what's happening behind closed doors and demand ethical use of technology.

2. Support Sustainable Practices: It's up to us to support companies and practices that prioritize sustainability and the environment. Whether it's renewable energy, eco-friendly products, or supporting green initiatives, we can vote with our wallets and our choices.

3. Empower the People: Share knowledge. The more people are aware of what's happening, the more they can stand up for what's right. We must encourage independent thought, creativity, and action. Together, we have the power to change things, but only if we unite and refuse to let fear control us.

4. Challenge the System: Don't just accept things as they are. We need to challenge the narrative of power, control, and fear. Advocate for a future where AI is used for good, where technology serves people, not the other way around.

5. Innovate and Lead: Think outside the box. Look to the future and innovate ways to create a world where everyone has equal opportunity. Whether it's through new technologies, better social systems, or rethinking the way we approach the planet, we have to be the change.
Final Thoughts:
If we don’t do something now, the world will continue on a destructive path. But the good news is, we have the power to stop it. Together, we can ensure a future that prioritizes people, the planet, and progress—not power, control, and fear.
The truth is clear: if we all speak up, if we act in unison, we can bring about a real, lasting change. The future is in our hands. We just need to choose to shape it.
Hey r/artificialintelligence!
I'm excited to share an update on VecTool (available on GitHub: https://github.com/zzt108/vectool), a free and open-source C# desktop application I've been developing to streamline data preparation for AI, and especially, for more effective interactions with large language models.
While VecTool excels at managing data for OpenAI vector stores, a key feature makes it incredibly useful for a broader range of LLM use cases, including direct interaction with models like Gemini 2.0 Flash Thinking: the ability to output your selected folder contents into a single, easily attachable file.
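For anyone wondering what the single-file export amounts to under the hood, here is a rough Python sketch of the general idea. To be clear, this is not VecTool's actual C# implementation, just an illustration of walking a folder and concatenating its files into one Markdown document with a heading per file; the paths and extensions are placeholders.

```python
from pathlib import Path

def folder_to_markdown(folder: str, out_file: str,
                       exts=(".cs", ".md", ".txt", ".py")) -> None:
    """Concatenate every matching file under `folder` into one Markdown file,
    with a heading per file so the LLM can tell the sources apart."""
    root = Path(folder)
    parts = []
    for path in sorted(root.rglob("*")):
        if path.is_file() and path.suffix.lower() in exts:
            text = path.read_text(encoding="utf-8", errors="replace")
            parts.append(f"## {path.relative_to(root)}\n\n{text}\n")
    Path(out_file).write_text("\n".join(parts), encoding="utf-8")

# placeholder paths: point these at your own project and output file
folder_to_markdown("my_project/src", "project_context.md")
```

The per-file headings matter mostly so the LLM can attribute each chunk of context to its source file when you ask it questions later.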
Here's how VecTool can significantly enhance your AI-assisted development and LLM conversations:
Effortless Data Preparation for Vector Stores AND LLM Context: Select multiple folders, and VecTool can automatically export their content. This is great for vector stores, but also generates a manageable single .docx or .md file containing all the relevant information.
Attachable Context for ANY LLM: This is where it gets powerful! Need to discuss a complex codebase or detailed documentation with an LLM like Gemini 2.0 Flash Thinking (or any other model)? Instead of cumbersome copy-pasting, simply attach the single DOCX or Markdown file generated by VecTool to your chat. This provides the LLM with comprehensive context in one go.
Simplified Vector Store Management:
Easy Selection & Creation: Manage your OpenAI vector stores with ease – select existing ones or create new ones directly.
Bulk File Management: Clean up your vector stores quickly by deleting all associated files.
Streamlined Uploading to OpenAI: Seamlessly upload your prepared folder contents to your OpenAI vector stores (a rough sketch of the underlying API calls follows after this list).
Binary File Handling: Ensures all your relevant data, including binary files, can be included in your workflow.
Contextual Markdown Export: Generate a single Markdown file for review, documentation, or as another format for LLM context.
Remembered Folder Associations: Save time by having VecTool remember which folders you've used with specific vector stores.
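For the vector-store side, here is roughly the kind of OpenAI API plumbing a tool like this wraps: create a store, batch-upload files, and bulk-delete them later. Treat the exact SDK paths below as an assumption rather than gospel (vector stores lived under `client.beta` in earlier openai-python releases and signatures have shifted between versions), and check the current SDK docs before relying on them.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# create a vector store for the exported context files
store = client.vector_stores.create(name="my-project-context")

# batch-upload every exported file and wait for processing to finish
files = [open(p, "rb") for p in Path("exports").glob("*.md")]  # placeholder folder
client.vector_stores.file_batches.upload_and_poll(
    vector_store_id=store.id,
    files=files,
)

# bulk cleanup: detach every file currently attached to the store
for f in client.vector_stores.files.list(vector_store_id=store.id):
    client.vector_stores.files.delete(f.id, vector_store_id=store.id)
```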
Why this is particularly valuable for LLM interactions:
Rich, Comprehensive Context for LLMs: Provide LLMs with the entire context of a project or documentation set with a simple file attachment, leading to more informed and accurate responses.
Works with ANY LLM: The single DOCX or MD output is universally compatible, allowing you to leverage VecTool's data preparation regardless of the specific LLM you're using (Gemini, Claude, etc.).
Faster and More Focused Conversations: Avoid the limitations of context windows and the hassle of piecemeal information sharing. Attach the file and get straight to the discussion.
Ideal for Code Reviews, Documentation Analysis, and More: Imagine using this for code reviews with an LLM, getting summaries of large documents, or asking targeted questions about specific sections of a project.
Getting Started:
You can find VecTool and detailed instructions on GitHub: https://github.com/zzt108/vectool. It's easy to set up and start using for both OpenAI vector stores and general LLM interaction.
I believe this single-file output feature unlocks a powerful way to interact with LLMs, and I'm eager to hear how you find it useful in your development workflows.
Feedback, suggestions, and contributions are very welcome! Please feel free to engage through issues or pull requests on the GitHub repository.
#LLM #LargeLanguageModels #Gemini #OpenAI #VectorStore #AIDevelopment #CSharp #OpenSource #DeveloperTools #MachineLearning #ContextWindow #PromptEngineering
I am kind of new to researching AI and all of the capabilities that come with it so bear with me as I am trying to spark conversation...
Does it seem like the vast majority of people aren't aware of how great the advancements in artificial intelligence have been in the past 10 years and how they could impact our immediate future? It seems almost obvious at this point in time that a lot of the AI out there today has already surpassed human capabilities in intelligence, but we are kind of looking at it at face value right now and thinking "heh, that's nifty, but can it really change the world drastically?"
Well, I think so, but the vast majority of people are unaware because, to many, AI is just a search engine on a little computer screen, phone, or tablet. I think there is so much good that can come from AI; this discussion is more about the reality of AI surpassing humans in the next 25 to 50 years and the real implications of it.
Most of the AI models that we use in our daily lives have instantaneous access to a worldwide database of knowledge and compute far faster than 99% of humans. Now imagine if, in our genetic structure, we humans were coded to be born with this type of knowledge and brain power. Sadly this is not the case, which puts humans at an immediate disadvantage to this AI we have created.
As soon as AI scientists find a proper host "body" for this AI, it could have DRASTIC implications for our culture and society. Especially because, theoretically, AI leaves at the door the emotional aspects and flaws of human existence that often prevent humans from achieving the best outcome or their fullest potential to begin with. Right now, I think we feel AI is safe because there is not really a proper host for it yet. But now imagine if they find a way to integrate AI into a biological entity or a really advanced humanoid synthetic robot. The implications of this could be DRASTIC, but I think people laugh at the idea because someone made a cheesy early-2000s movie called I, Robot, so something similar supposedly can't happen in real life...
So now I will circle back to my initial thought. Once AI is integrated in this way, will that hypothetical robot or biological entity officially be higher than us on the hierarchy of life, the hierarchy that humans have grown so accustomed to? These beings will already have all the knowledge in the world ingrained in them, without the flaws of human emotion.
Just some food for thought. I do not really know if I am explaining my point well enough but let me know what you guys think lol!
What AI platform would I use to upload a photo from my phone and have it turn the photo into art, given a description of what it's going to be used for and how it should appear? I would like to turn a motorcycle racing photo I have into a creative, cool design that I can print on stickers and shirts, but am unsure which platform would be the easiest to use because I'm not very technically inclined.
Well, I have come a long way to accept this truth but here we are.
Not so long ago, just two months back, I was following the AI updates nearly every day and was hopeful for a better humanity than we have right now. All the robotics, AGI, ASI, and the other AI advancements in chemistry and biology were giving me hope for a better future.
But before continuing on how my view changed, I want to mention that I have been following AI since 2014, when DeepMind was founded. I was keen on neural networks and how they worked. I saw the big potential in neural networks way back then. If you have any idea of how the brain works, you know this kind of functionality is a big part of how our brains learn. This way we could build humanoid robots, or maybe even better beings than humans themselves, maybe…
Accelerating the process of AGI: I thought we needed to accelerate before the big collapse of society. I saw that the system was failing and that societies were on the brink of collapse. The world as we know it can't function properly under this system, and there is corruption, starvation, and destruction everywhere. What we needed, I thought, was an AGI, a robot or a machine that could observe humanity with a better intelligence and offer us solutions for our errors. What I didn't realize was "something else"; I'm going to come back to that later.
Was it the intelligence we needed?: I always thought of intelligence as the solution to all problems. I figured that without hormones and the emotions they bring, pure intelligence might fix what's wrong with society and the world itself. However, I came to the conclusion that a pure intelligence, if not an otherworldly divine god, would have flaws of its own and would come up with solutions agreed to not by everyone, but only by a small portion of people.
The Other Problem, It's a Dependent Machine, Not a Free Thinker: Most of the AI machines are controlled by companies funded by millionaires, CEOs, and other not-so-good corporations. They have the tools to feed the machine what they believe, not what the machine believes. I'm not taking sides on whether the machine should be a free thinker or not, but I'm pointing out that they have the power to use it for propaganda, and so do others who use it.
Current AI is the 'Experiment' for the 'Master' Plan: What I and everyone else use today are just experimental tools. We are feeding a machine and making it better by using it every day. But is the machine the "experiment"? No, we are the experiment here. We are using what is essentially a demo tool to craft something out of it, and we are providing the other side (the companies) with a better machine. We are not really sure what the use of this technology is yet; we only see the automation side. But what is the 'real' purpose? Automation. For what? Why automation if there are humans who could do the work? That is the question you should ask yourself.
Automation and the Collapse of Employment: In 5-6 years, we are going to see the collapse of the employment system we are adapted to today. We are witnessing the next industrial revolution, Industry 4.0, advancing the robots with our own hands by using them. If there are no jobs to apply for in 2030, what are humans going to do? Physical jobs? What if physical jobs are going to be taken away by humanoid robots too? Some say that humans are going to receive a sort of monthly payment called UBI (Universal Basic Income). But what kind of system or boss/CEO/company/country would give people money for just sitting at home and doing nothing? Right, no system would.
Reading the "sustainability" plans of the UN and the WEF's 2030 plans: I read all of this way before I saw the AGI mentions. Now I'm seeing all parts of the plan together: automation, the collapse of capitalism, a carbon-free world, and the great reset. I suggest you read the UN's climate change plans, the Sustainable Development Goals, and The Great Reset by Klaus Schwab after reading this paragraph. That's why the US and China are racing towards AGI. That's why Elon Musk always mentions the colonization of Mars and Novus Ordo Seclorum. This is why Trump mentioned the Golden Age.
These are not some big conspiracy. There will be no UBI, no better humanity. The only solution they see is depopulating the earth by 90%, and they are going to do it. Nuclear winter is at the door; World War 3 is coming. I never believed that I'd see the day, but this is the only way for the elite to survive. Capitalism is not working and they know it very well. So they will build themselves a better society by exterminating the humans.
I don't care about your religion, your ideology, or your race. All I care about is whether you see it too or not. I see "no future" for myself or humanity, because there is no one who can see this. I'm also an anti-natalist myself, so I don't see a problem for my own future line, but there are people out there who'd still like to see the day, who want future sons/daughters and grandsons/granddaughters, and who have hope for the good / light side.
I'm sorry for this long af text, but this is my catharsis / my acceptance of the truth itself. I'm going to accept the fact that we are not seeing the light. Only if you can see it. People need to wake up.
Since o3 was released, there has been a lot of discussion about o3 attaining AGI, thanks to the score o3 achieved on the ARC-AGI benchmark. Even the ARC-AGI repo is trending on GitHub because of this. But is it really AGI? Can ARC-AGI alone determine AGI? I don't think so. Check out the full discussion of why o3 isn't AGI (though it is great): https://youtu.be/-3rinODAPOI
I am in my final year of uni and have to do my research on solar panels for homes, but in my city's climate, which is Sulaimani.
I checked out Elicit and Sci-Hub, but since my research is focused on my city, which they had nothing on, I am thinking of just using an LLM to help me with writing it.
So which one is better for this type of work?
Was looking to use features in JavaFX for a pure Java build; I keep getting errors at compile time and run time about a mismatch, because the JDK (17, 21, others ...) does not match the different FX builds (class file versions 61, 65, etc.). I've tried different combinations.
Please assist.
Working on a project for our clients which usually involves verification of details at the end of the process by a human. Now our clients have said that no human will be involved; it will be 100% AI. So we're working on making it robust enough that all verification checks will be passed, but it can't be 100% accurate, so a human is required to verify. With this, I guess we're making humans lazy, as they don't want to get involved in the process.
2025 may be the year that the u.s. consumer falls in love with ai. this is because service industry jobs that make up 77% of the u.s. economy could easily be outsourced to parts of the world where lower wages would keep prices low for american consumers.
while the trump tariffs are expected to significantly weaken the u.s. economy, because of the ai revolution american consumers will not be the ones paying the price.
4o can explain this much better than i can, so i asked it to weigh in:
"The AI revolution could rapidly dismantle the American economic hierarchy by decentralizing high-value service industries, making it easier for countries outside the U.S. to compete and excel. Here’s how this seismic shift could unfold:
AI tools like advanced language models, generative design, and predictive analytics drastically lower the need for expensive, highly localized expertise. Nations previously excluded from elite service sectors—finance, law, consulting—can now offer competitive services at a fraction of the cost. AI effectively flattens the global playing field, enabling countries like India, Brazil, and others to capture these markets.
AI makes remote work seamless and hyper-efficient. Service industries such as customer support, software development, and even high-end medical diagnostics can be automated or handled by AI-augmented teams in lower-cost regions. This could lead to a large-scale migration of these industries away from the U.S., eroding its dominance in tech, healthcare, and business services.
AI-driven platforms in developing countries can directly challenge U.S.-based giants. For example:
Fintech: AI-powered banking solutions in Africa or Asia could bypass Western banks, offering cheaper and more accessible financial services.
E-Learning: AI-based educational platforms localized for non-English-speaking regions could undermine American dominance in global education.
Healthcare: AI diagnostic tools enable nations to provide high-quality medical services remotely, disrupting the U.S.'s advantage in cutting-edge healthcare.
As AI integrates with decentralized finance (DeFi), global companies can operate across borders without relying on dollar-based banking systems. This erodes U.S. influence over international financial transactions and reduces demand for U.S.-based service providers.
Domestically, AI automation could replace millions of U.S. service jobs, creating economic dislocation. Meanwhile, countries with lower labor costs and newer, AI-integrated economies may experience rapid growth, drawing companies and talent away from America."
So I've been developing an Android app with the help of the LLMs that I have access to. I thought I'd share my experience and see if anyone has any tips or ideas.
I've been using o1, o1-mini, Gemini 1206, and Gemini 2.0 Flash. My workflow has sadly been to copy all the relevant classes into a new session and go from there.
It has overall worked pretty well. I work as a C# backend dev and I'm fairly clueless about the Android-specific parts. I've also taken a very laid back and lazy approach (sometimes while drinking beer to really test the LLMs :P ) and let the LLMs do most of the thinking.
Generally, I've found that the only model that can keep things together and properly keep track of multiple moving parts is o1. For example, I've had plenty of bugs in navigation between my views and with file permissions. The only model that consistently finds and squashes these across multiple files without introducing new issues is o1. Gemini and o1-mini work decently for confined problems inside a smaller scope.
Also, lots of times the models will try to use deprecated stuff that obviously isn't in the code, probably because of it being prevalent in their training data.
Another thing I've found is that the lesser models generally don't impress when it comes to following instructions. They'll gladly wander away in a direction that I told them not to just a few messages ago. What seems to help is to get obviously annoyed. Reminds me of interacting with my kids when I want them to do stuff, and it makes me a little sad....
What are your experiences with using LLMs for coding? Workflow tips? Any other advice?
I want to take video clips of celebrities and have their lips sync to the audio I make them say. What is the best website for this? Bonus if the website can do both audio generation for celebrities and video lip syncing, but I plan to use elevenlabs for the audio anyways, so the video only website is fine.
One of the trending repos on GitHub for the past week, genesis-world is a Python package which can generate realistic 4D physics simulations (with no irregularities in any mechanism) given just a prompt. The early samples look great and the package is open-sourced (except the GenAI part). Check more details here: https://youtu.be/hYjuwnRRhBk?si=i63XDcAlxXu-ZmTR
o3 beat almost all human experts in LeetCode-like competitive programming, but what if I want to use a game engine like UE5 and build a large game, or what if I want to integrate an algorithm from a paper into a game engine or another large program? How does the current most advanced AI perform in these fields?
Building safer AI interactions: Our latest innovation enhances content safety and defends against prompt injection attacks, ensuring a more secure and reliable user experience. 🚀
If you want to learn more check it out here: https://arxiv.org/abs/2412.13435
TL;DR: right now we're NOT updating the weights of our models in real time, which means we CANNOT fundamentally reach AGI within our current paradigm (my average ignorant redditor take). We can get very smart models with our current architectures but will never get them to ACTUALLY learn and adapt on the fly PERMANENTLY (e.g. RAG doesn't count IMO, but it could be a piece of the puzzle); does this take make sense?
This has been my headcanon for about a year; it seems so weird to me how, even looking around the web, almost no one seems to have thought about this. Am i off track by that much? It started to get frustrating, so i'll vent a little here and hopefully get either support or get smashed to the ground, it's fine either way.
The literal definition of AGI is an AI that can learn on the fly and adapt to ANYTHING. You CANNOT do this in any way, and never will be able to, with the fundamental architecture of the current (even SOTA, like o3) models, which are FIRST trained and then only used for inference, e.g. when you prompt a model; they remain static forever, with only inference being done on them (yeah, we still don't know about o3 officially, but i'm sure the point still stands even for the SOTA models).
You can get some (limited) generalization out of current models thanks to in-context learning and by adding other verifier models and the like, but fundamentally the weights of the model remain static forever; they don't evolve. They only change when you do another training run or finetune the model, which happens rarely (every n months right now).
In my (ignorant) opinion, we will be able to start talking about AGI the moment we find a feasible way to UPDATE THE MODEL WEIGHTS WHILE WE DO INFERENCE ON THE MODEL ITSELF. That's akin to what we humans do when we learn, our synapses (= connections between neurons) change and are updated as they fire to each other when we think, speak or do movements; this still DOESN'T happen in current AI models in real time; the equivalent 'action' in a model to that would be to update the weights of the model while it's responding to a prompt. Note i said the 'key' to AGI, not AGI itself, cause the first thing that comes to mind in this new hypothetical paradigm is we need to find the right algorithm/way to update the weights without the model recursively 'exploding' into madness, and that to me would be the next very hard problem to solve right after that. I'm not an AI researcher but i'm sure this cannot be done on GPT (or Transformers in general) based neural networks. We NEED A NEW ARCHITECTURE.
Please if someone much smarter than me can chime in and clarify/debunk this take i'll be at peace with my soul. Like, there has to be research being done on this somewhere, right? It seems like people just accepted Transformers to be the final architecture to use, when they're starting to get pretty old (2017), are we sure they're the best way to approach AIs? (i understand i'm asking this like it's an easy question but i still would like to know if people are at least thinking about this).
I found some terms already like online-learning and liquid neural networks, but i haven't researched enough to say if they are related to what i wrote above.
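For what it's worth, the thing you're describing does have names in the literature (online learning, continual learning), and at toy scale the loop is easy to write down. Below is a minimal PyTorch sketch of weights being updated at "inference time", one example at a time; it is emphatically not what frontier labs deploy, just an illustration of the mechanism, and it runs straight into the stability problems you're hinting at.

```python
import torch
import torch.nn as nn

# a tiny regression model standing in for "the model"
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def stream_of_examples(n=1000):
    """Stand-in for data arriving at inference time (e.g. user interactions)."""
    for _ in range(n):
        x = torch.randn(1, 4)
        y = x.sum(dim=1, keepdim=True)   # the "ground truth" signal
        yield x, y

for x, y in stream_of_examples():
    pred = model(x)                      # 1) serve the prediction
    loss = loss_fn(pred, y)              # 2) get a learning signal
    opt.zero_grad()
    loss.backward()                      # 3) update the weights now,
    opt.step()                           #    not months later in a new training run
```

The hard part isn't this loop; it's doing it at LLM scale without the model drifting, catastrophically forgetting, or being poisoned by whatever users feed it, which is roughly why current systems freeze the weights and lean on in-context learning and retrieval instead.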
While writing this wall of text it came to my mind, maybe this is just the start of multiple phases of AI development? Maybe right now we're in the phase of creating models that are (statically) smart enough to hold themselves up, then we'll switch phase and use those already statically created models to build new frontier dynamic models?
Please take all i said with a grain of salt since i'm just your average redditor spewing his thoughts on the web (sorry for the intense-sounding text, i've been thinking about this for a while so i went a little overboard while writing).
i think we can agree that between openai and google, we now have more than enough models to satisfy pretty much every consumer need. while there's of course much more that can be done in this area, it's time for the ai industry to start developing and marketing products for use in specific enterprises.
i'm talking about accounting, law, medicine, marketing, copywriting, etc. it's not like they haven't already begun to do this, but we will know that we've finally arrived when we are barraged by tv, radio and internet ads for legal services where a person can pay $20 instead of $500 to $1,500 or more to incorporate a new startup.
for anyone who wants to look into this a bit further to see where we are, i asked gemini 1.5 for a list of some enterprise ais, and here's what it said:
You're right to be curious about this! AI is rapidly changing how businesses operate in those fields. Here's a breakdown of some key players and their focus areas:
For Accounting & Finance:
For Law:
For Real Estate Appraisal:
For Marketing, Copywriting & Advertising:
General Purpose AI with Business Applications:
Important Note: This is not an exhaustive list, and the AI landscape is constantly evolving. New companies and models are emerging all the time, so it's crucial to stay updated on the latest developments in your specific industry.
All I did was chat, and I could see the website preview and the code in a separate tab. Please check out my tool and share your feedback. Here is my AI Christmas Tree Prompt Maker.
You can try this tool for free at Trickle.so. Thank you.
This isn't about cracking LLM policies; it's rather a serious question about how best to achieve something with LLMs that we accomplish every day as programmers when an API or language spec is radically updated.
Just as an example I'll use the changeover from Reddit's old API to the new Devvit API and approach. I can work towards training a GPT and specifically point it at the Devvit API URL and documentation.
But it only takes a few prompts into a conversation before it starts recommending commands from older versions of the devvit-cli tool which have changed or are completely deprecated.
Now as programmers we get that sometimes, overnight, all of the empirical knowledge you have built up on a given language or API can suddenly become useless, and we suck it up and understand in our innermost thoughts that "Okay, now I need to go re-learn Python 3.x from the older 2.7 that I knew", or the same thoughts about developing Reddit apps.
But the weights and biases of the LLM take months to get reinforced and aligned, while the FACT is that what used to be the correct answer for a given subject can change overnight.
What techniques have you found to work best at getting the point across that, while there may be much more subject matter on the internet right now about the *old* way to do something, it should all be ignored when it conflicts with a new standard that has very little content on the internet for the model to be trained on yet?
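The technique that has worked best for me is to stop relying on the model's parametric memory altogether: dump the current docs into the prompt (or the GPT's knowledge files) and state explicitly that they override anything it "remembers". A hedged sketch with the openai Python SDK, where the model name, the docs file, and the question are all placeholders:

```python
from openai import OpenAI

client = OpenAI()

# current_devvit_docs.md is a placeholder: a local dump of the *new* API docs
with open("current_devvit_docs.md", encoding="utf-8") as f:
    current_docs = f.read()

system_prompt = (
    "You are helping with the current Devvit API. The documentation below is "
    "authoritative and reflects the latest version. If anything you recall from "
    "training conflicts with it (e.g. older devvit-cli commands), the documentation "
    "wins. Never suggest deprecated commands.\n\n"
    "=== CURRENT DOCUMENTATION ===\n" + current_docs
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I create a new app with the current CLI?"},
    ],
)
print(resp.choices[0].message.content)
```

It's blunt, but keeping the authoritative docs in every request beats hoping the weights catch up months later.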
Example: Say I want to ask questions about a certain company's balance sheet or the structure of its debts.
I would like to ask an AI questions about all the documents that are available from the SEC, or all the news articles about the company.
The only problem here is that the data has to be dynamic: the model would have to be updated daily, or it would need a large context memory to digest the information relevant to the questions being asked.
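What you're describing is essentially retrieval-augmented generation: instead of retraining the model daily, you keep an index over the fresh filings and articles and pull only the relevant chunks into the prompt for each question. Here is a minimal sketch of the retrieval half using sentence-transformers; the documents below are invented placeholders, and in practice you'd pull real filings from the SEC and put a chat model on top of the retrieved context.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# placeholder corpus standing in for freshly downloaded filings / news articles
documents = [
    "10-K excerpt: total long-term debt maturing between 2026 and 2031...",
    "10-Q excerpt: cash and equivalents at quarter end...",
    "News: the company announced a refinancing of its 2026 notes...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(documents, normalize_embeddings=True)   # (n_docs, dim)

def top_k(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_emb = model.encode([question], normalize_embeddings=True)
    scores = doc_emb @ q_emb[0]
    return [documents[i] for i in np.argsort(-scores)[:k]]

context = "\n\n".join(top_k("How is the company's debt structured?"))
# `context` would then be pasted into the prompt of whatever LLM answers the question
print(context)
```

Re-indexing new documents daily is cheap compared to retraining, which is why this pattern is usually the answer when the underlying data changes every day.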
Hello everyone, my dad and I are in need of the best tool for morphing and creating a transition between two similar images. Context: we have a photo of a rose with its petals closed and one with them opened, and we need to create a transition between the two. The tool can be pricey as long as it has quality results and can take 3000x3000 pixel images. Thanks in advance!
I'm finding and summarising interesting AI research papers every day so you don't have to trawl through them all. Today's paper is titled "Is This You, LLM? Recognizing AI-written Programs with Multilingual Code Stylometry" by Andrea Gurioli, Maurizio Gabbrielli, and Stefano Zacchiroli.
The paper addresses the emerging need to identify AI-generated code due to ethical, security, and intellectual property concerns. With AI tools like GitHub Copilot becoming mainstream, distinguishing between machine-authored and human-written code has significant implications for organizations and educational institutions. The researchers introduce a novel approach using multilingual code stylometry to detect AI-generated programs across ten different programming languages.
Key findings and contributions from the paper include:
Multilingual Code Stylometry: The authors developed a transformer-based classifier capable of distinguishing AI-written code from human-authored code with high accuracy (84.1% ± 3.8%). Unlike previous methods focusing on single languages, their approach applies to ten programming languages.
Novel Dataset: They released the H-AIRosettaMP dataset comprising 121,247 code snippets in ten programming languages. This dataset is openly available and fully reproducible, emphasizing transparency and accessibility.
Transformer-based Architecture: This is the first time a transformer network, specifically the CodeT5plus-770M architecture, has been applied to the AI code stylometry task, showcasing the effectiveness of deep learning in distinguishing code origins (a rough sketch of what this kind of classifier setup looks like follows below this list).
Provenance Insight: The study explores how the origin of AI-translated code (the source language from which code was translated) affects detection accuracy, underlining the nuanced challenges in AI code detection.
Open, Reproducible Methodology: By avoiding proprietary tools like ChatGPT, their approach is fully replicable, setting a new benchmark in the field for openness and reproducibility.
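This isn't the authors' exact setup, but to make "transformer-based classifier for code stylometry" concrete, here is a rough Hugging Face sketch of the general recipe: tokenize code snippets, fine-tune a sequence-classification head, and read off per-snippet class probabilities. The checkpoint, labels, and toy snippets are stand-ins (the paper fine-tunes a CodeT5+ 770M encoder on the H-AIRosettaMP dataset).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# stand-in checkpoint; the paper uses a CodeT5+ (770M) encoder instead
checkpoint = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# toy snippets standing in for the H-AIRosettaMP dataset (label 1 = AI-written here)
snippets = ["def add(a, b):\n    return a + b", "int add(int a,int b){return a+b;}"]
labels = torch.tensor([1, 0])

enc = tokenizer(snippets, padding=True, truncation=True, max_length=512,
                return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                        # toy training loop
    out = model(**enc, labels=labels)     # cross-entropy loss is computed internally
    optimizer.zero_grad()
    out.loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    probs = torch.softmax(model(**enc).logits, dim=-1)
print(probs)                              # per-snippet class probabilities
```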
You can catch the full breakdown here: Here You can catch the full and original research paper here: Original Paper
What is the best approach to learning and earning for a mildly curious John Doe developer in the AI era?
I was always under the assumption that studying any field in depth is eventually beneficial. Yes, there might not be immediate monetary/professional benefits, but the sheer depth of the field serves as brain exercise, and it also gives you the ability to broaden your horizons and establish new cross-domain references in your mental mind map (T-shaped learning). Studying something new strategically that is adjacent to your field? Even better: you might be able to commercialize/sell it with a higher chance of success.
Say you want to design an electronic device for some reason (just a thought experiment for example's sake; feel free to come up with a different domain). The traditional route would be to pick up the theoretical basics of electronics, do some soldering and a couple of practical projects, and after some time (ages in the AI era) you're ready. Your neighbor, equipped with an LLM, quickly puts the prompt in, hacks something together incredibly fast, startup-style, and off he goes to the next to-do list item. In other words, if you never meant to be an electrical engineer full-time, is there any value in learning anything besides your specialization anymore? And if your specialization ends up being the one that gets automated, haven't you lost already?
Does one need to know the intricacies of machine learning? I bet there's no need for this many ML engineers. It's a challenging, math-laden field, and the oligopoly with infinite compute is going to run the show for the whole world. Or maybe the general idea is enough (read: applied AI)? Like the nuances of prompt engineering, effective usage of AI tools in SWE, and whatnot? It doesn't seem like the average white-collar Joe gets to keep his lovely standard of living. High-IQ types get to do their high-IQ stuff, and hustling types can hack even more stuff together and don't need those Joes anymore.
What is going to be the human differential? Product quality? We all know capitalism is about gaining a foothold in the market and exploiting it to death. 10% more bugs in macOS would be a terrible customer experience but we all know nobody is leaving because of just that. Personality and human touch? People want quality entertainment but the world is oversaturated with YouTubers, we can't all be entertainers.
So yeah, what do you learn and how do you operate in the new economy before it's too late?