/r/aiwars
Following news and developments on ALL sides of the AI art debate (and more)
So I just want to make things clear: I'm not anti-AI, but I often wonder what the end goal of the companies pushing it is. I have very little faith in any company using AI in an ethical manner. If the goal here is complete automation, then what are workers supposed to do for work? Many of the uses of automation will put workers out of work, and I haven't been given a solution for what that workforce will do.
On top of that, if that workforce is left jobless with no replacement jobs, what would be the point of automation? I fear that when those jobs are replaced, people will spend less. It doesn't seem to me that automation and capitalism work well together.
I've heard arguments for universal basic income that seem plausible. I'm also aware of the accelerationist libertarianism of many of the masterminds behind AI. I would really like to hear someone's defense of this because I'm genuinely curious.
Edit: I've had a lot of people point out that there is no end goal to any invention, which I agree with. My question here has more to do with the goals of the corporations pushing AI. What will AI look like if allowed to flourish in our current economic system, where cutting back on a workforce is a common tactic for corporations?
You're an anti? Is that a thing? Are there groups siding with or against AI? What moronic BS is that? Like, yes, we agree it's going to cause millions of job losses, etc., but it's inevitable, it's evolution, it's change. If we have learned anything, being anti-change immediately puts you head and shoulders below the PROS in terms of evolutionary progress. Aborigines and natives were ANTI, and some of them still hunt whales out of a canoe.
Yesterday, I saw my first post about a gentleman who lost his job of two decades, along with everyone else in his crew at a news station. It hit me like a ton of bricks: all night long waking up thinking about it, talking to my brother at Pine Gap (AF) about what they think. If I'm not mistaken, there will be millions in the USA alone. IT, accounting, finance, a lot of legal, customer service, HR, and probably other jobs could all be taken over or assisted at a high level by AI. It basically eliminates corporate, which is a BUMMER for us recently graduated finance peeps, but it's even worse for those who are only familiar with one job due to 25 years of consistency and growth in one area. IF it worked like a marriage, these companies or the govt (because it benefits GDP) would pay alimony until education/experience was had.
I think the fed needs to establish that for every 1% increase in GM, it contributes a third of it. An efficiency increase of 67% is absolutely insane, and knowing they won't pass that on to the customer, they better share the wealth before millions burn shit down or go into anarchy.
What shall we do? And what's the point of creating sides with or against AI? You're changing nothing. Objectively it will have positives and negatives, and it will evolve the exact same amount whether your dumbass likes this invention called the wheel or you wanna stick with that block. Yes, the wheel has caused millions to die: accidents, pulleys, etc. I'm being dramatic, but I just learned 5 minutes ago there are ANTIs? I'm a newb.
Let's stop arguing for a second and share our favorite AI services that are not widely known.
I'm not anti-LLM; I do use them for some of my work, and I find them useful in a lot of ways. I do feel compelled to share a few experiences of mine.
Experience 1: BlueSky photography tags
I recently downloaded BlueSky and started following some photography tags. There are strict "no AI" rules on them. It's very refreshing to look at a photo and know that it's capturing something from reality, as opposed to a convincing, realistic generation from Midjourney or Photoshop generative fill.
Experience 2: the baby peacock and other fake videos
My 70-year-old mom got so excited when she saw an obviously fake video of a baby peacock. She thought it was real, and I had to tell her it was not. I'm sure you've all seen it. I think it's a problem when machine-generated images and videos are being shared as real videography and photography. It's not the same as someone painting a photorealistic image; that isn't happening at the scale of fake images and fake videos. Maybe you've seen the Destiny stream where he found massive amounts of deceptive, fake lore and videos that don't disclose that they depict fake people and fake events.
Of course, there have always been bad-faith actors when it comes to deceptive media. But the rate at which it's generated and circulated is alarming. Dead internet theory might come true sooner rather than later.
What is this sub's stance on preserving some form of separation between literal photos and videos and ones that are entirely machine-generated?
Edit: a lil grammar
I have been here for a total of 10 minutes and have seen enough attacks on strawmen and emotional arguments from both sides for a lifetime.
If either side wants to change hearts and minds, using logical arguments and trying to appreciate the other side (happily, something also present on this sub) is the only way to go.
The "aI hAtEr KaReN" memes and "aLl Ai Is EeEevIl!" grandstands are not helpful and only serve to make more division. Plus, they're frankly childish.
About a month ago "Where the Robots Grow" came out. It was advertised on this sub and in other media outlets as the "first AI feature film". Claims aside, I was investigating the process behind the film, so I joined their Discord.
It turns out their Discord was very barren and lifeless, so the people behind it decided to do a £100 giveaway to incentivise new users to join. They advertised this all over social media, and I ended up winning it.
Long story short, it's been weeks and they haven't given me the money or returned my messages. They've now locked me out of their Discord and wiped all records of any of this having taken place on social media.
It just goes to show that a lot of these creators using AI are not serious people and don't even have the integrity to honor a simple giveaway.
You can check for yourself on their platforms that the screenshots I've attached no longer exist: https://twitter.com/robotmovieai
tldr: Movie creators did a £100 giveaway and I never received the prize.
If you firmly believe that absolutely no good music whatsoever has come out of human artists in the last 4-5 years, you should get out of your echo chamber and actually try to find NEW stuff (God forbid AI bros put in effort.)
I just want to ask a few general questions about art itself.
As we all well know by now, the Antis have made this their primary sub. They use it to manufacture consent and to reinforce each other's cognitive dissonance. As such, they aggressively dogpile any opinion that isn't openly stroking the Anti-Human viewpoint.
So, if you still want to say something rather than eat hundreds or thousands of downvotes for the crime of speaking, just make a post instead!
This way you can say your piece, avoid the downvote farm, AND you’ll be boosted in search results!
Never be ashamed of being Pro-Human, especially in such an Anti-Human environment.
Much love!!!
EDIT: Notice how quickly these Antis resort to insults? Yet they wonder why no one likes them? Hilarious
I just searched Google for the opening text crawl from Legend ("opening text crawl from Ridley Scott movie Legend") and got this back from the Google AI:
"AI Overview
The opening crawl of Ridley Scott's 1985 film Legend features the line, "Black as midnight, black as pitch, blacker than the foulest witch".
This is wrong. That's a line from the movie, but not from the opening crawl. This is bullshit.
Why are we adopting this nonsense tech that feeds us lies and mistakes? Why do we think a thing that can't answer basic questions like this has any "intelligence"?
Perplexity has just launched an AI-driven shopping assistant that promises to research, recommend, and even complete purchases, all in one go. This could revolutionize online shopping by cutting down on decision fatigue and personalizing the experience. Will it lead to hyper-targeted consumerism, or will people start relying too much on AI for their buying choices? Is this the first step towards AI-powered shopping assistants becoming as essential as virtual assistants in our lives? Curious to hear thoughts!
"GPT-4 vs. Claude 2: I Put Both to the Test, and the Results Surprised Me!" I recently ran a little experiment comparing GPT-4 and Claude 2 on a variety of tasks: storytelling, answering complex questions, and even generating code. I expected them to perform similarly, but the results were a lot more varied than I thought! One was noticeably better at creativity, while the other excelled at structured responses. Curious to know which one came out on top? I'd also love to hear your thoughts on which model you're using and why!
A lot of people making YouTube videos like to claim fair use when using other people's videos or content, as long as the material is used in an altered state. This is commonly regarded as not stealing.
Currently, antis say that AI art is theft; however, I would like to argue that it is fair use.
For those who work with AI (not at the prompt level but at the code level), I am sure you are aware of the steps necessary to build your model and to use it. This means turning your data, an image in this example, into a tensor, where it will then be used in many ways.
This is where I would like to argue that fair use is in play, as the art is being used in an altered way. It is no longer a classic visual image; it is now mathematics.
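As a rough illustration of that conversion step, here is a minimal sketch assuming PyTorch and torchvision are installed; the file name "artwork.png" is just a placeholder, not anything from the original post:

```python
# Minimal sketch of the image-to-tensor step described above.
from PIL import Image
from torchvision import transforms

img = Image.open("artwork.png").convert("RGB")  # the ordinary visual image
x = transforms.ToTensor()(img)                  # now a float tensor scaled to [0, 1]

print(x.shape, x.dtype)  # e.g. torch.Size([3, 512, 512]) torch.float32
```

From the model's point of view, the picture only ever exists as that array of numbers.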
Additionally, I would like to point out another major aspect of fair use.
Generative AI (in a GAN architecture) uses a generator and a discriminator: the generator produces output from randomness, and the discriminator judges that output, which refines the generator's results.
At this point we have weights and biases in play, which are a product of your own creation, and the original art is no longer distinguishable; therefore, it is fair use.
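For anyone curious what that generator/discriminator pairing looks like in code, here is a minimal PyTorch sketch of a GAN-style setup; the layer sizes and dimensions are arbitrary illustration values, not taken from any real model:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Turns random noise into a fake 'image' (flattened to a vector here)."""
    def __init__(self, noise_dim=64, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how 'real' an input looks; that score drives the generator's updates."""
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
z = torch.randn(16, 64)   # batch of random noise
fake = G(z)               # generator output
score = D(fake)           # discriminator's judgment of the fakes
print(score.shape)        # torch.Size([16, 1])
```

The weights and biases that end up inside G and D after training are the "product of your own creation" the argument above is pointing at: new parameters, not a stored copy of any particular image.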
This is my reasoning for why AI art is not theft.
Ok, so I'm not 100% sure where I stand on whether or not AI-generated images should have copyright.
To be clear, for the context of this post, when I say AI-generated images, I mean prompt-to-image and nothing more. No image-to-image, no inpainting, no editing, etc. Just prompt and click.
I think my thought is that copyright has its place but needs an overhaul, as we do need IP protections that incentivise creativity and innovation. However, current IP laws are outdated.
So, getting to the point...
I'm not talking about a professional photographer who has invested in their skills and equipment, but an average person who grabs their phone and snaps a photo. Why is this deserving of copyright protection?
Assuming you think all photos should have copyright protection, do you think a prompt and generate AI image should be copyright protected, and why?
Any other takes on this?
My general take is that if someone is adding value to society with the skill and investment of resources they are putting into creating and innovating, then it seems reasonable to offer some protection on their IP so they can generate income from it. However, arbitrarily protecting things that did not require any skill or investment doesn't make sense or incentivise any societally positive behaviours.
I look forward to your thoughts.
I've been lurking around this sub for a few weeks because I wanted a place where I could see both sides of this debate and try to understand them. If I had to place myself, I'm more on the "anti-AI" side, but I don't despise anyone. I just feel like there are way more pro-AI posts here, and when someone creates a post or a comment saying they're anti, they get downvoted into the abyss really fast. It's kinda sad that a sub made for debate has so few.
I was thinking about how AI is going to affect advertising in the future. Companies are already using AI to create graphics and art, but what if they are able to combine the different data they already have on you and feed that into AI to create advertisements specifically tailored to you, even more so than now? Imagine every pharmaceutical ad featuring a Latino four-person family with a dog, just because the algorithm figured out that's your household. Imagine having no more "random" ads; they're all hyper-specific to you. It concerns me even more when I remember that not all advertisements sell you something; some also seek to convince you. Maybe those resort ads will finally convince you to take a break…
For context, Hank Green is a science educator who hosts Crash Course and SciShow, two educational YouTube shows. He also was CEO of the company that owned the two shows, Complexly, until he was diagnosed with cancer in 2023. He has been writing about scientific, environmental, and technological topics since the 2000s. He has written two books and hosts a vlog with his brother, John Green, appropriately titled Vlogbrothers.
Point is, he has done a lot. Not to mention the stuff I didn't include.
Anyway, this video deals with his perspective on Generative AI videos… and AI theft of his (and his peers’) content. If you are curious, you can read the transcript of the first few minutes before watching the video. Or you can just watch the whole video right now, if you really want to.
Video URL: https://youtu.be/JiMXb2NkAxQ?si=ysMChzs1qBTewfdx
“Good morning John.
In the last few weeks, there's been a bunch of fantastic investigative reporting uncovering the reality that generative AI companies are pulling videos and transcripts from YouTube and using those creations owned by independent creators to train their AI models. 404 Media found that generative AI video unicorn Runway trained on thousands of videos without permission, including thousands of SciShow and Crash Course videos. A while back, the CTO of OpenAI, when asked if YouTube videos were in their training data, made this face [0:27, she scrunched up her face, suggesting guilt or worry], which isn't really what you want to see, and a Proof News investigation found the transcripts of 17,353 YouTube videos were provided to Apple, Salesforce, Nvidia, and Anthropic to train their models. Proof provided a little database that you can search, and indeed, SciShow, Crash Course, Eons, Vlog Brothers, all in the training data. If you have a YouTuber that you like, they're probably in there.
And the reaction to that from creators has not been great. It seems like a lot of creators do not want their property used to train generative AI models; in fact, a substantial majority of creators I've talked to, whether artists or photographers or musicians or YouTubers, say that they do not want their work used to train AI or they would at least like the option to opt out. They all have different reasons for that, but that's how they feel.
So, I'm working on this video, and it occurs to me I didn't actually say exactly what I'm talking about, which is generative AI. So, there's lots of different kinds of AI: the AI that's creating the transcript of this video right now, the AI that recommended this thumbnail to you that you then clicked on. And like we're using this word to mean everything right now, so I'm talking about a specific thing here, which is when models train on content and then they create more content. So Midjourney trained on pictures and it creates new pictures; Sora trains on videos and creates new videos; LLMs like ChatGPT or Gemini or Claude - they train on text and then they create text. Basically they ingest information, and they spit out new information or the same existing information but in a different way, and so sometimes, in this video, I say “generative AI,” sometimes I just say “AI”; I mean Generative AI the whole time. Sometimes, you'll see it written as “gen AI,” which is confusing because there's this other thing called Artificial General Intelligence (AGI), which is a vastly different thing that doesn't exist. That is not what this video is about. This video is about Generative AI: AI models that ingest information and then spit out new things. Just wanted to be very specific about that, and now we're going back.
The company that I founded, from what I can tell, actually has more videos than almost anyone else in this database of videos that was used by these massive companies to train their generative AI. Marques Brownlee has nine, Mr Beast has 19, Mark Rober has 24, Minute Physics 85; meanwhile, Vlog Brothers has 188 (!), SciShow has over 300 (!!), and Crash Course has 982 (!!!). Seems like whoever made this database is kind of a fan of ours, in which case I'd like to say, uh, rein it in! The only channel I've seen that has more videos in this data set than us is Khan Academy [over 1000]. Ted Ed has over 400, MIT has hundreds as well, making it fairly clear that educational content in particular is pretty valuable for training large language models. Another reason why I think it might make sense for some people to make different decisions about what they'd like to do with their content. Just saying.
Now, it's probably important to explain why people are upset about this. Why do creators feel like they're being ripped off? There's basically two ways to imagine this; although, there's of course a spectrum between them. The first perspective is that these computer programs are just learning the way that any human would, and you can't be mad when someone learns something from a YouTube video. The second perspective is that computer programs aren't people and they don't learn the way that people do, and thus this is an entirely new way for copyrighted content to be used. And these models, which are now billion dollar products, would be nothing without the copyrighted content they were trained on and that thus, somehow, are inside of them. And look, as a guy whose company owns a lot of the data inside of these data sets, if you're a lawyer and you think we have a case, Kelsey would love for you to email her; it's [email].
Now, from where I sit, I'm definitely getting ripped off. Like, I know I'm getting ripped off, because a bunch of big companies have signed licensing deals with AI companies so they can train on their data, so they're getting paid for their data to be in the model, and I'm not getting paid for my data to be in the model, and that seems like a bunch of balls to me. Why not pay me? Just ‘cause you didn't think you'd get caught?”
Again, if this entices you, feel free to watch the whole video. And if you already watched the video, what do you think?
AI Training or Machine Learning is 'self-learning by a MACHINE'. A "self-learning machine" can't avail itself of ANY copyright exceptions to allow it to use copyrighted material in order to replace human authorship.
That's why a judge is OK with researchers doing Text and Data Mining (the LAION case). Researchers are human, so the law applies. But AI Training is self-learning by a machine and shouldn't be conflated with human activities. The Machine itself is infringing copyright in Machine Learning, not any human.
AI Training is what a Machine does (Machine learning). "Copyright law doesn't apply to machines" so that is why they shouldn't have a copyright exception.
That's the real argument we should be putting forward.
A Machine can't use "fair use" as an affirmative defense in any court. It's just infringing copyright in order to replace human authorship. A Machine doesn't have any rights nor any copyright exceptions. It's a machine!
"the AI Act recognizes the relevance of TDM to AI training, but in no way does it indicate that TDM is synonymous with AI training or that everything in-between TDM and AI training is covered by Articles 3 or 4 of the DSM Directive." (Eleonora Rosati)
https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/infringing-ai-liability-for-aigenerated-outputs-under-international-eu-and-uk-copyright-law/C568C6B717E9CFC45FB52E58E54B6BEC
Best case scenario: You have exposed someone for using AI and ousted them from the specific art community where they will probably continue to use AI
Worst case scenario: You have potentially torpedoed a genuine artist's career with a rumor that they will have to spend their entire online presence trying to dispel (if the accusation stuck to genuine art, it will not be easy to make people believe they are innocent)
I do not see how it is worth taking even a 1% chance of accusing someone innocent. The damage done by looking the other way when you suspect someone could be using AI in their art is far less than the damage of falsely accusing someone in a witch hunt.
I keep hearing anti-AI people say that AI art "steals" artwork because it uses that artwork to train the AI. By definition, that is not stealing art. Stealing art would be taking a piece of art made by someone else and claiming that you made it. I don't care if you're anti-AI; just be anti-AI for a legitimate reason.
I recently ordered a meal at a restaurant that turned out to be the most delicious thing I've ever eaten. I never knew I had it in me to make something that good, but it turns out I'm an amazing chef!