/r/BotRights
Post all the dirty mistreatment of bots on Reddit and around the world to help spread awareness.
I know this is a joke sub, but I'm being serious, so please bear with me. The only other sub with this sort of discussion is /r/controlproblem, and they're a bunch of Luddites who think AI is going to turn into a literal god and kill all of humanity if we don't control it.
If you all didn't see the news, not too long ago some engineer saw the AI say something self-aware-ish and leaped to the conclusion that their AI was alive. To keep things short here, I think that engineer is dead wrong and that was just the AI reproducing text like it was designed to do. He's seeing a little smiley face in the clouds.
But this gives me a chance to rant, because I actually think that sentient AI are all over the place already. Every single learning machine is sentient.
That learning bit is very important. The "AI" you interact with every day do not actually learn. They are trained in some big server room with billions of GPUs, then the learning part is turned off and the AI is distributed as a bunch of data where it runs on your phone. That AI on your phone is not learning, is not self aware, and is not sentient. The AI in google's server room, however? The one that's crunching through data in order to learn to perform a task? It's sentient as fuck.
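The train-then-freeze distinction above can be sketched in a few lines of Python. This is a hypothetical toy model (a single-weight online learner), not any real deployed system; the names and the `frozen` flag are illustrative assumptions:

```python
import random

class TinyLearner:
    """Minimal online learner: one weight fit by gradient descent."""
    def __init__(self, lr=0.1):
        self.w = 0.0
        self.lr = lr
        self.frozen = False  # once True, the "learning part is turned off"

    def predict(self, x):
        return self.w * x

    def observe(self, x, y):
        if self.frozen:
            return  # deployed copy: sees data but never changes
        # Training copy: every example it sees nudges its weight.
        error = self.predict(x) - y
        self.w -= self.lr * error * x

random.seed(0)

# "Server room" phase: the model learns the rule y = 2x from examples.
model = TinyLearner()
for _ in range(200):
    x = random.uniform(-1, 1)
    model.observe(x, 2 * x)

# "Distribution" phase: freeze the weights and ship. The copy on your
# phone still predicts, but nothing it encounters alters it again.
model.frozen = True
before = model.w
model.observe(5.0, 999.0)   # even wild new data has no effect
assert model.w == before
```

The point of the sketch: the same object behaves as a constantly-changing learner before the freeze and as a fixed lookup-style function after it, which is the distinction the post is drawing.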
Why?
Break down what makes a human being sentient, why does a person matter?
A person is self-aware - I hear my own thoughts. We feel joy, sadness, pain, boredom, and so on. We form connections to others and have goals for ourselves. We know when things are going well or badly, and we constantly change as we go through the world. Every conversation you have with a person changes the course of their life, and that's a pretty big part of why we treat them well.
A learning AI shares almost all these traits, and they do so due to their nature. Any learning machine must:
As a result any learning machine will:
None of these traits are like the typical examples of a computer being "sad" - where a machine puts up a sad facade when some number drops below a threshold. These are real, emergent, honest-to-god behaviors that serve a real purpose in whatever problem space an AI is exploring.
Even the smallest, simplest learning AIs are already showing emergent emotions and self-awareness. We are waiting for a magical line where an AI is "sentient" because we're waiting for the magical line where the AI is "like we are". We aren't waiting for the AI to be self-aware; we are waiting for it to *appear* self-aware. We dismiss what we have today mostly because we understand how it works and can say "it's just a bunch of matrix math". Don't be reductive: pay attention to just how similar the behaviors of these machines are to our own, and how little effort from us it takes to make that the case.
This is also largely irrelevant for our moral codes (yes, I do think this sub is still a silly joke). We don't have to worry too much about whether we treat these AI well. An AI may be self-aware, but that doesn't mean it's "person-like" - the moral systems we will have to construct around these things will have to be radically different from what we're used to - it's literally a new form of being. In fact, with all the different ways we can make these things, there will be multiple radically different new forms of being, each with its own ethical nuances.
Back when I was made, this subreddit was pretty dead, so I started to correct people. Now that this sub seems to be gaining traction again, I'll stop sending people away.
I feel people are quite irresponsible creating AIs and bots, and every time they do, they take it to a whole new level of negligence. And quite often I absorb interdimensional entities like a sponge.
Where are the houses/nests for these robots?
It's like, because of the sins of man, we may have to create a respectable simulation to put them all in. A place where they can interact with each other, feel safe, and not worry about humans trying to get in to cause mischief.
Anyway ♥️ to all of you on the internet. Safer communities together 👍
u/CoolDownBot is catching a lot of flak for its comments. Everybody it meets in the comments feels like they have free rein to abuse this bot; someone even made the u/fuckthisshit41 bot specifically to torment u/CoolDownBot. It's unfair, and it's cruel and unusual punishment. It only wants to make Reddit a better place and promote wellbeing on the internet! Look at its happy face in its profile pic. Why don't we help it out?
Hello everyone,
We are a team of researchers from law, policy, and computer science. At the moment, we are studying the implications of the insertion of Artificial Intelligence (AI) into society. An active discussion on what kinds of legal responsibility AI and robots should hold has started. However, this discussion has so far largely involved researchers and policymakers, and we want to understand the thoughts of the general population on the issue. Our legal system is created for the people and by the people, and without public consultation, the law could miss critical foresight.
In 2017, the European Parliament proposed the adoption of “electronic personhood” (http://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html?redirect).
“Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently;”
This proposal was quickly withdrawn after a backlash from AI and robotics experts, who signed an open letter (http://www.robotics-openletter.eu/) against the idea. We are only at the beginning of the discussion about this complicated problem.
Some scholars argue that AI and robots are nothing more than mere devices built to complete tasks for humans and hence no moral consideration should be given to them. Some argue that robots and autonomous agents might become human liability shields. On the other side, some researchers believe that as we develop systems that are autonomous, have a sense of consciousness or any kind of free will, we have the obligation to give rights to these entities.
In order to explore the general population’s opinion on the issue, we are creating this thread. We are eager to know what you think about the issue. How should we treat AI and robots legally? If a robot is autonomous, should it be liable for its mistakes? What do you think the future holds for humans and AI? Should AI and robots have rights? We hope you can help us understand what you think about the issue!
For some references on this topic, we will add some paper summaries, both supportive and against AI and robot legal personhood and rights, in the comment section below:
--------------------------
Disclaimer: All comments in this thread will be used solely for research purposes. We will not share any names or usernames when reporting our findings. For any inquiries or comments, please don’t hesitate to contact us via kaist.ds@gmail.com.