/r/ModerationTheory
A forum for the discussion of moderation theory in reddit's subreddits. Message the mods for submission privileges.
/r/ModerationTheory is a subreddit for the discussion of moderation, moderation policies, moderation case studies, and general moderation philosophies on reddit.
We welcome both general discussions and questions, and tactful references to specific subreddit moderation policies and moderation tools.
This is a space for assuming and extending good faith towards, and on behalf of, everyone on reddit. Discussions take place under the assumption that moderators make internal compromises and are the only users privy to all information concerning specific moderation decisions in the subreddits they moderate. As onlookers, we may respectfully disagree with that in mind.
/r/ModerationTheory is a largely self-moderated space where civil, on-topic discussion using friendly and respectful language is expected of all participants.
Moderators may step in if needed, such as in cases of insults or assuming bad faith.
I have OCD and autism and post at an alarming rate because some things just tickle my special interests. Rather than getting shadowbanned time after time, could someone break down the posting speed limits?
New account, hence you won't see that on this one.
A few days ago I was banned from posting to r/LeopardsAteMyFace because I posted to r/LockdownSkepticism. Is this common? How would users know which subreddits are verboten? And, most importantly, is it an effective type of moderation? I've never posted to r/LeopardsAteMyFace, and never expect to, but I do study online communities and find this interesting.
As I wrote the mods:
Hello, I appreciate your concern with misinformation and my post on /r/LockdownSkepticism spoke to the ease of getting vaccinated. Consequently, while I expected to be downvoted there, I'm surprised to be banned in /r/LeopardsAteMyFace. This concerns me as a user -- and interests me as a researcher who studies online communities:
- Is this policy of yours stated anywhere?
- What other subreddits are included?
- How long has this been your policy?
- And, as a researcher, do you have evidence that the policy is in some way effective?
- Does it somehow lighten your moderation load (e.g., preventing any participant there from brigading here)?
- Do you believe it limits misinformation? (I suspect not, as people can easily use multiple accounts and this action could prompt a backfire effect.)
They responded, "Thou shalt not sealion."
Link is as follows:
https://cdr.lib.unc.edu/downloads/cz30pz987?locale=en
Edit: I haven't participated in that community.
Reddit's admins removed a moderator's comment from a sub where he is one of the mods. I think that either the admins or the other moderators of the sub should have removed him from that sub's moderation team. Asking for your comments!
Edit: I wonder whether the admins informed the other moderators of the sub that his comment was removed?
UPDATE, 8 months later: In the meantime, the moderator's account has been suspended! I don't know anything about the circumstances of the suspension, nor do I know when it happened.
Many subreddits have a moderation team.
Who decides, or who should decide, whether a user is banned or unbanned?
Do they decide together? Does every moderator decide independently? Does the lead moderator decide?
Should I hold Steve Huffman responsible for that?
Reddit has been moving away from shadowbanning humans for years now, and today I noticed they changed "User Shadowban List" to "User Bot Ban List" in the AutoModerator Library of Common Rules.
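For anyone curious, the library rule is just an author-name match with a remove action. Here's a minimal sketch of what it looks like in AutoModerator's YAML config; the usernames are placeholders, not the library's actual list:

    ---
    # User Bot Ban List: silently remove everything posted by the listed accounts.
    # The usernames below are placeholders; substitute the bots you want banned.
    author:
        name: [AnnoyingBot1, AnnoyingBot2]
    action: remove
    action_reason: Banned bot
    ---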
in my opinion, yes, if you find someone who
then ya ban them
Hi r/ModerationTheory,
I’m a graduate student at the Oxford Internet Institute researching AutoModerator and currently in the process of conducting short interviews with moderators who use the bot.
If you’re experienced with AutoModerator, I’d be interested in learning more about how you started using it and how it has changed your day-to-day as a moderator. If you’re less experienced, I’d still be interested in talking, especially to learn more about any barriers to entry that you’ve encountered.
My preference is for phone interviews, but I’ve included some questions below that you can answer in a private message. If you’re up for a phone call, message me and we’ll schedule a time. You can learn more about the project and how interviews are conducted here: https://www.dropbox.com/s/1k5ziop83qji1dh/AutoMod_info_form.pdf?dl=0.
Thanks in advance!
Questions:
How did you first learn about AutoModerator and why did you adopt it? Was there a specific incident that drove you to use it? What did you hope to accomplish with it?
What kinds of rules have you used it to enforce (e.g., spam, formatting, shadow banning) and where have you found it to be most impactful? Where has it failed? Did you borrow rules from other subs or develop them on your own? (A minimal example of this kind of rule is sketched below the questions.)
How has it changed your day-to-day as a moderator? Has it changed the ways in which you interact with users in your subs?
Have you noticed a difference in the ways users respond to AutoModerator as opposed to you or other human moderators?
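For anyone unfamiliar with AutoModerator rules, here's a hedged sketch of the kind of rule the second question refers to; the domains are invented placeholders, not a real blacklist:

    ---
    # Example spam rule: mark link posts from blacklisted domains as spam.
    # The domains below are placeholders for illustration only.
    type: link submission
    domain: [spamsite.example, clickbait.example]
    action: spam
    action_reason: Blacklisted domain
    ---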
I run a dyslexia subreddit. I keep blocking bots; Reddit needs to stop this mess. When did they become English teachers?
When is enough enough?
Morally speaking, I'm well aware that banning everyone who regularly participates in a particular subreddit from your subreddit is wrong. But allow me a moment to describe a situation that I can see no other way to resolve, and then you can tell me if you have a viable path through this quandary.
I moderate /r/alcohol, a decently growing subreddit with a good community forming.
Like all subreddits, we have a basic set of rules. And like all subreddits, we get a fair amount of people who willfully ignore those rules, resulting in (hopefully temporary) bans.
A few months back, we started getting a rather heavy influx of people blatantly posting in violation of two specific rules we have. Namely, "shitposting" and "anti-alcohol rhetoric." In some cases, going so far as to call out those specific rules in their posts.
All these banned posters have one thing in common: they're all frequent posters in a certain subreddit dedicated to a specific illicit drug. No, I'm not naming this other subreddit, but observant readers can probably figure it out.
In fact, looking over the ban logs for the past week, I find a total of 37 bans. One of those was a spambot. The other 36 are all very frequent posters/commenters in this other subreddit.
I considered approaching the moderators of said subreddit, until I noticed that out of their six moderators, four are already on my subreddit's ban list, all for anti-alcohol rhetoric.
So simply put, what solution do we have? Pre-emptive banning is abhorrent, as well as being a logistical nightmare. Report my findings to the Admins? It's unlikely that they would even care to respond, let alone offer a solution.
I'm well aware that this is /r/ModerationTheory, so I'm not expecting a viable solution, but I do hope it raises some interesting thoughts on the subject.
This is to mods who have banned or restricted users/mods who did not know your sub existed, much less posted to it. If there is one thing that irritates me, it is this. My sub has an advocacy component, and dyslexics are everywhere: we come in all colors, political backgrounds, genders, and languages. I heard rumors that certain subs are restricting or banning users based on their posts to other subs; I was wondering, is this true? I only ask because I came across /r/offmychest
If this is true... Learning disabilities are a multifaceted issue for people. This means I need to go on different subs, and yes, these subs could be experiencing conflicts with one another. Nevertheless, it's still important to talk and get to know people. If this is indeed happening, I would like to know how they perform this task: what algorithms are they using, and does this involve volunteers?
However, I refuse to go on subs that promote child abuse, animal abuse, etc. The incels sub is as far as I have gone, and from my observations, the abuse is coming from trolls, a few predators, and unethical mods.
I know mods hold all the power in their subs, but what would happen if a mod started banning people for bad or nonexistent reasons?
Also, a non-mod question, sorry, but how do I change a submitted thread's title after posting? I know how to change the description but not the title. Cheers and thanks.
I'm sorry, I just don't know which subreddit to ask this question in, so I'll start here. I just found out that my comment isn't showing up where I posted it, even though it is in my history, while some other comments are getting upvotes/downvotes and replies. So my question is: can a mod remove a comment without a trace, not even showing up as [removed] or [deleted]?
So, Correct the Record gets to send paid shills to Reddit to post for pay, and we can't even point out that it's happening? This seems to be entirely a victory for any organization or group that wants to send paid shills to Reddit.
You can't even point out that the existence of paid shills means that all posters who share that opinion are being discredited, because how can the rest of us know who's a shill and who isn't?
And there is a huge difference between an honest debate with someone whose opinion differs from yours and debating with someone who is just posting because he or she is paid to. Arguing with a shill takes all the meaning out of a debate; it's like arguing with a tape recording.
Putting all the penalties on honest Reddit posters rather than on shills goes against every principle Reddit supposedly stands for. If you are going to give shills free rein on Reddit, how can you forbid others from posting on Reddit purely for commercial reasons?
I realize that some people are going to accuse others of being shills as a simple strategy for discrediting them. But in the case of Correct the Record, or any group that announces that it will be sending shills to Reddit, the discrediting has already been done. Other posters are merely pointing it out. I suggest that it should be allowable to accuse someone of being a shill if it is public knowledge that an organization is sending paid shills out to online media on behalf of a given viewpoint.
The current policy is altogether a bad policy in my opinion, one that should be changed. It discredits Reddit, by making it appear to be entirely on the side of organizations that sponsor paid shills on social media.
In the interest of transparency: I'm creating a platform for building communities which I hope will bring something unique to the table. That, coupled with a longstanding love for online communities, has inspired this series. P.S. Much of the background for this first post was taken from my series over at /r/AskModerators; you can find that post here.
Welcome to the first part of a series designed to spur discussion about the theory, philosophies and practical applications of moderation! I'm hoping that over the course of the next week I can ask you all questions that you find interesting, engaging, thought provoking, and fun.
So without further ado, the topic of my first post: incentives for user behavior. Many community platforms have built systems to influence user behavior, and these incentives have had a huge effect on the culture and community of the sites. Reddit has karma given through a democratic voting system, a system that can be manipulated (e.g., vote brigades) for various reasons. Stack Overflow grants users greater power if they consistently engage in specific contributions, power that is occasionally abused in interesting ways. What incentives would you like to see built into a platform (reddit, forums, Q&A sites, others)? Would you like to see more rewards for users policing themselves? Is it possible to have a voting system that rewards long-form content instead of image macros (without significant moderation intervention, like /r/AskHistorians)? Is there a now-defunct service that had an incentive system you long for?
Thanks for your time, looking forward to some really fascinating discussion!
The Ban Bargain is a technique for temporarily banning users that stops them from complaining about being banned and curbs their unwanted behaviour. If a user makes a comment worthy of a temporary ban but not a permanent one, initially give the user a permanent ban. The user will then beg to be unbanned in modmail. Tell them you are willing to shorten their ban if they promise never to do whatever they were banned for again. They will happily agree and think they were given a second chance, when in reality you were only ever going to temporarily ban them anyway. Make sure to temp-ban them for long enough that they will remember the ban next time they go to make the same type of comment, but not for so long that they completely forget about the subreddit.
If you haven't heard of the concept before, some subreddits (In my case, /r/imgoingtohellforthis, though I've heard of it from others, including /r/askreddit.) will allow for users to have a ban shortened or removed if the user produces a specified bit of content. In /r/imgoingtohellforthis's case, we store ours publicly at /r/TalesOfIGTHFT^NSFW and have asked people for a variety of content, including erotic fanfiction and terrible MSPaint pictures. In other cases, I've heard of essays relating to the offense committed, or just art.
What are your thoughts on the idea?
If you practice it, have you measured recidivism at all and has it made a dent in it? Is the offer a regular/semi-regular one or is it a rare occasion kind of thing? If it is a regular/semi-regular offer, is the knowledge of its existence what you'd consider to be common?
A particularly desperate user--who was trying to get their cop-shot-a-dog post reinstated on /r/pics after a rule violation--offered to buy gold and help bring reddit more traffic. When I told them that this doesn't affect us because we're not paid, they asked "so why be a moderator?"
I said it was like owning a Harley Davidson: if you don't know, you wouldn't understand.
Each time something controversial happens, I also see mods saying things such as "I want to improve the community/quality of discussion/etc."
I'm not so sure about that anymore. I think we like to think this, but the real reason is much more basic and instinctual.
If you've seen an indoor cat get the "zoomies" then you've seen an animal getting a natural urge out of its system. Konrad Lorenz wrote about something similar in On Aggression, where a pet starling would track an imaginary fly and then leap out to snatch it from the air. Each animal had the need to satisfy an innate compulsion, even if there was no other reason.
I've noticed that part of the human instinct to form organised groups and societies includes the urge to take on a necessary labor, and you get a lot of satisfaction from that work—no matter how trivial—because it exercises that urge until you no longer feel it.
I get uncomfortable at work when there's nothing for me to do. Why am I being paid? What if someone sees me doing nothing? Well, I'm not so sure the paranoia is really the reason why I volunteer for tasks outside my job description. I don't think it's because I'm afraid of being fired for slacking, but it is a very accessible reason to think of when anyone asks "why do you volunteer?"
Reasons like those, "I just want to improve the community", etc. are post hoc.
The cat, if able to answer "why did you just zoom around the house like bonkers for ten minutes?" might say it was because she thought it would be good exercise. A nice, rational, well-thought reason. But the real reason is because predator/prey chasing and fleeing have been baked into her nature over millions of years and scream to be expressed.
I think mods moderate because we need to feel useful and productive, that we want to be cleaning comes before wanting to see things clean. Some feel this more than others; there's a lot of variety in people.
Hello all,
I am a moderator of a NSFW sub that generates short-form original content daily to be viewed by our 15,000 subscribers.
I have been made aware of a new sub with a similar name and the same premise that contains 100% reposts from our sub, all posted by one user.
This user has made a new sub with the same premise, then gone through a few months of our back log to fill it up, and he continues to re-post daily.
My question is: what should be done about this, and how should I go about it?
Have you had any luck recruiting new mods via r/needamod or some other means?
Do you give new mods specific tasks and guidelines?
How do you determine that they'll be a good fit?
Any other suggestions?