Point of concern: snark communities
You've probably heard of those harassment subreddits. The ones that are effectively the mainstream outreach arms of KiwiFarms. The type of community that harassed Mikayla Raines to death and then shrugged it off.
They might openly identify as snark, they might call themselves a "tea" community, they might cloak their snarking under other types of celebrity gossip like you see in FauxMoi... but the point remains the same: to coordinate lies in a way that manipulates people's sentiment toward a person and enables harassment.
A pretty striking example is how "Zionist" now simultaneously means anything from "someone who voted for Kamala" all the way up to "rabid supporter of Netanyahu and Israeli ultranationalists," and implies that someone supports genocide, ideologically or even materially. Pretty crazy stuff, but it's not even applied consistently: people who supported every stage of Israel's response will escape criticism, while those who said Netanyahu's lunatic regime is committing war crimes and should rot in jail will get a half dozen snark communities (and seemingly any other part of the internet aware of the individual) attacking them. This seems to be the most popular basis of snark, but it's all fairly bespoke and can probably evade detection by any distracted or uninterested observer.
Its whole method is betrayed by its name. Snark communities aren't fan communities lovingly poking fun at someone. They're communities where you'll see participants, on other platforms, hoping their subject will commit suicide. They'll spin stories to mean two things at once; they'll redefine words to mean two things at once. I would say they seem to do everything they can to avoid getting moderated off a platform, but from what I've read in court documents, it seems that Reddit actually harbors and enables them. Given this murkiness, I can understand that people might have an ideological or technological mental block on sussing them out, but I think doing so should be a goal.
So my suggestion: use AI to track this behavior and summarize it for mods and admins. In practice, that might look like ingesting data from KiwiFarms and from the Twitter accounts associated with users there.
Keyword filters will miss the behavior, and the moderators of such communities would probably be able to talk their way out of trouble. Taking a broader look at off-platform behavior seems to be the best way to get equipped to recognize and deplatform it. A rough sketch of what that could look like follows below.
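To make the suggestion concrete, here's a minimal sketch of the summarization step. Everything in it is hypothetical: the `OffPlatformPost` structure, the account-linking field, and the data sources are my own illustration, and the LLM call assumes an OpenAI-style chat-completions client. It also sidesteps the real questions (how you'd collect off-platform data legally, and how reliably you can link identities across platforms); the point is only that the output is a brief for human moderators, not an automated ban.

```python
# Hypothetical sketch: condense an account's off-platform activity into a
# moderator-facing brief. All names here are illustrative, not a real tool.

from dataclasses import dataclass

from openai import OpenAI  # assumed dependency (OpenAI-style chat API)

client = OpenAI()

@dataclass
class OffPlatformPost:
    platform: str        # e.g. "kiwifarms", "twitter"
    author: str          # handle as it appears off-platform
    linked_account: str  # on-platform account it is believed to map to
    text: str

def summarize_for_moderators(account: str, posts: list[OffPlatformPost]) -> str:
    """Summarize off-platform evidence about one account for human review."""
    evidence = "\n".join(
        f"[{p.platform}] {p.author}: {p.text}"
        for p in posts
        if p.linked_account == account
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize off-platform posts for trust-and-safety "
                    "review. Flag coordinated harassment, doublespeak (words "
                    "or stories framed to mean two things at once), and hopes "
                    "or calls for self-harm. Quote sparingly and do not "
                    "speculate beyond the evidence provided."
                ),
            },
            {
                "role": "user",
                "content": f"Account under review: {account}\n\n{evidence}",
            },
        ],
    )
    return response.choices[0].message.content
```

The reason an approach like this might catch what keyword filters can't is in the prompt: a filter can only match surface strings, while a model can be asked to flag the doublespeak and plausible-deniability framing described above, with the final call still resting with mods and admins reading the summary.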
Lastly, obviously harassment = bad. But also criticism = good. Salvaging discourse IMO should involve salvaging criticism from communities like this, especially since it seems like we're still spinning our wheels in the mud over what are essentially 200-year-old philosophical debates. It feels like retreating into bubbles is moving us backwards.
There's probably a moderate amount I'd be willing to concede on in this post, but my focus here is ultimately informing moderation through off-platform behavior. That's the point I'd be most interested in debating or fleshing out.