I’m going to propose a new moderation setting in Mastodon.
-
Emelia 👸🏻 replied to Aleksandra Lesya last edited by
@girlintech @renchap @jerry not all posts are going to be correctly categorized by language. Also, if you're receiving spam posts and start dropping "spam-keyword" for "english", the spammers pretty easily adapt by sending spam in "german", even though it's the same keyword.
For dealing with spam, we need naive-Bayes-based models, which can classify spam more accurately than simple keyword matches.
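As a rough illustration of the difference, here's a minimal naive-Bayes sketch using scikit-learn; the training posts, labels, and threshold are placeholders, not anything any Fediverse server actually ships:

```python
# Minimal naive-Bayes spam classifier sketch using scikit-learn.
# The training posts, labels, and 0.8 threshold are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = [
    "Buy cheap followers now, limited offer!",     # spam
    "Click here to claim your crypto reward",      # spam
    "Great meetup tonight, thanks everyone",       # ham
    "New blog post about ActivityPub moderation",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features + multinomial naive Bayes: every token
# contributes to the score, not just one blocklisted keyword.
model = make_pipeline(CountVectorizer(lowercase=True), MultinomialNB())
model.fit(posts, labels)

incoming = "Limited offer: cheap followers, click here"
spam_idx = list(model.classes_).index("spam")
spam_prob = model.predict_proba([incoming])[0][spam_idx]
if spam_prob > 0.8:  # placeholder threshold
    print(f"flag for review (p_spam={spam_prob:.2f})")
```

Because all tokens are weighed together, renaming one keyword or relabeling the post's declared language doesn't trivially defeat the model the way it defeats a keyword filter.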
-
@thisismissem @jerry @girlintech Manual review by whom? The admins/mods? This would be a huge burden, and what about "private" exchanges? Users? Do you really want to have them go through a list of messages that an admin decided were potentially very problematic? Reactions we got from the filtered notifications feature led me to think that this will not go over well. Also, should every user review the posts for their own timeline, meaning that a post might require review by hundreds of people?
-
@renchap @jerry @girlintech yeah, I'm not saying quarantine is the best solution, but it is a partial solution. But really, we need proper naive-Bayes-based spam classifiers instead of simple keyword filters that drop stuff.
-
Aleksandra Lesya replied to Renaud Chaput last edited by
@renchap @thisismissem @jerry in my case I run my own server, though. We need to copy the feature from Bluesky.
Lists made by the community, so users can decide whether to enable them or not.
Unlike Bluesky, you could make them open, though.
-
@thisismissem @jerry @girlintech I have a feeling (and hope) that the first non-discovery FASP we work on will be this.
-
@renchap @jerry @girlintech seconding this. There are hashtags I check regularly because they're associated with bad content. However, if I had blocked them entirely, I would've been worse off, because I would've missed IFTAS posts about how to protect against that content and users discussing which servers I need to block.
Word filters are a poor substitute for moderation, particularly while Mastodon doesn't yet support a "filtered" state for posts where they're visible to mods but not to users.
-
@renchap @jerry @girlintech I might also be able to do this by leveraging @MarcT0K's classifier: https://github.com/MarcT0K/Fediverse-Spam-Filtering/
-
@thisismissem @jerry
> there are cases where you want the server to know "hey, we rejected your message"
That's what we thought about email too and it formed the basis for mail bomb / backscatter attacks. This is just the Fediverse making the same mistakes all over again...
I guarantee this will be abused and cause servers to have a massive backlog of Rejects in their queues, especially since an attacker can ensure the Reject can never be delivered successfully.
IMO every Reject activity should be from deliberate human interaction so it can't be weaponized.
Communicating state with Accept/Reject for Follow Requests is the only option we've got. But trying to further establish state across the Fediverse for other activities is not something I'd recommend.
We need to stop trying to make the Fediverse act like a centralized platform because it cannot and will not work that way.
edit: also, the more we do things like this, the harder it will be to self-host on commodity hardware, as it continues to raise the hardware requirements needed to process the garbage that will constantly be trying to crush your little fedi server. We already suffer under Mastodon Deletes; please don't send Rejects.
-
Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified: replied to Aleksandra Lesya last edited by
@girlintech @renchap that is a good idea. I think the problem the Mastodon team has is that they have far more good ideas than capacity to deliver them, so we need to help support them.
-
@girlintech @renchap @jerry I'm also working on labelling as a moderation feature via the Social Web Community Group ActivityPub Trust and Safety Taskforce, which could enable things like this.
Bluesky Moderation Services are in a very similar implementation space to FASPs.
-
Renaud Chaput replied to Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified: last edited by
@jerry @girlintech yes, that's a good idea, and it should not be hard to implement. We need to think about what to display (i.e. a notification? A web banner? An email? A not-yet-designed server-initiated announcement?)
-
@feld @jerry we are already starting to use Accept/Reject for other purposes, e.g., reply controls: https://docs.gotosocial.org/en/latest/federation/posts/#requesting-obtaining-and-validating-approval
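For reference, a Reject in that flow is an ordinary ActivityStreams activity wrapping the refused reply. A rough sketch of the shape, with made-up URIs; GoToSocial's exact payload and extra fields may differ, see the linked docs:

```python
# Rough shape of an ActivityStreams Reject used for reply approval.
# All URIs are made-up examples; the exact fields GoToSocial expects
# are specified in the docs linked above.
reject = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Reject",
    "actor": "https://example.social/users/alice",           # owner of the replied-to post
    "object": "https://other.example/users/bob/statuses/1",  # the reply being refused
    "to": ["https://other.example/users/bob"],
}
```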
-
Renaud Chaput replied to Aleksandra Lesya last edited by
@girlintech @jerry every client would need to implement this notification type; that's the main issue.
-
@thisismissem @jerry Stop this please. There's no need for it. If you don't want to receive replies to a post, just drop them. The senders do not need to know that the server does not want responses to a public post. There is no benefit to this.
If the server software (Mastodon, GoToSocial) supports recognizing these locked threads, it should deny the ability to send the reply in the first place.
If the server software does not support recognizing these locked threads, there is no point in responding with a Reject. They won't understand the Reject anyway.
Just add a new key to the activity/object and let software that understands it Do The Right Thing.
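As a sketch of that alternative: the lock state could ride along as an extension property on the object itself, so software that understands it refuses to compose replies and software that doesn't simply ignores the unknown key. "repliesLocked" here is a made-up name, not an established vocabulary term:

```python
# Hypothetical extension property marking a thread as locked.
# "repliesLocked" is illustrative only, not part of any published
# ActivityStreams vocabulary; consumers that don't know the key ignore it.
note = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"repliesLocked": "https://example.org/ns#repliesLocked"},
    ],
    "type": "Note",
    "id": "https://example.social/users/alice/statuses/42",
    "content": "A post whose author doesn't want replies",
    "repliesLocked": True,
}
```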
-
@thisismissem @jerry then please make sure Mastodon does not make this same mistake.
If GoToSocial wants to pretend that it's possible to lock a thread and keep people from having a discussion, they can do that.
But it won't accomplish anything outside their bubble. The users of any other software can still have a discussion under that public parent post. The trolls and harassers will continue to do what they do; you just won't see it. That can be accomplished by simply muting the thread or silently dropping the activities.
-
@renchap @jerry @girlintech Unfortunately, many servers are doing this today, but in a vacuum, with no broader conversation to address these issues. Stop words often appear in reclaimed language or recounted lived experience, so any use of them for filtering would need to consider how to avoid unintended consequences.
Ideally, new accounts using stop words could be put on hold for approval, and posts containing words from e.g. https://weaponizedword.org/languages could be quietly (as in, no report) flagged for review.
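A rough sketch of that quiet-flag behavior, assuming a plain word list; STOP_WORDS and the review queue are stand-ins, and nothing here reflects how weaponizedword.org actually distributes its lists:

```python
# Sketch: flag posts containing stop words for quiet mod review instead
# of dropping or reporting them. STOP_WORDS is a placeholder; a real
# deployment would load a curated list such as the one linked above.
import re

STOP_WORDS = {"badword1", "badword2"}  # placeholder entries

def needs_quiet_review(text: str) -> bool:
    """True if the post contains a stop word. The post stays visible
    and no report is filed; it just lands in a mod review queue."""
    tokens = set(re.findall(r"\w+", text.lower()))
    return not STOP_WORDS.isdisjoint(tokens)

def handle_incoming(post: dict, review_queue: list) -> None:
    if needs_quiet_review(post["content"]):
        review_queue.append(post)  # quiet: no user-facing effect

queue: list = []
handle_incoming({"content": "totally fine post"}, queue)
handle_incoming({"content": "this contains badword1"}, queue)
print(len(queue))  # -> 1
```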