AI generated posts are getting reported, so I think we need to adopt a new strategy: allow users to self-tag AI content, and allow everyone to filter it out of their feeds with a single setting.
If users don't self-tag AI content, they will get a warning, and repeated violations may lead to suspension.
-
-
replied to Daniel Supernault last edited by
@dansup I think I don't like AI content.
-
replied to Daniel Supernault last edited by
@dansup I don't think this is a valid strategy in the long run; with short enough posts, nobody can tell AI content apart from regular human posts,
and we have a ton of bots everywhere.
I think creating user "trees" is a better strategy here, i.e. people can only sign up via referral links from other users, since I expect people running bots not to limit themselves to just one bot.
This won't fix the issue, but it will reduce it.
-
replied to Daniel Supernault last edited by
@dansup How about: user attempts to post AI content → autodetect AI with a local offline algorithm (avoiding server load) → if AI is detected, don't auto-tag, but prompt the user to manually tag the content accordingly, and possibly give that gentle warning that omitting the tag on AI-generated posts might lead to suspension.
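The flow above could be sketched roughly like this (a minimal Python sketch; `looks_ai_generated` is a hypothetical stand-in for whatever on-device detector would actually ship, and the threshold is an assumption, not anything Pixelfed has):

```python
def looks_ai_generated(image_bytes: bytes) -> float:
    """Return a confidence score in [0, 1] that the image is AI generated.

    Stub for illustration only: a real implementation would run a small
    on-device model so no server load is incurred.
    """
    return 0.9 if image_bytes.startswith(b"AI") else 0.1


def pre_post_check(image_bytes: bytes, threshold: float = 0.8) -> str:
    """Decide what the client UI should do before the post is submitted."""
    score = looks_ai_generated(image_bytes)
    if score >= threshold:
        # Don't auto-tag; prompt the user to confirm and tag it themselves,
        # with the gentle warning about omitting the tag.
        return "prompt_user_to_tag"
    return "allow_post"
```

The key design point is that the detector only triggers a prompt; the user stays in control of the actual tag.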
-
replied to Daniel Supernault last edited by
@dansup True. Then again, the algorithm doesn't need to be flawless; false positives and negatives are tolerable as long as it's good enough for production, and it can be iterated on later. I'm wondering about the pretty common use case of reposting memes, though, i.e. forwarding content whose original source the user doesn't know. Autodetection could auto-tag such posts as "possibly AI generated", and if the user feels certain it's not AI, they can still remove that tag manually.
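A rough sketch of that removable auto-tag idea (Python; the tag string, function names, and threshold are all assumptions for illustration, not an existing API):

```python
POSSIBLY_AI = "possibly AI generated"  # assumed provisional tag label


def apply_auto_tag(tags: list[str], detector_score: float,
                   threshold: float = 0.8) -> list[str]:
    """Return the tag list with a provisional AI tag added if warranted."""
    if detector_score >= threshold and POSSIBLY_AI not in tags:
        return tags + [POSSIBLY_AI]
    return tags


def user_remove_tag(tags: list[str]) -> list[str]:
    """The author can strip the provisional tag if they're sure it's not AI."""
    return [t for t in tags if t != POSSIBLY_AI]
```

The tag is provisional by design: the detector proposes, the author disposes.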
-
replied to Daniel Supernault last edited by
@dansup This will only work as long as AI content is obvious or can be recognized by people, and humans will quite soon be unable to distinguish it unless it's really obvious. This will lead to a loss of trust in photography. The industry is trying to solve that with Content Credentials, which over time would allow you to cryptographically verify whether a picture came from a camera or was generated or modified by AI.
-
replied to Chris Marquardt last edited by
@chrismarquardt Yeah, Content Credentials are something we are planning to implement for this.
-
replied to Daniel Supernault last edited by
@dansup AI generated content in itself is fine, and it seems pointless to try to filter it automatically. It's more about the behavior of the members, much like not using alt text on images.
I will personally block people who post stuff I don't want to see; isn't that the best approach?
Beyond that, it's up to the admin of the instance to decide what they allow and to enforce it.
-
replied to Daniel Supernault last edited by
@dansup we don't need that AI sh** on Mastodon
-
replied to Daniel Supernault last edited by
@dansup Sounds good to me. I've noticed at least two accounts following me with nothing but AI "selfies"; at least one of them wrote in their profile that they are using AI.
I'd love to filter them out. I don't like AI "art" either.
-
replied to Daniel Supernault last edited by
@dansup Maybe have two flags:
"AI" for self-labelling, and
"likely AI" (or similar) that users can put on someone else's post. If enough AI reports accumulate, the post gets the "likely AI" flag, which users can filter out separately. OPs then get notified that their post has been labeled this way.
This keeps a distinction between good-faith labelling and trickery.
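A minimal sketch of how that two-flag scheme might work, assuming a simple per-post report counter (the class, field names, and the threshold of 5 are made up for illustration):

```python
REPORT_THRESHOLD = 5  # assumed number of "looks AI" reports needed


class Post:
    def __init__(self):
        self.self_tagged_ai = False  # "AI": the author labelled it themselves
        self.likely_ai = False       # "likely AI": the community flagged it
        self.ai_reports = 0
        self.notifications = []

    def report_as_ai(self):
        """A viewer reports this post as AI generated."""
        self.ai_reports += 1
        if not self.likely_ai and self.ai_reports >= REPORT_THRESHOLD:
            # Flip the community flag once, and notify the OP as proposed.
            self.likely_ai = True
            self.notifications.append(
                "Your post has been labelled 'likely AI' by other users.")
```

Keeping `self_tagged_ai` and `likely_ai` separate is what preserves the good-faith vs. trickery distinction: filters can treat the two differently.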
-
replied to Daniel Supernault last edited by
@dansup Totally agree. If people want to post it, fair enough, but others shouldn't have to see it. That is, unless you want to take the hardline approach that Pixelfed is for photography rather than simply images. Your call though.
-
replied to Daniel Supernault last edited by
@dansup I'm tempted to split it into two levels of AI content so people can choose what to block: stuff made with "GenAI", and more traditional automation like "tweeting the temperature every hour." I get the sense that a lot of people don't mind machine-generated posts made with pre-2015 technology.
Could do #GenAIgenerated and #pepperidgeAI or something like that? Maybe #handraisedAI?
I can see the dividing line being a sticking point, though. Maybe the invention date is the clearest cut-off?