Ozone, Bluesky's stackable moderation system, is up and open-sourced. https://bsky.social/about/blog/03-12-2024-stackable-moderation I think it's interesting in obvious ways and risky in some less obvious ones (that have less to do with "O NO BILLIONAIRES" ...
-
jonny (good kind) replied to Erin Kissane
@kissane
Totally agree. And the labels are a different vector than lists alone, too, since they are applied to the post/account itself, rather than the post/account being indexed and so on. Also agree that blocking is at best a reactive measure, even if identity had more friction. I think you're right in diagnosing lack of place as the core of it, and it's a really nasty downside of "frictionless all-to-all platform" as a design goal. Fedi fiefdoms are not great, but having no sense of place doesn't feel like an alternative either.
-
On the other note, I think the "illegal content and network abuse only" refers to the moderation that extends beyond Bluesky-the-reference-app/platform, in a larger future system.
Bluesky as a platform—which is what I *think* Tim and I were discussing—does takedowns and deletions for lots of things that don't rise to that level, and the team talks about that in their moderation report and other places. (I know you know this, I just want to try to keep the thread clear-ish.)
-
Erin Kissane replied to jonny (good kind)
@jonny Let us not even begin to speak of Nostr
-
@kissane @jonny I think that labelling won't actually produce a usable "list of targets", since it's never "filter in this stuff I don't follow" but "filter out this stuff I might see".
So because it's subtractive, you don't know the content you don't know, as an end user. Yeah, the label operator would have a list of accounts / hashtags / etc to monitor, but that'd be internal information to them.
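To make the "subtractive" point concrete, here is a minimal sketch in Python of how label-based filtering works from the viewer's side. This is a simplified, hypothetical model (the class names, label values, and `visible_timeline` function are illustrative, not the actual AT Protocol / Ozone API): labels can only hide things the viewer would already have seen, so a subscriber's feed never reveals what was filtered out.

```python
# Hypothetical sketch of subtractive label filtering (NOT the real
# AT Protocol / Ozone API; names and label values are invented).

from dataclasses import dataclass, field

@dataclass
class Post:
    uri: str
    text: str
    labels: set = field(default_factory=set)  # labels applied by labelers

def visible_timeline(posts, hidden_labels):
    # Labels subtract from what the viewer already sees; they never
    # surface new content, so an end user can't derive a "list of
    # targets" from their own feed. The labeler's internal watch list
    # never reaches the subscriber.
    return [p for p in posts if not (p.labels & hidden_labels)]

posts = [
    Post("at://a/1", "ordinary post"),
    Post("at://b/2", "flagged post", labels={"rude"}),
]

# A viewer subscribed to a labeler that hides the "rude" label
# sees only the unlabeled post:
print([p.uri for p in visible_timeline(posts, {"rude"})])  # ['at://a/1']
```

The contrast with a feed generator (mentioned below in the thread) is that a feed generator is additive: it can select posts *because* they carry a label, which is where the "list of targets" concern reappears.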
-
@thisismissem @kissane @jonny "Feed generators" get those labels and can opt posts in based on that.
-
@kissane They've certainly thought it through a lot more than ActivityPub and Mastodon did at the equivalent stage! Bryan said they've done red-teaming; perhaps that included threat modeling as well. If so, it'd be a first: no social network that I know of has ever done this early in its lifecycle (or, for that matter, later). Time will tell.
-
Caspar C. Mierau replied to Erin Kissane
@kissane @joshwayne @mergesort BlueSky - as a commercial company that is going to earn money from what they build - makes it obvious that they consider a central moderation instance bad because it is like a "Supreme Court". Which is by itself already a strange kind of criticism. What they don't say here is: moderation costs money. Yes, it does. And social network platforms hate paying people for this hard job - which is necessary, and it is only fair that they do it on a platform where they also earn money from users generating content. The result is an ecosystem where it is OK to be harassed, since you are free to move to another instance. This is just making a bad system worse and selling it as a new technical feature. If BlueSky would finally accept its responsibility, build a well-paid moderation team, and then introduce "composable" moderation, yes, that would be fine. As it would be an add-on. But this is cost reduction by technical implementation.
When Jack initially announced BlueSky, the first (!) point he made was the following:
»First, we’re facing entirely new challenges centralized solutions are struggling to meet. For instance, centralized enforcement of global policy to address abuse and misleading information is unlikely to scale over the long-term without placing far too much burden on people.«
So he argues that being a responsible company that is bound by international laws - and accountable to its users - is a "burden on people". Well: the "people" here are the stakeholders of billion-dollar platforms. And that is who BlueSky is a solution for, too.
I would have loved to see a blue sky in BlueSky but besides looking nice I mainly see a platform that aims towards deregulation.
https://bsky.social/about/blog/4-13-2023-moderation
https://twitter.com/jack/status/1204766082206011393
-
Erin Kissane replied to Caspar C. Mierau
@leitmedium @joshwayne @mergesort So, Bluesky has a large and active moderation team: they do platform-style moderation transparency reporting, and paid humans review all reports. That’s what’s actually happening. (Also, there are no “instances” to move between.)
I have zero problem with critique of their model, but a lot of the discussion is remarkably decoupled from actual events.
-
@leitmedium @joshwayne @mergesort The usual next step is to move the goalposts and say “Ah but they won’t moderate in the future and no one can prove they will, because Jack!”
(Which, sure! Maybe they kill off all their central moderation, maybe it’s all a ruse, we can make things up forever. But I have low faith about my ability to parse out futures from inferred intent and even less about most other people’s, so it’s not a mode I find fruitful.)
-
Vesipeto Vetehinen replied to Erin Kissane
@[email protected] @[email protected] @[email protected] @[email protected] it's not just Jack though. Their documentation reflects this philosophy in parts too. Maybe they should come out and say it isn't their goal anymore if that is the case?
-
Erin Kissane replied to Vesipeto Vetehinen
@vetehinen @leitmedium @joshwayne @mergesort What I’m saying kinda always is that I think it’s more useful to look at *what is actually happening* than to read philosophical statements and try to work out what systems they would have resulted in if they were building on a frictionless plane.
So I really value “What is the actual system and what does it *do*” as the soundest basis for trying to understand the (very) near future.
-
Caspar C. Mierau replied to Erin Kissane
@kissane @joshwayne @mergesort Well, I quoted official statements and documentation, which I have been studying for quite a while now - in order to understand what BlueSky wants to achieve in the future. If this is not the right type of discussion, I am sorry for interrupting. I did not want to be alarmist here or shout "Jack!!". All fine.
-
Erin Kissane replied to Caspar C. Mierau
@leitmedium @joshwayne @mergesort Nah, that second post was me trying to get ahead of the thread's direction, not aimed at you specifically.
I think it's great to look at stated philosophy, just not in isolation, because…
>If BlueSky would finally accept its responsibility, build a well-paid moderation team, and then introduce "composable" moderation, yes, that would be fine.
This is actually what they've done:
Bluesky 2023 Moderation Report:
>We have hired and trained a full-time team of moderators, launched and iterated on several community and individual moderation features, developed and refined policies both public and internal, designed and redesigned product features to reduce abuse, and built several infrastructure components from scratch to support our Trust and Safety work.
-
@leitmedium @joshwayne @mergesort
They've also committed to doing kill-switch moderation for illegal content and network abuse (beyond the Bluesky AppView + official clients) across everything their relays and PDSes touch on the future ATP network. (This is a lot more central modding than happens on fedi, but it still upsets a lot of people because it's less than they want, which is interesting to me.)
-
Vesipeto Vetehinen replied to Erin Kissane
@[email protected] @[email protected] @[email protected] @[email protected] I appreciate the distinction but I also feel like there are so many examples of getting burned when a company starts with something good and switches it up later that we should probably give at least some weight to what they are saying their intentions are too.
-
Erin Kissane replied to Vesipeto Vetehinen
@vetehinen @leitmedium @joshwayne @mergesort Total agreement. I spoke too strongly—I think it’s good and appropriate to look at the cloud of philosophical stuff, but also to maintain a healthy skepticism about how that translates into the actual, and especially to put what actually happens at the center. (Because I am a The Purpose of a System Is What It Does person.)
-
Hmm, here's an example that their red-teaming doesn't appear to have considered. There doesn't seem to be any way to prevent an account from following a labeler and reporting posts -- I just tested, and even if the labeler has blocked the account, it can still subscribe and report. So what's to prevent bad actors from bombarding a labeler with (valid) reports of traumatizing images that Bluesky has hidden by default?
-
jonny (good kind) replied to Erin Kissane