Ozone, Bluesky's stackable moderation system, is up and open-sourced. https://bsky.social/about/blog/03-12-2024-stackable-moderation
I think it's interesting in obvious ways and risky in some less obvious ones (that have less to do with "O NO BILLIONAIRES" ...
-
Caspar C. Mierau replied to Erin Kissane
@kissane @joshwayne @mergesort Bluesky, as a commercial company that is going to earn money from what they build, makes it obvious that they consider a central moderation instance bad because it is like a "Supreme Court". Which is in itself already a strange line of criticism. What they don't say here is: moderation costs money. Yes, it does. And social network platforms hate paying people for this hard job, which is necessary, and it is only fair that they do it on a platform where they also earn money from users generating content. The result is an ecosystem where it is OK to be harassed, because you are free to move to another instance. This is just making a bad system worse and selling it as a new technical feature. If Bluesky would finally accept its responsibility, build a well-paid moderation team, and then introduce "composable" moderation, yes, that would be fine, as it would be an add-on. But this is cost reduction by technical implementation.
When Jack initially announced Bluesky, the first (!) point he made was the following:
»First, we’re facing entirely new challenges centralized solutions are struggling to meet. For instance, centralized enforcement of global policy to address abuse and misleading information is unlikely to scale over the long-term without placing far too much burden on people.«
So he argues that being a responsible company, one that is obliged to international laws and to its users, is a "burden on people". Well: the people here are the stakeholders of billion-dollar platforms. And that is what Bluesky is the solution to.
I would have loved to see a blue sky in Bluesky, but besides looking nice, I mainly see a platform that aims toward deregulation.
https://bsky.social/about/blog/4-13-2023-moderation
https://twitter.com/jack/status/1204766082206011393
-
Erin Kissane replied to Caspar C. Mierau
@leitmedium @joshwayne @mergesort So, Bluesky has a large and active moderation team. They do platform-style moderation transparency reporting, and paid humans review all reports. That's what's actually happening. (Also, there are no "instances" to move between.)
I have zero problem with critique of their model, but a lot of the discussion is remarkably decoupled from actual events.
-
@leitmedium @joshwayne @mergesort The usual next step is to move the goalposts and say “Ah but they won’t moderate in the future and no one can prove they will, because Jack!”
(Which, sure! Maybe they kill off all their central moderation, maybe it's all a ruse, we can make things up forever. But I have low faith in my ability to parse out futures from inferred intent, and even less in most other people's, so it's not a mode I find fruitful.)
-
Vesipeto Vetehinen replied to Erin Kissane
@[email protected] @[email protected] @[email protected] @[email protected] it's not just Jack though. Their documentation reflects this philosophy in parts too. Maybe they should come out and say it isn't their goal anymore if that is the case?
-
Erin Kissane replied to Vesipeto Vetehinen
@vetehinen @leitmedium @joshwayne @mergesort What I’m saying kinda always is that I think it’s more useful to look at *what is actually happening* than to read philosophical statements and try to work out what systems they would have resulted in if they were building on a frictionless plane.
So I really value “What is the actual system and what does it *do*” as the soundest basis for trying to understand the (very) near future.
-
Caspar C. Mierau replied to Erin Kissane
@kissane @joshwayne @mergesort Well, I quoted official statements and documentation, which I have been studying for quite a while now in order to understand what Bluesky wants to achieve in the future. If this is not the right type of discussion, I am sorry for interrupting. I did not want to be alarmist here or shout "Jack!!". All fine.
-
Erin Kissane replied to Caspar C. Mierau
@leitmedium @joshwayne @mergesort Nah, that second post was me trying to get ahead of the thread's direction, not aimed at you specifically.
I think it's great to look at stated philosophy, just not in isolation, because…
>If Bluesky would finally accept its responsibility, build a well-paid moderation team, and then introduce "composable" moderation, yes, that would be fine.
This is actually what they've done:
Bluesky 2023 Moderation Report - Bluesky
We have hired and trained a full-time team of moderators, launched and iterated on several community and individual moderation features, developed and refined policies both public and internal, designed and redesigned product features to reduce abuse, and built several infrastructure components from scratch to support our Trust and Safety work.
-
@leitmedium @joshwayne @mergesort
They've also committed to doing kill-switch moderation for illegal content and network abuse, beyond Bluesky-the-app (the App View plus official clients), across everything their relays and PDSes touch on the future ATP network. (This is a lot more central modding than happens on fedi, but it still upsets a lot of people because it's less than they want, which is interesting to me.)
-
Vesipeto Vetehinen replied to Erin Kissane
@[email protected] @[email protected] @[email protected] @[email protected] I appreciate the distinction but I also feel like there are so many examples of getting burned when a company starts with something good and switches it up later that we should probably give at least some weight to what they are saying their intentions are too.
-
Erin Kissane replied to Vesipeto Vetehinen
@vetehinen @leitmedium @joshwayne @mergesort Total agreement. I spoke too strongly: I think it's good and appropriate to look at the cloud of philosophical stuff, but also to maintain a healthy skepticism about how that translates into the actual, and especially to put what actually happens at the center. (Because I am a The Purpose of a System Is What It Does person.)
-
Hmm, here's an example that their red-teaming doesn't appear to have considered. There doesn't seem to be any way to prevent an account from following a labeler and reporting posts: I just tested, and even if the labeler has blocked the account, it can still subscribe and report. So what's to prevent bad actors from bombarding a labeler with (valid) reports of traumatizing images that Bluesky has hidden by default?
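To make the gap concrete, here is a minimal TypeScript sketch of the kind of labeler-side pre-filter that seems to be missing. The report shape and the names (`IncomingReport`, `shouldQueueReport`) are hypothetical, not Ozone's actual internals; the only grounded assumption is that a labeler could know which DIDs it has blocked, since block records are public.

```typescript
// Hypothetical pre-filter for a labeler's report queue (not Ozone's
// actual implementation): drop reports filed by accounts the labeler
// has blocked. Because app.bsky.graph.block records are public, the
// blocklist could be synced from the labeler's own repo in advance.

interface IncomingReport {
  reporterDid: string; // DID of the account filing the report
  subjectUri: string;  // at:// URI of the reported content
  reasonType: string;  // e.g. "com.atproto.moderation.defs#reasonViolation"
}

function shouldQueueReport(
  report: IncomingReport,
  blockedDids: Set<string>,
): boolean {
  // Reports from blocked accounts never reach human reviewers;
  // everything else goes into the normal review queue.
  return !blockedDids.has(report.reporterDid);
}

// Usage: a report from a blocked DID is dropped.
const blocked = new Set(["did:plc:blocked-reporter"]); // placeholder DID
console.log(
  shouldQueueReport(
    {
      reporterDid: "did:plc:blocked-reporter",
      subjectUri: "at://did:plc:someone/app.bsky.feed.post/3kexample",
      reasonType: "com.atproto.moderation.defs#reasonViolation",
    },
    blocked,
  ),
); // -> false
```

Even a filter like this wouldn't stop the attack outright, but it would at least make a labeler's own block list mean something for incoming reports.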
-
jonny (good kind) replied to Erin Kissane
@[email protected] as @[email protected] mentions, it seems threadiverse-type apps are going in that direction.
NodeBB (working on AP integration) is also built around being an actual community, with separated local and federated posts. They can mix, of course, but there's an explicit sense of locality that is intentionally missing in Mastodon.
-
The user experience is to submit directly to the labelers; not sure how it works behind the scenes. There's some discussion from a Bluesky dev at https://bsky.app/profile/jacob.gold/post/3knqwjlvhu22q But blocks are public on Bluesky, so no matter what the layering is, they *could* be checked.
And yeah, getting a new DID is also an attack. If they threat-modeled this, they either missed some very obvious stuff or skipped the all-important "implement mitigations" step.
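For what it's worth, the "blocks are public" point is checkable today: block records live in the blocker's public repo as app.bsky.graph.block records, enumerable via the public com.atproto.repo.listRecords endpoint. A rough sketch (the labeler DID is a placeholder, and a real labeler's repo may live on a PDS other than bsky.social):

```typescript
// Sketch: enumerate the public app.bsky.graph.block records in an
// account's repo via com.atproto.repo.listRecords. Host and DID are
// placeholders for illustration.

const PDS_HOST = "https://bsky.social";
const LABELER_DID = "did:plc:example-labeler"; // hypothetical

async function fetchBlockedDids(repoDid: string): Promise<Set<string>> {
  const blocked = new Set<string>();
  let cursor: string | undefined;
  do {
    const url = new URL(`${PDS_HOST}/xrpc/com.atproto.repo.listRecords`);
    url.searchParams.set("repo", repoDid);
    url.searchParams.set("collection", "app.bsky.graph.block");
    url.searchParams.set("limit", "100");
    if (cursor) url.searchParams.set("cursor", cursor);

    const res = await fetch(url);
    if (!res.ok) throw new Error(`listRecords failed: ${res.status}`);
    const page = await res.json();

    for (const rec of page.records ?? []) {
      blocked.add(rec.value.subject); // the DID this repo has blocked
    }
    cursor = page.cursor;
  } while (cursor);
  return blocked;
}

fetchBlockedDids(LABELER_DID).then((dids) =>
  console.log(`labeler blocks ${dids.size} DIDs`),
);
```

Of course, per the new-DID point above, a fresh DID costs an attacker next to nothing, so per-DID filtering would need to be paired with something like rate limits or account-age heuristics.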
-
And @kissane it also seems to me it would be a good issue to file. I tagged a couple of their devs at https://bsky.app/profile/jdp23.bsky.social/post/3knugvlg37k2r so it'll be interesting to see if they agree.
-
@jdp23 @jonny @tchambers To come back to this node in the thread, interesting notes from Bryan on a piece of the discussion: https://staging.bsky.app/profile/bnewbold.net/post/3knyw7ydofu26
My sense is there is a lot of iceberg under the water that hasn't been documented or fully implemented yet, which is one of the reasons I'm in wait-and-see mode about a lot of this stuff.
-
@jdp23 @jonny This is kind of a meta comment, but I would myself be hesitant to publish or even discuss threat-modeling findings outside a core team.
(If the gaps and rough edges in the labeler system don't get filled/filed in the coming weeks, that would suggest to me that they missed things, triaged them down, or decided to accept a new risk surface and mitigate elsewhere; from the outside, it seems like it won't be especially clear which.)