Will the Social Web Foundation prioritize safety?
-
Will the Social Web Foundation prioritize safety?
If they do, what are some good places to start?
https://privacy.thenexus.today/swf-safety-draft/
This is a draft post, and feedback is very welcome. In fact, feedback on "what are some good places to start" is welcome even if you don't read the post ... later in this thread I'll include some of the ideas in the current version.
-
The Nexus of Privacy replied to The Nexus of Privacy last edited by
SWF's mission talks about a "growing, healthy" Fediverse, but their initial plans don't seem to be paying much attention to the "healthy" part. For example:
- @socialwebfdn's initial list of projects doesn't include anything addressing current Fediverse safety issues.
- As far as I know, none of SWF's advisors are safety experts, and @iftas' @jaz is the only one of their launch partners who has a history of prioritizing safety.
- SWF's list of launch partners doesn't include software projects like #GoToSocial, #Bonfire, and #Letterbook that are prioritizing safety.
- #Meta's involvement with SWF adds to the concerns -- as I discuss in more detail in the post.
-
The Nexus of Privacy replied to The Nexus of Privacy last edited by
Nothing's set in stone with the Social Web Foundation at this point. Most non-profits' initial projects, programs, staffing, networks of participants, and even missions evolve. My guess is that'll be the case for SWF as well.
In a discussion on SocialHub, @thisismissem suggested that SWF should commit to devoting at least X% of its resources to safety. That would be a good first step, and if X% is high enough it would send an important signal that they intend to prioritize this issue.
What should X be? Hmm, that's a good question ... and a place where feedback would be useful. Here's a poll!
https://infosec.exchange/@thenexusofprivacy/113315331877238065
-
The Nexus of Privacy replied to The Nexus of Privacy last edited by
The good news is that if SWF does decide to prioritize privacy, there are a lot of opportunities for impact. Of course, even if they spend a big chunk of their reported $1M funding on safety, that can't fund all of these ... but investment from SWF can also encourage other funding (from their corporate funders and others). Here are some that I list in the current draft (which goes into more detail on all of them):
- Funded participation by marginalized people in the new #W3C SWICG Trust and Safety Task Force
- Fediverse versions of tools from other platforms, like Block Party and Filter Buddy, that allow for collaborative defense against harassment and toxic content
- Threat modeling, an important technique for safety (and security and privacy) that isn't yet widely adopted in the fediverse
- Working with AI researchers in the Fediverse who take an anti-oppressive, ethics-and-safety-first approach (like @timnitGebru and the rest of the DAIR Institute) to look at ways to apply automated moderation tools without the racism, anti-LGBTQIA2S+ bias, Islamophobia, environmental harm, and consent violations of current ineffective AI-based tools from #Meta et al
- Consent-based tools and infrastructure, which historically haven't gotten a lot of attention -- despite the fediverse's focus on consent. With #BridgyFed, for example, @snarfed.org had to roll his own consent mechanism -- and so does every other developer.
- Collaboration with @iftas, an SWF launch partner ... what are some of the concrete ways they can work together?
This is another area where feedback would be useful. There are a lot of other interesting projects here -- what else should I include?
-
Emelia replied to The Nexus of Privacy last edited by
@thenexusofprivacy worth noting: privacy doesn't necessarily equal safety -- the first paragraph makes them look equivalent there