Facebook did not have an over-censorship problem.
-
Facebook did not have an over-censorship problem. Facebook's main problem is that its algorithms promote hate speech and disinformation, lately with the addition of inauthentic content. Fighting that through fact-checking is always going to be an uphill battle.
This is the reason I believe that content algorithms have no place in social media and that user profiling should be banned. Spreading misinformation and hate speech is a lot harder without the amplification algorithms.
-
Henry Edward Hardy replied to Jon S. von Tetzchner
The question I would pose is this: Are social media platforms common carriers, without liability for the content they transmit, but also enjoined from snooping, selling ads, or discriminating based on the content of speech?
Or are they publishers, who own their content, are legally responsible for it, can make editorial choices based on editorial policy, and can monetize the content?
It seems to me they have been trying to have it both ways since the litigation over Section 230 of the CDA.
-
SnowBlind2005 replied to Jon S. von Tetzchner
The algorithms don't help, but even without them it's still going to be an uphill battle.
Here on Mastodon there is a lot of misinformation, disinformation and the same nonsense that I see on just about every other platform. Maybe it's just not as amplified without the algorithms.
I think I'll go and post about how I run dnf upgrade today.
-
Jon S. von Tetzchner replied to SnowBlind2005
I see very little misinformation here.
Most of what I see is in my feed, and I have chosen what goes into it. That is not the case on Facebook.
The point is that Facebook amplifies misinformation and hate speech. The impact of that is a lot greater than people may realize. If they did not decide what shows up in my feed, and I just saw posts from my friends and the people I follow, things would be very different.
These popularity algorithms are also very easily gamed, and they are. If you have them, you will see this impact.
-
SnowBlind2005 replied to Jon S. von Tetzchner
Just because you don't see it doesn't mean it's not real or not happening. That's what I am getting at. And that can be dangerous in and of itself.
I quit Facebook in 2015, so I am not sure what it is like there anymore. The reason I quit was the clear gamification of the content. It became unreal, just a race to be the most popular regardless of reality.
It is strange how Facebook, which was supposed to be a community, is now launching AI users to interact with people, because the platform is so toxic that no one wants to interact.
Hate really is the low-hanging fruit, and Facebook is just taking advantage of the revenue stream. People are inherently dumb. Education is where the resources should go.
Just today I responded to a misinformation post. Intent and context are key, though. Their intent wasn't misinformation (at least I don't think it was), but they left out a considerable amount of context needed to completely understand what they were trying to say.
-
Kevin Steinmetz replied to Jon S. von Tetzchner
Seems to me that if the platforms want to claim they are common carriers like the telephone company, and not publishers like the newspaper, they should not be able to promote certain content at the expense of other content.
I think I'd like to see a ban on content promotion algorithms take the form of: "If you promoted defamatory content or false advertising, etc., then you are liable for it. Your Section 230 exemption from liability depends on your content neutrality."
-
Jon S. von Tetzchner replied to marymessall
To me there is just the question of whether it is OK for companies to build profiles on you based on what you view on their platform. That profile is later used to select content and to show ads. If you ban that, you have solved a lot of problems.
I think any benefits from this user profiling are at best questionable, and I question why it has ever been OK to do this at all.