blog! “Yet another AI Racism example”
-
@Edent I await the really tired "it's not lying because it's not consciously telling an untruth, it's just a piece of software so we should call it something else" arguments, but with racism (and other biases) instead of lying.
-
@tommorris
Something something justified true belief.
-
@Edent It does seem like if you think Siri might magically become conscious in the next couple of years, externalism about justification might be a pleasingly coherent epistemology to adopt to explain how an AI can kinda know things.
(Given the nerd habit of only caring about the subset of philosophy that has been turned into sci-fi, I’m guessing not many brain cycles have been dedicated to it.)
-
il_fritz replied to Terence Eden
> This "AI" would rather hallucinate than acknowledge the Black actors who have been in Doctor Who.
You are phrasing it so as to ascribe intention, when it's simply bias in the training data.
A more constructive response would be to debate how we can assess / regulate the kind of training data such models are trained on.
-
@il_fritz I wrote a post about this - https://shkspr.mobi/blog/2020/05/postels-law-also-applies-to-human-communication/
-
-
@il_fritz That's a fascinating idea - could you tell me more please?
-
@Edent I was asking YOU, as [checks notes] Open Standards / Source / Data geek.
Don't you think the issue here is the training data rather than the clickbaity "AI is racist" angle?
-
@Edent it seems to have given answers based on what's most talked about, rather than what's true.
Because LLMs can't tell the difference.
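The point above — that a statistical model favours what appears most often in its training data rather than what is true — can be illustrated with a minimal sketch. The corpus, counts, and answer strings here are invented purely for illustration; real LLMs predict tokens, not whole answers, but the frequency-over-truth effect is analogous.

```python
from collections import Counter

# Toy "training corpus": repeated strings stand in for how often a claim
# appears in training data. The counts and strings are invented examples.
corpus = (
    ["popular but wrong answer"] * 9
    + ["correct but rarely mentioned answer"] * 1
)

def most_talked_about(mentions):
    """Return the most frequent answer, the way a purely statistical
    model favours what it has seen most often, with no notion of truth."""
    return Counter(mentions).most_common(1)[0][0]

print(most_talked_about(corpus))  # -> popular but wrong answer
```

Nothing in `most_talked_about` checks correctness — only frequency — which is the sense in which such a model "can't tell the difference".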
-
@JetlagJen perhaps. But do you think Lynda Baron is talked about more than Daniel Kaluuya?