answering captchas as badly as possible while still not being determined to be a robot is actually pretty fun
-
[email protected] replied to [email protected]
I can't believe I thought I was playing a video game again! I keep getting confused...
-
[email protected] replied to [email protected]
I would love my data to be discarded. Freeloaders!
-
That kind of data sanitization is just standard practice. You need some level of confidence in your data’s accuracy, and for anything normally distributed, throwing out obvious outliers is a safe bet.
-
If you cut the outliers out of a dataset in which 30% of respondents are bullshitters, that doesn't magically make the system accurate; it only makes it more precise.
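The precision-vs-accuracy point can be sketched with a toy simulation (all numbers here are hypothetical, just to illustrate the argument): if 30% of respondents are systematically wrong in the same direction, trimming outliers shrinks the spread of the data but leaves the mean biased away from the true value.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 10.0

# Simulate 1000 responses: 70% honest (noisy around the truth),
# 30% "bullshitters" who systematically answer high.
responses = []
for _ in range(1000):
    if random.random() < 0.7:
        responses.append(random.gauss(TRUE_VALUE, 1.0))        # honest, noisy
    else:
        responses.append(random.gauss(TRUE_VALUE + 3.0, 1.0))  # biased

def trim_outliers(data, k=2.0):
    """Drop points more than k standard deviations from the mean."""
    mu = statistics.mean(data)
    sd = statistics.stdev(data)
    return [x for x in data if abs(x - mu) <= k * sd]

trimmed = trim_outliers(responses)

# Trimming shrinks the spread (precision improves), but the mean stays
# well above the true value (accuracy is not restored).
print(f"raw:     mean={statistics.mean(responses):.2f}  sd={statistics.stdev(responses):.2f}")
print(f"trimmed: mean={statistics.mean(trimmed):.2f}  sd={statistics.stdev(trimmed):.2f}")
```

Because the bad answers cluster together rather than sitting in the tails, most of them survive the trim: the variance drops but the systematic offset stays.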
-
Possibly linux replied to [email protected]
hCAPTCHA is the worst
I can't get through them most of the time. I try to find a YouTube guide, but everything I can find is out of date.
-
[email protected] replied to [email protected]
Also expect your AI to be engaged in some heady and deep forms of self-hatred that are going to take decades to unravel.
-
[email protected] replied to [email protected]
if you haven't noticed yet, we want these "smart" vehicles to be abolished, along with the non-stop automatic surveillance and data mining they do
-
[email protected] replied to [email protected]
Um, you're preaching to the choir here.
As much as I loathe the AI and surveillance and shit, I get downvoted any time I express my opinions.
Maybe I should be more clear...
FUCK AI!!!
-
[email protected] replied to [email protected]
it did not seem so. we will never do away with AI by being friendly to the tech, let alone by helping it grow
-
[email protected] replied to [email protected]
My point is, it exists now, whether we like it or not. I, for one, do not like it; I prefer Actual Intelligence, like the stuff that comes from the brain noodle.
But if AI is gonna be the thing, which obviously it is, people shouldn't deliberately give it bad training data.
-
[email protected] replied to [email protected]
If you use internet discussions as training data, you can expect to find all sorts of crazy biases. Completely unfiltered data should produce a chatbot that exaggerates many human traits while completely burying others.
For example, on Reddit and Lemmy, you’ll find lots of clever puns. On Mastodon, you’ll find all sorts of LGBT advocates or otherwise queer people. On Xitter, you’ll find all the racists and white supremacists. There are also old school forums that amplify things even further.