When worrying about the harms from deploying AI, it’s issues like this that have caused and will cause the most harm. Not the AI suddenly turning into the machines from the Matrix and enslaving humanity.
The system, which was meant to analyze income and health information to automatically determine eligibility for benefits, simply didn’t work and often failed to load the correct data.
Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits
Thousands of children and adults were automatically terminated from Medicaid and disability benefits programs by a computer system that was supposed to make applying for and receiving health coverage easier.
Gizmodo (gizmodo.com)
-
Captain Janegay 🫖 replied to Dare Obasanjo
@carnage4life It's not clear from the story whether this particular system is AI driven, but here are a couple of similar, but even more horrifying, stories about systems that are/were:
-
@carnage4life @risottobias I suspect Tennessee is just one of many similar cases where algorithms are utterly failing, harming the most vulnerable the most (and in many cases harming folks who don’t have the resources to push back).
Here in CA I’m fairly sure (from direct personal experience) that there are some underlying glitches in the otherwise good CoveredCA system (and the various county-led Medicare and Medi-Cal systems), leading to mistakes, delays, and more referrals back and forth.