Google's AI tooling for deciding unemployment cases is a RAG that hooks into a knowledge base that includes personal history and Nevada law: https://gizmodo.com/googles-ai-will-help-decide-whether-unemployed-workers-get-benefits-2000496215
This is exactly the kind of hybrid LLM-KG model that I wrote #SurveillanceGraphs about. The threat was never just the LLMs alone, but their integration into the broader logic of surveillance capitalism. The fact that humans will review the model's recommendations is not a safeguard but a means of generating opaque discretionary decisions under the guise of objectivity. The fact that the tools don't work is irrelevant: they answer the needs of efficiency and plausible deniability of agency. The brokenness of the models is a feature, not a bug.
https://jon-e.net/surveillance-graphs/#nsf-open-knowledge-network
https://jon-e.net/surveillance-graphs/#play-this-pattern-out-across-algorithmic-governance-predictive-p