I can't stop thinking about this story about the state of Nevada recategorizing schools as not having "at risk students" (including a school with *kids experiencing homelessness*) because of risk prediction scores from an edtech company described as AI (it is not, I'll note, described as AI on the company's own website, but who knows how it's described in partnerships). Direct funding cuts for so many students. This is the reality of tech in education, not TED talks about idealized tutors.
-
Set aside the question of whether the model(s) at hand predict a certain percentage of the variance in their constrained outcome. You don't even need to know that.
The question is this: an enormous real-world intervention is being performed on children right now, the change in funding. Reclassifying these schools has an immediate effect at scale.
What evidence does a tech team have to choose this change? What accountability? What expertise? What right to this data?
-
I would get my throat ripped out, rightfully so tbh, if I wrote a paper saying homeless children don't qualify for state funding that's specifically set aside for at-risk children battling extreme adversities. For just using my scientific platform to ARGUE that.
Why do software developers get to build products that directly implement that?
-
Paul Cantrell replied to Cat Hicks:
@grimalkina Decoupling and distancing decision-making power from decision-evaluating power is the purpose (in a POSIWID sense) of some grossly large percentage of business processes. Software isn’t unique in serving this function, though it is perhaps uniquely effective at it.
(cf. “accountability sink,” of course)
-
@inthehands I actually disagree not with your first framing but with your second. I do think there are well-measured, unique factors in how we excuse and privilege this type of work, and very strong field-specific biases about it. I get a lot of replies that are like "so sad but...not about developers." All things are complex, but I DO think it's in part about what we privilege in expertise. I have been in actual rooms with school administrators listening to engineers over me.
-
@grimalkina Oh, no disagreement there. If software is unusually effective at this, it’s first and foremost because of cultural bias both •within• software (“We’re smart! We’re infallible!”) and •about• software (“It’s objective! It’s magical!”).