Is there a social/structural notion of entropy?
-
@aud i don't think the problem is in being unable to foresee/forestall all possible futures but in effectively responding to what arises and building in mechanisms for what you can see in response. we've seen that rust has a code of conduct but fails to enforce it—this is something we can look to address. no system of government is perfect
-
@[email protected] absolutely. I used to phrase this kind of thing, as, say... a "good king" example. Like you can, in theory, have a benevolent ruler (let's ignore that's probably never happened); even if you did, it doesn't make the monarchy a good system to so speak.
So it's just sort of musings on that idea taken a little further with regard to the inherent flexibility of how rules and laws might be interpreted.
Power being what it is, though, as was pointed out with the gun lobby, there's nothing stopping the 'second king' from deciding to pursue a propaganda campaign where words used in the original laws now mean something else that is easily abused (and frankly this probably happens).
-
@[email protected] yeah... hmmm. It's just, how do we get to completely ridiculous point where an entity like the rust foundation says it would be discrimination to not accept funding from war profiteers?
Speaking of linguistic abuse... bleh.
Anyway, there's no using technology (and I mean this to include the written word) to get rid of the "trust" problem at the end of the day.
-
@[email protected] I'm kind of glad to hear this, because originalism on its face is such a ridiculous concept (the idea that you can divine out what they mean is ridiculous; the idea that you should even more so) so it's nice to hear that when you take other parts of the constitution in, it also already doesn't make sense.
-
@aud unfortunately a lot of libertarians (derogatory) use a similar framing re: software licenses that disallow military/surveillance/apartheid uses, like the ACSL. there is an idea that these restrictions can be misused to harm good people, which only makes sense if you do not really care about the harm they are trying to stop, imho. you can see US sanctions as an example of this, but US sanctions are imposed by an imperialist military state and not a band of people on the internet, so it's a bit wrong to draw the analogy
-
d@nny "disc@" mc² replied to d@nny "disc@" mc²
@aud there is a claim i don't agree with that licenses depend on laws, which depend on the state, which is therefore bad. a license is actually a contract with users, which can be applied even in non-state contexts. one thing i don't like is anarchists who use it to act like no agreement between people(s) can be upheld in an anarchist society, when my conception of an anarchist society is precisely one in which agreements can be upheld
-
@[email protected] reminds me of the “how can you have morality without god?” question which is like, wait, what? it seems like morality is more important if you believe this is all there is.
-
@aud it's extremely common in "AI" discourse because the fundamental assertion of an LLM is that all meaning can be distilled to textual language (this part is not really that bad in itself; it's possible to speak to people purely through text) and furthermore that any mode of statistical text generation is equivalent to the exercise of language (this is marketing and obviously false)
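For contrast, "statistical text generation" in its simplest form is just sampling from observed word-to-word frequencies. A minimal bigram-chain sketch (all names here are illustrative, not anything referenced in the thread); note that nothing in it models meaning, only adjacency counts:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Emit words by sampling successors; pure frequency, no semantics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

model = build_bigram_model("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))  # plausible-looking word salad
```

The output is locally fluent and globally meaningless, which is the gap the marketing claim papers over.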
-
@[email protected] ARGH
-
d@nny "disc@" mc² replied to d@nny "disc@" mc²
@aud similar to how people elide that "AI" art is being used to deskill artists and pretend it's a debate over the definition of art: rebuttals to the second, very specious claim (that statistical text generation simulates language) are wilfully misinterpreted as challenges to the first claim (about what textual language is able to represent). so a bunch of misguided people who think very highly of their own intelligence, and not others', are abusing shannon information to equate the two claims
-
d@nny "disc@" mc² replied to d@nny "disc@" mc²
@aud so shannon information is used as an argument to say with sufficient state being shoved through the text channel, intelligence arises
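For reference, Shannon information is a statistic over symbol frequencies and carries no claim about meaning. A minimal sketch of per-character entropy (the function name is mine, purely illustrative):

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Per-character entropy in bits: the average 'surprise' of the
    character distribution. It measures predictability, nothing more."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

shannon_entropy("aaaa")  # 0 bits: one symbol, fully predictable
shannon_entropy("abcd")  # 2 bits: four equally likely symbols
```

A repeated character scores 0 bits and four equally likely characters score 2 bits; the number tracks statistical predictability of a channel, so citing it as evidence that "intelligence arises" in the channel is a category error.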
-
@[email protected] Jesus, I’ve seen some bad takes but WOW, that is… one hell of an impressively bad argument they’re making. Ugh.
-
@aud this is like one of the tenets of TESCREAL iiuc. far divorced from shannon, people always misuse any metric they can get their hands on to push their agenda and this is why the field of statistics was created
-
d@nny "disc@" mc² replied to d@nny "disc@" mc²
@aud to define intelligence is to imply it can be quantified and simulated. this is one reason i have always avoided that thinking
-
@[email protected] and Nevermind how racist that very concept it is; the whole idea is steeped in old ass racist ideology. It’s horseshit.
-
d@nny "disc@" mc² replied to d@nny "disc@" mc²
@aud and fwiw i find the concern trolling, like "oh, you don't think this statistical machine is intelligent? that means you think other types of people aren't intelligent", incredibly upsetting and frustrating when the people pushing the statistical machines are literal eugenicists. not unlike when people say corporations are less chauvinistic than communities without PR departments
-
@[email protected] Jesus Christ, I think I’d pop blood vessel if I saw that argument in the wild. That’s so… just fucking bad faith and wrong.
-
fucking yikes. I'm more and more convinced that contemporary computational theory of mind is as much inseparable from our history of hyper-individualism, eugenics, racial capitalism, etc. as it is from the history of computer technologies developing alongside neuroscience.