A fundamental principle that ought to be solidly established, but that every AI vendor is desperately hoping we’ll scrap entirely, is that •a company should be responsible for its decisions regardless of how it makes them•:
- Even if the decision is made by one individual at the company
- Even if the decision is delegated to a third party
- Even if the decision is algorithmic
- Even if the decision is made by an AI https://eigenmagic.net/@NewtonMark/113421248799173867
-
I would be curious to hear from •actual lawyers• about the status of this principle in the law pre-gen-AI. Under what circumstances can a company say, “Oops, one bad apple! Not our fault!” or “Don’t sue us, sue the people we hired”?
-
Marsh Gardiner 🌱🐝 replied to Paul Cantrell:
@inthehands interesting point—AI as a kind of accountability sink!
-
@inthehands It is truly amazing how hard it seems for society to grasp this basic, fundamental thing.
Somehow companies have been pushing hard to make "AI" somehow "not their fault." If something goes wrong, it's the user's fault, or the fault of the people who wrote the bad things in the material they illegally took and profited from, without permission from or payment to the original owners.
It's really amazing how this forceful "full steam ahead" approach seems to have shielded them from liability.
-
It's a sad joke about businesses that when they made a corporation a separate legal entity, and therefore a person, they created psychopaths with immense power.
Now the liability that comes with legal-entity status stands in the way of automation technology. It is a measure of the risks incurred with AI that corporations can't progress this tech without finding a way out of taking liability for errors.
-
@Awks @inthehands I suppose you think the Nazi doctors who furthered medical science by performing human experimentation were a good thing.
Tech can improve and still follow morals and legality. Yes, it's sometimes harder. Yes, it's sometimes slower. But it also results in something that is more hardened and better for humanity overall.
And I hate to make a "slippery slope" type argument, but it is an honest truth that the more freedom you give companies to bypass laws, the worse they get.
-
@nazokiyoubinbou @Awks
Nazo, I think you misunderstood the post you’re replying to. In particular, please reread this sentence more carefully [emph added]:
“It is a measure of the ••risks incurred•• with AI that corporations can't progress this tech without ••finding a way out of taking liability•• for errors.”