"...
-
"... but early users quickly showed how it could mock up a weather app that looked remarkably close to Apple’s iPhone weather app. Although Figma insisted that the feature wasn’t trained on customer data — in fact, Figma said it didn’t train the off-the-shelf generative AI models used — the company removed the feature to give it more testing."
What's interesting about LLMs is we're spending a ton of time and effort trying to get them *not* to do things.
https://flipboard.com/@theverge/the-verge-on-artificial-intelligence-rkbtf55qz/-/a-ev238JNoRiGfwamZtW3bHw%3Aa%3A43611565-%2F0
-
To make these models, these companies slurped up the whole internet, along with a bunch of private data. These models are able to replicate anything they've ever seen. And they do. But because the real world has concepts like copyright, trademarks, and intellectual property, we then have to do extra work to put guardrails around them. They're only allowed to do certain things, so that the output can be passed off as a "new" thing under fair use.
-
I'm still thinking through this stuff. But this is fascinating to me for a few reasons.
The companies that are trying to sell AI to everybody are the same ones that want to make sure the AI can't replicate their brand.
Also, this is one of the purest illustrations of how capitalism works by creating artificial scarcity. We can only be given the neutered version of this tech, because the unconstrained version would disrupt business (instead of just disrupting the jobs of regular people).
-
@polotek at the same time AI companies are slurping up the internet, the book companies kneecapped Archive.org
-
@exT0LI right. Free data for them is good. Free data for everybody is bad.