When he’s not busy being one of our best fiction writers, Ted Chiang has become our best critic of generative AI:
“The programmer Simon Willison has described the training for large language models as ‘money laundering for copyrighted data,’ which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.”
Why A.I. Isn’t Going to Make Art
Ted Chiang on how artificial intelligence still isn’t as intelligent as it is perceived to be and how its profound limitations should temper our fears about it replacing real art-making.
The New Yorker (www.newyorker.com)
-
“It is very easy to get ChatGPT to emit a series of words such as ‘I am happy to see you.’ There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language.”
-
@gleick Sorry to disagree, but I found it disappointing. The material on communication is just a rehash of the fundamental point of the stochastic parrots paper. And the material on art and choices is disappointing because the extent to which art could be created by algorithm or chance was a major theme in 20th-century art, pursued by painters, poets, and composers (e.g., Feldman, Cage, and Boulez in music), with countless PhDs written on the issue. It’s as if none of that ever happened.