@[email protected] This might sound silly, but how about a tractor if you don't mind? My son loves tractors so he'll be able to appreciate it too!
Posts
-
51 years down, ??? to go. #birthday
-
@[email protected] Thank you! 51 = 3 x 17, but it's one of those numbers I always half-believe is a prime before I remember how to factor it.
-
@[email protected] @[email protected] When I first got into computing as a kid, the names of the companies making the stuff I used weren't shoved into my face with every keypress. Obviously I knew about Microsoft and Apple and whatnot, but they weren't a presence in my life. Now they're ever present, constantly looking over your shoulder. The companies demand you allow them constant access to everything you do with a computing device. The companies demand a seat at the table at every meeting about any new standard, law, or regulation. The companies spawn new companies, which make even more demands.
Computing today is oppressive. That should change, but for now it's hard to even imagine how.
-
Nvidia is self-dealing by investing in its own customers so they can keep buying its GPUs. That's something companies do when real investment starts to dry up.
-
@[email protected] @[email protected] Stock price is pretty disconnected from reality. I'd guess that the reality is they can't let GPU demand drop without risking a glut and a fall in future demand for chips in the pipeline. It takes years from design to production to sales, and if sales flag for too long, all their current investments in future chips are put at risk. If they overinvested based on hype-fueled projections, they'd have an incentive to drive demand harder than usual. Hard to say.
-
The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning.
#AI #GenAI #GenerativeAI #DALL-E #ChatGPT #GPT #LLM #art
-
The film director Bennett Miller has used DALL-E 2 to generate some very striking images that have been exhibited at the Gagosian gallery; to create them, he crafted detailed text prompts and then instructed DALL-E to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit. But he has said that he hasn’t been able to obtain comparable results on later releases of DALL-E.
From https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art
This is a good essay whose core argument about why generative AI cannot make art is one I strongly believe. The passage I quoted above reminds me of a blog post I wrote a while back, about why I feel it can't make sense to think of LLMs as "word calculators": https://bucci.onl/notes/Word-calculators-dont-add-up
#AI #GenAI #GenerativeAI #DALL-E #ChatGPT #GPT #LLM #art
-
Good gods, the logical leap from “we got a diffusion model to hallucinate a video of doom” to “no more video game programmers”. https://mastodon.social/@arstechnica/113040739034038174
-
@[email protected] Training on Unicode code point sequences would not address the problem I raised. It's still a "small world", not a "large world" / open-ended possibility space. That's the key point. @[email protected]
-
@[email protected] @[email protected] Think "word" for "token" (*). There are more than 50,000 words in English. But beyond that, no matter how many tokens it has, if the number is fixed at N, it will never suffice to capture how human beings use language. We invent new words, symbols, etc. fluently as we communicate, which GPT cannot do and may never be able to do.
(*) Really it's more like words, word fragments, and punctuation, but that doesn't matter for the point.
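Here's a quick sketch of the fixed-vocabulary point, using the open-source tiktoken library (purely illustrative, not from the thread; the coined word is made up):

import tiktoken  # pip install tiktoken

# GPT-2's byte-pair encoding: a vocabulary fixed when the model is built.
enc = tiktoken.get_encoding("gpt2")
print(enc.n_vocab)  # 50257 for GPT-2's encoding

# A made-up word has no token of its own; it can only be spelled out of
# pre-existing fragments, and the vocabulary itself never grows.
for word in ["tractor", "flurblewort"]:
    ids = enc.encode(word)
    print(word, "->", [enc.decode([i]) for i in ids])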
-
@[email protected] @[email protected] I'm not a data scientist, but I am a computer scientist, and there are lots of good reasons to believe that LLMs/genAI will not be able to deliver on the promises their advocates seem to be suggesting they will.
Prompt engineering is another "basis set" for describing (a subset of) what you are already able to express with other methods. Current techniques like game engines and modeling tools and whatnot give you a certain grasp on the space of possible games; they make certain games easier to make than others. Whatever kind of genAI gobbledygook ends up being applied to game creation is simply another way of grasping the space of possible games. It too will make a possibly different set of games easier to make than others. That's a basic observation from algorithmic information theory. There's no magic here.
Since most of the genAI models I'm aware of ultimately ground out in "small world" representations, there's a strong case to be made that they will always fall well short of what human beings are capable of. By "small world", I mean they tend to have finite-width bottlenecks. For instance, according to Stephen Wolfram's account, GPT's core LLM outputs a probability distribution over approximately 50,000 tokens. While an impressive amount of English text can be built from combinations of these 50,000 units, not all of it can. Further, humans readily invent new words and patterns and give them meaning; often whenever we see a constraint (like 50,000 tokens) one of the first things we do is blast through it and create patterns outside the constraint. Arguably, this latter process is closer to what language actually IS than the token-emission processes boosters tend to fixate on. Similar comments apply to image-generating AI.
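To make the bottleneck concrete, here's a toy numerical sketch (my own illustration; the dimensions are invented and this is not GPT's actual architecture):

import numpy as np

N_VOCAB = 50_000   # fixed when the model is built
HIDDEN = 768       # size of the internal state (made-up number)

rng = np.random.default_rng(0)
W_out = rng.normal(size=(N_VOCAB, HIDDEN))  # the output ("unembedding") matrix

def next_token_distribution(hidden_state):
    # Softmax over exactly N_VOCAB options: nothing outside that fixed list
    # can ever receive probability mass, no matter what the hidden state is.
    logits = W_out @ hidden_state
    logits -= logits.max()   # numerical stability
    p = np.exp(logits)
    return p / p.sum()

p = next_token_distribution(rng.normal(size=HIDDEN))
print(p.shape)   # (50000,)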
My belief is that if "game" generating AI ever becomes common, as soon as its limits are perceived people will start building tools that transcend the limits, to make games that can't be made with the generative AI. This is what matters, and this is something generative AI may never be able to do.
-
@[email protected] @[email protected] Among other things, "thinking every pixel in real time" is insanely inefficient. The whole point of doing, say, 3-d modeling is to compress an otherwise enormous amount of information into a comparatively small representation that's also naturally aligned with the intended future transformations of it. AI definitely does not solve the general problem of determining the best representation for arbitrary situations of this nature.
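A rough back-of-the-envelope comparison (my numbers, purely illustrative):

# One 1080p RGB frame, regenerated pixel by pixel at 60 fps:
width, height, channels, fps = 1920, 1080, 3, 60
print(width * height * channels * fps / 1e6)  # roughly 373 MB of pixels every second

# Versus a compact scene description: a sphere is a center, a radius,
# and a material id. A handful of bytes, reusable across every frame
# and directly meaningful to the transformations you want to apply.
floats_per_sphere = 5
print(floats_per_sphere * 4)  # 20 bytes as 32-bit floats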
There's an asymptotic argument to be made that this cannot be done in general, in the same way that if someone gave you the positions and momenta of every subatomic particle making up a rabbit you wouldn't even be able to tell that was a rabbit you were looking at, let alone determine whether it was about to eat a carrot. And even if you somehow could, it'd take you an astronomical number of orders of magnitude more work than just looking at the dang rabbit.
-
@[email protected] This one's worth a read I think.
-
The influence of powerful imagery and rhetorics in promotional material for computing is neither new nor surprising. There is a longstanding tradition of overselling the latest technology, claiming it to be the next (industrial) revolution or promising that it will outperform human beings. With the passage of time it may become difficult to recognize these invented ideas and images that have acquired a life of their own and have become integrated as part of a historical narrative. As modern, digital electronic computing is nearing its 100th anniversary, such recognition does not become easier, though we may be in need of it more than ever before.
From https://cacm.acm.org/opinion/the-myth-of-the-coder/
This particular case, where the praise of automatic programming implied the obsolescence of the coder, can be instructive for us today. There is a line that runs from Grace Hopper’s selling of “automatic coding” to today’s promises of large AI models such as Chat-GPT for revolutionizing computing by automating programming or even making human programmers obsolete. Then as now, it is certainly the case that the automation of some parts of programming is progressing, and it will upset or even redefine the division of labor. However, this is not a simple straightforward process that replaces the human element in one or more specific phases of programming by the computer itself. Rather, practice adopts new techniques to assist with existing tasks and jobs. Such changes do not generalize easily, and using titles like “coders”—or today’s “prompt engineers”—while memorable, does not do justice to the subtle process of changing practice.
#ComputerScience #computers #computing #programming #dev #tech #hype #GPT #ChatGPT #Copilot
-
@[email protected] @[email protected] @[email protected] Back when I was a software developer, someone in my group identified a subtle bug in money-processing code that in some circumstances would have resulted in a small rounding error. We pored over this for a while and convinced ourselves that we could have siphoned money out of it and, since we controlled all these systems, it could probably have gone undetected for quite some time. This company had revenues in the many hundreds of millions, so it would have added up to a tidy sum over enough time.
We alerted the management and it was fixed. A few things occur to me:
1. An LLM would never find a bug like that. In fact, it's fairly likely to generate them
2. Software developers who are not deeply attuned to their codebase (say because LLMs generated substantial portions of it) would be unlikely to find such bugs
3. If I were a software developer today and I were required to use LLMs in my work, I would not tell management about bugs like this if I found them. They've signalled to me that they don't respect what I'm capable of enough to support me in it, so why should I? I'd save the good energy for my hobbies and phone it in at work as much as I could get away with; it wouldn't matter to the higher-ups.
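To make the kind of bug in point 1 concrete, here's a toy sketch (an invented example, nothing like the actual code I'm describing above):

def split_charge_buggy(total_cents, parts):
    # Each part is rounded down; the leftover cents silently vanish,
    # or could quietly be routed somewhere else.
    return [total_cents // parts] * parts

def split_charge_fixed(total_cents, parts):
    # Distribute the remainder so every cent is accounted for.
    per_part, remainder = divmod(total_cents, parts)
    return [per_part + (1 if i < remainder else 0) for i in range(parts)]

total = 1_000_003  # $10,000.03, in cents
print(sum(split_charge_buggy(total, 7)), "of", total)   # 999999 of 1000003
print(sum(split_charge_fixed(total, 7)) == total)       # True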