I wonder if someone thinks that a large dataset of "prompts", given sufficient "prompt engineering", would make a decent training set. (For the record, I don't necessarily think it would: LLMs already reproduce syntactically valid expressions, which is all this dataset would consist of. But all reason has flown out the door with "AI", so who knows what people would pay for at this point.)