@Edent in fact, there is a thing that kinda does that, called Perplexity. Should be much better suited for this, as far as I know.
I tried your question, there is a result if you care to take a look (I don't know if it's actually any better) https://www.perplexity.ai/search/which-oscar-winners-have-appea-GvT78HkiRiSUIIftuGoLag#0
I know people are probably bored of ChatGPT mistakes - but its knowledge of #DoctorWho is impressively bad!
@Edent I don't know what state the bubble is in, but retrieving specific, accurate data is just not the use case for current-generation models. It's a shame that this is so rarely communicated.
It can be vastly improved for specific knowledge domains with RAG and other buzzwords, but even that doesn't fully solve the problem. You don't go to ChatGPT (or whatever the current AI thing is) for straight-up facts, at least not yet, just as you don't take the very first result you find on Google as a fact. Well, hopefully.
If they teach these things to automatically query *actual* (not Bing) search engines, evaluate the reputation of sources, and double-check the data, we could see some significant improvements here.
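Just to sketch what I mean by "evaluate reputation and double-check": something like only accepting an answer when enough reputable sources independently agree. This is a toy illustration, not any real API - the source names and reputation scores are made up:

```python
# Toy sketch of "search, weigh source reputation, cross-check".
# Source names and reputation scores are invented for illustration.

# Hypothetical reputation scores per source domain (0.0 - 1.0).
REPUTATION = {
    "encyclopedia.example": 0.9,
    "news.example": 0.7,
    "forum.example": 0.3,
}

def cross_checked_answer(claims_by_source, min_score=0.5, min_agreeing=2):
    """Accept a claim only if enough reputable sources agree on it."""
    votes = {}
    for source, claim in claims_by_source.items():
        if REPUTATION.get(source, 0.0) >= min_score:
            votes[claim] = votes.get(claim, 0) + 1
    # Return the best-supported claim that clears the agreement bar, else None.
    for claim, count in sorted(votes.items(), key=lambda kv: -kv[1]):
        if count >= min_agreeing:
            return claim
    return None

# Two reputable sources agree; the low-reputation forum is ignored.
results = {
    "encyclopedia.example": "First aired 23 November 1963",
    "news.example": "First aired 23 November 1963",
    "forum.example": "First aired in 1965",
}
print(cross_checked_answer(results))
```

With only one reputable source (or none agreeing), it returns None instead of guessing - which is roughly the behaviour you'd want instead of a confidently wrong answer.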