My current LLM workflow typically consists of the following dialogue.
Me: <asks a simple question>
LLM: <gives wrong answer 1>
Me: That doesn't work. I get the following error.
LLM: My apologies! <gives wrong answer 2>
Me: That doesn't work either!
LLM: You're absolutely right! Thanks for your patience. <gives wrong answer 3>
Me: Nope, it still doesn't work.
LLM: My apologies! <gives wrong answer 1> again.
-
For all the time saved by using the solutions LLMs provide, I spend almost twice as much time correcting their answers in conversation, resulting in a net loss of productivity.
-
I've also been advised to leverage the full benefit of context windows by uploading e.g. whole chapters of documentation into the model.
There's another catch. Unless the accuracy of the answers reaches at least 95 %, why would I want to prepare and maintain said context window myself? It's nothing but a poor return on investment.
Tell me, where's the productivity boost you are so feverishly chasing?
-
@nikoheikkila And if you already had to find the right section of the docs, isn't it just faster to read the docs directly? It also makes the developer faster in the future, because they learn along the way.