> I noticed that the AI in the demo seems to be very rambly
That's been a major issue for me with LLMs this whole time. They can't just give me an answer, they have to spout a whole preamble that usually includes spitting my query back at me and trying to impress me with how many words it's saying, like it's a requirement. You can tell it e.g. "don't give me anything other than the list" but it's annoying to have to ask it every time.
They really need a "hidden yap" mode. LLMs perform better on difficult questions or interactions when they have "room to think". An introductory paragraph serves that purpose, and it's as much for the LLM to form its thoughts as it is for the user. But the intro paragraph doesn't have to be _read_ by the user; it just has to be emitted by the model and put into the transcript.
Someone suggested writing "Be concise in your answers. Excessive politeness is physically painful to me." in ChatGPT's custom instructions, and so far I've liked the results. I mean, I haven't done A/B testing, but I haven't had a problem with excessive verbosity ever since I set that custom prompt.
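For what it's worth, the custom-instructions trick is equivalent to prepending a standing system message to every chat request, so you never have to repeat it. A minimal sketch using the OpenAI-style chat message format (the helper name is mine, and this only builds the request payload, it doesn't call any API):

```python
# Bake a "no yap" instruction into every request via a system message,
# instead of restating it in each user prompt.

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend a standing conciseness instruction to a chat request."""
    # Instruction text is the one quoted above.
    system_instruction = (
        "Be concise in your answers. "
        "Excessive politeness is physically painful to me."
    )
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("List the planets in order from the sun.")
print(messages[0]["role"])  # system
```

The point is just that the instruction lives outside the prompt you type, which is what makes it feel like a persistent "mode" rather than a per-message request.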
I almost always find it too verbose and unnaturally mimicky when bouncing the question back. It doesn't paraphrase my request; it's more like restating it.
What I notice most is that it almost always repeats unnaturally long parts of my requests verbatim.
This might be more useful to people who do lazy prompting. My nature compels me to be clear and specific in all written text.
Every AI chat needs a "no yap mode"