Hacker News

> I noticed that the AI in the demo seems to be very rambly

That's been a major issue for me with LLMs this whole time. They can't just give me an answer; they have to spout a whole preamble that usually includes spitting my query back at me and trying to impress me with how many words it's saying, like it's a requirement. You can tell it, e.g., "don't give me anything other than the list," but it's annoying to have to ask it every time.

Every AI chat needs a "no yap mode"



They really need a "hidden yap" mode. LLMs perform better on difficult questions or interactions when they have "room to think". An introductory paragraph is like that, and it's as much for the LLM to form its thoughts as it is for the user. But for all that, the intro paragraph doesn't have to be _read_ by the user; it just has to be emitted by GPT and put into the transcript.
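The idea above can be sketched in a few lines: keep the model's full output (preamble included) in the conversation history so later turns still benefit from the "room to think", but show the user only what comes after the first paragraph break. This is a toy illustration, not any real chat API; the blank-line split heuristic and the transcript format are assumptions.

```python
def split_preamble(full_response: str) -> tuple[str, str]:
    """Split a response into (preamble, answer) at the first blank line.

    If there is no blank line, treat the whole response as the answer.
    """
    parts = full_response.split("\n\n", 1)
    if len(parts) == 2:
        return parts[0], parts[1]
    return "", full_response


def respond(transcript: list[dict], full_response: str) -> str:
    """Append the full response to the transcript; return only the visible part."""
    transcript.append({"role": "assistant", "content": full_response})
    _, answer = split_preamble(full_response)
    return answer


# Example: the transcript keeps the warm-up paragraph, the user sees the list.
transcript = [{"role": "user", "content": "List three sorting algorithms."}]
raw = "Great question! Sorting is a classic topic.\n\n1. Quicksort\n2. Mergesort\n3. Heapsort"
visible = respond(transcript, raw)
```

A real implementation would need a more robust way to detect where the "yap" ends, but the point is that hiding the preamble is purely a display concern; nothing about the model's context has to change.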


That's very true. I believe it's a variation of chain-of-thought prompting they're doing. ChatGPT seems trained to do this, for one.


Someone suggested writing "Be concise in your answers. Excessive politeness is physically painful to me." in ChatGPT's custom instructions, and so far I've liked the results. I mean, I haven't done A/B testing, but I haven't had a problem with excessive verbosity ever since I set that custom prompt.


I’m kind of glad it does this so I know that it understood what I asked. A good presenter will do this as well when responding to questions.


I almost always find it too verbose and unnaturally mimicky when it bounces the question back. It doesn't paraphrase my request; it's more like restating it.

What I notice most is that it almost always repeats unnaturally long parts of my requests verbatim.

This might be more useful to people who do lazy prompting. My nature compels me to be clear and specific in all written text.



