Hacker News

I have a system prompt that gives me probability estimates for everything the LLM claims. It's a simple fix for your problem.


My problem is people coming to communities I'm a part of with information they got "from Google" and that information being 100% wrong. Not sure how your prompt helps with that, I need Google to fix their system first.


LLMs are unable to introspect and don't know what they don't know. Watson, the Jeopardy bot, had a confidence interval, but Watson was not an LLM.


How does that work if the LLM is the one generating the probabilities too?


According to you, if a human makes a prediction with some probability estimate, it is useless because the estimate itself is inaccurate (hence probability "estimate"). In reality, nothing needs to be 100% accurate to be useful, including the estimate of the probability itself.


It's weird to make an assumption about OP's position and argue with that instead of what they actually wrote.

Also, why make it so personal? I think it was a fair question to ask - you didn't answer how it works - just got weirdly defensive about it.


Hey, that was not my intention. I wanted to point out that we ourselves assign probability estimates to our own predictions, despite those estimates not being 100% accurate.


Hate to break it to you, but the "probability estimates" it spits out are also complete bullshit.


Nope! You can also self-assign probability estimates to your own predictions. If you follow them, you will end up more accurate in the long run, even if the probability estimates themselves are not perfectly accurate.
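The "useful even if imperfect" claim above is a calibration argument, and it can be sketched concretely. A minimal illustration (all data here is made up for the example) uses the Brier score, which measures how far stated probabilities land from actual outcomes; a forecaster whose estimates track reality scores better than an overconfident one, even if neither is perfectly calibrated:

```python
def brier_score(predictions):
    """Mean squared gap between stated probability and outcome (0 or 1).
    Lower is better; always saying 0.5 scores exactly 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# (probability assigned, what actually happened) -- illustrative data
roughly_calibrated = [(0.9, 1), (0.8, 1), (0.3, 0), (0.2, 0), (0.6, 1)]
overconfident = [(0.99, 1), (0.99, 0), (0.95, 1), (0.95, 0), (0.99, 1)]

print(brier_score(roughly_calibrated))  # imperfect estimates, but low score
print(brier_score(overconfident))       # high confidence, wrong half the time
```

The point is that the score rewards honest uncertainty: the roughly calibrated forecaster beats the overconfident one despite never being certain, which is the sense in which imperfect probability estimates still make you "more accurate in the long run."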



