Almost any web page full of fluff, which is a rapidly rising proportion.
> And how would I know the LLM has error bounds appropriate for my situation?
You consider whether you care if it is wrong, and then you try it a couple of times, and apply some common sense when reading the summaries, just the same as when considering if you trust any human-written summary. Is this a real question?
"Get me the recipe from this page" feels like a place where I do really care that it gets it right, because in an unfamiliar recipe it doesn't take much hallucination around the ingredients to ruin the dish.
I guess I never come across that situation because I just don't engage with sources that have that fluff. That is a good example, but presumably there should be no errors there, because it's just stripping away unnecessary stuff? Although you would have to trust the LLM not to drop or change a key step in the process, which I still don't feel comfortable trusting.
I was thinking more along the lines of asking an LLM for a recipe or review, rather than asking for it to restrict its result to a single web page.
Because I can get the content I want there, and with a summarisation option it's irrelevant to me whether they "respect my time", because it doesn't take me any longer to get at the actual recipe.