
That's an interesting prompt. I tried it in Spaces on our Pulze.ai platform, which automatically chose the right model for this type of question (gpt-4-turbo): "Yes, you can infuse garlic into olive oil without heating it up, but it requires caution due to the risk of botulism, a potentially fatal illness caused by Clostridium botulinum bacteria. These bacteria can thrive in low-oxygen environments and can produce toxins in food products like garlic-infused oil if not prepared or stored correctly."

I think that is one advantage of not blindly trusting a single model: finding consensus among several top-rated models within one interface lets you cross-check quickly.
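The cross-checking idea above can be sketched in a few lines. This is a hypothetical illustration, not Pulze.ai's actual API: the model names and canned answers are placeholders, and `cross_check` simply takes whatever answers you collected and reports the majority view and its vote share.

```python
from collections import Counter

def cross_check(answers: dict[str, str]) -> tuple[str, float]:
    """Return the most common answer and its share of the votes.

    `answers` maps a model name to that model's (normalized) reply.
    """
    counts = Counter(a.strip().lower() for a in answers.values())
    top, n = counts.most_common(1)[0]
    return top, n / len(answers)

# Placeholder replies standing in for real model outputs.
answers = {
    "gpt-4-turbo": "yes, with refrigeration and a short shelf life",
    "gemini-1.5-pro": "yes, with refrigeration and a short shelf life",
    "model-c": "no, cold infusion is unsafe",
}

majority, share = cross_check(answers)
# majority -> "yes, with refrigeration and a short shelf life", share -> 2/3
```

In practice the replies would need semantic normalization (models rarely agree verbatim), but the principle is the same: disagreement between models is itself a useful signal to dig deeper.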



Which Gemini version was used matters too, by the way. I just tried Gemini-1.5-pro and it handled this fine, so I really think the newer LLM versions are able to catch this.



