OpenAI is doubling the number of messages customers can send to GPT-4 (twitter.com/openai)
29 points by DantesKite on July 20, 2023 | 8 comments


I wouldn't care about a low limit if OpenAI brought back the chatbot we had in March.

GPT-4 is barely above unusable now.



Right now I start my prompts to chatgpt-4 with "don't be lazy". For every question I ask, it answers that it is a complex problem... GPT-3.5 in the API is more consistent than chatgpt-4. Even with additional prompting, chatgpt-4 makes so many mistakes that it takes me multiple tries to get the right output, with conversation resets from time to time to start over from the previous working solution.
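
For anyone who wants to try the comparison described above, here is a minimal sketch, assuming the openai Python package (pre-1.0 `openai.ChatCompletion` interface) and an OPENAI_API_KEY environment variable. The "don't be lazy" wording comes from the comment; the example question, the `ask` helper, and the temperature setting are illustrative, not part of the original post.

    import os
    import openai

    # Assumes the key is set in the environment, e.g. export OPENAI_API_KEY=...
    openai.api_key = os.environ["OPENAI_API_KEY"]

    def ask(model: str, question: str) -> str:
        """Send one chat completion request and return the assistant's reply."""
        response = openai.ChatCompletion.create(
            model=model,
            messages=[
                # The "don't be lazy" instruction the commenter prepends to every prompt.
                {"role": "system", "content": "Don't be lazy. Answer the question fully."},
                {"role": "user", "content": question},
            ],
            temperature=0,  # reduce run-to-run variation when comparing models
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        question = "Write a Python function that parses an ISO 8601 date string."
        for model in ("gpt-3.5-turbo", "gpt-4"):
            print(f"--- {model} ---")
            print(ask(model, question))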


This is welcome, because GPT-4 now requires a few iterations of prompts to do its job. Before, it took no more than one prompt and one clarification to get a good output. Now it's just a GPT-3.5-turbo that hallucinates slightly less.


Do you use the web app?

If so, could you please go back in your history and make a new chat with an old prompt which had an excellent response?

I am curious whether you can see the degradation and share the example.


To your point: the fact that we haven't seen widespread examples of prompts that used to work but now don't seems telling about the accuracy of these claims.


But GPT-4 sucks now.


Finally. It was a ridiculous limit




