
I haven't. Because the free (as in free beer) ChatGPT is extremely slow, I have to write a rather extensive prompt to get the result I want, and then I still have to debug most of the code.

That's not very convenient, at least for now. I've gotten so used to search engines by now that it only takes a few keywords to get the expected result, be it an SO answer or a documentation page. And as people have mentioned, ChatGPT was trained on the stuff that's on the internet, so if there's never any new stuff because people just use AI, then it won't learn and won't be able to answer your new questions. For some edge cases I might try AI here or there, but usually it's not for me.

Hell, an example even comes to mind. I recently asked ChatGPT what a single-issue 5-stage pipeline on a CPU actually means. In particular, I wanted to know whether "single-issue" meant that only one instruction is present in the pipeline at a time, or whether a new one gets shifted in on every clock cycle (if there is no hazard). It just couldn't answer that straightforwardly. It was also kind of hard to find the exact definition on the internet. I eventually found it in a book from the 90s that was sitting on my bookshelf (Computer Architecture and Parallel Processing by Kai Hwang). Hint: single-issue just means that only one instruction can occupy a given stage at a time, but multiple instructions still get processed inside the pipeline at once. The keyword is 'underpipelined'.
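
The difference is easy to see in a toy model. Here's a minimal sketch (my own illustration, not from the book or the thread; stage names and instruction labels are made up, and hazards/stalls are ignored) of a single-issue pipeline where one instruction is issued per cycle but several are in flight at once:

    # Toy model of a single-issue 5-stage pipeline: exactly one new
    # instruction is issued per clock cycle, yet up to five instructions
    # are in flight at once, each occupying a different stage.
    STAGES = ["IF", "ID", "EX", "MEM", "WB"]

    def simulate(instructions):
        # Classic fill-and-drain timing: n instructions through k stages
        # take n + k - 1 cycles.
        total_cycles = len(instructions) + len(STAGES) - 1
        for cycle in range(1, total_cycles + 1):
            occupied = []
            for stage_idx, stage in enumerate(STAGES):
                instr_idx = cycle - 1 - stage_idx   # which instruction sits in this stage
                if 0 <= instr_idx < len(instructions):
                    occupied.append(f"{stage}:{instructions[instr_idx]}")
            print(f"cycle {cycle:2}: " + ", ".join(occupied))

    simulate(["i1", "i2", "i3", "i4", "i5", "i6"])
    # Once the pipeline fills, all five stages hold different instructions
    # at the same time -- "single-issue" limits how many are issued per
    # cycle, not how many are in the pipeline.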



ChatGPT-4 seems to do fine on this for me. https://chat.openai.com/share/c5cc8cb6-ebb5-45eb-9476-ef85a6...


Yes, someone tested it on GPT-4 for me too, and that one actually gave quite a decent reply. Still, there are always some cases where it messes up.

I'll just keep an eye on AI progress, but it probably won't be my go-to for some time. Maybe later (whenever that is).


> I haven't. Because the free (as in free beer) ChatGPT is extremely slow, I have to write a rather extensive prompt to get the result I want, and then I still have to debug most of the code.

That's because you're comparing asking ChatGPT to write full code with searching for a question on Stack Overflow and adapting the answer, which is comparing apples and oranges.

Try using ChatGPT like you use Stack Overflow instead (i.e. the question is "How would I record an audio stream to disk in Python" rather than "write me an application / function which...").

As an aside, try "How would I record an audio stream to disk in Python" in both GPT-4 and a Stack Overflow search and see which has the better answer! (Clue: GPT-4, and if you don't like GPT-4's answer, just ask it to clarify or change it.)
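
For reference, the kind of snippet I'd expect back looks something along these lines (a sketch only, assuming the third-party sounddevice and soundfile packages; the filename and parameters are placeholders and the actual answer will vary):

    # "How would I record an audio stream to disk in Python" -- sketch.
    # Requires: pip install sounddevice soundfile
    import sounddevice as sd
    import soundfile as sf

    samplerate = 44100          # samples per second
    duration = 5                # seconds to record
    channels = 1

    # Record from the default input device; sd.rec returns a NumPy array.
    recording = sd.rec(int(duration * samplerate),
                       samplerate=samplerate, channels=channels)
    sd.wait()                   # block until the recording is finished

    sf.write("recording.wav", recording, samplerate)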


>Try using ChatGPT like you use Stack Overflow instead (i.e. the question is "How would I record an audio stream to disk in Python" rather than "write me an application / function which...").

That's my point, though. I get that it can produce quite good results if you are specific enough, and for some applications it makes sense to take your time and describe the problem as thoroughly as possible.

Most of the time I just need some small snippet, though, and usually I can get that with just a few keywords in my favorite search engine, which is way faster. So the conclusion is: it's not one or the other. They should be used as complements, or at least that's what I do (the search engine for quick hints and ChatGPT for the more verbose stuff, e.g. "write me a parser for this CSV in awk").


Personally, ChatGPT generally gives me a quicker, better, simpler, and ad-free result for snippets (at least with GPT-4).

Plus I can ask follow-up questions in a context-driven way ("Can I do this without importing a library?").
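
A follow-up like that tends to get an answer along these lines (a sketch, assuming "library" means third-party packages; the standard library's wave module can still write the file once you have raw PCM samples):

    # Sketch only: writing audio to disk with nothing but the standard
    # library. Capturing from a microphone still needs a third-party
    # package, so this writes a synthesized tone instead; the filename,
    # tone, and parameters are illustrative.
    import math
    import struct
    import wave

    samplerate = 44100
    duration = 2                # seconds
    frequency = 440.0           # Hz, an A4 tone

    # Build 16-bit signed PCM samples for a sine wave.
    frames = b"".join(
        struct.pack("<h", int(32767 * math.sin(2 * math.pi * frequency * n / samplerate)))
        for n in range(int(duration * samplerate))
    )

    with wave.open("tone.wav", "wb") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(2)      # 2 bytes = 16-bit samples
        wav.setframerate(samplerate)
        wav.writeframes(frames)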

I'm aware that different people will feel differently about this and tastes vary, but while search engines stagnate, I suspect the needle will continue to shift towards AI.



