It has for me, and for several developer friends. Considering the fame ChatGPT has gained, and that Stack Overflow's decline has accelerated, it's obvious the milkshake is migrating. Not all of it, of course; as I stated, "for the common use case".
It's painfully slow. I can Google the question, click one of the top results, skip to the relevant part and read it faster than GPT can generate two sentences. You also have to build an elaborate prompt instead of throwing two or three keywords into it.
It doesn't help that GPT insists on replying in its three-paragraph format, meaning that the first 30-40 words it generates are just trash to be ignored.
I found it useful once - when I had to write an essay about ISO 27001 for college and just wanted it to go away. I took what it generated and spent 20 minutes editing it to look closer to my style. For real work it isn't as useful.
> I can Google the question, click one of the top results, skip to the relevant part and read it faster than GPT can generate two sentences.
Ironically, this is why people like me prefer LLMs (when they're accurate). With Google, about 50% of the time the top SO hit doesn't answer my question. So I have to click 5-10 SO links, parsing each one to see if:
1. The question being asked is relevant to my problem.
2. The answer actually answers it.
I may be able to do it quickly, but it is a tedious burden on my brain. While GPT doesn't always work, the nice thing about it is that when it does work, it has taken care of this burden for me.
Also, GPT has pretty much memorized a lot of the answers. I once asked it an obscure question involving openpyxl, and it gave a perfectly working answer. I wondered: did it reason about the problem and generate the code, or is there an SO post with the exact same answer? So I Googled it, and sure enough, there was an SO question with the same code!
Except GPT's solution was superior in one tiny respect: The SO answer had some profanity in the code (in a commented line). GPT removed the profanity :-)
I find it incredible that you find an LLM slower, and fuller of useless chitchat, than Stack Overflow.
I don’t even open SO anymore; if it has a direct answer to your question, the LLM almost certainly does too; and asking new questions on SO is basically impossible.
If you manage to survive the gauntlet of "too specific, already answered, not of general interest, arbitrary moderator activity", getting an answer that actually addresses your question can take forever; most likely you'll get a stupid answer that doesn't address it, upvoted by idiots who don't understand that it isn't an answer to the actual question, and then, because the question "already has an answer", it gets ignored, never to receive a real one.
Maybe one day, a passing savant will answer in a comment.
…and yet, you find it faster and more reliable?
You, and I, have had different experiences on stack overflow in the last two years.
I think maybe you haven't been using GPT4 (the one where you have to pay money). Or else you're coming at it with a very strong prior, or you're not asking it about software engineering questions, or you're not phrasing your questions carefully. GPT4 is demonstrably extremely useful for technical questions in the realm of software engineering, and in addition to surfacing useful answers, it (obviously) presents a completely unprecedented conversational interface.
Can you give an example of a technical software question where you found it wasn't helpful? I'll see if I can get a good answer and post the permalink for you. I suspect you're not phrasing your questions well.
I have 110% replaced it with ChatGPT. Perhaps SO would still have had a chance back in its glory days, but there's no comparison between getting a direct, specific, instant answer and having to fight against SEO or moderators for hours.
I haven't. Because (free, as in free beer) ChatGPT is extremely slow, I have to write a rather extensive prompt to get the result I want, and then I still have to debug most of the code.
That's not very convenient, at least for now. I've gotten so used to search engines by now that it only takes a few keywords to get the expected result, be it an SO answer or a documentation page. And as people have mentioned, ChatGPT was trained on the stuff that's on the internet, so if no new material ever gets written because people just use AI, it will stop learning and won't be able to answer your new questions. For some edge cases I might try AI here or there, but usually it's not for me.
Hell, an example even comes to mind. I recently asked ChatGPT what a single-issue 5-stage pipeline on a CPU actually means. In particular, I wanted to know whether "single-issue" means that only one instruction is present in the pipeline at a time, or whether a new one gets shifted in on every clock cycle (if there is no hazard). It just couldn't answer it straightforwardly. It was also kind of hard to find the exact definition on the internet; I found it in a book from the 90s that was sitting on my bookshelf (Computer Architecture and Parallel Processing by Kai Hwang). Hint: single-issue just means that only one instruction can be in a given stage at a time, but multiple instructions still get processed inside the pipeline. The keyword is 'underpipelined'.
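For what it's worth, the distinction is easy to see in a toy model. The sketch below is my own illustration, not from the book or from ChatGPT; the classic IF/ID/EX/MEM/WB stage names and the no-hazard assumption are mine. It issues one new instruction per clock cycle and prints what is in flight each cycle: one instruction per stage, yet several instructions in the pipeline at once.

```python
# Toy model of a single-issue 5-stage pipeline (no hazards/stalls).
# "Single-issue" limits issue width to one instruction per cycle;
# it does not mean only one instruction is in the pipeline at a time.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_trace(n_instructions: int) -> list[list[str]]:
    """Return, per clock cycle, the list of 'instr@stage' entries in flight."""
    trace = []
    total_cycles = n_instructions + len(STAGES) - 1
    for cycle in range(total_cycles):
        in_flight = []
        for i in range(n_instructions):
            stage_idx = cycle - i  # instruction i was issued at cycle i
            if 0 <= stage_idx < len(STAGES):
                in_flight.append(f"i{i}@{STAGES[stage_idx]}")
        trace.append(in_flight)
    return trace

for cycle, entries in enumerate(pipeline_trace(3)):
    print(f"cycle {cycle}: {entries}")
```

For three instructions this shows up to three instructions in flight at once (and for longer programs up to five, one per stage), which is exactly the "single-issue but still pipelined" behaviour described above.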
Yes, someone tested it on GPT-4 for me too, and that one actually gave quite a decent reply. Still, there are always some cases somewhere where it messes up.
I'll just keep an eye on AI progress, but will probably not make it my go-to for some time. Maybe later (whenever that is).
> I haven't. Because (free, as in free beer) ChatGPT is extremely slow, I have to write a rather extensive prompt to get the result I want, and then I still have to debug most of the code.
That's because you are comparing asking ChatGPT to write full code to searching for a question on Stack Overflow and adapting their answer (which is comparing apples and oranges).
Try using ChatGPT like you use Stack Overflow instead (i.e. the question is "How would I record an audio stream to disk in Python" rather than "write me an application / function which...").
As an aside, try "How would I record an audio stream to disk in Python" in both GPT4 and a Stack Overflow search, and see which has the better answer! (Clue: GPT4, and if you don't like GPT4's answer just ask it to clarify or change it)
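For reference, here is a minimal sketch of the kind of answer either source might give to that question. Capturing a live microphone stream needs a third-party library (e.g. sounddevice or pyaudio), so to keep this self-contained the "stream" below is a synthetic sine wave written with only the stdlib wave module; the file name and tone parameters are made up for illustration.

```python
# Write an audio "stream" to disk as a WAV file using only the stdlib.
# Real microphone capture would come from a library such as sounddevice;
# here the captured frames are replaced by a synthetic 440 Hz test tone.
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second
DURATION_S = 1       # length of the synthetic stream, in seconds
FREQ_HZ = 440        # A4 test tone

def synth_frames(n: int) -> bytes:
    """Generate n 16-bit mono PCM samples of a sine wave."""
    samples = (
        int(32767 * math.sin(2 * math.pi * FREQ_HZ * t / SAMPLE_RATE))
        for t in range(n)
    )
    return b"".join(struct.pack("<h", s) for s in samples)

with wave.open("tone.wav", "wb") as wf:
    wf.setnchannels(1)       # mono
    wf.setsampwidth(2)       # 16-bit samples
    wf.setframerate(SAMPLE_RATE)
    wf.writeframes(synth_frames(SAMPLE_RATE * DURATION_S))
```

With a real capture library, you would feed the recorded byte chunks to `writeframes` in a loop instead of generating them.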
>Try using ChatGPT like you use Stack Overflow instead (i.e. the question is "How would I record an audio stream to disk in Python" rather than "write me an application / function which...").
That's my point, though. I get that it can produce quite good results if you are specific enough, and for some applications it makes sense to take your time and describe the task in as much detail as possible.
Most of the time I just need some small snippet, though, and usually I can get that with just a few keywords in my favorite search engine, which is way faster. So the conclusion is: it's not one or the other. They should be used complementarily, or at least that's what I'm doing (the search engine for quick hints and ChatGPT for more verbose stuff like 'write me a parser for this csv in awk').
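As an illustration of the "small snippet" category being discussed, here is the sort of answer either route should hand back, sketched with Python's stdlib csv module rather than awk; the sample data and column names are made up for the example.

```python
# Parse a small CSV and sum one column: the kind of throwaway snippet
# a keyword search or a one-line prompt should both produce quickly.
import csv
import io

data = "name,qty\nwidget,3\ngadget,5\n"  # stands in for a real file

totals = 0
for row in csv.DictReader(io.StringIO(data)):
    totals += int(row["qty"])

print(totals)
```

For a real file you would replace the `io.StringIO` wrapper with `open("file.csv", newline="")`.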
Personally, ChatGPT generally gives me a quicker, better, simpler and ad-free result for the snippet (at least with GPT4).
Plus I can ask follow-up questions in a context-driven way ("Can I do this without importing a library?").
I'm aware that different people will have different feelings on this though and personal tastes will differ, but while search engines stagnate I suspect the needle will continue to shift towards AI.
I always read here that ChatGPT is amazing. Can you give a link on how to use it? Every time I tried to Google it, I got lots of different results, and when I tried it, it's not even usable for the basic things I want. Is the ChatGPT you're talking about on their website? Do I have to pay for it?
> Can you give a link on how to use it? Every time I tried to google it returns lots of different results and when I tried it it's not even usable for basic things I want.
Here is an example of using it to write simple PowerShell scripts:
You need to pay for it if you want access to the latest version of the model, along with some beta features like plugins. Plugins are extremely useful and it is worth paying just to get access to them. For instance, you need to have a certain plugin to get it to read links.
In that JSON example you're honestly losing more time with ChatGPT than doing it yourself. It seems more like mentoring a junior than getting help from an assistant. Most of my interactions with it have been this way.
I knew/know very little about 3D printers and the fields didn't mean much to me so I didn't want to have to research every one of them. It wouldn't have been difficult, just tedious.
You're mostly right, in my experience. I have spent quite a bit of time trying to get ChatGPT to be a worthwhile part of my workflow, and I guess sometimes it is, but most of the time it gets very fundamental things wrong in the basic code, config, or content I try to generate. It feels like it's mostly just hype these days.
Can you please specify whether you use (paid) GPT-4? Would you kindly provide links to a few examples of very fundamental things incorrect?
My experience - the free version made up a lot of things but still felt very useful - enough to want to upgrade to the paid version. With the paid version, I notice very rarely that it hallucinates. It does make errors but it can correct them when I provide feedback. It is possible that I just do not notice the errors you would notice, it is also possible that we use it differently. I would like to know.
> Can you please specify whether you use (paid) GPT-4?
Paid.
> Would you kindly provide links to a few examples of very fundamental things incorrect?
No, definitely not.
> I notice very rarely that it hallucinates.
Unsure of what "hallucinates" means in this case. Some examples of things I've used it for: docker configuration, small blocks of code, generating a cover letter, proofreading a document, YAML validation, questions about various software SDKs. The outcome is usually somewhere on the spectrum of "not even close/not even valid output" to "kind of close but not close enough to warrant a paid service". When I ask for a simple paragraph and I get a response that isn't grammatically correct/doesn't include punctuation, I'm not sure what I'm paying for.
>> Unsure of what "hallucinates" means in this case
The term "hallucinations" is now commonly used for instances of AI making stuff up - like when I asked ChatGPT (before I had paid account) to recommend 5 books about a certain topic and two of the recommended books looked totally plausible, but when I tried to find them, I discovered there are no such books. This is where I see a big difference between GPT-3.5 and GPT-4.
>> I get a response that isn't grammatically correct/doesn't include punctuation
What punctuation? If you mean stuff like commas separating complex sentences, my English is definitely not good enough to spot that. But your mention of punctuation reminded me of problems that ChatGPT has with my native language... any chance you are using ChatGPT in a language other than English?
Anecdata: I've started asking Bing these questions instead of SO. E.g., it recently gave me a very helpful answer for debugging a Spring issue and cited its sources. What it didn't do was present me with a whole lot of moderation cruft.
I can ask for recommendations for tools and libraries, which IIRC SO disallows.
I also don't have to pray my question will get enough vote attention or worry that I posted it at the wrong time of day.
On the whole, going the GPT route has been more satisfying in all ways.
> I can ask for recommendations for tools and libraries, which IIRC SO disallows.
Bing Chat almost always is useless for me with these kinds of queries. A few days ago I asked for a tool that monitors to see if a website is up. I told it I needed the tool to be something I'd run locally - not an online service and not something I need to sign up for.
Small nitpick: I thought it was just an icon, but it turns out to be the button for switching light/dark mode. It would be great if you could replace it.
Sure, but that doesn’t mean it’s not true, and for many of us its truth is prima facie because it’s true of both our own usage and the people we work with and talk to.
Claim asserted without evidence.