Hacker News

Do you trust the code by humans?


What GPT-3 creates usually looks 'competent'.

Junior human devs usually leave a visible mess in the code, and errors are easy to spot. With GPT-3's solutions to complex-ish problems, not so much: one needs to read line by line to spot super weird errors that no competent programmer would ever plant.

E.g.: I asked it to implement quicksort in x86 asm, but increment every sorted element by 5. It did nearly all of that right, except that it replaced every sorted element with 5. The code looked great otherwise, concise and even commented. I pointed out the mistake and it promptly agreed with me and fixed it. At this point I freaked out and asked it to write about Dickens using adjectives that start with the letter 'p'. Which it (mostly) did. Good lord.
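For what it's worth, the intended behavior versus the bug described is easy to sketch. This is a hypothetical Python rendering (not the x86 asm in question), just to make the difference concrete; the function names are mine:

```python
def quicksort(xs):
    # Plain recursive quicksort on a list of numbers.
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[0]
    smaller = [x for x in xs[1:] if x < pivot]
    larger = [x for x in xs[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

def quicksort_plus_five(xs):
    # What was asked for: increment every sorted element by 5.
    return [x + 5 for x in quicksort(xs)]

def quicksort_replace_five(xs):
    # The bug GPT-3 produced: replace every sorted element with 5.
    return [5 for _ in quicksort(xs)]
```

On `[3, 1, 2]` the first gives `[6, 7, 8]` while the buggy version gives `[5, 5, 5]` — the kind of mistake that's invisible unless you actually read the output line by line.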


I wonder if it would find the bug if you asked: “There is a bug in that code. Could you explain the problem and fix it?” I’ve certainly had that happen in enough interviews, haha.


From what I've been able to experiment with, it is capable of doing so.


A snarky comment that is entirely on point. Well done :-)



