ELIZA fooled plenty of people (both originally and in the study you just linked), but I still wouldn't say ELIZA passed/passes the Turing test in general. It just shows that occasionally or even frequently fooling people is not a sufficient proxy for general intelligence. Of course there isn't a standardized definition, but one thing I would personally include in a "strict" Turing test is that the human being interrogated ought to be incentivized to cooperate and to make their humanity as clear as possible. The interrogator should similarly be incentivized to reach the right answer.
I'm sure you can find some formulations that are AI-written, because I've used AI for structuring content and developing the site.
As I wrote somewhere else this is made with AI, not by AI.
I've been singing and developing for years. I'm not an expert myself, but I rely on people who are. And if anyone finds anything that looks even remotely wrong, I'll happily take the feedback and update it.
And if you use ChatGPT, use it the same way: be curious about whether it's correct.
Unfortunately, if you reveal that you use AI in your projects, you will instantly turn a segment of your readers against you, even if your project is objectively good.
I suspect a lot of people don't reveal that they use AI for this reason.
> I could just ask ChatGPT for any of these things myself.
You wouldn't know what to ask unless you have expertise.
The question isn't whether an LLM was used, but the trustworthiness of the human(s) behind it. Why would you trust anything by an unknown person on the Internet?
A human being informed of a mistake will usually be able to resolve it and learn something in the process, whereas an LLM is more likely to spiral into nonsense.
It's true that the big public-facing chatbots love to admit to mistakes.
It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. The fact that humans tend to lean too far in that direction shouldn't suggest that the right amount of that behavior is... less than zero.