zeknife's comments

It's a bit of misdirection, since you actually have more options than just clicking the button.


Thanks for the tip


>The Turing test has a rigorous definition

Does it? Where?



ELIZA fooled plenty of people (both originally and in the study you just linked), but I still wouldn't say ELIZA passed/passes the Turing test in general. It just shows that occasionally, or even frequently, fooling people is not a sufficient proxy for general intelligence. Of course there isn't a standardized definition, but one thing I would personally include in a "strict" Turing test is that the human being interrogated ought to be incentivized to cooperate and to make their humanity as clear as possible. And the interrogator should similarly be incentivized to reach the right answer.


I get the impression LLM agents are a bit like Tamagotchis, but for tech bros.


This is nice, but it looks so suspiciously AI-written that I don't know how to trust it. I could just ask ChatGPT for any of these things myself.


I'm sure you can find some formulations that are AI-written, because I've used AI for structuring the content and developing the site.

As I wrote somewhere else, this is made with AI, not by AI.

I've been singing and developing for years. I'm not the expert, but I draw on those who are. Also, if anyone finds anything that looks remotely wrong, I'll happily take the feedback and update it.

And do use ChatGPT, but use it the same way: be curious about whether it's correct.


Unfortunately, if you reveal that you use AI in your projects, you will instantly turn a segment of your readers against you, even if your project is objectively good.

I suspect a lot of people don't reveal that they use AI for this reason.


> I could just ask ChatGPT for any of these things myself.

You wouldn't know what to ask, unless you have expertise.

The question isn't whether an LLM was used, but the trustworthiness of the human(s) behind it. Why would you trust anything by an unknown person on the Internet?


Ruby has a similarly intuitive `3.times do ... end` syntax.


Go also has (since 1.22):

    for range 5 { ... }
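
For context, ranging over an integer was added in Go 1.22; a minimal runnable sketch:

    package main

    import "fmt"

    func main() {
        // Go 1.22+ "range over int": the body runs 5 times,
        // and the loop variable can be omitted when unused.
        for range 5 {
            fmt.Println("hello")
        }

        // With an index, i runs from 0 through 4.
        for i := range 5 {
            fmt.Println(i)
        }
    }

On earlier Go versions this is a compile error, and you'd write the classic `for i := 0; i < 5; i++` instead.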


A human being informed of a mistake will usually be able to resolve it and learn something in the process, whereas an LLM is more likely to spiral into nonsense.


You must know people without egos. Humans are better at correcting their mistakes, but far worse at admitting them.

But yes, as edge-case handlers, humans still have the edge.


LLMs, by contrast, love to admit their mistakes and self-flagellate, and then go on to not correct them. That seems like a worse tradeoff.


It's true that the big public-facing chatbots love to admit to mistakes.

It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. That humans tend to lean too far in that direction shouldn't suggest that the right amount of that behavior is... less than zero.


Not when your goal is to create ASI: Artificial Sycophant Intelligence.


And this is why LLMs are getting cooked.

They fed internet data into these things and then basically "told" the LLM to behave, because, surprise surprise, humans can sometimes be nastier.


You must know better humans than I do.


At least until they spend some time with it.


It also doesn't need to be good for anything to turn the world upside down, but it would be nice if it were.


I see about 40 paragraphs?

