
For the sake of discussion, I want to play devil's advocate concerning your point:

> It was an interesting result to me because it shows that experts in a field are not only more likely to recognize when a model is giving incorrect answers but they're also more likely to get correct answers because they are able to tap into a set of weights that are populated by text that knew what it was talking about. Lay people trying to use an LLM to understand an unfamiliar field are vulnerable to accidentally tapping into the "amateur" weights and ending up with an answer learned from random Reddit threads or SEO marketing blog posts, whereas experts can use jargon correctly in order to tap into answers learned from other experts.

Couldn't it be the case that people who are knowledgeable in the topic (in this case, recognizable to the AI by their choice of wording) need different advice than people who know less about the topic?

To give one specific example from finance: if you know a lot about finance, a deep analysis of the best way to trade some exotic options is likely sound advice. On the other hand, for people who are not deeply into finance, the best advice is likely just "don't do it!".



In some cases, sure, but not here: neither option carried more risk than the other; it was just an optimization problem. The first answer the model gave my wife was simply wrong about the math, with no room for subjectivity.


> Couldn't it be the case [...] need different advice than people who know less about the topic?

> for people who are not deeply into finance, the best advice is likely just "don't do it!"

Oh boy, more nanny software. This future blows.


> Oh boy, more nanny software. This future blows.

I think this topic is a bit more complicated: it is really a question of balancing the model between

1. "giving the best possible advice to the respective person given their circumstances" vs

2. "giving the most precise answer to the query to the user"

(if you ask me, the best solution would be to give the user an explicit choice here, though that would overwhelm many users; a sketch of what such a toggle could look like follows after the list below)

- Freedom-loving people will hate it if they don't get option 2.

- On the other hand, many people would actually like to get the advice that is most helpful to them (i.e. option 1), and not the one that answers their question exactly but is likely a bad idea for them.
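
To make the two modes concrete, here is a minimal sketch of such a toggle in Python. Everything in it (the mode names, the prompt wording, the message format) is made up for illustration; it merely mimics common chat-style APIs and is not any vendor's real interface:

    # Hypothetical per-user answering mode; nothing here is a real API.
    DIRECT = (
        "Answer the user's question as precisely as possible, even if "
        "the underlying activity is risky for a novice."
    )
    ADVISORY = (
        "First consider whether what the user is asking for is a good "
        "idea for them; if not, say so before (or instead of) answering."
    )

    def build_messages(question: str, prefers_direct: bool) -> list[dict]:
        # Option 2 from the list above vs. option 1.
        system = DIRECT if prefers_direct else ADVISORY
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ]

    # The finance example from upthread, under each mode:
    q = "What is the best way to trade exotic options?"
    print(build_messages(q, prefers_direct=True))
    print(build_messages(q, prefers_direct=False))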


Everything can always be more complicated, of course. For example:

1. The AI will never know the user well enough to predict what will be best for them. It will resort to treating everybody like children. In fact, many of the crude ways LLMs currently steer and censor are already infantilizing.

2. Substituting "for your own good", as defined by a product vendor's financial interest, for the user's actual benefit is a scam vendors have perpetrated for ages. Even the unsubtle version has plenty of stooges and chumps defending it. Things will not improve in the users' favor once the substitution is harder to notice and easier to pass off as well-intentioned.

3. A bunch of Californians using the next wave of tech to spread cultural imperialism is better than China doing it, I guess. But why are those my options?



