Hacker News

When a model "reasons through" a problem, it's just outputting text that is statistically likely to appear in the context of "reasoning through" things. There is no intent, no consideration of the available options, their implications, or the possible outcomes.
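The point can be illustrated with a toy sketch. This is not how an actual LLM works internally (those use large neural networks over token probabilities), but it shows the same principle in miniature: a model that emits "reasoning-sounding" text purely because each word is statistically likely to follow the previous one, with no intent behind it. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Tiny corpus of "reasoning-flavored" text. A real model is trained on
# vastly more data, but the mechanism sketched here is the same idea:
# predict a likely next token given the context.
corpus = (
    "let me think step by step . "
    "first consider the options . "
    "then consider the outcomes . "
    "so the answer follows ."
).split()

# Count which word follows which (a bigram model).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, n=8, seed=0):
    # Repeatedly sample a statistically likely next word.
    # Nothing here "considers options" -- it only samples from counts.
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = followers.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("let"))
```

The output reads like the start of a reasoning chain, yet the program has no model of the problem at all, which is the commenter's point: plausible-looking reasoning text and actual deliberation are not the same thing.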

However, the result often looks the same, which is neat.


