Hacker News
poorman | 5 months ago | on: GPT-OSS vs. Qwen3 and a detailed look how things e...
As we saw with GPT-5, the RL training technique doesn't scale forever.
energy123 | 5 months ago
Unless GPT-5 is 30% cheaper to run than o3, in which case it's scaling brilliantly given the small gap between release dates. People are really drawing too many conclusions from too little information.
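
A toy sketch of that cost-adjusted reading (every number below is a hypothetical placeholder, not real OpenAI pricing or benchmark data): if a successor model matches its predecessor's quality at roughly 30% lower inference cost, capability per dollar still improves even when raw scores look flat.

# Hypothetical numbers only: placeholder benchmark scores and
# per-million-token costs, not real o3 or GPT-5 figures.
o3   = {"score": 80.0, "cost_per_mtok": 10.00}
gpt5 = {"score": 80.0, "cost_per_mtok": 7.00}   # ~30% cheaper to run

def points_per_dollar(model):
    # Toy metric: benchmark points per $1 of output tokens.
    return model["score"] / model["cost_per_mtok"]

for name, m in [("o3", o3), ("GPT-5", gpt5)]:
    print(f"{name}: {points_per_dollar(m):.2f} points/$")

# Flat scores at ~30% lower cost works out to a ~43% gain in
# points/$; on that metric, progress over a short release gap
# still looks like healthy scaling.
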
oezi | 5 months ago
I meant scaling the base training before RL.