
The concern here is mainly about practicality. The original mainframes did not command startup valuations counted in fractions of the US economy, but they did qualify for billions in investment.

This is a great milestone, but OpenAI will not be successful charging 10x the cost of a human to perform a task.



The cost of inference has dropped by ~100x over the past 2 years.

https://a16z.com/llmflation-llm-inference-cost/


Hmm, the link is saying the price of an LLM that scores 42 or above on MMLU has dropped 100x in 2 years, equating GPT-3.5 and Llama 3.2 3B. In my opinion GPT-3.5 was significantly better than Llama 3B, and certainly much better than the also-equated Llama 2 7B. MMLU isn't a great marker of overall model capabilities.

Obviously the drop in cost for capability in the last 2 years is big, but I'd wager it's closer to 10x than 100x.
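
To make the arithmetic concrete, here's a minimal sketch of how such a multiplier is computed. The per-million-token prices below are assumed placeholders, not figures from the a16z post; the point is only that the headline number is a ratio of two prices, so it hinges entirely on which models you treat as "equivalent":

    # Cost-drop arithmetic with assumed, illustrative prices (not from the article).
    old_price_per_mtok = 2.00   # assumed $/1M tokens for the older "equivalent" model
    new_price_per_mtok = 0.02   # assumed $/1M tokens for the newer small model

    print(f"implied drop: {old_price_per_mtok / new_price_per_mtok:.0f}x")  # 100x

    # Treating a pricier model as the fair comparison point (say an assumed
    # $0.20/1M tokens) turns the same ratio into ~10x, which is why the choice
    # of "equivalent" model dominates the headline figure.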


*infernonce


*inference


> OpenAI will not be successful charging 10x the cost of a human to perform a task.

True, but they might be successful charging 20x for 2x the skill of a human.


Or 10x the skill and speed of a human in some specific class of recurrent tasks. We don't need full super-human AGI for AI to become economically viable.
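
Back-of-the-envelope, what matters is cost per completed task rather than cost per hour. The rates and multipliers below are made-up assumptions, just to show the break-even logic:

    # Cost per task under assumed multipliers (illustrative numbers only).
    human_rate = 50.0            # assumed $/hour for a human
    human_hours_per_task = 2.0   # assumed human time per task

    ai_rate = 10 * human_rate                      # AI billed at 10x the hourly rate
    ai_hours_per_task = human_hours_per_task / 10  # but assumed 10x faster

    print(human_rate * human_hours_per_task)  # 100.0 per task (human)
    print(ai_rate * ai_hours_per_task)        # 100.0 per task (AI); any skill gain is upside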


Companies routinely pay short-term contractors a lot more than their permanent staff.

If you can just unleash AI on any of your problems, without having to commit to anything long term, it might still be useful, even if it costs more than equivalent human labour.

(Though I suspect AI labour will generally trend to be cheaper than humans over time for anything AIs can do at all.)


I wouldn't expect it to cost 10x in five years, if only because parallel computing still seems to be roughly obeying Moore's law.


How much does AWS charge for compute?

If it can be spun up with Terraform, I bet you they could.




