
Open models are going to win long-term. Anthropic's own research has to use OSS models [0]. China is demonstrating how quickly companies can iterate on open models, giving smaller teams access to, and the ability to augment, a model's capabilities without paying the training cost.

My personal prediction is that the US foundational model makers will OSS something close to N-1 for the next 1-3 iterations. The CAPEX for foundational model creation is too high to justify OSS for the current generation, unless the US Gov steps up and starts subsidizing power, or Stargate delivers 10x what is currently planned.

N-1 model value depreciates insanely fast. Releasing those models as OSS and opening them to specialized use cases and novel developments lets potential value be captured and integrated into future model designs. It's medium risk, as you may lose market share, but high potential value, as the shared discoveries could substantially increase the velocity of next-gen development.

There will be a plethora of small OSS models. Iteration on the OSS releases is going to be biased towards local development, creating more capable and specialized models that work on smaller and smaller devices. In an agentic future, every agent in a domain may have its own model, distilled and customized for its use case without significant cost.
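For context, "distilled" there usually means something like classic knowledge distillation: train a small student model to match a large teacher's output distribution. A minimal sketch of that step, assuming a PyTorch-style setup (the names and the temperature value are illustrative, not taken from any particular release):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions, then match the student to the teacher with
        # KL divergence, scaled by T^2 as in classic knowledge distillation.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Training-loop fragment: the teacher is frozen, only the student updates.
    # with torch.no_grad():
    #     teacher_logits = teacher(batch)
    # loss = distillation_loss(student(batch), teacher_logits)

The expensive part (the teacher) is a sunk cost someone else already paid; the student run is comparatively cheap, which is what makes per-agent models plausible.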

Everyone is racing to AGI/SGI. The models along the way are there to capture market share and gather data for training and evaluations. Once someone hits AGI/SGI, the consumer market is nice to have, but the real value is in novel developments in science, engineering, and every other aspect of the world.

[0] https://www.anthropic.com/research/persona-vectors > We demonstrate these applications on two open-source models, Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct.



I'm pretty sure there's no reason Anthropic has to do research on open models; they just demonstrated their result on open models so that you can reproduce it without having access to their own.


> Open models are going to win long-term.

[2 of 3] Assuming we pin down what "win" means... (which is definitely not easy)... What would it take for this not to be true? There are many ways, including but not limited to:

- publishing open weights helps your competitors catch up

- publishing open weights doesn't improve your own research agenda

- publishing open weights leads to a race dynamic where only the latest and greatest matters; leading to a situation where the resources sunk exceed the gains

- publishing open weights distracts your organization from attaining a sustainable business model / funding stream

- publishing open weights leads to significant negative downstream impacts (there are a variety of uncertain outcomes, such as: deepfakes, security breaches, bioweapon development, unaligned general intelligence, humans losing control [1] [2], and so on)

[1]: "What failure looks like" by Paul Christiano : https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-...

[2]: "An AGI race is a suicide race." - quote from Max Tegmark; article at https://futureoflife.org/statement/agi-manhattan-project-max...


> Once someone hits AGI/SGI

I don't think there will be such a unique event. There is no clear boundary; this is a continuous process. Models just get slightly better than before.

Also, another dimension is the inference cost to run those models. It has to be cheap enough to really take advantage of it.

Also, I wonder what would be a good target for making a profit and developing new things. Isomorphic Labs seems like a good example: the company already exists, and people are working on it. What else?


> I don't think there will be such a unique event.

I guess it depends on your definition of AGI, but if it means human-level intelligence, then the unique event will be the AI having the ability to act on its own without a "prompt".


> the unique event will be the AI having the ability to act on its own without a "prompt"

That's super easy. The reason they need a prompt is that this is the way we make them useful. We don't need LLMs to generate an endless stream of random "thoughts" otherwise, but if you really wanted to, just hook one up to a webcam and microphone stream in a loop and provide it some storage for "memories".
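To make that concrete, a rough sketch of such a loop. Every function here is a hypothetical stub standing in for a real webcam capture, speech-to-text system, and LLM call:

    import time

    def capture_frame():
        # Stub: in practice, grab a webcam frame and caption it (e.g. with a VLM).
        return "a desk with a coffee mug"

    def transcribe_audio():
        # Stub: in practice, run speech-to-text on the microphone stream.
        return "(silence)"

    def call_model(prompt):
        # Stub: in practice, send the prompt to whatever LLM you are running.
        return f"Noted: {prompt[-60:]}"

    memories = []  # naive append-only "memory" store

    for _ in range(10):  # bounded here for the example; in spirit it never stops
        observation = f"You see: {capture_frame()}\nYou hear: {transcribe_audio()}"
        context = "\n".join(memories[-20:])  # feed the most recent memories back in
        thought = call_model(context + "\n" + observation)
        memories.append(thought)
        time.sleep(1.0)  # pace the loop

Nothing about this makes the output useful, which is the point: the prompt is there for usefulness, not because running without one is hard.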


And the ability to improve itself.


I'm a layman, but it seems to me that the industry is going towards robust foundational models onto which we plug tools, databases, and processes to expand their capabilities.

In this setup, OSS models could be more than enough and capture the market, but I don't see where the value would be in a multitude of specialized models we have to train.


There's no reason that models too large for consumer hardware wouldn't keep a huge edge, is there?


That is fundamentally a big O question.

I have this theory that we simply got over a hump by utilizing a massive processing boost from GPUs as opposed to CPUs. That might have been two to three orders of magnitude more processing power.

But that's a one-time success. I don't think hardware has any large-scale improvements coming, because 3D gaming already plumbed most of that vector-processing hardware development over the last 30 years.

So will software and better training models produce another couple orders of magnitude?

Fundamentally we're talking about nines of accuracy. What is the processing power required for each nine of accuracy? Is it linear? Is it polynomial? Is it exponential?
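One way to frame it: if error follows a power law in compute, error ~ C^(-alpha) (an assumption, not a given), then each extra nine of accuracy means cutting error by 10x, which multiplies the required compute by 10^(1/alpha). That is already exponential in the number of nines, and brutally so for small alpha. Back-of-envelope, with purely illustrative alpha values:

    # Assumes error ~ C**(-alpha); the alpha values below are illustrative only.
    def compute_multiplier_per_nine(alpha):
        # One extra nine = 10x lower error, so compute scales by 10**(1/alpha).
        return 10 ** (1 / alpha)

    for alpha in (1.0, 0.5, 0.1):
        print(f"alpha={alpha}: each extra nine costs {compute_multiplier_per_nine(alpha):,.0f}x compute")
    # alpha=1.0 -> 10x per nine; alpha=0.5 -> 100x; alpha=0.1 -> 10,000,000,000x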

It just seems strange to me that, with all the AI knowledge sloshing through academia, I haven't seen any basic analysis at that level, which is something that's absolutely going to be necessary for AI applications like self-driving once you get the insurance companies involved.


Could be that you need massive amounts of data from those super expensive production training runs, and it's tough to figure that out from publicly available data and academic computing resources. Maybe the combination of gradual efficiency improvements, bigger compute clusters, and test-time reasoning keeps the cloud models in the lead. Plus, even if it's exponential scaling, wouldn't that still favor the big data centers? That would put local/edge models at a serious disadvantage.


> Open models are going to win long-term.

[1 of 3] For the sake of argument here, I'll grant the premise. Even if it turns out to be true, it glosses over other key questions, including:

For a frontier lab, what is a rational period of time (according to your organizational mission / charter / shareholder motivations*) to wait before:

1. releasing a new version of an open-weight model; and

2. deciding how much secret sauce to hold back?

* Take your pick. These don't align perfectly with each other, much less the interests of a nation or world.


> N-1 model value depreciates insanely fast

This implies LLM development isn't plateaued. Sure, the researchers are busting their asses quantizing, adding features like tool calls and structured outputs, etc. But soon enough N-1 ~= N.


To me it depends on two factors: hardware becoming more accessible, and the closed-source offerings becoming more expensive. Right now (1) it's difficult to get enough GPUs to do local inference at production scale, and (2) it's more expensive to run your own GPUs than to use the closed-source models.
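A toy break-even calculation for that second factor; every number below is a made-up placeholder, not anyone's actual pricing:

    gpu_monthly_cost = 2500.0        # hypothetical: one rented GPU server, $/month
    local_tokens_per_month = 400e6   # hypothetical sustained throughput, tokens/month
    api_price_per_mtok = 3.0         # hypothetical closed-model price, $/million tokens

    local_price_per_mtok = gpu_monthly_cost / (local_tokens_per_month / 1e6)
    print(f"local: ${local_price_per_mtok:.2f}/Mtok vs API: ${api_price_per_mtok:.2f}/Mtok")
    # With these placeholders, local loses ($6.25 vs $3.00); it only wins once
    # utilization is high enough that the amortized per-token cost drops under
    # the API price. Idle GPUs flip the comparison further against local.

Which is why the two factors move together: cheaper or denser hardware pushes the local number down, while price increases on the closed offerings push the other number up.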


> Open models are going to win long-term.

[3 of 3] What would it take for this statement to be false or missing the point?

Maybe we find ourselves in a future where:

- Yes, open models are widely used as base models, but they are also highly customized in various ways (perhaps by industry, person, attitude, or something else). In other words, this would be a blend of open and closed.

- Maybe publishing open weights of a model is more-or-less irrelevant because it is "table stakes": all the key differentiating advantages have to do with other factors, such as infrastructure, non-LLM computational aspects, regulatory environment, affordable energy, customer base, customer trust, and probably more.

- The future might involve thousands or millions of highly tailored models.



