
SaaS companies tended to need substantial engineering resources because of the software stacks and squad-style team structures in place -- they also leaned heavily on costly metered infra

I'm not seeing anything like the same headcount or stack complexity in this wave (Vercel, Firebase etc.), and the vendors involved get cheaper every day ... along with the increasing ability to run models locally with no metered costs at all



If you're in the AI space you need roughly the same infrastructure as any other SaaS PLUS your LLM costs. Take a look at AWS Bedrock pricing [1] and you'll soon realize your costs can escalate rapidly, unlike traditional SaaS infra, whose costs are easier (er, less difficult) to predict.

[1] https://aws.amazon.com/bedrock/pricing/
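To make the escalation concrete, here's a back-of-envelope sketch comparing roughly flat infra cost with per-token metered cost. All the rates and usage numbers are hypothetical placeholders, not actual Bedrock prices:

```python
# Back-of-envelope: fixed SaaS infra cost vs metered LLM cost.
# All prices below are illustrative placeholders, NOT real Bedrock rates.

def monthly_llm_cost(users, requests_per_user, in_tokens, out_tokens,
                     price_in_per_1k=0.003, price_out_per_1k=0.015):
    """Metered cost: scales linearly with usage."""
    per_request = ((in_tokens / 1000) * price_in_per_1k
                   + (out_tokens / 1000) * price_out_per_1k)
    return users * requests_per_user * per_request

def monthly_fixed_cost(base=500.0):
    """Traditional SaaS infra: roughly flat until you outgrow a tier."""
    return base

for users in (100, 1_000, 10_000):
    llm = monthly_llm_cost(users, requests_per_user=50,
                           in_tokens=1_500, out_tokens=500)
    print(f"{users:>6} users: fixed ~${monthly_fixed_cost():.0f}/mo, "
          f"LLM ~${llm:,.2f}/mo")
```

The point is the shape of the curve, not the exact numbers: the fixed line stays flat while the metered line grows linearly with every user and every request, which is why LLM costs are so much harder to budget for.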


The dirty secret of an awful lot of these LLM SaaS companies is that AWS is giving them tens of thousands of dollars in credits to bootstrap, and AWS gets paid back out of the 8-figure investments those companies raise from VCs. Anyone who is putting their own money on the line for anything beyond the very first $100 or so for a PoC is being conned.


How?

Now you need to deal with all the traditional infra, plus a bunch of specific infra dealing with LLM apps, even if you’re just a wrapper using vendor APIs.

How are things in any way simplified? I only see more layers of complexity.



