Hacker News

Here's the only reason you need to avoid Anthropic entirely, as well as OpenAI, Microsoft, and Google who all have similar customer noncompetes:

> You may not access or use the Services in the following ways:

> ● To develop any products or services that supplant or compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models

There is only one viable option in the whole AI industry right now:

Mistral



Funny how they all used millions (?) of texts without permission to build their models, yet if you want to train your own model based on theirs, which only works because of the texts they used for free, that is prohibited.


hotel california rules


I think this is a great idea. May I suggest this for the new VSCode ToS: "You aren't allowed to use our products to write competing text editors". Maybe ban researching competing browser development using Chrome. The future sure is exciting.


I think 99% of users aren't trying to train their own LLM with their data


However, anyone who uses Claude to generate code is 'supplanting' OpenAI's Code Interpreter mode (at the very least if it's Python). So, once Code Interpreter gets into Claude, that whole use case violates the TOS.


Where in the OAI TOS does it say you cannot subscribe to other AI platforms?


Nowhere.

Rather I was pointing out that this clause in Anthropic’s TOS is so broad that if Claude ever adds code interpreter you can never use it as a code generator again.


Your logic being that Claude-as-code-gen competes with a putative future Code Interpreter-like product from Anthropic?

That seems like a wild over-reading of the term. You're prevented from 'develop[ing] a product or service'. Using Claude to generate code, with or without sandboxed execution, is not developing a product or service.

If you're offering an execution sandbox layer over Claude to improve code gen, and selling that as a product or service, and they launch an Anthropic Code Interpreter ... then you might have an issue? But "you can't undercut our services while building on top of our services" isn't a surprising term to find in a SaaS ToS...


Which part of the parent comment suggested they wanted to connect to other platforms and that would somehow violate the TOS?


The entire part? I can't help you with fundamental reading.


Sorry, didn't mean to offend; it's okay if you don't want help with understanding.


I'm not offended, but I don't understand what your confusion is. I haven't said anything that is not easy to understand.


Reminder that OpenAI's terms are much more reasonable:

> (e) use Output (as defined below) to develop any artificial intelligence models that compete with our products and services. However, you can use Output to (i) develop artificial intelligence models primarily intended to categorize, classify, or organize data (e.g., embeddings or classifiers), as long as such models are not distributed or made commercially available to third parties and (ii) fine tune models provided as part of our Services;


Where do you see that? I only see “e” and no “however”:

> For example, you may not:

> Use Output to develop models that compete with OpenAI.

That’s even less reasonable than Anthropic because “develop models that compete” is vague


What about Meta or H2O?


Never heard of H2O, but Llama has a restrictive license. Granted, it's like "as long as you have fewer than 700M monthly users" or something crazy like that.

It’s a “you can use this as long as you’re not a threat and/or an acquisition target” type license.


llama has a restrictive license, but pytorch doesn't.


Is that legally enforceable?



