How is that related? We're talking about continuously sending proprietary code and related IP to a third party, which seems like a pretty valid concern to me.
I, for one, work every day with plenty of proprietary vendor code under very restrictive NDAs. I don't think those vendors would be very happy to learn I let AI tools crawl our whole codebase and send it to remote language models just to get fancy autocompletion.
"Continuously sending proprietary code and related IP to a third party"
Isn't this... GitHub?
Companies and people are doing this all day every day. LLM APIs are really no different; they only seem different once you magic them up as "the AI is doing thinking". In reality it's text -> tokens -> math -> tokens -> text: a transformation of numbers into other numbers.
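To make the text -> tokens -> text part concrete, here's a minimal sketch using OpenAI's open-source tiktoken tokenizer (assuming you have it installed; the snippet and its inputs are just illustrative):

    # text -> tokens -> text round trip: the model only ever sees integers
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    tokens = enc.encode("def secret_algorithm(): ...")
    print(tokens)              # a list of plain integers
    print(enc.decode(tokens))  # decodes back to the original string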
The EULAs and ToS say the providers don't log or retain data from API requests. That's really no different from Google Drive, Atlassian Cloud, GitHub, and any number of other online services where people store valuable IP, proprietary business data, and code.
Do you read every single line of code in every single dependency you have? I don't see how LLMs are more of a threat than a random compromised npm package or something from an OS package manager. Chances are you're already relying on tons and tons of "trust me bro" and "it's open source bro, don't worry, just read the code if you feel like it".
Isn't that what we already do with operating systems, internet providers, &c.?