Hacker News

I am beginning to think that in these discussions these models are functioning more like an obscuring factor than anything else and the discussion is getting bogged down in that, and not the crux of the argument.

They’re giving people plausible deniability in the “chain of responsibility”, and I think if we took away “LLM” and replaced it with “fairground sideshow magic box”, the argument that LLMs are somehow special and deserving of exemptions disappears real quick.



I completely agree.

The Betamax ruling says that a technology with significant non-infringing uses is not inherently infringing.

We've already got precedent saying that AI generated works don't accrue copyright protection, and by the same argument the act of generation by the AI expresses no intent, so infringement or otherwise must be down to the human using the output because the black box itself has no agency.


I agree, and I would prefer to see concrete examples of LLMs being used productively and profitably in the industry in a "disruptive" manner--putting people out of work, etc--before we conclude they're somehow the next big thing. Basically, before claiming LLMs (or generative techniques, more generally) mean that we're on the doorstep of "general" intelligence, show me the door!

The outline of that door might look like industrial adoption of these things for solving some actual problem other than the entertainment value of typing things into the box and seeing what comes out the other side. But so far, as far as I can tell, nobody's actually doing this?


> ...nobody's actually doing this?

I think you're right.

I am a programmer and I use GPT occasionally, and I even pay 20 bucks a month (for now), but even for my job it's not a world-shattering improvement.

> ... the entertainment value of typing things into the box and seeing what comes out ...

I would only add that in a consumer society like ours, entertainment is important. Changes to entertainment seem to have, like, weird ripple effects. Not the knock-down economic disruptions that AI is promising, but I kind of think LLMs are just going to make our culture weirder. I can't anticipate how, but having a bunch of little LLM-powered daemons buzzing around the internet is just gonna be freaky.


> I am a programmer and I use GPT occasionally, and I even pay 20 bucks a month (for now), but even for my job it's not a world-shattering improvement.

I am also a programmer, and when I think about the amount of time I actually spend typing out code, even on a great day where all the stars have aligned just right and I can really bang out some code that's like... idk, 30-50% of my time? Usually it's much less, and I'm doing things like reading documentation, reading code, talking to people, etc. So it's hard to imagine Copilot or whatever making me much more effective at my job, as it can really only help with a fraction of it.

I could see someone making the assumption that being able to delegate programming tasks to a robot assistant might make them more productive, but often I find that I don't really understand a problem fully until I'm in the weeds solving it--by which I mean I haven't specified it completely until I've finished the implementation and written the tests. So I don't know to what extent being able to specify and delegate would really help me be more productive.

> having a bunch of little LLM-powered daemons buzzing around the internet is just gonna be freaky.

Yeah, but they're not super cheap, so they need to get actual work done or there's no reason to run them. Unlike blockchains, they don't have a pyramid scheme holding them up.


I saw a headline that overseas art shops have seen a hit to their order volumes. If true, it could be putting people out of work.



