
If the LLM is "making up" APIs that don't exist, I'm guessing they've been introduced as the model tried to generalize from the training set, as that's the basic idea? These invented APIs might represent patterns the model identified across many similar libraries, or other texts people have written on the internet. Wouldn't that actually be a sort of good library to have available, if it didn't already exist? Maybe we could use these "hallucinations" in a different way, if we could better tell which parts are "hallucination" and which aren't. Maybe just as starting points for ideas, if nothing else.


In my experience, what's being made up is an incorrect name for an API that already exists elsewhere. They're especially bad at recommending deprecated methods on APIs.


It’s not that the imports don’t exist; they did in the original codebase the LLM creator stole from by ignoring the project's license terms.


Back in GPT3 days I put together a toy app that let you ask for a python program, and it hooked __getattr__ so if the LLM generated code called a non-existent function it could use GPT3 to define it dynamically. Ended up with some pretty wild alternate reality python implementations. Nothing useful though.
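A minimal sketch of what that trick could look like, using Python's module-level `__getattr__` hook (PEP 562, Python 3.7+). The LLM call is stubbed out with a canned code generator; in the real toy app it would presumably be a GPT-3 completion request. The module name `magic` and the function `fake_llm_generate` are illustrative, not from the original app.

```python
import types

def fake_llm_generate(name):
    # Stand-in for the LLM: return source code for the requested function.
    # A real version would prompt the model with the name and calling context.
    return f"def {name}(*args):\n    return ('generated', {name!r}, args)\n"

def make_magic_module(llm=fake_llm_generate):
    mod = types.ModuleType("magic")

    def __getattr__(name):
        # Invoked only when `name` isn't found in the module: ask the LLM
        # for a definition, exec it into the module's namespace, return it.
        src = llm(name)
        exec(src, mod.__dict__)
        return mod.__dict__[name]

    # PEP 562: a __getattr__ in a module's dict handles missing attributes.
    mod.__getattr__ = __getattr__
    return mod

magic = make_magic_module()
result = magic.frobnicate(1, 2)  # frobnicate didn't exist until this lookup
```

Once defined, the function stays in the module's dict, so subsequent calls skip the "LLM" entirely.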


> wouldn't that actually be a sort of good library to have available if it wasn't already

I for one do not want my libraries' APIs defined by the median person commenting about code or asking questions on Stack Overflow.

Also, every time I see people using LLM output as a starting point for software architecture, the results become completely useless.


The average of the internet is heavily skewed towards the mediocre side.



