If the LLM is "making up" APIs that don't exist, I'm guessing they've been introduced as the model tried to generalize from the training set, since that's the basic idea? These invented APIs might represent patterns the model identified across many similar libraries, or other texts people have written on the internet. Wouldn't that actually be a sort of good library to have available, if it didn't already exist? Maybe we could use these "hallucinations" in a different way, if we could better tell which parts are "hallucination" and which aren't. Maybe just as starting points for ideas, if nothing else.
In my experience, what's being made up is usually an incorrect name for an API that already exists elsewhere. They're also especially prone to recommending deprecated methods on APIs.
Back in GPT-3 days I put together a toy app that let you ask for a Python program, and it hooked __getattr__ so that if the LLM-generated code called a non-existent function, it could use GPT-3 to define it dynamically. Ended up with some pretty wild alternate-reality Python implementations. Nothing useful, though.
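The hook amounts to something like this, a minimal sketch using the module-level __getattr__ mechanism (PEP 562, Python 3.7+). The `llm_define` helper here is a hypothetical stand-in for the actual GPT-3 call and just returns a stub:

```python
import types

def llm_define(name):
    """Hypothetical stand-in for "ask the LLM to write this function".
    Here it just returns a stub that reports how it was called."""
    def stub(*args, **kwargs):
        return f"<{name} called with args={args} kwargs={kwargs}>"
    stub.__name__ = name
    return stub

# A synthetic module whose missing attributes spring into existence on
# demand via the module-level __getattr__ hook.
reality = types.ModuleType("alternate_reality")

def _define_on_demand(name):
    # Called only when normal attribute lookup on the module fails.
    fn = llm_define(name)
    setattr(reality, name, fn)  # cache it so the next lookup is ordinary
    return fn

reality.__getattr__ = _define_on_demand

# Any function the generated code asks for now "exists":
print(reality.parse_quantum_json("{}"))
```

Pointing the generated code's globals at a module like this means every unresolved name becomes a prompt instead of a NameError, which is exactly how the alternate realities pile up.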