In my unqualified opinion, LLMs would do better at niche languages (or even specific versions of mainstream languages) and niche frameworks if they were better at consulting the documentation for the language or framework. For example, the user could give the LLM a link to the docs or an offline copy, and the LLM would prioritise the docs over its pretrained knowledge. Currently this isn't feasible because 1. only limited context is shared with the actual code, and 2. RAG is a one-way injection into the LLM; the LLM usually won't "ask for a specific docs page" even when it probably should.
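One way point 2 could be addressed is by exposing the docs as a tool the model can call, so retrieval becomes two-way: the model requests a page by name and the handler's output is injected into the context. A minimal sketch, with a hypothetical `fetch_docs_page` handler and the "offline copy" reduced to a dict (a real setup would read files the user supplied):

```python
# Hypothetical tool handler letting an LLM pull a specific docs page on
# demand, instead of relying on one-way RAG injection. The offline docs
# copy is mocked as a dict; page names and contents are made up.
OFFLINE_DOCS = {
    "routing": "Framework v2 routing: define routes with app.route(path)...",
    "migrations": "Framework v2 migrations: run the migrate command...",
}

def fetch_docs_page(page: str) -> str:
    """Return the requested docs page so it lands in the model's context."""
    if page not in OFFLINE_DOCS:
        # Listing available pages lets the model retry with a valid name.
        return "Unknown page. Available: " + ", ".join(sorted(OFFLINE_DOCS))
    return OFFLINE_DOCS[page]

# The model would emit a tool call such as
#   {"name": "fetch_docs_page", "arguments": {"page": "routing"}}
# and the returned text would then take priority over pretrained knowledge.
print(fetch_docs_page("routing"))
print(fetch_docs_page("nonexistent"))
```

This still leaves point 1 (the docs compete with the actual code for context), but it at least lets the model decide *which* page it needs.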
100% agreed on both points. Point 1 relates to https://news.ycombinator.com/item?id=43486526 as well. It's one of the biggest challenges, though maybe it'll improve automatically as models get bigger context windows (we can't assume that, though)?