Language models have completely overhauled the NLP space. If you have a problem involving natural language data, you can prototype a working pipeline in an afternoon. Often this prototype is very close in performance to a 'proper' solution.
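Concretely, the "afternoon prototype" is often little more than a prompt wrapped in a function. Here's a minimal sketch of what I mean, assuming the OpenAI Python client; the model name, prompt, and label set are placeholders, not a recommendation:

```python
# Minimal sketch of an "afternoon" NLP pipeline: classify free-text feedback
# by sentiment with one LLM call per document. Model name and prompt are
# illustrative assumptions; swap in whichever provider/client you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for a piece of text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text. "
                        "Answer with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(classify_sentiment("The checkout flow kept timing out and support never replied."))
```

The task itself isn't the point: the same thin wrapper covers extraction, routing, summarisation, and similar jobs, which is why a prototype like this often lands surprisingly close to a purpose-built model.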
> If you have a problem involving natural language data
That's a big "if", isn't it? We're seeing claims like "The future is an LLM at the front of just about everything: “Human” is the new programming language"[1] but so far that's not panning out, and it seems really dubious. Natural language seems like an absolutely atrocious user interface. As a machine operator, I'm going to use levers, wheels, and buttons to control the machine. As a computer programmer I'm going to use programming languages to control the machine. I'm not going to speak English to it.
So, ok, this marks an advance in NLP. How do we get from there to "omg it's gonna change everything!!!1111oneeleven"?
It seems like they’ve accelerated our capabilities - previously tiresome and difficult-to-automate things are easier - but have done very little for our fundamental understanding. We have a tool, but cannot dissect it and explain how it fits together. LLMs themselves don’t appear (happy to be wrong here) to have actually improved our understanding of NLP and the associated theory. Yeah, they can parse a sentence and bang out some JSON/sql/mid-tier-essay, but these models (so far) aren’t helping us figure out how and why, and I think that understanding is critical to further progress. Anthropic seems to be trying to push a bit further on that front at least, but for all we know, they might just turn into another scummy OpenAI on us.
I think in order for something to properly be a tool it needs to behave deterministically. I don't need to understand every particular of how it works internally, but as the user I need to be able to rely on consistent, predictable results. Otherwise it's worse than useless. Hand tools, machine tools, programming languages, vehicles, CAD/CAM/CAE tools are all like this. You may have to do some learning to become proficient in the tool, but once you're proficient in its use it's very unlikely to ever truly surprise you. Generally those "surprising" experiences are pretty traumatic--hopefully only emotionally (if you've ever experienced a chainsaw kickback you know what I mean).
So I'm not sure how I could use an LLM as a tool, but maybe I'm just not a sufficiently proficient user? It seems like they're just too full of "surprises".