Today, yes, but tomorrow clients could use a GPT-7 consultant with better context, with access to all of the client's mail and data. It's fast, cheap, and a polymath.
I don’t understand why people see this future as guaranteed. Just because GPT-4 was a great improvement on GPT-3.5 does not imply this approach will keep accelerating at the same rate. I am not saying it definitely won’t happen, but I find the confidence that it will pretty baffling.
It seems unlikely to me that this technology has plateaued. The gains have been exponential. It strikes me as prudent to at least consider continued improvement a possibility and try to work out what that means for society. The best outcome on the table so far is that people lose jobs and business owners make more money. Some amount of that is unavoidable and maybe even beneficial, but too much of it is unsustainable. Better to have a plan and not need it, and all that.
I don’t take issue with considering continued improvement, and I largely agree with this. My issue is with all the proponents who speak about it as if it were guaranteed.
Why do you think such a world would still have companies, governments, or even money? We are well outside the horizon of predictability if LLMs reach AGI-level capability.