And that's why traditional search is going to give way to GPT search, which already more or less solves this problem. Feed it web pages and you can talk to it, guide it with a conversation, etc. Here is a realistic example of how searching an intranet could look in five years' time:
-----------
> Employee: Hey CorpGod, where's that document that had some information about our new project management processes? You know, the one I had open like two weeks ago.
> CorpGod: Oh, you mean the one your boss asked you to read by Friday? It's here [link]
> Employee: No, that's not it. It's the one that we copied and made edits to after that, I can't find it.
> CorpGod: That got deleted after Employee2 made a copy and published it. It's here on the Wiki [link]
-----------
GPT already does stuff like that with data from the Internet.
What makes you think GPT can be trained fast enough to keep up with newer documents, discussions, etc.? It takes a long time for GPT models to come out. ChatGPT was trained on 2021 internet data, and a lot of people were involved in making that possible. Do you envision the LLM constantly being fed updates by crawlers and indexers, with no need for human review or intervention?
Yes. If I were on OpenAI's board or product team, this is exactly where I'd be trying to go. It's a purely technical challenge with nothing standing in its way beyond regular engineering problem solving:
1) Scaling
2) Efficiency
3) Quality
4) Security
Some of these may be super-duper hard problems, but hard problems worth solving = massive opportunity.
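On the "constant updates" question: the model wouldn't need to be retrained at all. One plausible architecture is retrieval augmentation, where crawlers keep an index fresh and the LLM only sees the top-ranked documents at query time. Here's a minimal sketch of that idea; the class and function names are hypothetical, and the word-overlap scoring is a stand-in for the embedding search a real system would use.

```python
# Sketch: keep freshness in the index, not in the model weights.
# Crawlers call upsert() as documents change; no retraining happens.

def tokenize(text):
    return set(text.lower().split())

class IntranetIndex:
    def __init__(self):
        self.docs = {}  # doc_id -> text, kept current by crawlers

    def upsert(self, doc_id, text):
        # New or edited documents land here immediately.
        self.docs[doc_id] = text

    def search(self, query, k=2):
        # Naive relevance: count of shared words with the query.
        q = tokenize(query)
        scored = sorted(
            self.docs.items(),
            key=lambda item: len(q & tokenize(item[1])),
            reverse=True,
        )
        return scored[:k]

def build_prompt(index, question):
    # The LLM answers from retrieved context, so it can "know" about
    # a document created five minutes ago.
    hits = index.search(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Under this design the only thing that has to keep up with the intranet is the crawler/indexer pipeline, which is a solved problem; the model itself stays static between releases.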
If IBM could put space-age technology into corporate offices in the '50s, we can put dystopian-era technology into offices in the 2020s. At first it will be stupidly expensive, and only the big players will find the cost-to-benefit ratio worthwhile, but in time it will more than likely be on your phone.
I am having a hard time seeing this.
Intranet/corporate search has always been awful compared to internet search.