I’m glad the conversation about LLMs in UI is getting more nuanced!
For a while there, it seemed like conversational interfaces (whether typed or voice-to-text) were the only way people could imagine using LLMs. Everything was an empty text box staring back at you.
It seems like that may have just been because a chat box is the easiest thing to wire up to an API. Now that we've all had some time to work with these models, some interesting UI experiments are starting to peek out.