I have a somewhat unique answer for that: I started by building a product, and ended up building a dev platform for LLM-based products (more specifically, a dev platform for JSON-outputting LLM structured tasks).
Here's the story:
At first I was building a tool for stock analysis: the user describes in free-form language which companies they want to compare, along with a time period, and the requested stocks show up on a graph. They can then iterate on it further, adding companies and changing the range, all in free-form language (I had many more analysis functions planned).
After running into some unusual dev challenges, I ended up not releasing the product (possibly I will sometime in the future), and switched to working on a dev platform to address those challenges.
I was using what I called an 'LLM structured task': basically instructing the LLM to perform some task on the user input and output JSON that my backend can work with (in the case described, finding the mentioned companies and an optional time range, and returning stock symbols and string-formatted dates).
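To make the idea concrete, here's a minimal sketch of such a structured task. The prompt wording and JSON schema here are my own illustration (the post doesn't show the actual prompt); the point is that the model is asked for JSON only, and the backend parses and validates it before using it:

```python
import json
from datetime import date

# Hypothetical prompt for the stock-comparison task described above.
# The model is instructed to respond with JSON only, in a fixed shape.
PROMPT_TEMPLATE = """\
Extract the companies and the optional time range from the user's request.
Respond with JSON only, exactly in this shape:
{{"symbols": ["<ticker>", ...], "start": "YYYY-MM-DD" or null, "end": "YYYY-MM-DD" or null}}

User request: {user_input}
"""

def build_prompt(user_input: str) -> str:
    return PROMPT_TEMPLATE.format(user_input=user_input)

def parse_response(raw: str) -> dict:
    """Parse and minimally validate the model's JSON output."""
    data = json.loads(raw)
    if not isinstance(data.get("symbols"), list):
        raise ValueError("response is missing a 'symbols' list")
    for key in ("start", "end"):
        if data.get(key) is not None:
            date.fromisoformat(data[key])  # raises if not YYYY-MM-DD
    return data

# What a well-behaved model response might look like:
raw = '{"symbols": ["AAPL", "MSFT"], "start": "2023-01-01", "end": "2023-12-31"}'
result = parse_response(raw)
```

The backend then only ever touches `result["symbols"]` and the parsed dates, never the raw model text.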
The prompting turned out to be non-trivial, and kind of fragile: things broke with even minor iterations on the prompt or on model configurations. So I developed a platform to help with that, testing (templated) prompt versions, as well as model configurations, on whole collections of inputs at once, making sure nothing breaks during development (or after).
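The kind of regression check this implies can be sketched like so. Everything here is illustrative, not the platform's actual API: `call_llm` is a hypothetical stand-in for a real model call, and the test cases are made up. The idea is simply to run every prompt/config variant over a fixed collection of inputs and compare the structured outputs against expectations:

```python
import json

def call_llm(prompt: str, config: dict) -> str:
    # Stub standing in for a real model API call; a real implementation
    # would send `prompt` with the given model `config` and return the text.
    return '{"symbols": ["AAPL"], "start": null, "end": null}'

# A fixed collection of inputs with their expected structured outputs.
TEST_CASES = [
    {"input": "show me Apple",
     "expected": {"symbols": ["AAPL"], "start": None, "end": None}},
]

def run_suite(template: str, config: dict) -> list:
    """Run one prompt template + model config over all cases; return failures."""
    failures = []
    for case in TEST_CASES:
        raw = call_llm(template.format(user_input=case["input"]), config)
        got = json.loads(raw)
        if got != case["expected"]:
            failures.append((case["input"], got))
    return failures

failures = run_suite("User request: {user_input}", {"temperature": 0})
```

An empty `failures` list means this prompt/config combination still produces the expected JSON for every input in the collection.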
* If you're interested, you're welcome to check it out at https://www.promptotype.io