Anecdote of one: OpenAI's per-token costs have absolutely dropped over time, and that holds even accounting for new SOTA models. I think by now we can all agree that provider inference is largely priced at or above breakeven. So more tokens is a good problem to have.