
No, the price of a contract for future delivery to a specific location went negative just before the delivery date, at a time when there was almost no unoccupied oil storage or transport capacity at said location.

In that circumstance you might sell your right to some oil for almost nothing rather than deal with the consequences of accepting it. You might even pay someone to take it off your hands.

Options are a "right but not an obligation". Physically settled futures are an obligation at maturity.


Thanks for the correction, that is true!

(Neither instrument is that popular in my country, so in daily language I use them as synonyms, even though they are different animals in some details.)


There are cash-settled futures that are closer to options in that they're purely financial, but even those don't have optionality at maturity.

Generally a dangerous thing to have as synonyms regardless; otherwise you end up with a coal barge in the East River: https://thedailywtf.com/articles/special-delivery


Ah, memories: The Daily WTF was quite popular 20 years ago! :)

Regarding this story: I'd guess that for most private participants, physical delivery is not possible / is excluded.


Your guess would be wrong. If you’re trading physically settled commodity futures, and don’t close before the settlement date, you are now the owner of a large quantity of your commodity of choice.

It just happened today: https://www.reddit.com/r/wallstreetbets/comments/1siq4m2/any...


No one is going to like this answer, but there’s a simple solution: pay for API tokens and adjust your use of CC so that the actions you have it take are worth the cost of the tokens.

It’s great to buy dollars for a penny, but the guy selling em is going to want to charge a dollar eventually…


> ...pay for API tokens and adjust your use of CC so that the actions you have it take are worth the cost of the tokens

Do you feel there is enough visibility and stability around the "Prompt -> API token usage" connection to make a reliable estimate as to what using the API may end up costing?

Personally, it feels like paying for Netflix based on "data usage" without having any way for me to know ahead of time how much data any given episode or movie will end up using, because Netflix is constantly changing the quality/compression/etc. on the fly.


Time is a relatively good proxy for spend. There are also more ex post diagnostics, like the token count and cost it can write to the status line.

I agree that ex ante it’s tough, and they could benefit from some mode of estimation.

Perhaps we can give tasks sizes, like T-shirts? Or a group of claudes can spend the first 1M tokens assigning point values to the prospective tasks?
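For the ex post side, the arithmetic itself is trivial once you have the token counts; a minimal sketch (the per-million-token prices here are made-up placeholders, not any provider's actual rates):

    # Rough cost estimate from token counts a tool reports after the fact.
    # Prices below are illustrative placeholders; check your provider's rate card.
    PRICE_PER_MTOK = {
        "input": 3.00,    # USD per million input tokens (assumed)
        "output": 15.00,  # USD per million output tokens (assumed)
    }

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Approximate dollar cost of one request or session."""
        return (input_tokens / 1_000_000) * PRICE_PER_MTOK["input"] \
             + (output_tokens / 1_000_000) * PRICE_PER_MTOK["output"]

    # e.g. a session that read ~200k tokens of context and wrote ~30k tokens
    print(f"~${estimate_cost(200_000, 30_000):.2f}")

The hard part is exactly what you say: estimating those counts before the work starts.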


Even time doesn't feel like it would provide consistent information.

Take the response on another post about Claude Code.

https://news.ycombinator.com/item?id=47664442

This reads like even if you had a rough idea today about what usage might look like, a change deployed tomorrow could have a major impact on usage. And you wouldn't know it until after you were already using it.


I'm forced to do this at work. It adjusts the net value to very close to zero. GitHub's pay-per-prompt pricing model is phenomenal for users, to the point of blowing Anthropic's subscription offering out of the water, let alone API pricing. At Copilot pricing, it's quite a useful tool if carefully managed. At API pricing, it's very hard to find a use case for AI.

Of course, I have no idea how MS is justifying the Copilot pricing. I can't imagine any world in which it is sustainable, so I'm trying to get as much as I can out of it now before they jack up prices.


This is it. These subscriptions have been heavily subsidized, which was fine when usage was much lower overall. But with so many folks trying to use the tools and soaking up all the chips, something has to give.

Now we’re going to find out what these tools are really worth.


It's not a subsidy. It's predatory pricing, and it should be illegal: I offer you a service at a loss to remove competition, then increase prices once you are stuck with it.

Actually, that is illegal.

Now we just have to vote for the DOJ that will enforce it. Or at least not just roll over for donations to their crypto scams.

Yes, but if it's never enforced, is it really?

That's the VC playbook.

The problem with tokens is that they create the wrong incentive: the quicker the model arrives at the solution, the fewer tokens you have to buy.

So I've noticed the model purposefully coming up with dumb ideas or running around in circles, and only when you tell it that it's trying to defraud you does it suddenly come back with the right solution.


I just want a little predictable insight into how much I get. For example, at a buffet, I know I can only eat so much food and can plan around it. This is like going to a buffet and not knowing how many plates I can take or how big the plates are, and it changes each week, and yet I have to keep paying the same price. Except it's not about eating, it's about my work and deadlines and promises and all that.

That's what these providers want as well, but from the other side. They want to know that a customer won't be able to eat more than a certain number of servings, since they need to pay for each of those servings.

It works out even if some customers are able to eat a lot, because people on average have a certain limit. The limits of computers are much higher.


Fair, and I think openclaw and all the orchestrators have agents maxing out the plans. So maybe they figure out a new tier that is agent-run vs. human-run. Agents are much more insatiable, whereas humans have a limit. Not sure if it'd be possible to split between those two different modes, but I think that might address the appetite issue better.

I think it would be impossible to find a price point for a monthly subscription that would be both profitable to the provider and attractive to the customer. If anything, as customers would now necessarily be paying even more for agentic use, they'd make certain they were using the agent as effectively as they can, meaning it would be even more costly for the provider.

Pay-per-token is really the only way it can work. If some kind of fixed monthly price is desirable, then there should be a quota the user can assign, and the agent could e.g. slow itself down by 50% when 50% of the quota is spent, by another 50% at 75%, etc., to make it last longer (a rough sketch below).

As a side thought, I wonder how it could affect an agent's behavior if information about this token usage/limit were surfaced to it.
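To make the slowdown idea concrete, a minimal sketch of that kind of quota-based throttling (the thresholds and delay amounts are just the illustrative numbers above, not how any provider actually behaves):

    import time

    class QuotaThrottle:
        """Slow an agent down as it burns through a fixed token quota."""

        def __init__(self, quota_tokens: int):
            self.quota = quota_tokens
            self.used = 0

        def record(self, tokens: int) -> None:
            """Call after each request with the tokens it consumed."""
            self.used += tokens

        def wait(self) -> None:
            """Call before each request; sleeps longer as the quota runs out."""
            frac = self.used / self.quota
            if frac >= 0.75:
                time.sleep(4.0)   # heavy slowdown past 75% of the quota
            elif frac >= 0.50:
                time.sleep(1.0)   # mild slowdown past 50%
            # below 50% of the quota: no extra delay

    # usage: throttle = QuotaThrottle(5_000_000)
    #        throttle.wait(); ...make the request...; throttle.record(tokens_used)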


Ironically, chatting with Gemini helped me realize that telecoms have often solved this problem of unlimited usage with rate/speed limiting. If one goes over 10GB or whatnot of data, then the speed drops from 5G to 3G, and data remains unlimited.

I wonder if there could be something like that, maybe even progressive rate limiting, where after a certain number of tokens (or another metric of use) the speed slows down a LOT.

Not saying that I would love that as a consumer, as I'd prefer an all-you-can-eat, unlimited data plan, but I wonder if that would be a compromise that could work, as it seems to have worked OK in the telecom space.

edit: the nerd in me loves the irony of me making the above comment and then later seeing your username as flux :-)


When you hire a person, you don't know what you are going to get out of them today.

If an hour of an excellent developer's time is worth $X, isn't that the upper bound of what the AI companies can charge? If hiring a person is better value than paying for an AI, then do that.


Fair on not knowing what you'll get out of someone. But if that varies wildly, I may not want to hire that person. Even with employment, predictability matters a lot. If they underperform too much, I might feel annoyed. If they overperform, I might feel guilty.

They can charge whatever they want; I think many people like to make business decisions based on relative predictability, or at least want to be more aware that there's a risk. If they want it to be "some weeks you have lots of usage, some weeks less, and it depends on X factors, or even random factors," then people could make a more informed choice. Right now I think it's basically incredibly vague, and that works while it's relatively predictable but starts to fail when it's not, for those who wanted the implied predictability.


If you need the tokens for real work, that’s what the API and the other providers like Bedrock are for. The subscription product is merely to whet your appetite.

Well then I would just not use their service. I used extra usage once, and for what I'd consider a low amount of tests and coding I racked up like $300 in an hour or more. For some, that's not a lot of money; for me, I'd just code it manually, especially without any real way to gauge how much I'll need and how fast it goes.

I'm not sure how businesses budget for llm APIs, as they seem wildly unpredictable to me and super expensive, but maybe I'm missing something about it.


Missing the point. I don't choose which tokens to buy. I send a request and the server decides how much it costs after it's done.

Sheets and Numbers are spreadsheets. Excel is an application platform and programming language that’s convinced people it’s just a spreadsheet.

Basic quantization is easy if you have enough RAM (not VRAM) to load the weights.
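A rough illustration of what "basic" means here: symmetric int8 weight quantization is just a per-tensor scale and a round, so the only real requirement is holding the float weights in system RAM once (a NumPy sketch, not any particular library's implementation):

    import numpy as np

    def quantize_int8(w: np.ndarray):
        """Symmetric per-tensor int8 quantization: w ~ scale * q."""
        max_abs = float(np.abs(w).max())
        scale = max_abs / 127.0 if max_abs > 0 else 1.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    # The float32 weights only need to fit in RAM; the int8 copy is ~4x smaller.
    w = np.random.randn(4096, 4096).astype(np.float32)
    q, s = quantize_int8(w)
    print("max abs error:", float(np.abs(w - dequantize(q, s)).max()))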

Per TFA, C++ is a purely functional, interpreted language. Should be trivial to embed into?

> without making clear how much effort has gone into it

I'm increasingly convinced this is the critical context for sharing LLM outputs with other people. The robots can inflate any old thought into dozens of pages of docs, thousands of lines of MR. That might be great! But it completely severs the connection between the form of a work and the author's assessment/investment/attachment/belief in it. That's something one's audience might like to know!


The relative value of those things are shifting. As the cost of polished LLM drivel falls to zero, some might prefer even the most unedited, off-the-cuff human writing to the slop.

I have stopped spell checking, grammar checking, and generally doing a lot of editing of my writing so that it feels more authentic. I have also had to give up my habit of prolific use of emdashes.

What if the reality is that both are worthless? LLM slop is of no value, but human slop doesn’t gain value because fingers typed it.

It depends on the purpose of the reader. I can learn a technical topic from an LLM but not what another person genuinely thinks. I certainly can't convince it of anything nor befriend it.

I mean, there's lots of room at the bottom. But part of the reason LLM slop seems to me so objectionable is its sameness; it's obviously drawn from the same thin manifold of the language. A human articulating their own thoughts, however those may be rendered on the page, at least realizes their own idiosyncratic region of the language. Writing one's own thoughts in one's own words declares the existence of one's own language, consonant with but distinct from all the others. Asserting one's individual voice and style, even if the content is worthless and the aesthetics objectionable, maintains diversity in the face of the LLM monoculture. We lament the lost apples, even the bitter ones; we don't ask the birds to each justify their differences.

Indeed. I for one enjoyed this piece. Yes, it had errors and lots of odd grammatical choices, but the reading remained affordably challenging and the prose had a newness to it.

No one is asking that we reject all prose with emdashes. Not all emdash-users are LLMs, but many LLMs are profligate emdash-users, so adjust your priors accordingly.

Secondarily, I think there's a part of the discourse missing: the presence of a syntactic emdash in a sentence on the internet is not itself a strong signal of LLM-writing - but the presence of an actual emdash glyph (—) should raise some eyebrows, esp. in fora that aren't commonly authored in rich text editors (here, twitter, ...)
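If you wanted to turn that eyebrow-raise into something mechanical, the check is one line per signal (a sketch of the heuristic only, obviously not a reliable detector):

    def emdash_signals(text: str) -> dict:
        """Count actual em-dash glyphs (U+2014) vs. plain hyphens used as dashes."""
        return {
            "em_dash_glyphs": text.count("\u2014"),
            "hyphen_as_dash": text.count(" - ") + text.count("--"),
        }

    print(emdash_signals("a hyphen - here, versus a real glyph \u2014 there"))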


Before LLMs, the em-dash glyph was a decent tell simply that... the author was using a Mac, because it's a simple and easy-to-remember (or even guess!) key-combo on there. Not that you can't type it on other keyboards, but the Mac one for whatever reason had a combo of users-who-wanted-to-type-it and layout-that-makes-it-easy that resulted in a high proportion of correct em-dash employers being Mac users.

(option-underscore, or option-shift-dash if you prefer to think of it that way)

On iOS, you can type it by simply holding down on the "dash" button then selecting the em-dash from the list of options it presents. It may also correct double-dash to em-dash a lot of the time, not sure.

I have used the correct em-dash everywhere I can for over a decade, which amounts to nearly everywhere.


> any fluff or inaccuracies are aggressively weeded out

this work is paramount. Without clear evidence of human filtering, a long, well-formatted message/PR/doc is likely to reduce my estimate of the value/veracity/relevance of its content.


> English is quite forgiving as a language

it's a couple mutually-conflicting languages in a trenchcoat; forgiveness and flexibility are perhaps its defining properties.

To the broader issue: "polish" (in any language) is only valuable insofar as it makes the ideas clearer, attests to innate qualities of the author and/or the investment of their time, or carries its own aesthetic value. As LLMs make a certain kind of polish cheap to produce, the value of the middle category attenuates to nothing.

