While the subscription is definitely subsidized (technically cross-subsidized, because the subsidy is coming from users who pay but barely use it), Claude Code also does a ton of prompt caching that cuts inference costs. I have done many hours-long coding sessions and built entire websites using the latest Opus, and the final tally came to something like $4, whereas without caching it would have been $25-30.
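For anyone curious what that looks like at the API level, here's a rough sketch of Anthropic-style prompt caching: the big, stable chunk of context gets marked cacheable so repeated turns mostly pay the much cheaper cache-read rate. Model id, file names and prompt text below are placeholders, not what Claude Code actually sends.

    # Sketch of Anthropic prompt caching: mark the large, stable part of the
    # prompt as cacheable so later requests reuse it instead of paying the full
    # input-token price again. Names and contents here are made up.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    big_context = open("project_context.md").read()  # e.g. repo map, style guide

    response = client.messages.create(
        model="claude-opus-4-20250514",  # placeholder model id
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": big_context,
                # Cache this block; subsequent calls that repeat it are billed
                # at the cache-read rate rather than as fresh input tokens.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": "Refactor auth.py to use the new session API."}],
    )

    # usage reports cache_creation_input_tokens / cache_read_input_tokens,
    # which is where the $25-30 vs $4 difference shows up.
    print(response.usage)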
Are you saying CC does caching that opencode does not? And why would Anthropic care? They limit you based on tokens, so if other agents burn more, users simply get less work done; they can't use more tokens than the limit allows. I don't think Anthropic's objection is technical.
Cry me a river - I never stop hearing how developers think their time is so valuable that no amount of AI spend could possibly fail to pay for itself. Yet suddenly, paying for what you use is "too expensive".
I'm getting sick of costs being distorted. It's resulting in dysfunctional methodologies where people spin up a ridiculous number of agents in the background, burning tokens to grind out solutions where a modicum of human oversight or direction would take 10x less compute. At the very least the costs should be realised by the people doing this.
I guess that’s kind of the defense of Musk on the Cybertruck. If Ford can’t sell them off their F-150 platform, it means you need to make more of a splash. He just went too far…
I've seen an argument that the Prius was intentionally made "ugly/noticeable" because they knew the buyers would be interested in the technology AND want to be recognizable as such.
Once hybrids became common, the styling reverted to something more like a normal car.
And the only thing they asked was to add a chapter on a machine learning algorithm. I get that everyone wants to talk about how sick of AI they are, but there are plenty of AI projects that would fit right in with the spirit of the book.
Doesn't seem especially out of the norm for a large conference. Call it 10,000 attendees, which is large but not huge. Sure, not everyone attending puts in a session proposal, but others put in multiple. And many submit but, if not accepted, don't attend.
Can't quote exact numbers, but when I was on the conference committee for a conference with maybe high-four-figures attendance, we certainly had many thousands of submissions.
The problem isn't only papers; it's that the world of academic computer science coalesced around conference submissions instead of journal submissions. This isn't new and was an issue 30 years ago when I was in grad school. It makes the work of conference organizers the little block holding up the entire system.
Checking each citation one by one is quite critical in peer review, and of course when checking a colleague's paper. I’ve never had to deal with AI slop, but you’ll definitely see something cited for the wrong reason. And just the other day, during the final typesetting of a paper of mine, I found the journal had messed up a citation (same journal/author but wrong work!)
Is it quite critical? Peer review is not checking homework; it's about the novel contribution presented. Papers will frequently cite related notable experiments or introduce a problem that, as a peer reviewer in the field, I'm already well familiar with. These paragraphs generate many citations but are the least important part of a peer review.
(People submitting AI slop should still be ostracized, of course; if you can't be bothered to read it, why would you think I should?)
Fair point. In my mind it is critical because mistakes are common and can only be fixed by a peer. But you are right that we should not miss the forest for the trees and get lost in small details.
It’s a shame, because “improve an off-the-shelf LLM to translate in line with this large dataset we prepared” is precisely the kind of project people love to work on. It could have been a chance to immortalize the hard work they did up until now.
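Just to sketch what I mean (file names and schema here are assumptions, adapt to whatever fine-tuning setup you'd actually use): the translation memory they already built could be dumped into prompt/completion pairs, so the decades of human work is exactly what shapes the model's output.

    # Toy sketch: turn existing human translation pairs (a TSV of source/target
    # lines) into chat-format fine-tuning examples as JSONL. File names and the
    # exact schema are assumptions, not any particular vendor's requirements.
    import csv
    import json

    with open("translation_memory.tsv", newline="", encoding="utf-8") as src, \
         open("finetune.jsonl", "w", encoding="utf-8") as out:
        for source_text, target_text in csv.reader(src, delimiter="\t"):
            example = {
                "messages": [
                    {"role": "system", "content": "Translate into English, keeping the project's established terminology."},
                    {"role": "user", "content": source_text},
                    {"role": "assistant", "content": target_text},
                ]
            }
            out.write(json.dumps(example, ensure_ascii=False) + "\n")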
You mean the two+ decades of labour of love was always done to be a nameless contribution to the AI machine? Somehow I think he would have picked another hobby if he had known that back then.
Is it? I don't think you quite understood the issue.
This issue is specifically centred around the human element of the work and organisation. The translators were doing good work, and they wanted to continue that work. Why it's important that the work is done by a human is probably only partially about quality of output and likely more about authenticity of output. The human element is not recorded in the final translation output, but it is important to people that they know something was processed by a human who had heart and the right intentions.
> The human element is not recorded in the final translation output, but it is important to people that they know something was processed by a human who had heart and the right intentions
Not that I entirely disagree with the conclusion here, but…
It feels like that same sentiment can be used to justify all sorts of shitty translation output, like a dialog saying a cutesy “let’s get you signed in”, or dialogs with “got it” as the button label. Sure, it’s so “human” and has “heart”, but it also enrages me to my very core and makes me want to find whoever wrote it and punch them in the face as hard as I can.
I would like much less “human” in my software translations, to be honest. Give me dry, clear, unambiguous descriptions of what’s happening please. If an LLM can do that and strike a consistent tone, I don’t really care much at all about the human element going into it.
Oh I wasn't really referring to tone or language like that, I also don't particularly like it and prefer concise clear language. While LLMs can totally achieve that, I want to know a human decided to do it that way. At some point this mindset is going to look very silly, and perhaps even more so for software. But ultimately it's a human feeling to want that and humans are also not deterministic or logical.
If there really is enough market demand for this kind of processor, it seems like someone like NEC, who still makes vector processors, would be better positioned than a startup rolling its own RISC-V.
So, a Systolic Array[1] spiced up with a pinch of control flow and a side of compiler cleverness? At least that's the impression I get from the servethehome article linked upthread. I wasn't able to find non-marketing, better-than-sliced-bread technical details from 3 minutes of poking at your website.
I can see why systolic arrays come to mind, but this is different.
While there are indeed many ALUs connected to each other in both a systolic array and a data-flow chip, data-flow is usually more flexible (at the cost of complexity) and the ALUs can be thought of as residing on some shared fabric.
Systolic arrays often (always?) have a predefined communication pattern and are often used in problems where data that passes through them is also retained in some shape or form.
For NextSilicon, the ALUs are reconfigured and rewired to express the application (or parts of it) on the parallel data-flow accelerator.
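To make the contrast concrete, here's a toy software model of the fixed pattern a 1-D systolic array uses for a dot product: every cell holds one weight, only ever talks to its immediate neighbour, and everything marches in lockstep. A simulation sketch for illustration only, not how this (or any) chip is actually programmed.

    def systolic_dot(w, x):
        """Toy cycle-by-cycle model of a 1-D systolic array computing w . x.

        Cell i permanently holds w[i]; partial sums hop exactly one cell to the
        right per cycle, and x[i] is injected at cell i on cycle i. The
        communication pattern is fixed ahead of time, which is the point.
        """
        n = len(w)
        regs = [0.0] * (n + 1)      # pipeline registers between neighbouring cells
        for cycle in range(n):
            new_regs = regs[:]       # all cells update in lockstep
            for i in range(n):
                xin = x[i] if cycle == i else 0.0
                new_regs[i + 1] = regs[i] + w[i] * xin
            regs = new_regs
        return regs[n]               # the full dot product emerges at the far end

    assert systolic_dot([1, 2, 3], [4, 5, 6]) == 32  # 1*4 + 2*5 + 3*6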
My understanding is no, if I understand what people mean by systolic arrays.
GreenArray processors are complete computers with their own memory and running their own software. The GA144 chip has 144 independently programmable computers with 64 words of memory each. You program each of them, including external I/O and routing between them, and then you run the chip as a cluster of computers.
Text on the front page of the NS website* leads me to think you have a fancy compiler: "Intelligent software-defined hardware acceleration". Sounds like Cerebras to my non-expert ears.
NEC doesn't really make vector processors anymore. My company installed a new supercomputer built by NEC, and the hardware itself is actually Gigabyte servers running AMD Instinct MI300A, with NEC providing the installation, support, and other services.
That’s the irony of the situation. This should’ve been a clear win for Trump, using the prize to help bolster his status and direction on Venezuela. But then we got this absurd media storyline about him wanting the prize himself (probably to bury the government shutdown news).
Edit: or should I say, the subscription is artificially cheap