Can you back this up with legal precedent? To my knowledge, nothing of the sort has been ruled by the courts.
Additionally, this raises another big issue. A few years ago, a couple of guys used software (what you could argue was a primitive AI) to generate around 70 billion unique pieces of music, which amounts to essentially every piece of copyrightable music using standard music scales.
Is the fact that they used software to develop this copyrighted material relevant? If not, then their copyright should certainly be valid, and every new song should owe them royalties.
It seems that using a computer to generate results MUST be added as an additional axis of analysis in infringement and fair-use cases, if not a more fundamental acknowledgement that computer-generated content falls into a different category entirely. (I'd imagine the real argument would be over how much of the input was human versus how much was the system.)
Of course, this all sets aside the training of AI on copyrighted works. As it turns out, AI can regurgitate large sections of copyrighted works verbatim (up to 80% according to this study[0]), showing that they are, in point of fact, outright infringing those copyrights. Do we blow up current AI to maintain the illusion of copyright, or blow up current copyright law to preserve AI?
You're asking a lot of very good and thoughtful questions, but none are directly related to the immediate issue, which is "do I have to credit the AI model?".
Assuming an 80GB H100 and you're running inference on a MoE model close to the size of the 80GB of VRAM, you're going to see around 10k tokens/second fully batched and saturated. An example here might be Mixtral 8x7B.
That's about 36 million tokens/hour. The cost of Mixtral 8x7B on OpenRouter is $0.54/M input tokens and $0.54/M output tokens.
You're looking at potentially a $38.88/hour return on that H100 GPU. This is probably the best-case scenario.
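The arithmetic above can be sketched out explicitly. All inputs here are the assumptions from this comment (assumed saturated throughput, OpenRouter's listed Mixtral price), not measured figures:

```python
# Back-of-envelope check of the numbers above.
tokens_per_second = 10_000   # assumed saturated H100 throughput
seconds_per_hour = 3_600
tokens_per_hour = tokens_per_second * seconds_per_hour
print(f"{tokens_per_hour:,} tokens/hour")   # 36,000,000 tokens/hour

price_per_million = 0.54     # $/M tokens, same rate for input and output
# Treating all 36M tokens as billable on both the input and output legs
# gives the ~$38.88/hour best-case figure:
revenue_per_hour = (tokens_per_hour / 1_000_000) * price_per_million * 2
print(f"${revenue_per_hour:.2f}/hour")
```

Note the factor of 2 is what makes the $38.88 work out; billing output tokens alone would only be $19.44/hour.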
In reality, inference providers will use multiple GPUs together to run bigger, smarter models for a higher price.
Demand is relative. How many Claude tokens would you buy if they had a 10x price hike?
The market has achieved its current saturation level with loss-leader prices that remind me of the Chinese bike-share bubble[0]. Once those prices go up to break-even levels (let alone profitable levels), the number of people who can afford to pay will go down dramatically (and that's not even accounting for the bubble pop further constricting people's finances).
If they've already built themselves a loyal customer base (which is usually the point of fighting a price war), and the customers are happy with the technology they have, then when funding is tight and turning a profit is more important, why wouldn't they pivot to optimizing inference? Stop further training, freeze the model versions, burn the weights into silicon, build better caching strategies, and improve the harnesses and tools that lower their cost and increase their margin.
If all they do is hike prices then they'll lose customers to competitors who don't or who find a way to serve a similar model cheaper.
The demand isn't going to go away purely through higher prices. Once people know something is possible they will demand it whether supply is constrained or not. That's a huge bounty for anyone who can figure out how to service that demand.
Easier said than done. What you're describing can take years to implement. Can OpenAI et al. keep burning cash at the same rate for two years while they wait for the salvation of custom silicon if the investments dry up?
There is no evidence that labs are losing money on inference subscriptions. The labs have massive fixed costs, but as long as inference revenue exceeds the cost of the datacenters they use for inference, all they need to do to become profitable is scale up. Right now software engineers are basically the only ones actually paying for inference; the labs just need to create coding-assistant-style tools for everything that are good enough that every white-collar worker in the country (world?) is paying a $1000/yr subscription. Certainly there's a lot of risk: will models become commoditized so that everyone switches to open models? Can they actually get non-software-engineers to pay for inference en masse? But it's not like there's no path.
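To put rough numbers on the scale-up argument: here is a sketch of what a $1000/yr subscription yields at a few adoption levels. The worker counts are illustrative assumptions, not real data; only the subscription price comes from the comment above:

```python
# Hypothetical scale-up math; the worker counts below are
# illustrative round numbers, not actual labor statistics.
subscription_per_year = 1_000  # $/worker/year, from the comment above

scenarios = {
    "US software engineers only":    4_000_000,  # illustrative
    "US white-collar workers":     100_000_000,  # illustrative
    "worldwide white-collar":    1_000_000_000,  # illustrative
}

for name, workers in scenarios.items():
    revenue_billions = workers * subscription_per_year / 1e9
    print(f"{name}: ${revenue_billions:,.0f}B/year")
```

The jump from the first line to the second is the whole bet: whether that two-orders-of-magnitude expansion of paying users is reachable.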
Validating a core to server standards takes significantly longer.
V4 cores, based on the X925, should be out this year, and C1 Ultra-based V5 will probably be 2027-2028.
I suspect that X4 is already fast enough to beat EPYC in per-core performance when using the whole chip. ARM caught up with and passed x86 in IPC all the way back around the A77/A78 in 2019-2020. They are now much faster per clock and hitting about the same all-core clockspeeds as standard EPYC (let alone Zen 5c EPYC).
The big issue is that Graviton5 is already starting to hit the market and uses the same V3 cores. A lot of this chip's market share will probably come from taking Ampere customers.
Cortex-X4 a.k.a. Neoverse V3 has significantly lower performance per core than Zen 5.
However, Neoverse V3 has a smaller die area, so you could implement more cores per socket than with Zen 5, but this has not been done yet: these new CPUs have only 136 cores per socket versus 192 cores per socket for Zen 5.
For programs that do not use array operations, i.e. which do not use AVX/AVX-512 instructions, Neoverse V3 has better performance per watt than Zen 5. But that changes for programs that benefit from AVX/AVX-512, where Zen 5 has better performance per watt.
Moreover, Zen 5 is already old. By the end of the year there will be Zen 6, which will be the real competitor for these new Arm CPUs, and Zen 6 will have better performance per watt, even more cores per socket and even more performance per core.
The crash was probably a couple of seconds later; I wouldn't rely on the video for the exact timing.
So it seems that ATC made an error by allowing the truck to cross, and then the order to stop wasn't communicated clearly enough. I wouldn't place much blame on the truck.
Edit: Looking at some other videos with that audio, I'm also not sure if the video I linked represents the time between communications correctly, transmission at 3:15 may have been right after the one at 3:02. Anyway, the best thing is to wait for the investigation.
It was the “truck and company” that was cleared to cross. That’s probably 5+ vehicles you can see following the lead truck that were all approved. All while the jet was on short final. That controller lost situational awareness due to task saturation as a result of an emergency aircraft on the taxiway parallel to runway 13.
He said "Frontier 4195, stop there please," then "stop stop stop stop, truck one." By the time he clearly tells "Truck 1" to stop, they're already entering the runway. Sounds like a bit of confusion.
There is no "the" issue in airline accidents. There are always multiple factors, and all of them had to happen in order for the accident to occur.
Understaffing is absolutely a factor. Had tower and ground not been combined, the erroneous clearance probably wouldn't have been issued.
The ARFF truck not complying with the stop instruction is absolutely a factor. Had they heard and complied, the accident wouldn't have happened.
And there are likely additional factors that will come out in the investigation.
I recommend reading some final aviation accident reports from the NTSB to learn more about how these investigations proceed and what kinds of conclusions and recommendations they include.
Other trucks slow down, but truck 1 does not even try to slow down. I'd also argue that driving so quickly that you cannot maintain control is its own problem. Getting to an emergency 20 seconds later almost never matters as much as arriving safely.
Maybe they were talking about the truck going 24mph, but the plane is clearly going WAY faster than that.
I'm not completely sure, but it seems like the runway entry lights are red, which very clearly indicates the runway is in use. They should have known better.
Overwork is an issue in general, but I don't know that it was the actual issue here.
> In audio from the air traffic control tower at LaGuardia, a staff member can be heard saying: "'Truck One, stop, stop, stop!" in the seconds before the crash.
It sounds to me like either the cop or the firefighter (whichever was driving) wasn't listening to ATC, and this whole incident was probably completely avoidable.
EDIT: a video of the crash seems to show warning lights that the emergency vehicle ignored.
> Overwork is an issue in general, but I don't know that it was the actual issue here.
One controller working tower duties, ground movement duties, coordinating with other ATC functions off the radio, an active emergency request, and giving clearance amendments all within 2 minutes. It's insane understaffing. On top of it, there was nobody there to take over after the crash. He worked the whole cleanup for the next 30 minutes.
This is an Olympian-level elite air traffic controller who was set up to fail.
I've visited towers and center facilities, and have been flying (and doing some instructing) in the San Francisco airspace for 10 years. That kind of failure is systemic, way above any individual.
The audio I heard seems to show the firetruck asking if the runway is clear to cross, the controller responding in the affirmative, the firetruck confirming the affirmative, and then 7 seconds later, the controller saying STOP STOP STOP.
I think the gaps between transmissions have been trimmed, too; this isn't matching other versions of the ATC audio, such as [VASAviation's][1].
> The runway entrance lights look red to me which is also a huge warning flag.
IANA-ATC, but presumably in an emergency you're permitted to obtain clearance from ATC to enter an active runway, to get to the emergency. (Which they did, and got, but which ATC later effectively revoked with the command to stop, prior to the accident. Whether ATC should have granted the clearance, well, I'll wait for the NTSB report there.)
It’s supposed to be “permission to enter runway = obtained clearance && stop bars not red”. Pilots would know this; ground vehicles often do goofy stuff and it’s difficult to train them to follow procedures exactly while they have “we are responding to an emergency” in their head.
The syntax of languages like Lisp and Forth is so fundamentally different that they don't need an explicit statement separator. You don't have to think about many other things either, or I should say you don't have to think about them in the same way. Consider how much simpler the order of operations is in those languages.
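The order-of-operations point can be shown concretely. Here's a minimal sketch of a Lisp-style prefix evaluator (written in Python for illustration): because the operator position fully determines grouping, there are no precedence rules at all.

```python
# Minimal prefix-notation evaluator: no precedence table needed,
# because nesting makes the grouping explicit.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_prefix(expr):
    """Evaluate a nested (op, a, b) tuple, e.g. ('+', 1, ('*', 2, 3))."""
    if not isinstance(expr, tuple):
        return expr          # a bare number evaluates to itself
    op, a, b = expr
    return OPS[op](eval_prefix(a), eval_prefix(b))

# Infix "1 + 2 * 3" is ambiguous without precedence rules;
# the prefix forms are not:
print(eval_prefix(("+", 1, ("*", 2, 3))))   # 7
print(eval_prefix(("*", ("+", 1, 2), 3)))   # 9
```

In actual Lisp these would just be `(+ 1 (* 2 3))` and `(* (+ 1 2) 3)`; the two results differ only because the nesting differs, never because of an implicit precedence table.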
[0] https://arxiv.org/pdf/2603.20957