Hacker News | monster_truck's comments

You don't have to do any of that if you simply don't make mistakes in the first place FYI

Attitudes like this one are why people prefer working with AI to code lol.

It's obviously tongue in cheek

This is why I exclusively write C89 when handling untrusted user input. I simply never make mistakes, so I don't need to worry about off-by-ones, overflows, memory safety, or use-after-frees.

Garbage collection and managed types are for idiots who don't know what the hell they're doing; I'm leet af. You don't need to worry about accidentally writing Heartbleed if you simply don't make mistakes in the first place.


That they are still training models against Objective-C is all the proof you need that it will outlive Swift.

When is someone going to vibe code Objective-C 3.0? Borrowing all of the actual good things that have happened since 2.0 is closer than you'd think thanks to LLVM and friends.


Why would they not? Existing Objective-C apps will still need updates and various work. Models are still trained on assembler for architectures that don't meaningfully exist today, as well.

I’m sure you can find some COBOL code in many of the training sets. Not sure I would build my next startup using COBOL.

Groq was targeting the part of the stack where CUDA was weakest: guaranteed inference time at a lower cost per token at scale. This was a response to more than just Google's TPUs; they were also one of the few realistic alternative paths OpenAI had with those wafers.

It doesn't do anything. It shouldn't be shared in case people who do not know better are tricked into believing it does

It's not even a friendly word

I'm more concerned about what you think they did to earn your trust in the first place

You are simply not making any meaningful amount of heat with a 5800X3D. You're talking about a TDP of 105 W and a PPT of 142 W.


LGA2011 was an especially cursed era of processors and motherboards.

In addition to all of the slightly different sockets, there was DDR3, DDR3 low-voltage, and their server/ECC counterparts; then DDR4 came out, but it was so expensive (almost as inflated as DDR4/DDR5 pricing is now compared to what it should be) that there were goofy boards with both DDR3 and DDR4 slots.

By the way, it is _never_ worth attempting to use or upgrade anything from this era. Throw it in the fucking dumpster (at the e-waste recycling center). The onboard SATA controllers are rife with data-corruption bugs, and the capacitors from that period have a terrible reputation; anything that has made it this long without popping most likely did so by sitting around powered off. These boards will also silently drop PCIe lanes, even at standard BCLK, under certain utilization patterns that cause too much of a voltage drop.

This is part of why Intel went damn-near scorched-earth on the motherboard partners that released boards which broke the contractual agreement and allowed you to increase the multipliers on non-K processors. The lack of validation under those conditions contributed to the aforementioned issues.



>and allowed you to increase the multipliers on non-K processors

Wasn't this the other way around, i.e. allowing you to increase multipliers on K processors on the lower-end chipsets? Or were both possible at some point? I remember getting baited into buying an H87 board that could overclock a 4670K, until a BIOS update removed the functionality completely.


Should be so; the multiplier is locked at the CPU level, not in firmware.

extremely loud incorrect buzzer noise, what are you going to say next "bastion servers are a scam"

Please spend 5 seconds thinking about this. It isn't impossible, it isn't earnestly valuable now that gambling has taken ad fraud's throne (guess what the throne is for), and fake data doesn't work.
