Hacker News

From the article:

> It has been proven numerous times already that strcpy in source code is like a honey pot for generating hallucinated vulnerability claims

This closing thought in the article really stood out to me. Why even bother to run AI checking on C code if the AI flags strcpy() as a problem without caveat?



It's not quite as black and white as the article implies. The hallucinated vulnerability reports don't flag it "without caveat", they invent a convoluted proof of vulnerability with a logical error somewhere along the way, and then this is what gets submitted as the vulnerability report. That's why it's so agitating for the maintainers: it requires reading a "proof" and finding the contradiction.


Because these people who run AI checks on OSS code and submit bogus bug reports either assume that AIs don't make mistakes, or just don't care if the report is legit or not, because there's little to no personal cost to them even if it isn't.


Even a stupid report may get you invites to private programs.


Because people are stupid and use AI for things it is not good at.


> people are stupid

people overestimate AI


It's weird though, because looking through the HackerOne reports in the slop wiki page, there aren't actually reproduction steps. It's basically always just a line of code and an explanation of how a function can be misused, but not a "make a webserver that has this hardcoded response".

So why doesn't the person iterate with the AI until they understand the bug (and then ultimately discover it doesn't exist)? Have any of these bug reports actually paid out? It seems like people should quickly give up for lack of rewards.


> So why doesn't the person iterate with the AI until they understand the bug (and then ultimately discover it doesn't exist)? Have any of these bug reports actually paid out? It seems like people should quickly give up for lack of rewards.

This sounds a bit like expecting the people who followed a "make your own drop-shipping company" tutorial to try using the products they're shipping to understand that they suck.


As long as the number of people newly being convinced that AI generated bounty demands are a good way to make money equals or exceeds the number of people realising it isn't and giving up, the problem remains.

It doesn't help, I imagine, that once you realise it doesn't work, an easy pivot is to start convincing new people that it'll work if they pay you money for a course on it.


Apparently FOSS developers have been getting this kind of slop report even though they clearly don't offer a bug bounty.


There is no shortage of people wanting to be able to say they found CVE-XXXX-XXX or a bug in product X.


Have you ever had the chance to look at the public-facing support email inbox for a SaaS company? You get absolutely bombarded with these low quality “bug reports” from people trying to farm bounties. They do not care whether the bug is real or impactful, it’s a game of volume for them.




