
Imagine that you created a function GetPixel that reads an RGB pixel at a memory address, and which has a NULL check as a precondition.

If the compiler can "prove" that the pointer is not NULL it can (after inlining the call) remove 20 million checks for a 20 megapixel image.

The silly issue is the compiler using "you dereferenced it earlier" (which would be undefined behaviour if the pointer were NULL) to "prove" that the pointer is not NULL.

But I can attest that avoiding 20 million such checks does indeed make a huge difference.
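
A minimal sketch of what I mean (the names and the defensive default are made up; assumes a typical optimizing compiler such as GCC or Clang):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint8_t r, g, b; } Pixel;

    /* NULL check as a precondition. */
    static inline Pixel GetPixel(const Pixel *img, size_t i) {
        if (img == NULL)
            return (Pixel){0, 0, 0};   /* defensive default */
        return img[i];
    }

    uint64_t SumRed(const Pixel *img, size_t n) {
        uint64_t sum = img[0].r;       /* raw dereference: UB if img is NULL */
        /* After inlining, the compiler may assume img is non-NULL
           (the dereference above would otherwise have been UB) and
           delete every check inside the loop. */
        for (size_t i = 1; i < n; i++)
            sum += GetPixel(img, i).r;
        return sum;
    }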



Just make a non-NULL-checking version, GetPixelUnsafe(), and put the responsibility on the user to do the NULL check before the loop.

All of these 'problems' have simple and straightforward workarounds; I'm not convinced these UB rules are needed at all.
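
Something like this sketch (reusing the hypothetical Pixel type from above):

    /* No check; the caller guarantees img is non-NULL. */
    static inline Pixel GetPixelUnsafe(const Pixel *img, size_t i) {
        return img[i];
    }

    uint64_t SumRedUnsafe(const Pixel *img, size_t n) {
        if (img == NULL)               /* one check, before the loop */
            return 0;
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += GetPixelUnsafe(img, i).r;
        return sum;
    }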


>All of these 'problems' have simple and straightforward workarounds; I'm not convinced these UB rules are needed at all.

He gave you a simple and straightforward example, but that example may not be representative of a real-world program where complex analysis leads to better-performing code.

As a programmer, it's far easier to just insert bounds checks everywhere and trust the system to remove them when possible. This is what Rust does, and it's safe. The problem isn't the compiler; the problem is the standard. More broadly, the standard wasn't written with optimizing compilers in mind.
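
A rough C sketch of that pattern (hypothetical names; Rust's slice indexing does the equivalent automatically):

    #include <stdlib.h>

    /* Always bounds-checked, in the spirit of Rust's indexing. */
    static inline uint8_t GetRedChecked(const Pixel *img, size_t len, size_t i) {
        if (i >= len)
            abort();                   /* analogous to a Rust panic */
        return img[i].r;
    }

    uint64_t SumRedChecked(const Pixel *img, size_t len) {
        uint64_t sum = 0;
        for (size_t i = 0; i < len; i++)
            /* i < len follows from the loop condition, so an
               optimizing compiler can prove the check dead. */
            sum += GetRedChecked(img, len, i);
        return sum;
    }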


That's a non-solution for existing code that already calls GetPixel 20 million times.

It's not like I'm saying C is the best possible way to write new code.

I'm just commenting on why this matters for performance, and why "remove all undefined behavior" from C compilers is a non-starter.

Now go write Rust for all I care.


If we're inlining the call, then we can hoist the NULL check out of the loop. Now it's 1 check per 20 million operations. There's no need to eliminate it or have UB at that point.
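
Roughly this, assuming the defensive GetPixel sketched above and no earlier raw dereference of the pointer:

    uint64_t SumRedHoisted(const Pixel *img, size_t n) {
        if (img == NULL)       /* loop-invariant check, hoisted: runs once */
            return 0;          /* each GetPixel would have returned the
                                  all-zero default anyway */
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += img[i].r;   /* inlined body, per-iteration check gone */
        return sum;
    }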



