
I feel that the compiler is doing too much work here. I know they are thinking about special cases on generated code, but at some point it feels that it just adds compile time for no good reason.

Look at this --beauty-- eww, thing. Should compilers really spend time trying to figure out how to optimise insane code?

    def is_even(n):
      return str(n)[len(str(n))-1] in [str(2*n) for n in range(5)]


These optimizations are very useful. Consider the only slightly less contrived case where you want to mod an index by the size of an array. And the compiler expands the inline function around a context where the array is a fixed power of two size at compile time. Poof, no division/modulus needed, magically. Lots and lots of code looks like this: general algorithms expressed in simple implementation that has a faster implementation in the specific instance that gets generated.
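A minimal sketch of that rewrite in Python (function names are made up for illustration): when the array size is a power of two known at compile time, `i % size` is equivalent to the bitmask `i & (size - 1)`, which needs no division at all.

```python
def wrap_index(i, size):
    # general form: works for any positive size, costs a division/modulus
    return i % size

def wrap_index_pow2(i, size):
    # specialized form a compiler can emit after inlining fixes size
    # to a compile-time power of two: a single AND, no division
    return i & (size - 1)

# the two agree whenever size is a power of two
for size in (1, 2, 8, 1024):
    for i in range(0, 5000, 7):
        assert wrap_index(i, size) == wrap_index_pow2(i, size)
```

The point being: nobody wrote a "power-of-two modulus" special case into the source; the general `%` became the cheap form only in the specific instantiation.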


Maybe one day there will be compilers that can choose what to optimize based on their aesthetic judgement of the code.

I could see that as a novel feedback mechanism for software engineers.

As it stands, I'm glad they design optimizations abstractly, even if that means code I don't like gets the benefits


It's not about aesthetics, but about the hit rate of the optimisations: if they need to be too smart to figure things out, then they'd also fire more rarely and be less necessary.


I'm not quite sure what you're visualizing for compilers. If I understand correctly, what I'd say is:

tl;dr: there are general optimizations for "this function call in a for loop is a constant expression, we don't need to call it 500 times"

or

"this obscure combination of asm instructions is optimal on pentium iii 350 mhz dual core"

not "we need to turn this unholy CS101 student spaghetti code where they do a 500 branch-if into a for loop"
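The first kind, hoisting a constant expression out of a loop (loop-invariant code motion), can be sketched in Python, with hypothetical names standing in for real code:

```python
def expensive(x):
    # stand-in for a pure function whose result does not
    # change across iterations of the loop below
    return x * x + 1

def before(data, k):
    # as written: expensive(k) is recomputed on every iteration
    return [d + expensive(k) for d in data]

def after(data, k):
    # what the optimization effectively does: compute the
    # loop-invariant expression once, outside the loop
    c = expensive(k)
    return [d + c for d in data]
```

The optimizer never needs to recognize any particular ugly program; it only needs to prove the expression is invariant, which is a general property.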

A comment over here is attempting to communicate that as well: https://news.ycombinator.com/item?id=42705758

I've never, ever, heard the idea that compilers are burdened by the workload of maintaining thousands of type-specific optimizations for hilariously bad code, until today. I've been here since 2009, so it is puzzling to me to see it referred to offhand, in a "this is water" manner https://en.wikipedia.org/wiki/This_Is_Water


> I've never, ever, heard the idea that compilers are burdened by the workload of maintaining thousands of type-specific optimizations for hilariously bad code, until today.

I've heard tons of people complain about slow compilers, so even if compiler devs find it easy to architect their compilers to do multiple kinds of optimisations, there's a cost to it that the devs running the compilers pay.

Also, if you think about it, optimising code has to follow diminishing returns, so at some point we're putting in too much CPU time for little to no gain. It's also possible to get slower code with more optimisations if they interact poorly, or at least no better code for more CPU time spent. This is why -O3 exists in gcc and isn't the default: there's a cost to it that's likely not worth paying.


> I've heard tons of people complain about slow compilers,

A slow compiler does not imply the compiler is slow because thousands of bespoke optimizations for nonsense code are being run.

> Also, if you think about it, optimising code has to follow diminishing returns,

Nope, trivially. Though, I'm always eager for a Fermat-style marvelous proof that may have been too big for the initial margin you had. :)

Take a classic case of a buggy compiler generating O(n²) temporary copies due to missed alias analysis. One optimization pass to fix that analysis transforms it to O(n).
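As a rough analogy in Python (not the alias-analysis case itself, just the same complexity-class jump): building a list with `result = result + [x]` copies the whole list on every step, O(n²) element copies in total, while `append` is amortized O(1) per step. A single rewrite changes the asymptotics, which is not a diminishing return.

```python
def build_quadratic(n):
    # each `result + [i]` allocates a fresh list and copies
    # everything accumulated so far: O(n^2) copies in total
    result = []
    for i in range(n):
        result = result + [i]
    return result

def build_linear(n):
    # append mutates in place: amortized O(1) per step, O(n) total
    result = []
    for i in range(n):
        result.append(i)
    return result
```

Both produce the same output; only the amount of copying differs.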

> at some point we are putting too much CPU time into little to no gains

It is theoretically possible to design a compiler that spends so much time looking for optimizations that the total time spent looking exceeds the runtime of the program being optimized.

For example, an optimizer that is a while loop that checks if the function returns 42, but the function returns 43.

I'm not sure what light that sheds.

I'm not sure that implies that compilers have tons of bespoke optimizations for hand-transforming specific instances of absurd string code.

If they do, I would be additionally surprised, because I have never observed that. What I have observed is compilers, universally, optimizing code structures of a certain general form.

> This is why there's -O3 in gcc and it's not the default, there's a cost to it that's likely not worth paying.

The existence of an argument with a higher optimization level than the default does not imply the compiler is slow because thousands of bespoke optimizations for nonsense code are being run. (n.b. -O3 is understood, in practice, to be risky because it might be too aggressive, not because it might not be worth it)


Or even:

    def is_even(n):
        return str(n)[-1] in "02468"
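For what it's worth, the two one-liners really do compute the same thing (a quick check, assuming integer input). Note that in the contrived version the comprehension's `n` shadows the argument, so the list is just the digits '0' through '8' in steps of two:

```python
def is_even_contrived(n):
    # the comprehension's n shadows the argument; the list it
    # builds is ['0', '2', '4', '6', '8']
    return str(n)[len(str(n))-1] in [str(2*n) for n in range(5)]

def is_even_simple(n):
    return str(n)[-1] in "02468"

# both reduce to "is the last decimal digit even", so they agree
for n in range(-50, 50):
    assert is_even_contrived(n) == is_even_simple(n) == (n % 2 == 0)
```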



