Sure, if you quote the heading it's easy to pick apart, but read the contents of that section. He mentions "working smarter, not harder" and "no unnecessary logic", both of which are hard to disagree with. Certainly we don't want a bunch of inlined calls and hard-to-mentally-parse operations, but that's not what he's advocating. In the very next section he describes code as a means of communicating with both the computer and humans.
So what's the distinction between simply making a passable program and working smarter, without unnecessary logic? It's hard to understand, or even believe, unless you've seen it happen. And even if you understand and believe it, it's easy to underestimate just how powerful it is.
Take prime factors, for example; how would you program it? Most people would probably expect to need some kind of list of primes against which to factor a given number. You might generate that list or pull it from a data source somewhere, and iterate over it looking for prime factors. The result could easily grow into a small but nontrivial program.
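Something like this, say (a rough sketch in Python; the names prime_factors and primes_up_to are mine, standing in for whatever primes list or data source you'd actually use):

    def primes_up_to(limit):
        # stand-in for "the list of primes or a data source somewhere"
        return [n for n in range(2, limit + 1)
                if all(n % d for d in range(2, int(n ** 0.5) + 1))]

    def prime_factors(n):
        factors = []
        for p in primes_up_to(n):
            while n % p == 0:  # divide out each prime as many times as it fits
                factors.append(p)
                n //= p
        return factors

It works, but now you're maintaining a primality test (or an external data source) on top of the factoring itself.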
If you practice TDD and "fake it till you make it", testing the next-dumbest case each time, you can wind your way to very elegant algorithms through refactoring against tests. That sounds easy, but in reality it often feels silly to test one more simple edge case, or to write every next failing case (1..2..3..), and that makes it hard to feel it's worth doing. It's also difficult to look at known-working code and see the other ways the same result could be accomplished for all cases with a different approach.
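To make that concrete, the progression looks something like this (my sketch, not the kata's actual test code):

    # Each assert is one "next-dumbest" case; you fake the result,
    # then generalize the implementation just enough to pass.
    assert prime_factors(1) == []
    assert prime_factors(2) == [2]
    assert prime_factors(3) == [3]
    assert prime_factors(4) == [2, 2]
    assert prime_factors(6) == [2, 3]
    assert prime_factors(8) == [2, 2, 2]
    assert prime_factors(9) == [3, 3]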
If you haven't yet, check out Bob Martin's primes kata: http://butunclebob.com/ArticleS.UncleBob.ThePrimeFactorsKata
(The slides leave a lot to be desired, but the only video I could find was on Vimeo, with a lengthy intro and no apparent way to fast-forward.)
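For reference, the loop the kata refactors its way down to looks roughly like this (a Python sketch; Uncle Bob's original is in Java):

    def prime_factors(n):
        factors = []
        divisor = 2
        while n > 1:
            while n % divisor == 0:
                factors.append(divisor)
                n //= divisor
            divisor += 1
        return factors

Notice there's no primality check at all: by the time the outer loop reaches a composite divisor, all of that divisor's prime factors have already been divided out.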
Chances are we'd rather maintain the simple 3-line algorithm than the one with the data source and unnecessary complexity. That doesn't mean shorter is always better, but in many of the problems we face every day there are elegant solutions hiding in the details. The simpler solutions inherently have less to maintain, for better or worse.