> With most classes, unless you design them really carefully, your methods are often at risk of being overridden by others in ways that don't meet expectations, especially if they mutate the object and do so in a different way.
WAT. Because Ruby and Python programmers are just bleeding to death from the thousand cuts of overriding methods during subclassing. Or... not?
More seriously, is there something to Swift here that I'm missing? Because in just about any language I've run across with inheritance, either the superclass(es) are part of a framework that defines a protocol for the inheriting code (e.g. you may/must provide methods X, Y, and Z. These methods must behave according to the following rules {...}), or else the programmer writing the inheriting code takes on the full responsibility of integration.
I can't speak for the author, but you could take it as an argument against inheritance in general. There are a lot of smart people in that boat -- I think at one point even James Gosling said he might have left inheritance out of java if he had it to do over again.
I probably have no shot of convincing you that inheritance kind of sucks in a short post on a friday night, but I'd just point out that it's not an obscure opinion, there are a sizable number of people that really have no love for the "classic" inheritance model of OOP.
No doubt, and I'm not really debating the language design tradeoffs involved in various approaches to OOP (or heck, typing in general). It's just this kind of talk comes up from time to time, seems to be fear-driven, and ignores the pragmatic practices that have been developed to work with inheritance (or any number of other language features that trigger a similar reaction, c.f. Pythonistas vs Rubyists on the topic of monkey-patching).
It was surprising enough in context of an otherwise incisive article that I was honestly wondering if there was something specific to Swift and/or the ObjC framework world that posed a unique hazard.
Speaker here. Nothing particular although ObjC lacks any language based features to control inheritance. The concern is that if a method has a particular effect on state (possibly even private internal state) that you expect to take place that subclasses may override the method in such a way that it changes all the expectations and may leave you in an inconsistent state. This means that it is necessary to document (or have conventions for) what should and should not be overridden and whether overriding methods should call the superclass implementation too.
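A minimal Python sketch of that hazard (class and method names here are invented for illustration): the base class keeps two pieces of state in sync, and a subclass override mutates only one of them, leaving the object in exactly the kind of inconsistent state described above.

```python
class Counter:
    """Base class with an internal invariant: total == sum(history)."""

    def __init__(self):
        self.total = 0
        self.history = []

    def add(self, n):
        # The base implementation updates both pieces of state together.
        self.total += n
        self.history.append(n)


class DoublingCounter(Counter):
    def add(self, n):
        # The override never calls super().add(), so history is not
        # updated and the base class's invariant silently breaks.
        self.total += n * 2


c = DoublingCounter()
c.add(5)
print(c.total)         # 10
print(sum(c.history))  # 0 -- no longer matches total
```

Nothing in the language flags this; only documentation or convention tells the subclass author that `add` was expected to maintain that invariant.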
In unsafe languages with raw pointers the issues may be more severe than languages like Python and Ruby but they are generally similar.
I may have overstated the point too, I had notes but not a script. The point is that you need to be careful designing and documenting for subclassing.
There's an in-between option seen in some languages where you don't inherit and override but instead implement a superclass's stub (or an event, if you want to think of it another way), which I think gives almost all the benefits of inheritance without most of the downsides. Sadly it hasn't penetrated the "major" languages.
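The closest approximation in Python is probably the template-method pattern via `abc` (a sketch with invented names; the languages the parent comment has in mind would enforce this statically): the superclass owns the driver and subclasses only fill in declared stubs.

```python
from abc import ABC, abstractmethod


class Report(ABC):
    # The driver is fixed by the superclass; subclasses fill in stubs.
    # (Python won't actually stop a subclass overriding render() --
    # that part is convention here, unlike the statically checked
    # variant being described.)
    def render(self):
        return f"{self.header()}\n{self.body()}"

    @abstractmethod
    def header(self): ...

    @abstractmethod
    def body(self): ...


class DailyReport(Report):
    def header(self):
        return "Daily Report"

    def body(self):
        return "All systems nominal."


print(DailyReport().render())  # Daily Report\nAll systems nominal.
```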
That would negatively affect the encapsulation benefits of inheritance: we could never safely access private fields on any value other than "this." Try implementing binary methods without that!
Inheritance should evolve; there is plenty that can be done with it (e.g. mixin-style inheritance, dynamic inheritance, family inheritance). At the very least, we should have a conversation about how to address the criticisms of inheritance.
He appears to be referring to the Liskov substitution principle: http://en.wikipedia.org/wiki/Liskov_substitution_principle. Generally, type systems don't enforce the invariants that subclasses have the power to break. It is usually the duty of the programmer to ensure that subclasses don't break these invariants.
Invariants aren't things like whether methods are overridden or not; they're things like whether the area of a shape can be computed just by knowing a side, exemplified by the familiar question of whether a rectangle should be a subclass of a square, or the other way around.
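The square/rectangle case can be sketched in Python (illustrative names): `Square` satisfies `Rectangle`'s interface, but it breaks an invariant callers rely on, namely that setting the width leaves the height alone.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, w):
        self.width = w

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):
        # A square must keep its sides equal, so this override also
        # changes the height -- violating the caller's expectation
        # that set_width leaves height untouched.
        self.width = w
        self.height = w


def stretch(rect):
    # Caller's invariant: afterwards, area == 10 * original height.
    rect.set_width(10)
    return rect.area()


print(stretch(Rectangle(2, 3)))  # 30
print(stretch(Square(3)))        # 100, not 30 -- substitution fails
```

The type system is perfectly happy with `Square`; only the behavioral invariant is broken, which is exactly what Liskov substitution is about.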
Anyone who inherits from a class and accidentally overrides a method is a hopeless case or a newbie needing basic review from a senior dev. The bugs are on them. They can either test that their inheritance works, learn from the mistake, or they can piss off and find a new career emptying bins, flipping burgers, or whatever they fancy that doesn't involve making sure that what they just did will work beyond the immediate function.
> Anyone who inherits from a class and accidentally overrides a method is a hopeless case or a newbie needing basic review from a senior dev
That's a pretty harsh stance. I don't mean to be a jerk, but how much professional experience do you have? Even in good code bases there are frequently complicated hierarchies with non-obvious dependencies that are very easy to screw up even by people who know what they're doing. Good teams find ways of automating the checking on these things so that they can spend their time thinking about other problems, or avoiding the complication altogether, rather than blaming the victim.
I'm a decade in. I know what you mean about complex hierarchies, but inheritance is a very, very dangerous tool; you either take the time to review the results, or you suffer the consequences. That's where the review by senior developers comes in. You learn or you don't. Who would want to work with devs who don't?
I do, because I don't consider people's ability to deal with the vagaries of inheritance a valuable skill when you can just avoid it with effectively no negative consequences.
Of course, to clarify, it's not really possible to avoid in languages like Java, where it is more or less mandatory due to the design of its standard library (unless you avoid that too, which is a considerably worse idea), but you can still keep your own CODE clean with liberal use of `final` and interfaces.
Personally, I'm consistently baffled when programmers like yourself, who have come to appreciate that "inheritance is a very, very dangerous tool," continue to defend it so vigorously. If it offered really powerful or unique advantages, sure -- but if you take a hard look at what it actually provides, it's just syntactic sugar and a performance "optimization" (thin pointers) that's usually pessimal compared to other approaches. The downsides are significant and well-known (as mentioned above, Liskov substitution being undecidable is one consequence, but there are plenty of others). There's no point in making life harder for ourselves.
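For what "avoid it" can look like in practice, here's a Python sketch (all names invented) that replaces a subclass with composition: the wrapper delegates explicitly, so nothing inside the wrapped class can be overridden out from under it.

```python
class Logger:
    def __init__(self):
        self.lines = []

    def log(self, msg):
        self.lines.append(msg)


class TimestampLogger:
    """Composition instead of inheritance: we hold a Logger and
    delegate to it explicitly, so Logger's internals can never be
    accidentally overridden."""

    def __init__(self, logger, clock):
        self._logger = logger
        self._clock = clock  # clock injected so the example is testable

    def log(self, msg):
        self._logger.log(f"[{self._clock()}] {msg}")


base = Logger()
tlog = TimestampLogger(base, clock=lambda: "12:00")
tlog.log("hello")
print(base.lines)  # ['[12:00] hello']
```

The delegation is a little more typing than `class TimestampLogger(Logger)`, but the contract between the two classes is now exactly the public method calls you can see.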
It's much less obvious than you think. In many cases safely overriding a method requires correctly invoking the inherited method, and failing to do so can cause subtle problems.
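A small Python illustration of that point (hypothetical names): two overrides of the same method, one that correctly invokes the inherited implementation and one that silently drops the base class's cleanup.

```python
class Widget:
    def __init__(self):
        self.events = []

    def close(self):
        # Base class cleanup that subclasses are expected to preserve.
        self.events.append("base cleanup")


class GoodWidget(Widget):
    def close(self):
        # Correct: do the subclass's work, then let the base class
        # finish its cleanup.
        self.events.append("flush buffers")
        super().close()


class BadWidget(Widget):
    def close(self):
        # Subtle bug: the inherited method is never invoked, so the
        # base class's cleanup never runs -- and nothing complains.
        self.events.append("flush buffers")


g, b = GoodWidget(), BadWidget()
g.close()
b.close()
print(g.events)  # ['flush buffers', 'base cleanup']
print(b.events)  # ['flush buffers']
```

Whether `super().close()` belongs before or after the subclass's own work is itself part of the undocumented contract, which is the "non-obvious" part.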
Speaker here, a little late to the conversation though. Let me know if there are any particular questions, I'll read through everything later and respond.
It was honestly confusing to me. I was like, is the language named after some design philosophy that it is supposed to embody? Because then the title would make sense.
AFAICT Swift does the sensible thing to maintain backwards compatibility with ObjC: When Swift code calls Swift code, the dispatch is static (a jump, and some stack fiddling, when you're counting the change). But if it's interfacing with ObjC then it must necessarily resort to the ObjC runtime's message based dispatch, which is a bit slower.
Aren't non-inlined function calls in any language "slow"? I'm not seeing any details here, but as Swift is a compiled language with little support for runtime monkey-patching, I assume it's not doing anything especially ridiculous with normal function calls.
You'd have to define "slow" before you can answer that. I say this not because Go is the epitome of speed, but merely because I happen to have the number on hand from recent testing: Go can call an interface method with no arguments and no return value in a tight loop in about 5 nanoseconds. Is that fast? Is that slow? Depends on your definition. On the machine in question that's about 15 cycles. That strikes me as pretty good in the broad sense, but certainly 15 cycles slower than an inlined call.
(I'd like to compare it to a static function call but I haven't yet worked out how to write a static function call in such a way that I can run it in the profiler correctly but it doesn't inline down to nothing.)
If you put a dynamically dispatched function call in the middle of a tight loop you will slow it down significantly. As I say in the talk, in most code it just doesn't matter. It is all relative. A function call is fast relative to a context switch or an I/O operation.
If you think about what is required to call an instance method on a class, it is significant. Roughly: read the address of the object, read the pointer to the class, read the class's vtable for the function pointer, then set registers and stack values and make the function call.
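Those lookup steps can be mimicked explicitly in Python, which performs an analogous lookup on every method call; the helper below is purely illustrative of the sequence, not how any real runtime is implemented.

```python
class Animal:
    def speak(self):
        return "..."


class Dog(Animal):
    def speak(self):
        return "woof"


def dynamic_call(obj, name):
    # Roughly the steps described above: find the object's class, walk
    # its method table (here, the MRO) for the function pointer, then
    # make the call with the object as the receiver.
    for cls in type(obj).__mro__:
        if name in cls.__dict__:
            return cls.__dict__[name](obj)
    raise AttributeError(name)


print(dynamic_call(Dog(), "speak"))     # woof
print(dynamic_call(Animal(), "speak"))  # ...
```

A statically dispatched call skips all of that: the compiler already knows the target address, and can often inline the body entirely.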
In ObjC, method calls actually go through objc_msgSend to do the dynamic dispatch.
Computers are fast and they do these things quickly but not having to do them is much faster.
I was just going to down vote you, that's all trolling deserves on HN, but on the off chance that you're just ignorant, and not actually trolling:
1) there are 5 OSes with more than 100 million users. Apple makes two of them
2) there are 4 major web browsers out there, apple makes one of them
3) there are 3 major office suites out there, Apple makes one of them
Apple is one of the biggest software companies in the world. I suspect they are second only to Microsoft in that regard, and yes, I do include Google in that assessment.
They do, usually edge on. Every year they get more focused on what their hardware looks like from the side and seem less concerned with how their software works when you're facing the screen.