Having lived through all that, I think it’s worth remembering that old typed languages were no match for modern ones. I’m not talking about arcane environments here, only about practical and widely used ones.
People didn’t really go just “untyped”. Untyped simply was quicker to become easier to use and less screwed up than the old guard. Or you may even view it as paving the road. Typed slowly caught up to the same quality under new names.
This nuance is easy to miss, but set the cut-off date to 2005 and recall what your “options” were. They weren’t that shiny.
I remember a lot of praise for duck "typing" as if it was an innovation, and claims you should just write unit tests to catch type errors.
People kept glossing over the fact that writing typed code would be so much easier than writing untyped code with enough manual tests to catch type errors, because no one ever did the last bit. Things just broke at runtime instead.
Agreed. I caught a bug yesterday in Python code I wrote by adding a type hint to a variable in a branch that has never been executed. MyPy immediately flagged it, and I fixed it. Never had to watch it fail.
I put type hints on everything in Python, even when it looks ridiculous. I treat MyPy errors as show-stoppers that must be fixed, not ignored. This works well for me.
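A minimal sketch of the kind of bug this workflow catches (hypothetical code, not the commenter's actual script): the bad assignment sits in a branch that never runs, so the program works at runtime, yet the annotation lets MyPy flag it without ever executing that path.

```python
def total_cost(items: list[int], discount: bool) -> int:
    total: int = sum(items)
    if discount:
        # This branch has never run in production, but the "int"
        # annotation above lets a checker flag it immediately,
        # e.g. MyPy: Incompatible types in assignment
        total = "90% off"  # latent bug, caught statically
    return total

print(total_cost([10, 20], discount=False))  # -> 30, bug stays hidden at runtime
```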
Yes, this always struck me as completely bananas. Types are a vastly superior form of test for certain conditions.
Duck typing with type inference would be nice but seems to be completely esoteric. You could have both the flexibility of being able to write "func(arg)" without having to specify the type, and the constraint that if you call arg.foo() in your function, the compiler enforces that arg actually has a foo method.
(Special side-eye to the people materializing methods and properties all over the place at runtime. This seems to have been a Rails/Python thing that is gradually going out of fashion.)
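Statically checked duck typing isn't entirely esoteric anymore: Python's `typing.Protocol` (and similarly Go interfaces or TypeScript's structural types) approximates it. You still name the shape rather than having it inferred, so this is only a partial version of what's described above; the names here are made up for illustration:

```python
from typing import Protocol

class HasFoo(Protocol):
    def foo(self) -> str: ...

def func(arg: HasFoo) -> str:
    # the checker enforces that arg really has a foo() method
    return arg.foo()

class Duck:  # no inheritance from HasFoo needed
    def foo(self) -> str:
        return "quack"

print(func(Duck()))  # structural match: accepted, prints "quack"
# func(42) would be rejected statically: int has no foo()
```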
> I remember a lot of praise for duck "typing" as if it was an innovation, and claims you should just write unit tests to catch type errors.
> People kept glossing over the fact that writing typed code would be so much easier than writing untyped code with sufficient manual tests to catch type errors, because no one did the last bit. Things just broke runtime instead.
That was the dumbest thing ever. Exchanging automation for lots of manual work was supposed to be innovation?
So many tradeoffs, just so someone could occasionally write a clever, unmaintainable bit of brainfuck in the production app.
I love Ruby for relatively small scripts, but I wouldn’t want to attempt anything very large with it. Once my scripts get too long, I find myself adding manual type checking to keep from tripping over myself.
I think it's a "forgetting" again: Hindley-Milner type inference dates back to 1969! And still very few languages let you use it. Some have wisely added a very weak form of type inference (var/auto).
Let's not forget the massive handicap that there is one and only one programming language that the browser allows: JavaScript.
I think HM is simply not practical. You don’t want your types to be a multidimensional logic puzzle solved by a computer, because you want to reason about them yourself and you are much weaker than a computer. You want clear contracts and the rigidity they provide to the whole code structure, and only then automatic filling of niches to avoid obvious typing. You also rarely want Turing completeness in your types (although some people are still in this phase, looking at some d.ts’es).
Weak var/auto is practical. An average program and an average programmer have average issues that do not include “sound type constraints” or whatever. All they want is to write “let” in place of a full-blown declaration that must be named, exported, imported and then used with various modifiers like Optional<T> and Array<T>. This friction is one of the biggest reasons people may turn to the untyped mess, because it doesn’t require this stupid maintenance every few lines and just does the job.
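In Python terms, a rough sketch of that “weak but practical” inference: modern checkers infer the boring local types, so you only annotate the public boundary (names below are made up for illustration):

```python
# No annotations needed: a checker infers these locals.
names = ["ada", "grace"]              # inferred: list[str]
lengths = {n: len(n) for n in names}  # inferred: dict[str, int]

# Misuse is still caught statically, with zero declarations, e.g.:
# lengths["ada"] + "!"   -> checker error: int + str

def shortest(xs: list[str]) -> str:   # annotate only the boundary
    return min(xs, key=len)

print(shortest(names))  # -> "ada"
```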
Very few people use HM type systems even today, though.
It really is worth considering that Java effectively didn't have sum types until version 17 (sealed interfaces), and nowadays many modern and popular statically typed languages have them.
I have no issue at all recommending TypeScript over Python today. This is not something I’d ever have done before ~2020. I completely agree this has nothing to do with fashion and everything to do with TypeScript and V8 being amazing, despite the ugly JavaScript in between.
In the meantime, the parallel universe (to HN at least) of dotnet happily and silently keeps delivering… can’t wait for the C# renaissance.
I've been doing C# for a long time, and throughout, C# has quietly got on with delivering.
No drama over Generics, Exceptions, Lambdas/Closures, functional vs procedural syntax, etc. C# either did a lot of this well already or had it quietly added to the language in broadly sensible ways. The package management isn't perfect, but I don't think any ecosystem has mastered that yet. It's more sensible than NPM at least. (But what isn't!)
And all the while, a lot of work has been done on performance, to the point where I'd now trust it to be as fast as almost anything else. I'm sure the SIMD-wielding C/Rust experts can out-perform it, but for everyday code C# is writeable, readable and still performs well.
Okay, so C# 1.0 was a clunky beast; it wasn't until version 2, which fixed things like the capture of variables in closures, that it became pleasant to write. I don't know if it was Microsoft's ownership or the 1.x C# experience that put people off.
It was definitely the fact that it was Windows-first, IIS-first and smelled of Enterprise Software (i.e. Not Cool(tm)). Mono was workable but second tier, though I feel it did help convince MS that the dotnet project was worth pursuing.
Today I'm running a home assistant with a side service written in dotnet on a Raspberry Pi 4 (Raspbian) in Docker containers. If it didn't advertise the dotnet version in the logs I'd never know.
Most of the mainstream untyped languages are "children of the 1990s", the era of the delusion that PCs could only get faster and that a computer is always plugged into a free power source.
Then along came the iPhone that prohibited inefficient languages.
So these cycles don't just happen for the sake of cycles, there are always underlying nonlinear breakthroughs in technology. We oscillate between periods of delusions and reality checks.
I'm not sure runtime efficiency is the critical determinant: assembly is, after all, an untyped language (+), and JavaScript is not really typed yet is the major language run on the iPhone (on every web page!).
It was more of a natural language/DWIM movement, especially from Perl.
(+ one of these days I will turn my "typesafe macro assembler" post-it note into a PoC)
Yes, Apple made an exception for JavaScript because of the browser. But in the early days of iOS, dynamic languages were prohibited for power efficiency reasons.
With Obj-C it was only the dynamic method calls (message sending); the rest was very static, and native code too. I think Apple was after dynamic VM-based languages specifically.
Objective-C has plenty of dynamism; basically you can do almost everything you can in Smalltalk.
Apple was after nothing, they got NeXT in a reverse acquisition, thus Objective-C.
Had they kept their Copland plan, or acquired Be, C++ would be the lucky one.
They only introduced Java in OS X because initially they were afraid the Mac OS community, raised on Object Pascal and C++, wasn't that keen on adopting Objective-C.
Once they saw otherwise, Java support was killed, and the Apple-specific extensions were given to Sun/Oracle.
What Apple was really after was “iOS plus downloadable code for VMs/JITs that we cannot control is a recipe for disaster”. And it was.
Partly because iOS “security” was based on App Store checks rather than technical means (private iOS APIs, calls allowed only under specifically claimed features, and just bugs). Partly because running something locally is naturally prone to local vulnerabilities (the same category as “never run curl | bash”).
The majority of downloadable code, since day one of the iOS App Store, has been natively compiled code from C, C++, Objective-C, and nowadays Swift.
The only dynamic VM stuff used by Apple is JavaScript.
Even the original “let's make HTML5 apps” push went nowhere: given the reception, it took less than a year for the official Objective-C SDK to enter the scene, and the whole HTML5 apps idea was thrown away.
Sorry, but no. Obj-C used to be my bread and butter; I knew it pretty well (alas, poor Yorick! I knew him, Horatio). The only truly dynamic part was message sending, and even that got much stricter in later versions of the language, i.e. you could no longer call any method on any object like before without at least explicitly casting to `id`.
But message sending included the properties and KVO/KVC that basically every object in an iPhone/OS X program used. It wasn’t long until they started to @synthesize these, because everything was a property. Yes, full-C modules could be written and C “interop” was just code, but the main use case was dynamic as hell with a little type checking peppered over it. They promised that objc_msgSend was fast, hand-tuned assembly, of course, but that is still dynamic dispatch in basically every line of code.
Sorry, but that seems like not being knowledgeable about the full spectrum of Objective-C capabilities then, especially the extent to which it was used in NeXTSTEP and the first round of OS X frameworks.