It is a mysterious thing how influencers of various kinds
in software engineering have to jump from one new thing to
the next. Even if the "new" new was created a long
time ago.
Using old systems is in no way fashionable.
Everyone wants the latest whatnot on their resume to be more attractive
for future job opportunities.
We are forever stuck in a fast fashion web.
I like to think about typed vs. untyped languages.
For a long time they both existed in relative harmony
(much more so than emacs vs vi).
Then it became THE THING to use untyped languages.
In part because "having to write the type in the
code was far too much work".
We got young software developers who had never used a
typed language but who had joined the "church of the untyped".
Skipping ahead, some influencer discovers typed languages
and how they solve many problems.
What about that.
Choices are good.
We can make informed choices as to what tool makes the
most sense in the context of what we are trying to solve.
(and their fit with the team, the legacy code, etc.)
Is it so mysterious, though? Isn't it "simply" the thesis - antithesis - synthesis cycle? And this cycle needs time: years, sometimes decades.
OO starts in the 1970s: Smalltalk was born in 1972, yet C++ came in 1985 and Java in 1995.
Hoare described CSP in 1978; Go mainstreamed it in 2009.
SQL started in the 1970s and reached its first standard in the 1980s.
Or take the evolution from SGML to XML to JSON with JSON Schema...
Concepts need time to mature.
And merging concepts (like duck typing and static type analysis in Python) additionally needs both base concepts to be mature enough first.
Likewise, at a personal level, brains need time to absorb the concepts. So of course "younglings" only know part of the world.
And on the other hand, mature technologies often have evolved their idiosyncrasies, their "warts", and they sometimes paint themselves into a corner: C++ with ABI stability; Perl with sigils; Java with "Java only", a poor JNI, and "xml xml xml"; UML with "graphical language"; many languages with no syntax baseline marker ("this is a C++-95 file", "this is a Python 2 file"), freezing syntax...
So imho it's not simply a mysterious urge for the shiny new stuff, but a mixed bag of overwhelm ("I'd rather start something new than dig through all that's here") and deliberate decisions ("I do know what I'm talking about, and X and Y can't be 'simply evolved' into Z, so I start Z").
I feel the main reason this cycle still needs those decades to loop back is that the world underwent its digital revolution during the current iteration. We can't do synthesis without breaking everything, so we instead spawn sub-cycles on top - and then most of those end up frozen mid-way for the same reason, and we spawn sub-cycles on top of them, and so on. That's why our systems look like geological layers: XML at the bottom, JSON in the middle, and at the top JSON with schemas, plus a little Cambrian explosion of alternatives forming its antithesis... And looking at it, SGML starts looking sexy again. And sure enough, you can actually use it today - but guess how? That's right, via an npm package[0].
Wonder if we ever get a chance to actually pull all the accumulated layers of cruft back - to go through synthesis, complete the cycle and create a more reasonable base layer for the future.
There's rarely, if ever, a time when it's cheaper to pull that much back; some people do it, but they are overwhelmed by the inertia of the accumulated cruft.
Having lived through all that, I think it's worth remembering that old typed languages were no match for modern ones. I'm not talking about arcane environments here, only about practical and widely used ones.
People didn't really go just "untyped". Untyped was simply quicker to become easier to use and less screwed up than the old typed ones. Or you may even view it as paving the road. Typed slowly caught up with the same quality, under new names.
This nuance is easy to miss, but set the cut-off date to 2005 and recall what your "options" were. They weren't that shiny.
I remember a lot of praise for duck "typing" as if it was an innovation, and claims you should just write unit tests to catch type errors.
People kept glossing over the fact that writing typed code would be so much easier than writing untyped code with enough manual tests to catch type errors, because no one did the last bit. Things just broke at runtime instead.
Agreed. I caught a bug in Python code I wrote yesterday by adding a typehint to a variable in a branch that has never been executed. MyPy immediately flagged it, and I fixed it. Never had to watch it fail.
I put typehints on everything in Python, even when it looks ridiculous. I treat MyPy errors as show-stoppers that must be fixed, not ignored. This works well for me.
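A minimal sketch of the kind of bug this workflow catches (the function and names here are hypothetical, not from the comment above):

```python
def parse_port(value: str) -> int:
    """Convert a config string like "8080" to a port number."""
    port: int = int(value)  # the annotation lets a checker police every later assignment
    if port < 0:
        # A rarely-executed branch: if a refactor accidentally wrote
        # `port = "0"` here, a unit test would have to exercise this branch
        # to notice, while mypy reports the bad assignment without running it.
        port = 0
    return port

print(parse_port("8080"))  # prints 8080
```

The point is that the checker verifies the dead branch as thoroughly as the hot path, which no realistic test suite does.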
Yes, this always struck me as completely bananas. Types are a vastly superior form of test for certain conditions.
Duck typing with type inference would be nice but seems to be completely esoteric. You could have both the flexibility of being able to write "func(arg)" without having to specify the type, and the constraint that if you call arg.foo() in your function the compiler should enforce that you actually have a foo method on arg.
(special side-eye to the people materializing methods and properties all over the place at runtime. This seems to have been a rails/python thing that is gradually going out of fashion.)
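For what it's worth, Python's `typing.Protocol` is roughly this combination: callers still just write `func(arg)` with no nominal type required, but a static checker verifies the argument really has the methods the function uses. A small sketch (the class names are made up):

```python
from typing import Protocol

class Quacks(Protocol):
    def quack(self) -> str: ...

def greet(duck: Quacks) -> str:
    # Structural check: any object with a matching quack() is accepted,
    # and mypy rejects arguments that lack one -- duck typing, but verified.
    return duck.quack().upper()

class Mallard:  # note: no inheritance from Quacks needed
    def quack(self) -> str:
        return "quack"

print(greet(Mallard()))  # prints "QUACK"
```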
> I remember a lot of praise for duck "typing" as if it was an innovation, and claims you should just write unit tests to catch type errors.
> People kept glossing over the fact that writing typed code would be so much easier than writing untyped code with enough manual tests to catch type errors, because no one did the last bit. Things just broke at runtime instead.
That was the dumbest thing ever. Exchanging automation for lots of manual work was supposed to be innovation?
So many tradeoffs, just so someone could occasionally write a clever, unmaintainable bit of brainfuck in the production app.
I love Ruby for relatively small scripts, but I wouldn't want to attempt anything very large with it. Once my scripts get too long, I find I need to start adding manual type checking to keep from tripping over myself.
I think it's a "forgetting" again: Hindley-Milner type inference dates back to 1969! And still very few languages let you use it. Some have wisely added a very weak type inference (var/auto).
Let's not forget the massive handicap that there is one and only one programming language that the browser allows: Javascript.
I think HM is simply not practical. You don’t want your types to be a multidimensional logic puzzle solved by a computer, cause you want to reason about them and you are much weaker than a computer. You want clear contracts and rigidity that they provide to the whole code structure. And only then to automatically fill up niches to avoid obvious typing. You also rarely want Turing Completeness in your types (although some people are still in this phase, looking at some d.ts’es).
Weak var/auto is practical. An average program and an average programmer have average issues that do not include “sound type constraints” or whatever. All they want is to write “let” in place of a full-blown declaration that must be named, exported, imported and then used with various modifiers like Optional<T> and Array<T>. This friction is one of the biggest reasons people may turn to the untyped mess. Cause it doesn’t require this stupid maintenance every few lines and just does the job.
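That weaker, local flavor is what most modern checkers already do: annotate the function boundary, let the tool infer the locals. A small Python sketch of the idea (the markup factor is invented):

```python
def total(prices: list[float]) -> float:
    # No annotation needed on `subtotal`: a checker infers float from
    # sum(), so only the function signature carries explicit types.
    subtotal = sum(prices)
    return subtotal * 1.2  # hypothetical 20% markup
```

The contract lives at the boundary; the body stays as terse as untyped code.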
Very few people use HM type systems even today, though.
I think it really is worth considering that Java effectively didn't have sum types until, I think, version 17, and nowadays, many modern and popular statically typed languages have them.
I have no issue at all recommending typescript over Python today. This is not something I’d ever do before ~2020. I completely agree this has nothing to do with fashion and everything with typescript and v8 being amazing despite the ugly JavaScript in between.
In the meantime, the parallel universe (to HN at least) of dotnet happily and silently keeps delivering… can’t wait for the C# renaissance.
I've been doing C# for a long time, and throughout C# has quietly got on with delivering.
No drama over Generics, Exceptions, Lambdas/Closures, functional vs procedural syntax, etc. C# either did a lot of this well from the start, or the features were quietly added to the language in broadly sensible ways. The package management isn't perfect, but I don't think any ecosystem has mastered that yet. It's more sensible than NPM at least. (But what isn't!)
And all the while, a lot of work has been done for performance, where now I'd trust it to be as fast as almost anything else. I'm sure the SIMD wielding C/Rust experts can out-perform it, but for every day code C# is writeable, readable and still performs well.
Okay, so C# 1.0 was a clunky beast; it wasn't until version 2, when they fixed things like the capture of variables in closures, that it became pleasant to write. I don't know if it was Microsoft's ownership or the 1.x C# experience that put people off.
It was definitely the fact that it was Windows-first, IIS-first, and smelled of Enterprise Software (i.e. Not Cool(tm)). Mono was workable but second tier, though I feel it did help convince MS that the dotnet project was worth pursuing.
Today I'm running a home assistant with a side service written in dotnet on a Raspberry Pi 4 (Raspbian), in Docker containers. If it didn't advertise the dotnet version in the logs, I'd never know.
Most of the mainstream untyped languages are "children of the 1990s", the era of a delusion that PCs can only get faster and that a computer is always plugged into a free power source.
Then along came the iPhone that prohibited inefficient languages.
So these cycles don't just happen for the sake of cycles, there are always underlying nonlinear breakthroughs in technology. We oscillate between periods of delusions and reality checks.
I'm not sure the runtime efficiency is the critical determinant: assembly is after all an untyped language (+), Javascript is not really typed and is the major language run on the iPhone (on every web page!).
It was more of a natural language/DWIM movement, especially from Perl.
(+ one of these days I will turn my "typesafe macro assembler" post-it note into a PoC)
Yes, Apple made an exception for JavaScript because of the browser. But in the early days of iOS, dynamic languages were prohibited for power efficiency reasons.
With Obj-C it was only dynamic method calls (message sending), the rest was very static and native-code too. I think Apple was after dynamic VM-based languages specifically.
Objective-C has plenty of dynamism, basically you can do almost everything as in Smalltalk.
Apple was after nothing, they got NeXT in a reverse acquisition, thus Objective-C.
Had they kept their Copland plan, or acquired Be, C++ would be the lucky one.
They only introduced Java in OS X because initially they were afraid the Mac OS community, raised on Object Pascal and C++, wasn't that keen on adopting Objective-C.
Once they saw otherwise, Java support was killed, and the Apple specific extensions given to Sun/Oracle.
Apple was really after "iOS plus downloadable code for VMs/JITs that we cannot control is a recipe for disaster". And it was.
Partly due to iOS "security" being based on App Store checks rather than technical means (private iOS APIs, calls allowed only under specifically claimed features, and just bugs). Partly due to the fact that running something locally is naturally prone to local vulnerabilities (the same category as "never run curl | bash").
The majority of downloadable code, since day one of the iOS App Store, has been natively compiled code from C, C++, Objective-C, and nowadays Swift.
The only dynamic VM Apple uses is the JavaScript one.
Even the original "let's make HTML5 apps" push went nowhere; given the reception, it took less than one year for the official Objective-C SDK to enter the scene, and the whole HTML5-apps idea was thrown away.
Sorry but no. Obj-C used to be my bread and butter, I knew it pretty well (alas, poor Yorick! I knew him, Horatio). The only truly dynamic part was message sending and even that got much stricter in later versions of the language, i.e. you wouldn't be able to call any method on any object like before without at least explicitly casting to `id`.
But message sending included properties and KVO/KVC that basically every object in an iphone/osx program used. It wasn’t long until they started to @synthesize these cause everything was a property. Yes, full-C modules could be written and C “interop” was just code, but the main use case was dynamic as hell with a little type checking peppered over it. They promised that objc_msgSend was fast and assembly, of course, but that is dynamic dispatch in basically every line of code still.
Sorry, but that seems like not being knowledgeable about the full spectrum of Objective-C's capabilities, especially to the extent they were used in NeXTSTEP and the first round of OS X frameworks.
It's also part of the economic cycle: centralized architectures thrive in a monopolistic oligarchy, and would quickly fall out of fashion after some trustbusting.