I am very skeptical of research about software development. Whether it's about tooling (static typing, etc.) or process (pair programming, etc.).
It's really hard (I would go so far as to say impossible) to set up useful metrics and it's really hard to create comparable scenarios. And even if you had those, very few things apply generally to all fields of software, let alone the different types of personalities that developers have, even if the stereotypes have a bit of truth to them.
Software "engineering" (there's little to no actual engineering in this area) is still in its infancy. Civil and mechanical engineering are millennia old, and we still had the Tacoma Narrows Bridge less than a hundred years ago; electrical engineering is roughly a century old, and things still self-ignite on the regular, and there are all kinds of weird and poorly understood quantum thingos at the really small scales; software engineering is just decades old.
Everything that's written on the subject -- every bit of research, fad, and personal anecdote -- is just one more experiment flailing around in the darkness while we try to figure out what it means to "engineer" software.
I don't think that makes research about it valueless. Some is good, some is not, some will get revised over time because science and engineering gradually tack towards discovering natural laws. But also, yes, an enormous amount of human effort is wasted on following practices that are often not well-founded.
> Everything that's written on the subject -- every bit of research, fad, and personal anecdote -- is just one more experiment flailing around in the darkness while we try to figure out what it means to "engineer" software.
I think Ian Sommerville does a pretty good job in the book "Software Engineering" (I have the 10th edition), from the section 1.1.1:
> Software engineering is an engineering discipline that is concerned with all aspects of software production from the early stages of system specification through to maintaining the system after it has gone into use. In this definition, there are two key phrases:
> 1. Engineering discipline: Engineers make things work. They apply theories, methods, and tools where these are appropriate. However, they use them selectively and always try to discover solutions to problems even when there are no applicable theories and methods. Engineers also recognize that they must work within organizational and financial constraints, and they must look for solutions within these constraints.
> 2. All aspects of software production: Software engineering is not just concerned with the technical processes of software development. It also includes activities such as software project management and the development of tools, methods, and theories to support software development.
These quotes don't do it justice. I'd really recommend this book.
Thanks for the recommendation! I've added it to my to-read list.
I will note though that "Changes from the 9th edition" on page 4 reads to me like it supports my point more so than contradicts it. I'd also add my favorite quote about engineering: "any idiot can build a bridge that stands; it takes an engineer to build a bridge that barely stands". That's the engineering that is as yet unsettled in software. Any of us, given enough time and financial support, can eventually build some software that does something. Approximately none of us, however, can agree on the right way to build such a thing (where "right way" means something like "most efficient, cost-effective, and future-proof, according to the users' and owners' needs and specifications").
Hell, pick 10 people from HN and you'll get 12 different recommendations for which language to use.
Well, this also depends on parameters: what languages the team members know, what languages the problem domain maps to well, how much capital we have, what the schedule is, the expected rate of modifications to the system, load, and security and safety hardening requirements.
All bridges, tunnels, houses, roads, pipes, and dams are more similar than websites, Android, databases, videogames, and embedded software.
Mostly because the former had hundreds of years to settle down somewhat. (And of course there are huge advances due to materials science advances and because of better structural modeling, due to software advances.) And that settling down allows for time to develop models for the common scenarios. That's why there are engineering standards (building codes).
> All bridges, tunnels, houses, roads, pipes, and dams are more similar than websites, Android, databases, videogames, and embedded software.
Are you sure about this? I feel like somewhere between "dirt driveway I dug out last weekend" and the Hoover Dam, they would've had to figure some hard shit out. I could probably dig a tunnel with a shovel, but it wouldn't survive the next rain, much less be able to go under the ocean or support a vacuum.
I mean, outsiders probably couldn't tell apart a WordPress template from whatever gee-whiz app. It's all just computer stuff. And as a web dev, I could probably figure out an Android app given a weekend or two, or learn a new SQL dialect in a few hours. I would have no idea how to, say, frame a window for a house or calculate the right materials and techniques to move water from here to there, even to Roman standards.
What I mean is if you have a parcel of the Earth and you want to make a road through it you open a specially designed software (Autodesk Civil3D) and you can basically do it. Whether it's a dirt road or a 6 lane highway (or a tunnel or a bridge). Finite element analysis is magic after all :)
Sure, that tunnel might not survive, but the way to fix it is pretty clear: more structural support, and then protecting the structures from the environment (corrosion, soil erosion from water and wind, underwater currents, etc.). And the inherent nature of brick-and-mortar stuff is that it's hard to change after the fact (it's hard to modify a skyscraper to have a stronger core that resists higher wind loads, for example), so it basically forces the customer to commit to some requirements (e.g. how many rooms do you need, how much traffic the bridge/tunnel has to carry, how big a flood the dam has to be able to withstand), and this has the downstream effect of drastically pruning the search space. And then cost estimation is usually simpler too, because we have measurements (this long, this much mass, this amount of stainless steel of this grade, this amount of concrete of this type, at this availability in this region, etc.).
> frame a window for a house
Sure, but there are about ten options; you can go through them, pick the one you like, and get a craftsman/contractor.
Most IT/software problems reduce to very straightforward ones once we have the same degree of constrainedness, but mostly we don't.
Wait, so just because there's fuzziness to the data you're going to instead defer to what? What's the proposed alternative/status quo that you're giving a data-less free pass?
A decision driven by bad data is IMO worse than a data-less decision.
Data grants a decision an authority that a gut-feel-driven one doesn't have. It is hard to argue against evidence, as it should be, but that assumes a certain level of quality in the evidence.
Second, if practice doesn't match the expected outcome, the first thing you will look at is what the team is doing wrong, not whether the decision itself is working.
That said, the parent is far from unique in his skepticism, so I think the problem is more often reversed in the industry: having some data, even flawed data, can help your company decide to try something new.
Think of things like "lines of code written" or "bugs closed" as measurements of productivity or quality. These are real things that people have used in real published studies - and any conclusion drawn from them is obviously bogus.
There is more than some fuzziness to the data! At some point, measuring something sufficiently poorly is worse than not measuring it at all, and the empirical efforts I've seen to evaluate things I care about (programming languages, static types, etc) all fall into that category.
The alternative is to rely on rich experience and good taste. If you want to make it a bit more rigorous, you can approach this in terms of qualitative research—which makes sense for academic research, but isn't necessarily the best way to learn for yourself or to design tools.
Expert experience is far more effective at capturing complex, multi-dimensional phenomena than anything we can reduce to a small number of easily-gathered quantitative metrics.
By looking at the research? By understanding what kind of studies and experiments are actually viable? By evaluating what we can measure effectively, what we can't, and how good (or bad!) our proxy metrics can be?
As a field, we really need to understand the inherent limits of quantitative methods.
I think fuzziness understates by far the wild variations that uncontrolled (mostly uncontrollable) elements often create.
Very few of these kinds of studies follow the scientific method well enough to be even vaguely useful, let alone generally applicable. How many have you ever seen successfully reproduced?
So it's just data, but with small sample value, biased (since humans are inherently biased), and not filtered by the scientific method. How is that better than proper research?
Of course personal experience is very valuable, precisely because it's a (limited form of) research.
Your “gut” instincts can also be a remarkably effective tool in the right situations. You have to feed them enough data and be careful of systematic biases, but your instincts can be brilliant.
I had an incredible moment in a job interview: for context, I’ve been programming for about 30 years and 10 of those years have involved a lot of JavaScript. I was given a timed debugging problem to test my skills: “Here is some JS code with failing tests. Go fix the bugs.” Before I even ran the tests I scrolled through the code to get a sense of it. My instincts tweaked on one section. “This code smells. I don’t trust it.” I said. On second glance - “Oh yeah this is totally buggy”. Sure enough, I was right. The interviewer was blown away.
There’s no way I could do that consciously. Science is great for a lot of things, but when we don’t have science, we could do a lot worse than trusting our intuition.
Some people are better than others; the evidence is there in how some people are more successful than other people.
You can attempt to dismiss that as being a small sample value, but that's the reality.
The reason people want to be so data driven is because it takes the decision making out of their hands, which takes the responsibility for poor decisions out of their hands.
But when it's all said and done, some people are better than others, not all "small sample values" are equal.
You cannot find causation in data, only correlation along with an attempt to identify the causation.
It's another version of cargo cult programming. The data suggests that if a plane is present, food will come. We want food to come, so let's build a plane, with no real understanding of where the food actually comes from.
> Better at what though?
Whatever the fuck it is you're trying to evaluate. Are you evaluating musical aptitude? Then the answer to your question is that some people are better at music. Are you evaluating the ability to tear down an engine? Then the answer to your question is that some people are better at tearing down an engine.
Linus Torvalds's opinion on kernel development is not equivalent to that of a 16-year-old web developer. Dismissing opinions as "small sample value" _completely_ misses the forest for the trees.
Finding correlation in data is how we find out about the real world though. There are extra steps needed there, but let's not throw out the baby with the bathwater. Otherwise we should just jack in science, which AFAIK has been a spectacularly successful project. And it's actually this study of collected data that will defeat cargo cults when their hypotheses don't stand up to scrutiny. It's only reliance on presumed experts and anecdotal evidence that shores them up, from the original cargo cults to conspiracy theorists like Alex Jones to software orthodoxy like TDD, SOLID, and Scrum.
Likewise anecdotal success of an individual is also just correlation but with only a single data point.
I’d also agree that expert opinions are great but they’re also not infallible or necessarily generalisable. Nor is it necessarily easy to tell who is an expert and who is a charlatan. Which is why we should use them as the basis for investigating the larger picture.
Similarly both expert opinion and singular anecdotes can lead to cargo culting.
Care to have another go without being intentionally misleading about what I actually wrote? Otherwise I don't think there's much point in engaging with you.
Snipping out parts of sentences and replying to them out of context to the whole is misrepresentation. In particular your replies don’t make any sense in relation to the rest that you conveniently left out. I dunno what you think that achieves but enjoy arguing with yourself I guess!
>Wait, so just because there's fuzziness to the data you're going to instead defer to what?
I'm constantly amazed at how people are willing to throw away data if it's not perfect. All you need is a bit of signal, and it's, literally, better than nothing.
No, it's not better than nothing. Having played out the exact situation you're describing: if you have some data, but it has no firm causal link between your input and your output, it is simply a distraction and a waste of time and effort to utilize it. You are better off experimenting purely randomly, or from first principles, in that case.
> Hype: "Formal Verification is a great way to write software. We should prove all of our code correct."
> Shower: Extensive literature review showing that formal methods are hard to learn, extremely expensive to apply, and often miss critical bugs.
Glad you had the caveat "Written in 2000". What was hard and perhaps not worthwhile in 2000 has changed. Computers are a bit faster and software is more pervasive.
A colleague of mine was involved in the formal verification of a really tricky cellular network bug. That was around year 2000. It was hard but was still necessary and successful.
GM has a buggy car. That wouldn't have been an issue in 2000. Now it's an issue that forced them to withdraw a car from the market after about two weeks. Not saying that formal methods could have for sure avoided that, but I suspect that it very well may have.
I think the author should have expanded a bit upon "extremely expensive to apply", because that phrase is technically true but really hides the actual problems with formal verification and why it is seldom used in practice.
People unfamiliar with formal verification think it means "the code is provably correct", but that's not what it does. Formal verification can prove that your code correctly implements a specified standard. You still have to define and write that standard. And that's where the problems begin:
- Standards themselves can have bugs, or the standard you wrote is not what you actually wanted. Note that this is the same problem that often occurs in code itself! You've just pushed it up a level, and into an even more arcane and less-debuggable language to boot (formal standards are generally much, much harder to write, debug, and reason about than code)
- The standard is constantly changing as feature requests come in, or the world around you changes. Modern software engineering mostly consists of changing what you or somebody else already built to accommodate new features or a new state of the world. This plays very badly with formal verification.
Formal verification can work in areas where you have a long time to do development, or where your problem space is amenable to having a formal standard (like math problems). In most other cases it just runs completely opposite to the actual problems that software engineering is trying to solve.
A lot of the types of formal methods that Hillel Wayne (the author) writes about are the opposite of what you describe—you write a specification which is provably correct, and then it's up to you to translate that specification into code. The error-prone part is the translation more than the specification.
It amounts to the same thing: it takes a long time to develop, the specification can get out of sync with the code, and once out of sync you now have no provable characteristics of the new system.
Though this does make me wonder: If there exist some formal methods that can prove that code implements a standard, and others that can prove that the standard is correct, it seems like there ought to be something in the middle that can prove both.
> Formal verification can work in areas where you have a long time to do
> development, or where your problem space is amenable to having a formal
> standard (like math problems). In most other cases it just runs completely
> opposite to the actual problems that software engineering is trying to solve.
Have you actually tried, or is this just the standard line?
I've heard this from people I've worked with, and when I followed up to ask what kinds of projects they've worked on with Agda, Idris, Coq, Lean, whatever, they had nothing.
I don't have anything either, other than that I've toyed around with some of these systems a little bit. But it seems to me like there's a lot of potential in dependently-typed programming, and we just don't have a lot of experience with it -- which is fine, but that's a very different situation than what seems to be the standard line, "all formal methods take a ton of time and are only viable in a few niche projects" (what are people imagining here? NASA space probes and stacks of binders? really it doesn't seem obvious to me, I'm not trolling).
I'm not sure I agree with your conclusion. There's a large grey area in the formal methods space where general software engineering thrives (in my opinion), especially the "lightweight formal methods" variety. It doesn't go for complete proofs, but instead pursues a style of exhaustive (or approximately exhaustive) checking. Going down this route puts a very positive design pressure on your applications, as well as on your thinking in general.
The big payoff in formal methods is learning to think abstractly (imho). Even if you don't make a full-blown model, sketching out the problem in something like TLA+ can be extremely valuable just by forcing you to think about the modeling independently of the code. Even in the world of general software engineering, being able to reframe requirements as temporal invariants has felt something like a superpower.
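To make "exhaustive checking" concrete, here is a toy, hand-rolled sketch in Rust of what a model checker does under the hood (an illustration of the idea only, not how TLA+/TLC is actually implemented): two processes each perform a non-atomic increment, and a breadth-first exploration of every interleaving reveals the classic "lost update" that violates the invariant "final counter == 2".

```rust
use std::collections::{HashSet, VecDeque};

// A complete state of the toy system. Each process runs a tiny program:
// step 0 = read the shared counter into a local tmp, step 1 = write
// tmp + 1 back, step 2 = done.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct State {
    pc: [u8; 2],   // program counter per process
    tmp: [i32; 2], // each process's local copy of the counter
    counter: i32,  // the shared counter
}

// Advance process `p` by one step, or None if it has finished.
fn step(s: State, p: usize) -> Option<State> {
    let mut n = s;
    match s.pc[p] {
        0 => { n.tmp[p] = s.counter; n.pc[p] = 1; }     // read
        1 => { n.counter = s.tmp[p] + 1; n.pc[p] = 2; } // write
        _ => return None,                               // done
    }
    Some(n)
}

// Breadth-first exploration of every interleaving, collecting the
// counter value in every terminal state.
fn reachable_final_counters() -> HashSet<i32> {
    let init = State { pc: [0, 0], tmp: [0, 0], counter: 0 };
    let mut seen = HashSet::new();
    seen.insert(init);
    let mut queue = VecDeque::new();
    queue.push_back(init);
    let mut finals = HashSet::new();
    while let Some(s) = queue.pop_front() {
        if s.pc == [2, 2] {
            finals.insert(s.counter);
            continue;
        }
        for p in 0..2 {
            if let Some(n) = step(s, p) {
                if seen.insert(n) {
                    queue.push_back(n);
                }
            }
        }
    }
    finals
}

fn main() {
    let mut finals: Vec<i32> = reachable_final_counters().into_iter().collect();
    finals.sort();
    // The "both read before either writes" interleaving makes 1 reachable,
    // so the invariant "counter == 2 at termination" fails.
    assert_eq!(finals, vec![1, 2]);
}
```

A real model checker reports the exact interleaving that breaks the invariant; the point is that exhaustive state exploration finds bugs that a handful of hand-picked test interleavings would miss.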
I like that this calls out caveats to the downsides and mentions places where things really did hold up okay ("Most bugs were at the system boundaries; none were found in the implemented protocols. Formally verified systems, while not perfect, were considerably less buggy than unverified systems."). Good to keep perspective in both directions :)
> Hof's first attempt the day before failed when he began his swim without goggles and his corneas froze solid and blinded him. A rescue diver pulled him to the surface after he passed out.
For anyone wanting to try this, I'd just warn that many extremes that humans haven't evolved to endure (sitting for long periods, spending no time in sunlight, spending too much time in sunlight, eating no fat, eating too much fat, etc.) have been shown repeatedly to shorten lifespan. I'd see daily ice baths as an unnatural extreme and wouldn't consider doing this, at least not for a long period of time.
Humans weren't optimized to endure the unnatural extreme heat of a 175-degree sauna, yet studies point to frequent sauna use being associated with a reduction in all-cause mortality.
> yet studies point to frequent sauna use being associated with a reduction in all-cause mortality
Is this perhaps because frequent sauna access is correlated with higher socioeconomic status/less stress/more free time/etc.? What kind of confounding factors did they include in that study?
Emerging evidence suggests a plausible mechanism is the "heat-shock response", which upregulates a lot of crap related to cleaning up misfolded (= aggregatey) proteins, like protein degradation and chaperone protein expression. Most of these neurodegenerative diseases (AD, PD, some dementias, CTE, prion diseases, etc.) involve protein aggregation at some point, whether necessary-and-sufficient or just along for the ride. At this point, if some kind of degenerative disease isn't thought to relate to protein aggregation, it's more likely that nobody's bothered to look for it.
Anecdotally/personally, it seems like there's kind of a step response going from "not cooked" to "cooked", the good stuff doesn't happen until you get "cooked" (ideally remaining that way for a while), and this seems to happen around a body temperature of 38.5-39C.
I don't think so. Saunas are economically available to almost everyone (where there is fuel). I grew up a yooper in Upper Michigan and saunas are a very common part of the culture and even folks without indoor plumbing (much less common now) would have wood-burning saunas in the back yard. Saunas are one of the most relaxing, and invigorating, experiences I know of.
> Saunas are economically available to almost everyone
This can't possibly be true. 65% of Americans are living paycheck to paycheck. They definitely aren't in a position to get a sauna.
And besides, just because people can afford to do something, doesn't mean that less wealthy people do. Almost anybody could afford to golf, but that doesn't mean golf doesn't skew wealthy if you're looking at the people who actually play. If you looked at how long people who golf live vs people who don't, I'd be willing to bet that golfers live longer. I'm not about to suggest that golf is what is keeping them alive though.
Golf takes gear, course access, and above all large amounts of free time. It's certainly not something almost anybody can afford to do. Access and the time to commit are things those with higher economic status often take for granted when considering whether everyone can afford some activity. These factors likely come into play for saunas too - they're not a staple for any given city gym, for sure, which means you need access to a more well-outfitted gym, or to have a sauna in your own home. And then, again, free time beyond the essentials of fitness and daily life.
Yes, that is exactly my point. Anybody could golf, but it isn't practical for people who don't have a good amount of expendable income, so they don't do it.
Same thing with saunas.
You and I have a different view of what a sauna is. My sauna in the UP is simply a wood-sided shack with an interior of cedar board and a wood-burning sauna stove (basically a plain wood stove with free rocks on top), bought locally for around $600, and all going strong since 1998.
Here at home I belong to a gym for $55/ month and sauna there almost daily, and in prior home in Green Bay, each YMCA had a sauna included with membership. Mostly northern cultures and many southern have had something similar for thousands of years. The Oneida tribe outside Green Bay has regular sweat lodge ceremonies, same basic thing. I won’t enumerate the benefits of a true deep heating sauna but it is deeply meditative for me. This is one of the healthy ways to relax that is often available if sought out. Even during a recent trip to Orlando, I was able to sauna at one of the Y’s. My dad (86 yo) saunas at his health club near his home in SC.
I have never heard saunas described as something outside normal economic lifestyles.
Interesting saunas being associated with wealth, we come from different places. When I was in college at Michigan Tech in the early 80’s, dorms had saunas, the frat house had a sauna, and if you were lucky, someone was renting a place with a sauna. Of course I understand that this was very much a regional thing, the point is they are simple to construct and simple to maintain.
While there, I became friends with an old Finnish couple about 10 miles east of town near the Lake Superior shores. They explained that until just a few years ago (at that time), people had outhouses, and a sauna for relaxing, bathing and socializing. They did not eat cake. (They ate pasties :-) )
Do you really think that the people who worked on this study didn't consider your knee-jerk level-1 confounding factor? Obviously they took socioeconomic status into account when they were studying all-cause mortality; what study wouldn't?
There is a reproducibility crisis in many fields where people tacitly assumed everyone else was doing their due diligence. Many basic assumptions about confounding factors, confidence values, etc are still being criticized and revisited. You should not assume that such factors have been accounted for, instead you should join in continually asking if they have been and challenging all assumptions.
I'm not making that assumption, I'm asking what factors they included. Lots of studies are poorly run and fail to account for very simple confounding factors.
The big study was a 20+ year observational study in Finland [1], where saunas are supposed to be quite accessible, but the world is complicated and there are probably many confounding factors.
The mechanism of action seems very plausible. Saunas are moderate stress, which spikes your heart rate temporarily (they say 100-150 bpm in the study). So it's not surprising it could be protective against cardiovascular disease, just like many other forms of moderate stress/exercise.
I don’t mind people asking obvious questions. Even if the answer is also obvious, it’s still an important base to cover, and we shouldn’t assume it was. The humanities are replete with flawed studies.
It's almost as if complex systems resist deterministic patterns of thinking. If only someone decades ago had written a book warning us about seeking 'silver bullets' in software development...
Depends. If you're very diligent, you could use a "smart constructor" (in some languages you can make it so there's only one way to construct a type, and then enforce validation at construction time) and, instead of taking a float/int for temperature, take a Temperature or OutsideTemperature type as an argument.
A strict newtype can be very powerful, but needing to "unwrap" your datatype to use it in the normal operations of that datatype can be a little unwieldy, so tagged types are a pretty interesting mechanism that provides type safety and convenience. I don't think it's strictly better, though; it depends heavily on context.
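As a minimal sketch of the smart-constructor newtype idea in Rust (the `OutsideTemp` name and its bounds are made up for illustration): the field is private, so the only way to build a value is through `new`, which validates at construction time.

```rust
// Illustrative smart-constructor newtype. The inner field is private,
// so code outside this module cannot create an OutsideTemp directly.
mod temperature {
    #[derive(Debug, Clone, Copy, PartialEq)]
    pub struct OutsideTemp(f64);

    impl OutsideTemp {
        /// Reject physically implausible outside temperatures (in °C).
        pub fn new(celsius: f64) -> Result<Self, String> {
            if (-90.0..=60.0).contains(&celsius) {
                Ok(OutsideTemp(celsius))
            } else {
                Err(format!("{celsius} °C is not a plausible outside temperature"))
            }
        }

        /// Explicit "unwrap" back to a raw number for arithmetic --
        /// this is the slightly unwieldy part the comment mentions.
        pub fn celsius(self) -> f64 {
            self.0
        }
    }
}

use temperature::OutsideTemp;

fn main() {
    assert!(OutsideTemp::new(21.5).is_ok());
    // The "put 999 into the outside temperature" bug is now unrepresentable.
    assert!(OutsideTemp::new(999.0).is_err());
}
```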
Would something like this [1] be the function equivalent (not type equivalent) written in rust?
A function that has a parameter of ints but will only yield numbers within a certain range? I guess I'm really confused by the Scala example you posted but really trying to understand.
> You could put 999 into a variable that's supposed to hold the outside temperature
I've never done this, but you could also define types for stuff like this. type: PositiveInteger, or type: BoundedTemperature that would only ever hold valid values.
Or one thing where it might come in handy would be when dealing with user/potentially malicious input - SanitizedString etc.
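For instance, a hypothetical `SanitizedString` wrapper might look like this in Rust (the type name and the escaping rules are illustrative, not from any particular library): sanitization happens once in the constructor, and any function that takes the wrapper type knows it already happened.

```rust
// Hypothetical wrapper for user input: constructing one escapes
// HTML-significant characters, so downstream code can't forget to.
pub struct SanitizedString(String);

impl SanitizedString {
    pub fn from_user_input(raw: &str) -> Self {
        let escaped = raw
            .replace('&', "&amp;") // must be first, or we'd double-escape
            .replace('<', "&lt;")
            .replace('>', "&gt;")
            .replace('"', "&quot;");
        SanitizedString(escaped)
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

// Accepting SanitizedString (not &str) turns "forgot to sanitize"
// into a compile error rather than a runtime vulnerability.
fn render_comment(body: &SanitizedString) -> String {
    format!("<p>{}</p>", body.as_str())
}

fn main() {
    let s = SanitizedString::from_user_input("<script>alert(1)</script>");
    assert_eq!(
        render_comment(&s),
        "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"
    );
}
```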
There will obviously be a complexity tradeoff, but instead of using integer to pass this value around you can create an OutsideTemperature type that validates this rule.
Yeah the main benefits are reducing bugs in developer write-run-debug cycles, and acting as machine-readable documentation (including to generate tooltips and autocomplete). The “cold shower” doesn’t check those.
You should be documenting most type info at least at the function/method signature and data structure definition level anyway. May as well do it in the most-useful format possible.
This is what gets me about “even modern relatively-low-boilerplate type systems just slow me down” folks. That means you’re skipping documentation you ought to be doing.
> Hype: "Static Typing reduces bugs."
> Shower: A review of all the available literature (up to 2014), showing that the solid research is inconclusive, while the conclusive research had methodological issues.
Static typing lets you do more complicated things by offloading a subset of complexity-management to robots. The remaining human-managed complexity expands until new development slows to a crawl, and no further human-managed complexity can be admitted to the system, similar to adding more lanes on a freeway.
Even if it doesn't reduce bugs (and how do we even measure this? in terms of bugs per loc? bugs per unit time?), it does make APIs easier to use (not even in terms of correctness, but in terms of time required to grok an API).
"Reduce bugs" is kind of a loaded term anyway. Static typing doesn't reduce bugs in an absolute sense, but I think it does reduce bugs per unit of value delivered. That's a lot harder to measure in a formal study.
> Static typing lets you do more complicated things by offloading a subset of complexity-management to robots
I've read some of the research on this! Yes, static typing improves documentation and helps you navigate code.
It also correlates with code quality and reduces smells. Inconclusive whether that's because of static typing or because more mature teams are likelier to choose static typing.
But all the research agrees: static typing does not reduce logic bugs. You can build the wrong thing just as easily with dynamic typing as with static typing. The only type of bug that static typing reduces is the sort you would find anyway just by running the code.
In my experience, static typing is best thought of as a way to reduce the need for manually written unit tests. Instead of writing tests that break when a function signature changes, you write types that break when you call functions wrong.
You still need tests for logic. Static typing doesn't help there.
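As a small illustration of the "types instead of some tests" idea (Rust here, with a made-up `Status` enum): an exhaustive `match` does the job of a test that would otherwise have to assert that every case is handled.

```rust
// Instead of a unit test asserting every status string is handled,
// an exhaustive match over an enum makes a missing case a compile error.
#[derive(Debug, Clone, Copy)]
enum Status {
    Queued,
    Running,
    Done,
}

// If a new variant (say, Cancelled) is added later, this match stops
// compiling until it is handled -- the check a test would otherwise do.
fn label(s: Status) -> &'static str {
    match s {
        Status::Queued => "queued",
        Status::Running => "running",
        Status::Done => "done",
    }
}

fn main() {
    assert_eq!(label(Status::Running), "running");
}
```

The logic inside each arm still needs ordinary tests; the type system only guarantees the shape of the code, not that "running" is the right label.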
This seems like a strong statement to make based on the research. What I've seen falls into several camps:
- research that made some conclusion about logic bugs for complete beginners on small assignments, with languages that have bad type systems
- research that had significant limitations making it impossible to generalize
- research that failed to demonstrate that static typing reduced bugs—which is very different from demonstrating that it didn't!
I haven't done a super thorough review of the literature or anything, but I have looked through a decent number of software engineering papers on the subject. The only strong conclusion I got from the research is that we can't get strong conclusions on the subject through purely empirical means.
Hell, the whole question is meaningless. "Static typing" is not one thing—there's way more difference between Java and Haskell than between Java and Python, even though both Java and Haskell are statically typed and Python isn't. (This is even assuming you completely ignore Python's type annotations and gradual typing!)
> The only type of bug that static typing reduces is the sort of bug you'll find by running the code.
This is a pretty solid argument in favor of static typing, then, unless you somehow have a test suite that exercises every possible code path and type variation in your codebase, and also keeps itself perfectly up to date. Because otherwise you're rarely running all of your code and verifying the result.
If "type bugs are an obvious thing and happen all the time" and "static Typing reduces type related bugs" then it should be easy to demonstrate this empirically. However, "a review of all the available literature (up to 2014), show[s] that the solid research is inconclusive while the conclusive research had methodological issues."
Why would you need an empirical study for this? It’s trivially provable. Runtime exceptions in a language like JavaScript can arise from type mismatches; that’s impossible in a language like Java, because the compiler catches the mismatch before you ever run the program. This eliminates an entire class of bugs.
What you’re proposing sounds like someone asking “How do we know Rust results in fewer bugs than C++ without an empirical study?”, even though we _know_ Rust eliminates an entire class of memory-related bugs. I say this as a C++ advocate, too. Any time I run into a memory bug, that’s a bug that would not have happened in Rust. Likewise, any time you run into a runtime exception due to a type mismatch (for example: expected an int, not an object), that is a bug that would not have happened in a type-safe language.
Edit: I also want to add that the metric is important. Is it number of bugs per line of code? What does that even mean? Assembly programs consist of many more lines of code because each instruction does so little, and the number of bugs in an assembly program will probably be greater than in a higher-level language. Yet the large line count would push the metric down and make it seem like assembly has a low number of bugs per line of code. Because of this, bugs per line of code isn’t a useful metric.
The only way I could think of measuring this would be to have two feature-for-feature equivalent projects in two different languages and compare the number of bugs in each. But even that probably has a bunch of flaws.
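The runtime-vs-compile-time distinction above can be made concrete in Python, which plays both roles: plain Python only surfaces a mismatch when the bad call executes, while a static checker like mypy would reject the same call from the annotations alone (the `total` function here is a made-up example):

```python
def total(price: int, fee: int) -> int:
    return price + fee

# A static checker (e.g. mypy) rejects total("199", 3) before the
# program ever runs. In plain dynamic execution the mismatch survives
# until this exact call executes, and only then raises TypeError:
try:
    total("199", 3)
    raised = False
except TypeError:
    raised = True

assert raised  # without static checks, the bug lives until runtime
```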
I think you're right that static typing reduces bugs, but I am not convinced the reduction is significant or meaningful. If static typing has a significant effect, then why is the existing research so weak and inconclusive?
I don't understand your point about metrics and measurement. Are you saying the effect of static typing is so small that it is completely dominated by other confounding factors and thus cannot be measured?
My point about metrics is why I think the research is inconclusive. It’s very difficult to get a metric that’s meaningful in this context. If you said: this code base on average has 1 bug per 100 lines of code, that doesn’t say anything meaningful. If that code is assembly, that’s not very good, given how verbose assembly is. Whereas if that code is Python or Ruby, that’s much better, given how concise those languages are.
Because of this, I feel like the only way to truly measure whether or not static typing has a significant effect would be to create two equivalent projects. Say you created stack overflow in Python and in C#. Then you could compare the quantity of bugs and see if it differs. But even this has problems because who knows how many bugs haven’t been caught? Is the code truly equivalent? Did the people who wrote the two codebases have slightly different experience resulting in differing number of bugs?
There are too many variables in an experiment like this to conclusively determine whether or not static typing reduces the bugs. But I don’t think that means we can’t infer that eliminating a whole class of bugs is helpful.
Edit: the more I try to think about my reasoning the more I’m thinking it’s flawed. I think the answer to whether or not static typing reduces bugs is unknowable, but I strongly believe that it helps. Maybe we’ll get a study that isolates this metric one day :)
I think the important question is: at what cost? E.g., if it takes me 4x more time to write statically-typed code, and it only reduces my bugs by 10% (completely made-up numbers here), is that worthwhile? Maybe, if I'm programming self-driving cars or autopilot software for aircraft. Probably not if I'm programming a web calendar for dog sitters.
But this is where the studies come in. Lots of people think this is true. And it seems perfectly reasonable. But there's really no research to back this up.
It's even worse than that. If it saves me 10% bugs per unit of code, but I have to write 20% more code, am I actually even ahead in the bugs department?
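Using the made-up numbers above, the back-of-envelope arithmetic actually comes out against you:

```python
# Hypothetical numbers from the comment above: static typing cuts the
# bug rate per line by 10%, but the same feature takes 20% more lines.
dynamic_rate, dynamic_loc = 1.0, 1000   # bugs per kLOC, lines of code
static_rate = dynamic_rate * 0.9        # 10% fewer bugs per line
static_loc = dynamic_loc * 1.2          # 20% more lines

dynamic_bugs = dynamic_rate * dynamic_loc / 1000
static_bugs = static_rate * static_loc / 1000

# 0.9 * 1.2 = 1.08: under these assumptions you ship 8% MORE total bugs.
assert round(static_bugs / dynamic_bugs, 2) == 1.08
```

Which is exactly why per-line metrics are so treacherous: the two effects pull in opposite directions and the totals depend entirely on numbers nobody has measured well.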
Dynamic typing doesn't change/improve this, though, so I'm not sure what the point being made is. I'm also not sure I agree with it at its premise anyway.
A literature review I did in 2020 was actually pretty conclusive about it also reducing bugs. I think we might be missing some of the later literature here.
Are you still going to use the word software or compiler or do you plan on switching over to calling everything a robot? Is your coffee maker a robot too?
I'd be ok calling my coffee maker a robot. It's got a cpu and sensors, and is capable of limited manipulation of its environment (via a heating element).
But to the main point, I read "robots" as a metaphor. Metaphors can be situational; just because I might call a compiler a "robot" in one context doesn't mean I have to call it that every time.
And it's not as if there isn't long-standing precedent for using "robot" to refer to a piece of software. Have you ever heard of a "robots.txt" file? People complaining about "bots" on various social media sites?
Stepping back, is this simply pointing out that Rule 34 also applies to scientific research? If a claim can be made, there is a scientific study that "proves" the claim true.
If someone came to me with some outlandish claim, "Studies show that singing in the shower lowers cancer risk," I honestly wouldn't be surprised if they could produce 2-4 white papers published in modern journals in support of it. So much of modern science seems to be reading between the lines and meta-analyses of scientific studies.
Scott Alexander's review of Ivermectin[0] is a great example of this. Bold claim is made, everyone divides into two camps, and in fact both camps have multiple peer reviewed studies backing their side, and to arrive at some semblance of understanding of the topic you need to spend hours diving into the studies and checking off boxes: were they peer reviewed, were there confounders, do the authors have a history of fraud, and on and on and on.
I get what you're saying, but "rule 34 also applies to scientific research" is a bizarre way to word it; the way I'd interpret that phrase out of context is just that it logically follows because scientific research is a subset of "things that exist".
I thought you were taking a different tack when I read rule 34, one that I think is more apropos to the article's intent: if something is interesting, someone is getting excited about it and charging forward into fan art. Papers and news articles are unfortunately just real-world fan art.
To add, if a claim can be made that can be turned into a product, there will be a paid-for study that proves it to be effective.
When it comes to ivermectin though, I'm not convinced that was for profit per se, I think that was a crowd looking for a cure or something to latch onto because ????.
There was certainly profit made, but not so much by manufacturers. Many folks looking for miracle cures were duped into “health consultations” and bilked for $100-200 a pop during the pandemic. Frequently these services were advertised to church mailing lists.
I like the one that considered the additional factor of people just having invasive worms in them already, especially those living in squalor in the heavily Republican southeast.
American healthcare is such that this would never be tested for until it becomes a problem; when your immune system is strained by other things, the worms become a burden to your body. COVID is that other thing. So taking ivermectin kills the worms, as it was designed to do, and this results in the body being able to focus just on fighting COVID. Given that the "improved outcomes" were such low percentages to begin with, that fits pretty well with the share of the population that may have worms. So they say, "aha, ivermectin did it, big pharma doesn't want you to know!" despite ivermectin being big pharma and just not being in on the conspiracy club.
So then the other camp tries to do a controlled study of just covid patients and finds no link. Their headline says "no link found, not recommended"; of course, because they're Johns Hopkins instead of your next-door neighbor, this is treated as just another symptom of the conspiracy by leftist institutions.
but beyond that, it is a symptom of Republicans feeling accurately underserved by those institutions. There is an obsession with pointing out how dumb someone is, instead of empathizing and doing the additional study. "ah I see how you reached that conclusion, a bunch of people have been in a symbiotic relationship with parasites the whole time!"
On the other side, though, there was a huge financial incentive to discredit ivermectin.
If it turned out that ivermectin was effective at treating covid, the emergency authorization would no longer hold, and it would have ended the extremely lucrative business of covid vaccines. There was incentive to design studies that, on purpose, applied ivermectin incorrectly (not the right time, not the right dosage).
In the media, ivermectin was described as horse dewormer, and a fake story about hospitals being overwhelmed by people overdosing on ivermectin was repeated for weeks in mainstream outlets, even though before covid ivermectin was considered a low-risk medication and was described as a wonder drug.
I'm not implying ivermectin successfully treated covid, mind you. I'm just saying that it's hard to trust the people that were saying it doesn't and, going further, they said it was a silly dangerous idea to even try using it for treating covid under medical supervision.
With that in mind, I can see how some people could come to the conclusion that, if big pharma and its lackeys in media and government want to discredit it so hard, it might even work.
You can slap this kind of “maybe X really cures Y but big pharma doesn’t want you to know” onto essentially any X and Y. It’s a conspiracy theory, an unfalsifiable patch of epistemological quicksand: the more proof you provide, the more bs “evidence” and bad-faith skepticism comes up.
At the end of the day, if X really cured Y, I don’t see why big pharma wouldn’t be all over it as well. There are plenty of countries that didn’t have American resources to buy expensive vaccines, why wouldn’t they use something cheap like ivermectin? In India they were manufacturing ivermectin and shipping it out to rich countries like the US while they were burning bodies in the streets because they couldn’t afford treatments. If ivermectin really worked, they would’ve used it.
My version of taking a cold shower is to re-read von Neumann, J., and Goldstine, H. H., "Planning and Coding of Problems for an Electronic Computing Instrument" (1947), to realise how much JvN et al. had already learned about writing and debugging code.
> Up to a point it is better to let the snags [bugs] be there than to spend such time in design that there are none (how many decades would this course take?). — A. M. Turing, Proposals for ACE (1945)
I would like to see one on all Apple's latest advancements. The hype: "SwiftUI, async/await and actors, and declarative animations make building a robust app easier than ever". The shower: "Imperative programming is really easy for our little brains to handle. UIKit, Core Animation, and GCD are easier to work with and reason about"
I don't understand this. This is a GitHub repository of someone's largely unverified opinions? Since when is talking about static typing considered hype?
Saying static typing is "hype" is like saying "adding a handle to a coffee mug makes it easier to hold" is hype.
When you look at the history, the “scene” swings from one side to the other. Initially people mostly used static languages (the C era). Then came some dynamic languages (abc). Then the internet era swung back to static (the Java era), after which dynamic took over again (the Python era). Currently we are moving back toward static (kind of) with mypy / TypeScript.
My guess would be that this depends on what problems the majority of developers are trying to solve.
But of course this summary is just a huge simplification of the evolution of languages.
You have to look at whether software is being hacked out by an individual and then left behind for the next cool thing, or a massive project built by teams and maintained over years. Static typing just slows down the first case and can be vital in the second case. And the overall trend on these swings back and forth too, when a new environment like the web or mobile comes out then quick hack jobs rule the day, then as the environment matures and the dinosaurs take over projects get bigger and longer lived.
It's a bit strange to say "hype" to static typing, because the idea is old and pervasive, but nevertheless there is hype around static typing. For example, the reason why TypeScript was created is because JavaScript isn't statically typed, and many people missed this, or attributed shortcomings to this. So while "hype" might not be the best descriptor, the debate itself around typing, and how clever and self-aware data types should be is an evergreen one, with no exclusive winners.
> Saying static typing is "hype" is like saying "adding a handle to a coffee mug makes it easier to hold" is hype.
I think so too, but that doesn't mean it's not controversial.
There are plenty of commenters who will happily say that it takes too long to put a handle onto a coffee mug. Handle-less mugs will get to market first! Besides, you spend too long fighting the handle.
I've been wondering what the "new" innovation in Go's concurrency actually is. Since most languages provide some type of channel / lightweight thread, what is the hype all about? I don't think Go makes the "hard" stuff in concurrency any easier; mainly the easy stuff becomes very easy, but in concurrency most of your time is spent on the "hard" parts.
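The "most languages have this" point is easy to demonstrate: a Go-style channel can be approximated with Python's stdlib `queue` and `threading` (a rough sketch, not a claim about performance; Go's real contribution is arguably how cheap and idiomatic goroutines make this pattern):

```python
import queue
import threading

# A channel approximated by a thread-safe queue: a worker thread sends,
# the main thread receives.
ch: "queue.Queue[int]" = queue.Queue()

def worker() -> None:
    for i in range(3):
        ch.put(i * i)
    ch.put(-1)  # sentinel standing in for closing the channel

t = threading.Thread(target=worker)
t.start()

results = []
while (item := ch.get()) != -1:
    results.append(item)
t.join()

assert results == [0, 1, 4]
```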
This may as well be titled Awesome Scissor Statements[0].
I found myself wanting to argue with the typing one as soon as I saw it went up to 2014 and so would have nothing on more modern Typescript and gradual typing and then I realized what I was doing.
Static typing can often lead to over-engineering: it gets so complicated that at some point you're fighting the type system. Most Java frameworks are an excellent example of this problem. I prefer progressive typing, like JS with JSDoc and light type checking with TSC.
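Python's optional annotations express the same progressive idea as JSDoc + TSC: annotations are ignored at runtime, so you can add types only where they pay off and leave the rest dynamic (`parse_port` is a made-up example):

```python
# Gradual typing: annotations change nothing at runtime, so typed and
# untyped code coexist in one codebase.
def parse_port(value):  # untyped: any caller, any argument
    return int(value)

def parse_port_checked(value: str) -> int:  # typed: mypy checks callers
    return int(value)

# Both behave identically when run; only the static checker sees a
# difference between them.
assert parse_port("8080") == parse_port_checked("8080") == 8080
```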
Hi layer I saw that you commented on a post about needing to set up a freelance "Gewerbe" in germany to work for US company. I would literally pay you 50 bucks if you can help me really quickly NOW...need to sign a contract for a job until midnight. I appreciate you!
I just started incorporating cold showers (a controlled "cold plunge") into my routine for a variety of "life hack" reasons, so was a bit let down by the content. Maybe it could add a cold shower on cold showers?
EDIT: The original title was simply "Awesome Cold Showers"
My dad was always going on about how he never got colds because he did alternate hot/cold showers (as in hot/cold within one shower session). This is also something I saw the old folks do at the mineral baths in my hometown of Stuttgart: take blisteringly hot showers, then jump into the cold, bubbly, sulphury pool, repeat as long as you can withstand it. It's very relaxing but not sure it has any lasting benefits.
The human condition is such a weird one that for me the biggest benefit is hacking with hedonic adaptation. Of constantly resetting baselines so there is appreciation of the many wonderful things in life.
Many of the most enjoyed moments in my life -- moments when I really was living the moment, and appreciating and enjoying every element -- were often after "bad" moments. Not bad like war or trauma or anything, but like my best camping moment ever was when it had been raining for days, we were cold and miserable, but then the sun rose and it was a warm day and that day, with everything drying, warmth, and comfort, suddenly everything was brighter and better than it had ever been. The meal we made over the fire was next level. My best moment skiing was having a long, tiring and intensely cold day and then going into the blazing hot cabin to sit by a fire and have hot chocolate.
I make a french press pot of freshly brewed coffee and honestly the experience quickly becomes....meh. It's just coffee. But then I go on a business trip for a week, where I can only source mediocre coffee, and suddenly come home and my home coffee is just revelatory. It is just something I can sink into and sit in awe of.
So cold showers for me are like that. It is intense discomfort to reset the baseline, and suddenly warmth feels exhilarating and wonderful.
Haha, I've been doing this for years. When I'm done with my hot shower, I flip it cold and do a cool down even in the winter. Reminds me how much of a miracle hot water from a shower is.
Only downside is I live in a desert so the water from the tap is warm when it's summertime.
I took a real cold shower yesterday for the first time. Lasted about 35 seconds, but honestly felt great for a couple hours afterwards. Couldn't bring myself to do it again today :D
Absolutely. I recommend doing the Wim Hof breathing technique right before turning it cold; it gives you a little boost of adrenaline that just tips the emotional valence of the experience from "this is terrible, why would anyone do this on purpose" to "wow, this is exhilarating".
I think a cold shower in the morning, when I do make it happen, gives me a can-do and energetic emotional state for the rest of the day. Force your body to start moving, and the mind follows.
This is a plot, a trick, a ploy. You're all some sort of environmental mega zealots, trying to trick me into cold showers to save hot water power costs. I won't have it, I won't.
Wim Hof, a man who learned one of the most advanced spiritual practices (Tummo), and uses it to teach people to stay warm. It's like studying with a Michelin chef then working in McDonald's.
Unnecessary put-down I think, the guy is also an author, coach, TV personality, etc. If you want to make a McD's analogy, he'd be the engineer that fabricates and optimizes their meals.
That may be, but my point is he's learnt an amazing technique and doesn't teach its true purpose. It's wasted on him it seems, as he's just trivialised it.