You are absolutely correct. Valve's Linux push was driven by developments on the Windows platform, specifically around the release of Windows 8. Microsoft was pushing a Windows Store similar to Apple's App Store, and Valve stated unequivocally that they were worried Microsoft would lock down the platform and only allow software sales through that store, destroying their Steam business. Gabe said it plainly himself (https://www.bbc.com/news/technology-18996377):
> Mr Newell, who worked for Microsoft for 13 years on Windows, said his company had embraced the open-source software Linux as a "hedging strategy" designed to offset some of the damage Windows 8 was likely to do.
> "There's a strong temptation to close the platform," he said, "because they look at what they can accomplish when they limit the competitors' access to the platform, and they say, 'That's really exciting.'"
> This is seen by commentators to be a reference to the inclusion of a Windows Store in the Microsoft operating system.
Having an open platform is good for consumers, but Valve is primarily looking out for themselves here. Gabe realized that Windows could take Apple's iOS route (i.e. https://blog.codinghorror.com/serving-at-the-pleasure-of-the...) and lock down their OS, and everything he's done since has been an effort to protect his company against that existential threat.
GabeN was the lead developer on Windows 1, Windows 2, and Windows 3. When Windows 95 launched, he was a bit upset that no one was making games for Windows, so he did a rough port of Doom to prove its viability. Around the same time, Alex St. John, Craig Eisler, and Eric Engstrom were building DirectX. GabeN saw the potential, left to create Valve, and proceeded to try to make Windows gaming a great thing.
I can only imagine that he was heartbroken to see Windows go the way it did with Windows 8, 8.1, 10, and now 11.
Well, actually, they tried with Windows Phone, Windows RT, and Windows 10 S, but failed miserably. Even Apple hasn't tried to lock macOS down against installing third-party apps.
Hours-to-seconds conversion, probably; the number 60 plays a role there. (Albeit not base 60 but mod 60, though I'm not firm enough in the math to rule out some correspondence between the two concepts.)
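For illustration (a toy sketch; `to_hms` is just a made-up helper name): repeatedly taking `divmod(x, 60)` is both the "mod 60" operation and exactly how you would read off the digits of a base-60 positional representation, which is where the two concepts meet:

```python
# Converting a duration in seconds to hours:minutes:seconds.
# Each divmod(x, 60) is a "mod 60" step, and chaining them is exactly
# how you would read off digits in a base-60 positional system.
def to_hms(total_seconds: int) -> tuple[int, int, int]:
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return hours, minutes, seconds

print(to_hms(4000))  # (1, 6, 40) -> 1 h 6 min 40 s
```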
I'm sort of the inverse of this author: I have always liked Python and disliked Ruby. It's true, though, that Python has changed a lot, and it's a mixed bag IMHO. I think a reasonable argument can be made for every language feature Python has added, but collectively they make the language burgeon under the weight of its own complexity. "One way to do it" hasn't really been a hard goal for the language in a while.
I'm really charmed by ML-style languages nowadays. I think Python has built a lot of kludges to compensate for the fact that functions, assignments, loops, and conditionals are not expressions. You get comprehensions, lambdas, conditional expressions, the walrus operator... most statements have an expression equivalent now.
It seems like, initially, Guido was of the opinion that in most cases you should just write the statement and not try "to cram everything in-line," so to speak. However, it can't be denied that there are cases where the in-line version just looks nicer. On the other hand, you now have a statement and an expression that are slightly different syntactically but equivalent semantically, and you have to learn both. Rust avoids this nicely by making everything an expression, though you get some semicolon-related awkwardness as a result.
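To illustrate the duplication (a small sketch; the names `process`, `x`, and `f` are just placeholders), here are a few of those statement/expression pairs side by side:

```python
import io

def process(chunk: bytes) -> None:
    print(len(chunk))  # stand-in for real work

x = 5
f = io.BytesIO(b"a" * 3000)

# Loop statement ... vs. a list comprehension:
squares = []
for n in range(10):
    squares.append(n * n)
squares = [n * n for n in range(10)]

# if/else statement ... vs. a conditional expression:
if x > 0:
    sign = 1
else:
    sign = -1
sign = 1 if x > 0 else -1

# Priming read plus while loop ... vs. the walrus operator (3.8+):
chunk = f.read(1024)
while chunk:
    process(chunk)
    chunk = f.read(1024)

f.seek(0)
while chunk := f.read(1024):
    process(chunk)
```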
I feel similarly about "weight" in Python. Some people really overdo it with type annotations, wanting to annotate every little variable inside a procedure, even when a human can easily infer its type and the type checker already knows it. It adds so much clutter that at the end of the day I think: "Why aren't you just writing Java instead?" That's probably where that notion originates.
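A made-up example of the kind of clutter meant here; every one of these local annotations restates what any type checker would infer anyway:

```python
# Over-annotated: each local restates a type the checker already infers.
def word_lengths(text: str) -> dict[str, int]:
    words: list[str] = text.split()                       # inferred anyway
    lengths: dict[str, int] = {w: len(w) for w in words}  # inferred anyway
    return lengths
```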
I used to be like that, when I did Java. I used to think to myself: "Oh neat! Everything has its place: interfaces, abstract classes, classes, methods, anonymous classes... everything fits neatly together."
That was before I learned more Python and realized: "Hey, wait a moment, things that require me to write elaborate classes in Java are just a little bit of syntax in Python. For example, decorators!" And I slowly switched to Python.
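For instance, a classic timing decorator, which would take a wrapper class or a proxy in old-school Java, is only a few lines in Python (a sketch; `timed` and `slow_sum` are just illustrative names):

```python
import functools
import time

# A timing decorator: wraps any function and reports how long it ran.
def timed(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
        return result
    return wrapper

@timed
def slow_sum(n: int) -> int:
    return sum(range(n))

slow_sum(1_000_000)
```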
Now it seems many Java-ers have come to Python, but without changing their mindset. Collectively they make Python harder to enjoy, because at workplaces they mandate the most extreme views on type annotations, turning Python into a Java dialect in some regards, but without Java's speed. I once got feedback on a take-home assignment in an application process where someone in all seriousness complained that I had not used type annotations in what amounted to a single page of code, and that I had used explanatory comments, when I had no guarantee of being able to talk anyone through the code. Lol, the audacity.
Part of the problem is how people learn programming. Many learn it at university using Java and come away thinking everything must work like Java. I mean, Java is honest about types, but it can also be annoying. It has gotten better, though; that message just hasn't reached what I call the "Java-er mindset" when it comes to writing type annotations. In general, languages and their type checkers have become quite good at inferring types.
I am not an experienced programmer, but I liked Python because of the dynamic typing. TBH, though, code with no type hints is a nightmare (as I found using Python). These days I gravitate towards using type hints, except in an ipynb, because they keep things clear; but they can be a little much and end up quite ugly. Not every use case needs type hints, is what I've learned.
A good compromise can be, for example: keep your type annotations in the head of the procedure you are writing, i.e., the argument types and the return type. You write them once in the signature, and when you need to know a type you can look it up there, without cluttering the rest of the code. If you write small, well-composing functions, this is all you will ever need. If you write 300-LoC procedures, then well... you shot yourself in the foot.
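A minimal sketch of that compromise (the function names are made up): all types live in the signatures, the bodies stay unannotated, and small composing functions carry the information through:

```python
# Types live in the signatures; the bodies stay uncluttered.
def parse_prices(lines: list[str]) -> list[float]:
    return [float(line.strip()) for line in lines]

def apply_discount(prices: list[float], rate: float) -> list[float]:
    return [p * (1 - rate) for p in prices]

def checkout(lines: list[str], rate: float) -> float:
    # Small composing functions: the signatures above tell you
    # everything; no per-variable annotations required.
    return sum(apply_discount(parse_prices(lines), rate))

print(checkout(["10.0", "20.0"], 0.1))  # 27.0
```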
There definitely is an element of shooting oneself in one's own foot, but sometimes it seems unavoidable to me, or the effort just isn't worth it. E.g., if I am using sklearn or numpy and the return types are ambiguous, I'd have to overload each function at the head of the file, or wrap it, even though it is clear what it does. What do you think? If it's only my own code, then yes, this is certainly avoidable with good composing functions.
I've come to the view that the best flow is to build a system in a dynamic language, and then - once you've got the broad strokes figured out - begin gradually typing it, where appropriate.
You definitely need to have a decent grasp of architecture to make this work - strict FP is very helpful to prevent any early spaghettification - but you ultimately get the best of both worlds this way: rapid iteration for the early stages and type safety once you develop a feel for the system you're building.
I've been doing this in Elixir in the last few months and I've really been enjoying it.
Yep, I agree with this. This is what I usually try in Python. Granted, Python is a far worse vehicle for FP than Elixir, but I try to keep my procedures pure functions as far as possible, without sacrificing too much readability or performance. Most of the time a functional solution can be found, even in Python.
And maybe I am a little delusional in thinking this, but in my experience, when you think deeply, come up with strict FP solutions, and know what you are doing, a lot of type issues simply don't arise, or are obvious to avoid. The mere fact that a thing you initialize once doesn't change over the course of its lifetime already avoids tons of mistakes. You simply don't get this "Oh, has that member of object x already been modified to value y by this point in time?" shit.
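A small sketch of that point (the `Account` type here is hypothetical): with an immutable value and a pure function, the "has it been modified yet?" question can't even be asked:

```python
from dataclasses import dataclass, replace

# Immutable value: once constructed, it never changes, so the question
# "has member x of this object already been modified to y?" cannot arise.
@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

def deposit(account: Account, amount: int) -> Account:
    # Pure function: returns a new Account instead of mutating the old one.
    return replace(account, balance=account.balance + amount)

a = Account("alice", 100)
b = deposit(a, 50)
print(a.balance, b.balance)  # 100 150 -- the original is untouched
```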
> Yep, I agree with this. This is what I usually try in Python. Granted, Python is a far worse vehicle for FP than Elixir, but I try to keep my procedures pure functions as far as possible, without sacrificing too much readability or performance. Most of the time a functional solution can be found, even in Python.
That's really interesting. The last time I wrote any serious Python was back in the Python 2 era, so it's been a hot minute, but that Python certainly didn't feel very amenable to FP. Nice to hear that it's turned a corner. I'll keep an eye out for any use case where I could give FP-flavoured Python a spin.
> And maybe I am a little delusional in thinking this, but in my experience, when you think deeply, come up with strict FP solutions, and know what you are doing, a lot of type issues simply don't arise, or are obvious to avoid. The mere fact that a thing you initialize once doesn't change over the course of its lifetime already avoids tons of mistakes. You simply don't get this "Oh, has that member of object x already been modified to value y by this point in time?" shit.
I very much agree with this, and I wish more FP evangelism focused on the many wonderful emergent properties of FP systems in production, instead of cutesy maths cleverness that turns off more people than it attracts (and those that it attracts were always going to be functional programmers to begin with).
To be clear, Python is still no good for FP compared to other languages, including the aforementioned Elixir. Even though the Python ecosystem is huge, few people, if any, seem to spend much thought on functional data structures. And even then you couldn't use them as you would in typical FP languages, because you don't get tail call optimization. You can only do your best in Python.
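A quick illustration of the TCO problem (a sketch; `sum_rec` is a deliberately naive example):

```python
import functools

# A recursive fold, idiomatic in many FP languages. Python has no
# tail call optimization, so deep recursion exhausts the stack
# (the default recursion limit is around 1000 frames):
def sum_rec(xs):
    if not xs:
        return 0
    return xs[0] + sum_rec(xs[1:])

# sum_rec(list(range(10_000)))  # RecursionError: maximum recursion depth exceeded

# The Python workaround is an explicit loop or a reduce:
total = functools.reduce(lambda acc, x: acc + x, range(10_000), 0)
print(total)  # 49995000
```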
As far as I know, the science on this is far from settled. There is no consensus, and the evidence in favor of a trophic cascade in Yellowstone came predominantly from two studies done by the same team/person. Later studies failed to replicate the findings.
Do wolves fix ecosystems? CSU study debunks claims about Yellowstone reintroduction
That looks like quite a biased interpretation of these studies. Direct quotes:
> The average height of willows in fenced and dammed plots 20 years after the initiation of the experiment exceeded 350 cm, while the height in controls averaged less than 180 cm
> This suggests that well watered plants could tolerate relatively heavy browsing. It also shows that the absence of engineering by beavers suppressed willow growth to a similar extent as did browsing
They posit that the growth in control groups not matching the fenced areas is evidence that wolf reintroduction does not have the effects it is said to have. That is a pretty unconvincing argument, since there are so many other variables involved. They also show that IF the wolves indirectly led to either the return of beaver dams or reduced elk browsing, there is undoubtedly an impact on tree growth, which is a positive result regardless.
Their theory that things will never return to their original state, and will instead settle into a new, alternate equilibrium, is probably correct, but it does not seem like the definitive blow to the wolf theory that it's made out to be.
Both links are paywalled, so I can't comment on what they say (positive or negative). That said, I did attend an interesting lecture about systems that looked at Yellowstone as a cautionary tale about extrapolating how a system works from observational data. Basically it came down to this: system variables have secondary and tertiary effects that express themselves differently depending on both the magnitude of a system element's influence and the time at which it changes. That makes "simple" conclusions like "wolves did this" often insufficient to explain system behavior, and sometimes outright incorrect.
However, the introduction of wolves did, incontrovertibly, add a system element that had not been present before. Exactly what that element was, and how it expressed itself, is up for interpretation :-)
Brilliant observation. Dynamic systems like this are rarely cut-n-done. Like the study of ozone, with its seven counterintuitive steps, it is all an evolving study.
It also shows the worth of simple studies run over long periods of time. Science used to do a lot of those, and they were very interesting; many appear on Hacker News. But now it seems the cut-n-done kind grabs more of the popular news.
It also raises the question: what longitudinal studies are popular here, besides this one and retro computing?
Awesome, thanks. Added to my library. Interesting that the Colorado study was asking a slightly different question, about wolves in Colorado, versus the magnitude of the wolf impact in Yellowstone. I felt the experimental setup was also good, but I might quibble that elk have more impact than just eating; they can squash saplings just by walking on them. The point that Colorado maintains its elk population by hunting was relevant as well: any impact of wolf predation would be less than it would be on a previously un-predated population. All in all, though, I quite agree that the stories of Yellowstone's wolves likely overstate the specific impact of wolves and understate the system dynamics, in which other mechanisms were also at work.
I also recognize that "popular" writing is more about persuasion than facts :-) and it was important to persuade people that wolves weren't "bad/evil," just predators that had lived there before. Telling that story as a rebalancing is certainly more palatable than saying, "Yeah, if we had allowed hunting of elk (and perhaps bison) in Yellowstone, it would have similarly improved." Generally keeping the human role as apex predator out of the headlines :-). Thanks again, great links.
It was the least I could do to help the discussion. I'm not really knowledgeable enough about Yellowstone ecology to have a nuanced discussion with you or others in this thread, so I have to find other ways to contribute positively, catch as catch can, by enabling debate through proper context.
You're welcome, and thank you for your response on those points, for the benefit of me and the thread, as it was beyond my ken.
TL;DR - the observed reduction of the elk herd correlated with wolf introduction, but also with increases in cougars, grizzly bears, and even bison, all of which either prey on or compete with elk. Human hunting also added pressure, though it has been limited as the herd size declined. It is complicated.
Unfortunately you may have to wait some time; at the moment the journey is not completable, because the Paris-Moscow express service (and indeed all train service between Russia and Western Europe) is suspended due to sanctions against Russia.
iDEAL is ubiquitous in the Netherlands for individuals sending money to each other and for online payments. However, it does not support NFC payments in physical stores; Dutch banks decided to go with Google/Apple wallets for that. I believe that in the longer term, Wero (https://wero-wallet.eu/), and potentially the digital euro (https://www.ecb.europa.eu/euro/digital_euro/html/index.en.ht...), is supposed to take over this use case in the EU.
That sounds good! Instead of NFC, they could use QR codes. Every payment terminal can show a QR code, and it's even possible to print a QR code on a piece of paper, with no terminal needed at all.
Twint does that. They started out trying to make it "seamless" with NFC, but couldn't because of Apple (maybe now, with the DMA, that may change), and went for some kind of weird Bluetooth stuff. Nobody used it. Then they moved to QR codes, and it quickly got very popular.
Everybody understands QR codes. No need to know whether your phone supports NFC, no need to check whether NFC is enabled in the settings, nothing. Everybody understands that they need to scan the QR code with their camera, period. Seems perfect to me.
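As a toy illustration of how cheap the merchant side is (the payload format below is entirely made up; real schemes like Twint or EPC QR define their own), generating a scannable code is a couple of lines with the third-party `qrcode` package:

```python
import qrcode  # third-party: pip install "qrcode[pil]"

# Hypothetical payment payload -- real systems (Twint, EPC/SEPA QR, ...)
# each define their own format; this string is only for illustration.
payload = "pay://merchant=cafe-example&amount=4.50&currency=CHF&ref=12345"

img = qrcode.make(payload)        # returns a PIL image
img.save("payment_terminal.png")  # show on a terminal screen, or print it on paper
```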
A great man, on whose shoulders many dwarfs have postured about having brought peace themselves, through self-posturing and self-promotion. We talked ourselves into having changed and being better than our predecessors, while eating their meals. We got drunk on ourselves, with their grapes.
This is generally unknown, of course. However, it currently appears that the expansion of the universe is accelerating, i.e., it is expanding faster and faster.
Obviously it can't be ruled out that at some point this would stop and/or reverse. But there's no reason to think so, and if we're considering arbitrary future changes, then we may as well consider that the universe might suddenly start heating up again, or that more mass will start appearing out of nowhere. Or that God appears and hands out free decryption keys to everyone.
A key (hah!) property of key derivation functions is that they let you customize the length of the key you get out, mainly because you may need a specific length (e.g. 512 bits) for whatever encryption algorithm you're using. bcrypt lacks this functionality: you only ever get 192 bits of hash.
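A sketch of that property using PBKDF2 from the Python standard library (chosen just because it's built in; the point is the configurable `dklen`, which bcrypt has no equivalent of):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# A KDF with a configurable output length: request exactly the key
# size your cipher needs (the iteration count here is illustrative).
key_256 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
key_512 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=64)

print(len(key_256) * 8, len(key_512) * 8)  # 256 512
# bcrypt, by contrast, always yields a fixed 192-bit (24-byte) digest.
```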