He also now has a strong incentive to stick to what he said. Complying with China after publicly saying he wouldn't would be an even bigger PR disaster than what Blizzard is facing right now.
They touched on this in the article, but I'll go into more detail here since it's been fashionable for years to bash on "Blub" languages like Java or Go. I myself was guilty of this for a long time until I started using these languages in settings where they shine, and developed an appreciation for them.
The argument has nothing to do with machine time; it's entirely about developer time. The problem with expressive languages like Lisp, Ruby, Python, etc. is that the language ends up varying from person to person - the more expressive the language, the more variance there is. This is a feature when you're a small team, because the abstractions you build let you move quickly, but it's a bug when you're a large team maintaining a piece of software over years, where developers have come and gone. The ramp-up time to learn and understand the various abstractions people have built over the years accumulates and cancels out the gains those abstractions gave earlier on.
Blub languages on the other hand tend to be more uniform, so it's easier for someone who isn't very familiar with the code to dive in and understand what is going on.
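As an illustration of the variance argument (my own toy example, not from the original comment): here are two Python versions of the same validation task. Neither is "the right way" - that's exactly the point.

```python
# Style A: a home-grown abstraction. Quick to extend once you know it,
# but a newcomer must first learn what `validated` does before touching
# any class that uses it.
def validated(**rules):
    def decorator(cls):
        original_init = cls.__init__
        def __init__(self, **kwargs):
            for field, check in rules.items():
                if not check(kwargs.get(field)):
                    raise ValueError(f"bad {field}")
            original_init(self, **kwargs)
        cls.__init__ = __init__
        return cls
    return decorator

@validated(age=lambda a: isinstance(a, int) and a >= 0)
class User:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

# Style B: the "Blub" version. More typing, but any reader can follow
# it with zero project-specific knowledge.
class PlainUser:
    def __init__(self, age):
        if not (isinstance(age, int) and age >= 0):
            raise ValueError("bad age")
        self.age = age

print(User(age=30).age, PlainUser(30).age)
```

On a small team, Style A pays off fast; across a decade of rotating maintainers, every such dialect is another thing to learn before you can safely change anything.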
Java is no longer uniform; the latest versions keep adding more and more features. And there's also Kotlin and Project Lombok for people who want even more excitement.
And yet Java somehow manages to be both a boring language and still have too many things to learn. This achievement is probably not appreciated enough by those who criticize it.
Yep, they are two very different languages. However, with the possible exception of the build-times argument, the points for choosing Go over C++ also apply to choosing Go over C.
> "Machines are taking control of investing..... Funds run by computers that follow rules set by humans"
One of the points that the article makes is that this statement is changing, that computers are increasingly creating their own rules. The literal next sentence:
"New artificial-intelligence programs are also writing their own investing rules, in ways their human masters only partly understand."
You're right though, that the responsibility still falls on the humans. Anybody running algos that they don't understand should ensure that they are covered from a legal/ethical perspective.
Actually, the problem is not that the human masters only partly understand; it's that in some cases they don't understand at all. Some models are so involved that no one knows how they'll behave under certain market conditions, as we saw in the perfect storm of 2008 with credit default swaps, CMOs and CBOs. That's the danger, no matter how smart and clever the creator believes they are.
What I meant was the machine learning systems that have been developed in the decade since the 2008 crash. Many of these are black boxes with thousands to millions of variables, and it's very difficult to understand exactly why an ML algorithm made the decision it did.
The former. The LTCM blowup in the late '90s is a perfect example of a generally correct trade where the market nonetheless moved against them long enough to force insolvency.
Apparently, none of the Nobel prize winners thought to model out that particular adverse condition.
Bunch of smart people + one missed market condition = failure (eventually).
Speaking for Canada rather than Europe but I imagine the argument is mostly the same. The supply and demand ratios are different. Canada has very few pure software firms like Google or Amazon compared to the US, so there is significantly less competition for the engineers that work there. This drives prices down - or rather, prevents prices from being driven up.
> most of us have seen managers try to improve quarterly numbers at the expense of company health
Yep, this does happen. The point, though, is to let shareholders rather than managers decide whether this is a good choice. If managers are sacrificing the long-term health of the company, shareholders can vote them out or choose to sell their stake.
This is one of the downsides of the rise of passive investing and ETFs based on market cap - fewer votes are going into the system and holding people accountable.
> This is one of the downsides of the rise of passive investing and ETFs based on market cap - fewer votes are going into the system and holding people accountable.
Uh, the fund managers vote their shares, and they've got way more time and subject-matter expertise to do it than the mom-and-pop investors.
> In this study, we tested 77 undergraduates who were randomly assigned to play either a popular video game (Portal 2) or a popular brain training game (Lumosity) for 8 h.
Sorry, my comment should have been clearer. I wasn't actually asking for the sample size - I was making a statement that the sample size was so ridiculously small as to be irrelevant. It was more of an eye roll. Thank you for responding to my literal comment and reminding me to be clearer in my messages :)
77 is a large enough sample size to produce statistically significant results in a lot of cases, depending on the problem. For example, using 77 men and women, you could easily see a statistically significant difference in height between genders.
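You can sanity-check the height example with a quick simulation. The means and standard deviations below are ballpark population figures I'm assuming for illustration, not numbers from the study:

```python
# With ~38 men and ~39 women (77 total), a true difference of ~13 cm
# dwarfs the ~6-7 cm within-group spread, so a two-sample t-test
# detects it easily.
import math
import random

random.seed(0)
men   = [random.gauss(175, 7) for _ in range(38)]   # heights in cm
women = [random.gauss(162, 6) for _ in range(39)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(men, women)
print(round(t, 1))  # far beyond the ~2.0 threshold for p < 0.05
```

The point: whether n = 77 is "enough" depends entirely on the effect size relative to the noise, not on the raw count.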
This paper is pretty simple; it's just a straightforward before-and-after test. They are holding a lot of factors constant and not doing an observational study, so 77 is actually a fine sample size for something like this.
This is exactly why TypeScript took off and none of the other compile-to-JS languages did (apart from a brief flash by CoffeeScript). Dart had a very similar problem to Reason - it wasn't JavaScript. The interop between Dart and JS libraries was just too much of a pain to deal with, whereas in TypeScript everything just worked.
Since building your own ecosystem that rivals the JS ecosystem for libraries is an extremely difficult task, any candidate to replace JS must have good interop with its libraries to be successful.