Part of the reason why English and French settlers in particular saw the local Indians as incapable of building the things they saw is that, by the time those settlers arrived, the local populations had been massively depopulated and really weren't capable of it anymore. This wasn't the case in the Spanish settlement regions, because the Spanish got there so much earlier and actually saw the complex societies in their heyday. Bernal Díaz, who took part in the conquest of Mexico completed in 1521, was not dismissive of the locals' construction abilities! Mexico did depopulate - it didn't reach the same population as it had in the early 1500s until the mid-20th century - but the plagues didn't hit until the Spanish had seen what was there.
I've heard bitching about some of the interview questions our team asks before, but here's the thing: each of those questions is a problem our team actually had to solve, reformulated into an interview-style question. Yes, we've had to use Hamming distances, worry about the scaling (N log N vs N^2) of particular solutions, and use error-correcting codes, interesting data structures, and all of that. Is that most of the job? No, we do a lot of more boring stuff too, but the algorithms and data structures are definitely a part of it. I don't want someone who can only glue pieces together; developing novel tools to solve the problem is important too.
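To make the Hamming-distance and scaling mention concrete, here's a minimal sketch (function names and the near-duplicate framing are my own illustration, not the commenter's actual problem): comparing every pair of items costs O(N^2) distance computations, whereas an exact-match dedup via sorting or hashing is O(N log N) or better.

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions at which two equal-length sequences differ. O(len)."""
    if len(a) != len(b):
        raise ValueError("inputs must have equal length")
    return sum(x != y for x, y in zip(a, b))

def near_duplicate_pairs(items, max_dist):
    """Naive all-pairs scan: O(N^2) comparisons. Fine for small N,
    but the kind of thing you'd replace with an O(N log N) approach
    (sorting/bucketing on signatures) as N grows."""
    return [(a, b) for a, b in combinations(items, 2)
            if len(a) == len(b) and hamming_distance(a, b) <= max_dist]
```

For example, `hamming_distance("karolin", "kathrin")` is 3, and `near_duplicate_pairs(["1011", "1001", "0000"], 1)` flags only the first two strings as near-duplicates.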
One needs to be especially careful when adapting real experience into interview questions. Unlike the candidate, you have probably been immersed in the problem domain for months or years, and you have a good sense of which techniques exist and are relevant to it.
One mistake some interviewers make is implicitly assuming that candidates can somehow conjure the same level of context from first principles, or that a specific algorithm might be familiar or reusable outside of its original context. Another mistake is "looks-like-me" bias.
For example, I happen to have a lot of context on a very specific algorithm that underlies basically every modern web framework, but if I wanted to evaluate a candidate on web performance, I'd treat performance optimization as an open-ended problem domain rather than drilling them on the particulars of that specific algorithm. In fact, out in the world of web framework performance, the most novel advancements come not from revisiting the algorithms but from looking at the problem domain from entirely new angles that hadn't even been considered before.
Every time I feel "this would be a good interview question!" it is not. It's usually something juicy that I chewed on for two days. How can I expect a candidate to solve it in 45 minutes?
I've had that same feeling on that side of the table, too. Some of the stuff I've run into is really cool and rewarding, and it's tempting to use because it would make the interview more fun for me. Because I know the answer.
Over time I've learned that I'd rather lean a little toward the easier side than the harder one when it comes to writing code during an interview, because the interview is unpredictably stressful. At the same time, as prioritizing communication and a degree of thoughtfulness has become more important (which ended up with me bopping over to the devex job where I am now), I've leaned more heavily on "let's talk through XYZ and suss out how you discursively approach the problem" types of interviews. That definitely selects for a particular audience, but it's one more useful for the roles I've hired for.
I bet you have.
But how long did you have to solve it?
Did you have access to the internet, talking out ideas with your colleagues, coffees?
I agree that you can't simulate a real scenario in an interview, but you can also acknowledge that the process is a little cartoonish.
Interviewees are encouraged to use Google, code on a laptop rather than on a whiteboard, etc. Sadly, I only get an hour to interview people rather than a week. But I have had people complain about "completely unrealistic problems unrelated to the real work" when the problem is something I literally had to solve 3 months ago.
To be fair, you could just give them the question ahead of time. Email them three days before the interview with the question and say "see you soon." That is much closer to actual conditions.
> each of those questions is about a problem our team actually had to solve before
On my last job search I had one interviewer state (very proudly) the systems design question I was being asked was an actual problem his team had to solve. I don't doubt the veracity of his claim at all, but it probably wasn't solved by a single person under the time constraints and pressure of an interview.
Most likely someone on that team spent hours or days researching and designing potential solutions before drafting a design document that was shared and discussed with others, perhaps informally or perhaps in a meeting (or over the course of multiple meetings) where tradeoffs were considered among people with deep knowledge of the existing system and problem space. Expecting a candidate with only superficial (at best) knowledge of your current system to come up with the same or similar solution on their own in 30 or 40 minutes seems a bit unrealistic to me.
The context is more like this: we regularly have internal brainstorming sessions when we run up against an interesting or tricky problem, to come up with ideas on how to solve it.
So in the context of an interview, I'm trying to treat the interviewee like a colleague who I'm coming to with a problem I'm having, so we can come up with a solution together. That often involves drawing things out on a whiteboard: not code, but more diagrams to describe the problem. Then we come up with ideas on how to do it, under various constraints that I share.
Usually I have in my pocket 2-3 different approaches that we tried when we did it ourselves, and I'm looking for: can you understand the tradeoffs between these different approaches, do you understand how they work, and are you capable of implementing them to test and cross-compare them?
Scientific computing. We write software to do domain-specific scientific analysis on customers' machines. So we have a diverse set of needs of the specific analyses to do: sometimes we just use algorithms out in the open literature, sometimes we have to develop them ourselves. We've also had substantial work in devops, because we have to release software packages that not-very-savvy academic users can deploy and run on their own wildly-varying machines.
Compare to needle cases, which were similar women's tools. Most of them were wooden or bone, but rich women would get fancy ones made of bronze, and they were pretty commonly buried with them.
Sewing, spinning, weaving, and knitting tools are fairly common grave goods to find in women's graves: spindle whorls, needle cases, bone beaters, etc. As for wood vs. bronze, that's almost certainly survivorship bias. The only wooden objects that survive that long are those that have fallen into bogs, where the anoxic environment prevents rot. Ordinary wooden grave goods will have rotted away by now.
Plenty of people foresaw it. California spent ~$200m on building up a stockpile of PPE and other medical supplies back in 2006, before Brown cut funding for the program in 2011.
The risk of Covid to children is, as you say, extremely low. The risk of the Pfizer vaccine to children is even lower.
Right now those "individuals with an underlying complex disability" are not allowed to get vaccinated because the vaccine has not been approved for young children, even with an emergency use authorization.
I think the parents of children with disabilities that make Covid more potentially dangerous for them ought to be allowed to vaccinate their child. Do you disagree?
I think we're at the point where public mandates are doing more harm than good. I am disappointed that there are so many people pushing back against simple approval of an effective vaccine because they are worried about an unnecessary mandate being imposed.
Indeed. Even this panel noted that they must keep in mind that what they are voting on is whether to release it for optional use, not mandatory use. Unfortunately, they have no say in whether government bodies decide to mandate it.