First, those are questions for a variety of different positions all mixed together. Product manager candidates get asked different questions than engineering candidates.
Second, yes, Microsoft tracks the effectiveness of interviewers based on the outcome of the interview loops and the performance of the people who get hired.
Third, all of the questions listed, no matter how trivial they seem, are dispositive. I've interviewed engineering candidates with PhDs from top-tier universities who couldn't reverse a linked list when asked, or who couldn't even explain basic concepts in their putative focus areas. You have to ask the seemingly stupid stuff -- it's a continual surprise.
Full disclosure: no, I don't work there now. Yes, I used to, a very long time ago.
"Third, all of the questions listed, no matter how trivial they seem, are dispositive. I've interviewed engineering candidates with PhD's from top tier universities who couldn't reverse a linked list when asked, or who couldn't even explain basic concepts in their putative focus areas. You have to ask the seemingly stupid stuff -- it's a continual surprise."
While there are certainly idiots with PhDs, if you've got a candidate with a PhD from a top-tier institution who "can't reverse a linked list", it's most likely because (s)he is under-prepared in the art of technical interviewing. Idiocy is clearly not impossible, but the conclusion that the candidate is an idiot should be on the low-prior-probability event list.
One thing that drives me absolutely crazy about technical interviews today is that most interviewers have completely lost the bubble on what they're trying to accomplish. It's become a bizarre, nerdy form of Kabuki theater, wherein candidates are madly trying to cram their heads full of list- and string-algorithm esoterica, while hoping that they're not presented with questions so unfamiliar that they can't derive the answer in under 30 minutes at a whiteboard. If the interviewer doesn't take this into account (and few do), the interview becomes little more than a random, high-pass screen, wherein lots of smart people are eliminated from positions based on bad luck and little else.
Joel (on Software) has it right, but it seems like few people are listening: you want to make sure the candidate is smart, and can get things done. That's it. The goal is not to see if they're the next human incarnation of Alan Turing, and it's certainly not to see if they can derive fiendishly difficult algorithms during a whiteboard lecture (call me crazy, but I'm reasonably sure that Turing didn't come up with his theories while talking continuously in front of a whiteboard, while some Asperger-y geek sneered at him from across the room).
When you're interviewing, you want to make sure that the candidate can write code, that they're reasonably smart, and (IMHO) that they're not an asshole. It seems to me that our industry has taken "coding puzzle" and turned it into "crappy intelligence test", while largely ignoring the "not an asshole" part of the equation -- the part that's usually the most important in real life.
I could not agree more. The thing I finally realized is, it doesn't matter. Someone who will struggle reversing a linked list will struggle on any coding problem, no matter how simple. The one I ask is: given a string, return a hash where the keys are the letters in the string and the values are how often each appears: "hello" becomes
{"h"=>1, "e"=>1, "l"=>2", "o"=>1"}
This question has stumped almost everyone I have asked it to. Either they just sat there and wouldn't code it, or they made insane mistakes like getting the syntax of writing a function wrong, or they couldn't write a loop that worked.
I let them do it on a laptop, I let them look at any API documentation they want, and I offer them help when they're stuck. It doesn't matter. Would they also flail trying to reverse a linked list? Sure, definitely. But why bother? This is the simplest thing that will cause non-programmers to flail, so why make it hard?
Plus the added benefit: a manager might claim "when will they ever need to reverse a linked list?" but nobody's going to say that writing a loop is unreasonable.
Reversing a linked list is not an Alan-Turing-level problem.
I've used the reverse-a-list problem (and others of a broadly similar kind) in interviews. It shouldn't matter at all if a candidate hasn't read or written any list-manipulation code in years; that gives them the chance to sit and scribble some diagrams for me and work out how to do it. I don't mind much if they don't end up with a beautifully neat solution. I want to know things like: Did they check for edge cases? Did they blunder around trying things until they got code they couldn't prove was wrong, or did they work out something that ought to work and then implement it? Did they ask themselves questions like "what facts ought always to be true at this point in the code?" If they produce an inelegant or inefficient solution and I say "So what happens if you do X?" or "Could you do anything about Y?", do they panic and freeze, or do they get thinking about the question? These are not a matter of having crammed their heads with algorithm esoterica. They're a matter of being reasonably comfortable with code, and of being a problem-solver rather than a copy-and-paste artist.
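To be concrete, here's roughly the shape of answer I'm hoping to see -- a minimal iterative sketch in Ruby (the language, the Node struct, and the reverse_list name are all just assumptions for illustration, since Ruby has no built-in linked list):

    # A minimal singly-linked list node.
    Node = Struct.new(:value, :next_node)

    # Reverse a singly-linked list iteratively, returning the new head.
    # Loop invariant: `prev` heads the already-reversed prefix, and
    # `node` heads the untouched remainder of the original list.
    def reverse_list(head)
      prev = nil
      node = head
      until node.nil?
        following = node.next_node   # save before overwriting the link
        node.next_node = prev
        prev = node
        node = following
      end
      prev                           # nil for an empty list, as it should be
    end

    list = Node.new(1, Node.new(2, Node.new(3, nil)))
    reverse_list(list).value  # => 3

Nothing exotic: the interesting parts are the invariant and the edge cases, which is exactly what I'm probing for.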
I'm quite sure there are plenty of software developer jobs that can in fact be done by someone who's fundamentally not very comfortable thinking about code, and/or who has little interest in problem-solving or little aptitude for it. It happens that those aren't the jobs I've interviewed people for, but they're real enough and collectively they probably account for a majority of the value added to society by software development. But when you're interviewing for a job that does call for independent thinking and fluent code reading and writing, that sort of question -- if used correctly -- is very valuable.
Of course, an interviewer who thinks the point is to say "Write me some code that reverses a linked list" and then vote yes if the candidate does it on the spot and no otherwise, is going to reject some good candidates and accept some bad ones. But an interviewer who thinks that way is going to get lousy results regardless.
"Reversing a linked list is not an Alan-Turing-level problem."
I'm not suggesting that it is. The problem is, people are asking much harder questions -- questions that require "aha" brilliance -- and using them as a proxy for intelligence. That's stupid.
Hell...it's not even a deal-breaker if someone has to struggle a little to work out the algorithm for reversing a list, so long as they get it right. The problem is that if you spend more than 5 minutes (or whatever) doing it, 99% of nerds are going to flip the idiot bit on you, and it's time for "Do You Have Any Questions For Me?"
Once you've set up the game such that a candidate has to memorize the answers to "easy" questions in order to perform, there's no end to the regurgitation that could be required. Before long, people are committing obscure algorithms to memory, because "someone might ask", and they don't want to appear to be stupid. It's a waste of time and energy for everyone involved.
Well, FWIW, when I've asked people "write me code to reverse a list", I've simply assumed that unless the candidate is a real superstar it'll probably take them a while to get to a working solution, and I'm happy to give them some help along the way. I completely agree that an interviewer who expects a correct solution within a few minutes (even to a relatively simple problem like that one) is being dumb and helping to make the world a worse place.
As for "aha! brilliance", the trouble with that is that the variance is so large; someone very good may well take a while, and someone not so good may well happen to get there quickly.
"Gets-things-done" people prepare for technical interviews, such that they don't flub simple questions. The danger for the employer is that lots of Computer Science PhDs end up not writing much code during their research, and may not be a good fit at many tech employers.
"Interview for Smart and Gets Things Done" is an accurate explanatory abstraction over what good interviewers do, but it's not prescriptive enough to tell people what questions to ask and how to judge the results.
"'Gets-things-done' people prepare for technical interviews, such that they don't flub simple questions."
So, let's be clear: you're admitting that you're screening for a trait that you assume is correlated with the trait that you actually want. I don't grant the assumption, but it should at least be explicitly stated.
"The danger for the employer is that lots of Computer Science PhDs end up not writing much code during their research, and may not be a good fit at many tech employers."
How many computer science PhDs have you interviewed, let alone hired? I'd wager that it's not enough for you to be able to make this judgment with any confidence. And even if you're right, how does asking questions about linked-list reversal address a person's tendency to do practical work? Aside from tech interviews, I've never once in my life had to write a linked-list reversing routine.
Look, I'm not saying "don't ask coding questions" -- I'm saying that we need to start being reasonable. Don't assume that a candidate is a no-hire simply because they haven't pre-memorized the algorithms for the questions that you're asking. It's ridiculous that we're screening people based on the number of silly tricks that they can memorize from interview question websites.
I think by and large we agree: it's wrong to expect memorized answers or to ask questions so narrow that they only test whether a candidate spent time studying. I'd even go so far as to say that algorithmic questions are probably not good indicators of skill at many kinds of work we would call "programming" -- i.e., programming skill is more heterogeneous than many interviewers admit. We shouldn't expect a jQuery wiz to nail low-level data structure questions, and we shouldn't expect a bit-twiddling video codec developer to really grok method chaining in 30 minutes.
The trouble with "Be reasonable" is that it's the advice equivalent of a tautology. Of course we should be reasonable when interviewing. But I don't think there's widespread agreement about how to operationalize that. I'd be curious for more detail about how you would do it -- you seem to have strong, well-informed feelings on this issue.
To my knowledge, there's basically no publicly-available research on tech interview factors and how they correlate with on-the-job performance. The good big employers do this research internally and keep it to themselves. The rest of us are stuck with assumptions, intuitions, logic and argument. So unfortunately I don't think we'll be able to get the debate into the realm of interpreting real data anytime soon.
"The trouble with "Be reasonable" is that's it's the advice equivalent of a tautology. Of course we should be reasonable when interviewing. But I don't think there's widespread agreement about how to operationalize that. I'd be curious for more detail about how you would do it---you seem to have strong, well-informed feelings on this issue."
I think my primary argument is that it's basically impossible to assess "intelligence" in a 30-minute whiteboard session. There are too many other factors: thinking style, nerves, fear of speaking, etc. But most interviewers will flip the idiot bit if you don't answer their pet algorithm question quickly enough; it can be very difficult to come back from that kind of deficit.
It's usually pretty easy to tell if someone can't code -- you give them a straightforward problem (no "aha" moment required), and make them write the code. If you're really worried about it, give them a phone-screen problem that requires coding, then make them write a variant of the solution during the interview.
For "intelligence" testing, I like to give design problems, since they're far more representative of what will happen on the job. For these, I usually reach back into the grab-bag of recently-solved problems, and ask them to sketch out a solution, then iterate. If their solution is better than my own, that's a win. If it's the same, that's good too. If they just can't come up with something reasonable...well, we have a problem.
But note what you don't see here: there are no situations where I ask a question that requires a moment of algorithmic brilliance. That's just too random. Google can get away with that kind of stuff, because they get thousands of resumes a day, and can't possibly hire every good engineer that applies. For the rest of us, we have to be a bit more intelligent.
If the job involves writing code, you're going to be asked to write code in the interview.
The "reverse-a-linked-list" question is an attempt at "minimal coding question that anyone should be able to do in 15 minutes at a whiteboard". It's an indicator, a mark of the ability to think on your feet and understand the basics -- think of it as the "pons asinorum" of programming. (cf http://en.wikipedia.org/wiki/Pons_asinorum)
If you're going to demand argument-from-authority, I've probably interviewed more than 200 PhDs for various positions over the last 30 years, and a statistically significant number of them were unable to pass a basic set of tests that history has shown to indicate the ability to function in an industrial-style software development environment.
Industry demands a different skill set than research; not better, not worse, just different.
I wasn't demanding an argument from authority; I was trying to point out that your argument is just an opinion -- you're already arguing from authority.
" I've probably interviewed more than 200 PhD's for various positions over the last 30 years, and a statistically significant number of them were unable to pass a basic set of tests that history has shown to indicate the ability to function in an industrial-style software development environment."
What you're saying is that a "significant" number of PhDs (from the small sample that you have interviewed) have not passed the tests that you put in front of them. That's a long way from the argument that PhDs tend not to thrive in industry.
Ignoring the fact that you're begging the question (are your tests any good?), I would wager that a "significant number" of interviewees with any degree would fail your tests. The question is, do PhDs fail at a higher or lower rate? I very seriously doubt you have enough data to substantiate your claims about the industry-worthiness of people with doctoral degrees.
I don't know how it works in the US, but in the UK a PhD is usually awarded based purely on research -- there is relatively little cutting-edge research that I am aware of that hinges on manipulating linked lists. :-)