> The thing is, none of these things really changed with AI
I agree that lying was possible before AI, but something about AI has emboldened a lot more people to try to lie.
Something about having the machine fabricate the lie for you seems to lessen the guilt of lying.
There's also a growing sentiment online that using AI to cheat/lie is "fair" because they think companies are using AI to screen candidates. It's not logically consistent, but it appeals to people who approach these problems as class warfare.
With all the unemployed tech workers, wouldn't it make sense to hire someone who is actually worth their salt to do recruiting and interviews? Recruiters always seem to have a blast hopping between random high-profile companies and ghosting people over text, socials, and the phone. If they lack both the social skills and the technical knowledge, I don't know what their value proposition is, but compared to chronic underemployment after actually learning Java, C, or C++, they're clearly winning.
The problem is that finding someone "worth their salt at recruiting" is very hard. In all the places I've worked, the kinds of interviews we're talking about here (technical problems, etc.) are delegated to regular engineers. Those technical interviewers are likely great at reading and writing code, but they may not be the best at spotting AI fakery.
(The recruiters only come in for the non-technical parts like resume filtering, general information, and benefits. Sometimes there's a non-technical "culture fit" interview, usually conducted by some middle manager from the department doing the hiring.)
Being interviewed has become harder too. You search the net during the interview because you forgot the name of a thing, and the interviewer will assume you're running an AI chat and cheating the interview.
It's not about transparency; it's about what the interviewer assumes about you, firsthand. Just like you assuming that whoever is looking things up must be doing it in secret, with the intention of cheating.
It depends - I'm conducting interviews now, and I'm totally OK with people screen sharing and showing me their internet searches and AI prompts as part of the interview. One of the skills I'm hiring for is "can you find the docs/information you need to solve this", so knowing how to use whatever tools you prefer to do that is important.
- People taking minute-long pauses before answering questions.
- People confidently saying things that are factually incorrect and not being able to explain why they would say that.
- People submitting code they don't understand & getting mad when asked why they wrote something that way.
I get that candidates are desperate for jobs, because a bunch of tech companies have given up on building useful software and are betting their entire business on these spam bots instead, but these techniques _do not help_. They just make the interview a waste of time for the candidate and the interviewer alike.
I interviewed every single candidate for development positions at a 300-400 person company over the last three years, and I saw some incredibly crazy stuff.
- A candidate who wore glasses in which I could faintly see the reflection of ChatGPT.
- A candidate who would pause, look off in one specific direction, and think for 20-30 seconds whenever I asked something a bit difficult. It was always the same direction, so it could have been a second monitor.
- Someone who provided us with a résumé claiming 25 years of experience, but the text was 100% early ChatGPT, full of superlatives. I forgot to open the CV before the interview, but it was SO BAD that I ended things in about 20 minutes.
- Also, a few months before ChatGPT, I interviewed someone for an internship who was getting directions from someone whispering to them. I managed to hear it when they forgot to mute the mic a couple times.
Our freelance recruiter said that people who aren't super social are getting the short end of the stick. Some haven't worked in one or two years. It's rough.
> I interviewed someone for an internship who was getting directions from someone whispering to them. I managed to hear it when they forgot to mute the mic a couple times.
What do you do when something like this happens in an interview? Do you ignore it, call out the interviewee, make a joke about it?
I ignore it and cut the interview short in a subtle way, then ask HR to reject the candidate.
I'm not cold-blooded enough to joke about this hahahaha
I do tend to give immediate feedback to most candidates, but I try to make it strictly technical and very matter-of-fact. A suspicion of cheating is not really something that I'd give feedback on. :/
I would tell the interviewee that I want to continue the interview with the other person since their answers indicate they’d be a good fit for the position.
Years back, I had someone interviewing in person for a low-level, bit-twiddling C++ role without knowing what hexadecimal was (no clue how they got that far; the external recruiter was given "feedback"). They pretty much lied about everything and tried to bullshit their way through questions. I have no idea how they thought they'd manage the job.
Just like with semi-personalized phishing/spam, it's not that these things didn't happen already; it's that people are empowered and emboldened to cheat by it becoming easier. The difference is quantitative, not qualitative.
>There's also a growing sentiment online that using AI to cheat/lie is "fair" because they think companies are using AI to screen candidates. It's not logically consistent
Because it's a nonsensical reduction and false equivalence.
It's like if you saw a headline that some grocery stores were price fixing, so you decide it's only fair if you steal from your local grocery store. One bad behavior does not justify another in a different context. Both are wrong. It's also nonsensical to try to punish your local grocery store for perceived wrongs of other grocery stores.
That's why it's such a ridiculous claim: Two wrongs don't make a right and you don't even know if the people you're interviewing with are the same as the people doing the thing you don't like.
>It's like if you saw a headline that some grocery stores were price fixing, so you decide it's only fair if you steal from your local grocery store.
That's a false equivalence on your part. The real equivalence would be finding out that the store decided to keep zero tills manned and forced you to do the work yourself at the self-checkout, so you go do the self-checkout and keep a few extra items as payment for the work you did.
I used my words to speak to the candidate, so they think it's fair game to use their words to lie.
Screening with AI could be a totally legitimate use of AI depending on how it's done; cheating/lying has no chance of being legitimate. Just like speaking can potentially be used to lie.
Most people here aren't vilifying the use of AI outright, just certain uses of it.
I've conducted interviews where the candidate asked if he could use Google to try to get an answer. I often say "sure". If a guy can read an explanation out of context, understand it well enough to explain it in his own words, and reason about corner cases in a couple of minutes, he's hired. The same goes for AI: canned responses work when you ask canned questions, not so much on open-ended ones.
That's missing the point. The goal is to have a level playing field for the interview.
If your interview format allows people to use outside help but only if they think to ask, that's hardly a level playing field. You're testing the candidate's willingness to ask. In most interview formats it would not be acceptable to Google the answer, so most people won't ask.
If you have an interview format that allows Googling, you should mention that at the start. Not leave it as a secret for people to discover.
The questions don't require Google, but what do you do when you don't know a specific thing? You search for it.
The notion that a candidate must remember the name of a thing or a specific algorithm is just ridiculous. When was the last time you implemented some fancy sorting or tree traversal algorithm from memory?
And if a guy thinks he's able to parse that amount of information in less than a minute, why should I refuse? The end goal is to hire problem solvers: people with analytical thinking who are capable of learning autonomously.
In most companies, the development process is collaborative - spikes, code reviews, informal meetings - so why would you evaluate a candidate for such a team solely on what narrow knowledge he brings to the table when the power is down?
My personal theory is less that the machine fabricating the lie reduces the guilt, and more that the average person has historically not been very good at fabricating a fib, and now has instant access to plausible-sounding lies.