When we did annual pen testing audits for my last company, the security audit company always offered to do phishing or social engineering attacks, but advised against it because they said it worked every single time.
One of the most memorable things they shared is that they'd throw USB sticks in the parking lot of the company they were pentesting and somebody would always put the thing into a workstation to see what was on it and get p0wned.
Our company does regular phishing attacks against our own team, which apparently gets us a noteworthy 90% ‘not-click’ rate (don’t quote me on numbers).
Never mind that that 10% is still 1500 people xD
It’s gone so far that they’re now sending them from our internal domains, so when the banner to warn me it was an external email wasn’t there, I also got got.
I used to work for an anti-phishing-focused brand protection firm; we provided training and testing to third parties and heavily, aggressively dogfooded our own products.
So, of course, we got to a point as a company where no one opened any email or clicked any link ever. This caused HR pain every year during open-enrollment season, for other annual trainings, etc.
At one point they started putting “THIS IS NOT A PHISH” in big red letters at the top of the email body to get folks to open emails and handle paperwork.
So then our trainers stole the “NOT A PHISH” header and got almost the entire company with that one email.
At a previous position, I had a rather strained relationship with the IT department - they were very slow to fill requests and maintained an extremely locked-down Windows server that we were supposed to develop for. It wasn't the worst environment, but the constant red tape was pretty frustrating.
I got got when they sent out a phishing test email disguised as a survey of user satisfaction with the IT department. Honestly I couldn't even be mad about it - it looked like all those other sketchy corporate surveys complete with a link to a domain similar to Qualtrics (I think it was one or two letters off).
TBH this is probably the best argument for actually conducting phishing pentests. It shuts up the technical users who think they're too smart to need the handrails and safety nets that the IT department set up for the rest of the average plebs who work there.
(Speaking as one of the technical users here. Of course, it wouldn't happen to ME! :P )
If you've got email filters set up that sort emails by (DKIM-verified) sender into folders, phishing becomes immediately obvious as you start to wonder why it isn't sorted into the right folder.
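A rough sketch of that kind of rule, with Python standing in for whatever your mail filter actually runs (pick_folder and the folder names are illustrative, not any particular client's API):

    import email.utils
    from email.message import Message

    def pick_folder(msg: Message) -> str:
        # The receiving server records DKIM results in Authentication-Results.
        auth = msg.get("Authentication-Results", "")
        sender = email.utils.parseaddr(msg.get("From", ""))[1]
        domain = sender.rpartition("@")[2]
        # Only sort by sender when DKIM passed for that exact domain;
        # anything that fails stays in the inbox and immediately looks odd.
        if domain and "dkim=pass" in auth and f"header.d={domain}" in auth:
            return f"by-sender/{domain}"
        return "INBOX"

Real setups usually do the same check server-side (e.g. as Sieve rules), but the idea is identical: unverified mail never lands in a trusted folder.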
I dunno, if I get phishing emails in my inbox I feel like a certain team has already failed. We have a firewall that blocks anything non-approved. Do the same thing with emails.
My former company would send out rewards as a thank you to employees. It was basically a “click here to receive your free gift!” email. I kept telling the security team that this was a TERRIBLE precedent but it continued nonetheless. The first time I got one I didn’t open it for ages, even after confirming the company was real. It was only after like the 5th nagging email that I asked security about it and they confirmed that it was in fact a real thing the company was using. I got a roomba, a nice outdoor chair, and some sweet headphones. =)
There are SO MANY terrible practices like this carried out by companies big enough to know better. From registering new domains for email addresses (for a while a BigCorp customer of ours had a mix of @bigcorp.com and @bigcorp2.com email addresses; how the hell is any user meant to guess that MediumCorp hasn't also spun up a mediumcorp2.com mail server?!) to FedEx sending "click this link to pay import duties" texts from random unaffiliated (probably personal?) mobile numbers as their primary method of contacting recipients... The internet (like credit cards) is built on and around trust, and it shouldn't be.
Congrats on the loot, though! Your former company can't be all bad. ;)
This pisses me off when the company I work for has a website for the new application of the week. I couldn't even begin to tell you how many websites we have. They don't have a list of them anywhere.
I'm so surprised by this, not because I don't think that many people would fall for a phishing attempt, but because the corporate "training" phishing emails are so glaringly obvious that I think it does a disservice to the people being tested. I feel like it gives a false impression you can detect phishing via vibes when the real ones will be much stealthier.
Are your phishing emails good? If so, would you mind name-dropping the company so I can make a pitch to switch to them?
I had the opposite problem recently: I got a work phishing email from netflix.com. Now, I still shouldn’t have clicked on it, since Netflix isn’t attached to my work email, but you couldn’t actually send a phishing email from account@netflix.com; they had to give the phishing company access to our inboxes so it could manually drop the message in.
Like many other scams, an “obvious” entry point can be very useful as it makes victims self-selected, and a lot more likely to follow to completion. Even if the opportunity cost of phishing is low, having nobody report the attempt makes for a longer window of operation.
> so when the banner to warn me it was an external email
These are so obviously useless. When the majority of your email has a warning banner it stops being any sort of warning. It's like being at "code orange" for 20 years after 9/11; no one maintained "heightened security awareness" for decades, it just became something else to filter.
> they'd throw USB sticks in the parking lot of the company they were pentesting and somebody would always put the thing into a workstation to see what was on it and get p0wned.
One of my favorite quotes is from an unnamed architect of the plan in a 2012 article about Stuxnet/the cyber attacks on Iran's nuclear program:
"It turns out there is always an idiot around who doesn't think much about the thumb drive in their hand."
I don't think we should be calling the users idiots when we failed to make our systems secure by design. If a simple act like plugging in a thumb drive by a well-meaning user undermines the security of an entire operation, then why do we allow such a thing to happen?
Are we still at the "Bill Gates got a BSoD during the demo of USB" level?
I know that at least on Linux mounting filesystems can lead to nasty things, so there's FUSE, but ... I have no idea what distros and desktop environments do by default. And then there's all the preview/thumbnail generators and metadata parsers, ...
One big problem with USB is that something might look like a storage device to the human eyes and hands, but it's actually a keyboard as far as the computer is concerned.
The U stands for Universal, and it's awfully convenient, but it contributes to the security nightmare.
A CD we can just passively read the bytes off, but if we want our keyboards to just work when we plug them in, then it's going to be harder to secure a supposedly dumb storage device.
Sure, and it can be any kind of device, and ... it can trick the OS into loading strange drivers (with juicy vulnerabilities), but that's the point. How the fuck is this still the norm? (Despite user-mode driver frameworks!?)
Last year I got a phishing email at my work address, and it was more convincing than most. I knew it was phishing, but it might have fooled me if I'd been less attentive.
When I see these sophisticated phishing messages I like to click through and check out how well-made the phishing site itself is, sometimes I fill their form with bogus info to waste their time. So I opened the link in a sandboxed window, looked around, entered nothing into any forms.
It turns out the email was from a pen testing firm my employer had hired, and it had a code baked into the url linked to me. So they reported that I had been successfully phished, even though I never input any data, let alone anything sensitive.
If that's the bar pen testing firms use to say that they've succeeded in phishing, then it's not very useful.
I think it's fair to put more (or, maybe, less) nuance on that. Zero-days against browsers exist, zero-days against plugins installed via MDM exist. Sure, you didn't actually submit any credentials, but cybersecurity training and phishing simulations have to target a lowest common denominator: people shouldn't click on links in shady emails. Sometimes just the act of clicking is bad enough. So that's what they base assigning training or a pass/fail on: whether you accessed the pretend threat-actor site, and not whether you hit a submit button there.
For what it's worth, all vendors I've worked with in that space report on both. I'm pretty sure even o365's built-in (and rather crude) tool reports both on "clicked link" and "submitted credentials". I'd estimate it's more likely your employer was able to tell the difference, but didn't bother differentiating between the two when assigning follow-up training, because just clicking is bad enough.
If you are getting pwned by running random executables found on USB drives, passkeys aren’t going to save you. Same if the social engineering is going to get you to install random executables.
If you're getting pwned, a physical Security Key still means the bad guys don't have the actual credential (there's no way to get that), and they have to work relatively hard to even create a situation where maybe you let them use the credential you do have (inside the Security Key) while they're in position to exploit you.
These devices want a physical interaction (this is called "User present") for most operations, typically signified by having a push button or contact sensor, so the attacker needs to have a proof of identity ready to sign, send that over - then persuade the user to push the button or whatever. It's not that difficult but it's one more step and if that doesn't work you wasted your shot.
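A stripped-down sketch of why that matters, with plain ECDSA over a server challenge standing in for the real FIDO2/WebAuthn ceremony (everything here is illustrative):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    # The private key never leaves the security key; the server only ever
    # stores the public half, so there is no credential to steal from a
    # phishing page or a breached server database.
    device_key = ec.generate_private_key(ec.SECP256R1())
    server_stored_public = device_key.public_key()

    challenge = os.urandom(32)  # fresh per login attempt

    # "User present": the hardware only produces this signature after the
    # button press, so the attacker has to win that step in real time.
    assertion = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    try:
        server_stored_public.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
        print("login allowed")
    except InvalidSignature:
        print("login rejected")

The real protocol additionally binds the requesting origin into the signed data, which is why a lookalike phishing domain gets nothing useful even if the user cooperates.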
All your 2FA apps, tokens, security keys, certificates and whatnot only protect the authentication (and in the case of online banking, a few other actions like transferring money). After that, a single bearer token authenticates each request. If your endpoint is compromised, the attackers will simply steal the bearer token after you authenticate.
That's true, but in terms of system design you definitely should ask to see the proof of identity again during unusual transactions and not just that bearer token - for example attempts to add or remove 2FA should need that extra step, as well as say high value financial transactions or irreversible system changes.
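As a toy sketch of that step-up idea (the names and the freshness window here are made up, not any particular framework's API):

    import time

    STEP_UP_WINDOW = 5 * 60  # seconds of "recent strong auth" we accept

    class Session:
        def __init__(self) -> None:
            # Updated only when the user completes a full 2FA ceremony.
            self.last_strong_auth = time.monotonic()

    def allow(session: Session, sensitive: bool) -> bool:
        """The bearer token alone is fine for routine requests; sensitive
        ones (2FA changes, large transfers) need a fresh proof of identity."""
        if not sensitive:
            return True
        return time.monotonic() - session.last_strong_auth < STEP_UP_WINDOW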
I think the claim is that plugging in the USB device is enough. If people needed to try running an executable from the device, some devices would still be compromised, but with lower frequency. I don't know exactly what happens. Automatically-triggered 'driver' install that is actually malware? Presenting as a keyboard and typing commands? Low-level cracks in the OS USB stack?
It feels to me more like OSes ought to be more secure. But USB devices are extremely convenient.
Still fits "It feels to me more like OSes ought to be more secure."
New USB-HID keyboard? Ask it to input a sequence shown on screen to gain trust.
Though USB could be better too; having unique gadget serial numbers would help a lot. Matching by vendor:product at least means the duplicate-gadget attack would need to be targeted.
But, haven’t there been bugs where operating systems will auto run some executable as soon as the USB is plugged in? So, just to be paranoid, I’d classify just plugging the thing in as “running random executables.” At least as a non-security guy.
I wonder if anyone has tried going to a local Staples or Best Buy or something, and slipping the person at the register a bribe… “if anyone from so-and-so corp buys a flash drive here, put this one in their bag instead.”
Anyway, best to just put glue in the USB ports I guess.
Even if the OS doesn't have any bad security practices and doesn't do this, there is a very simple way to execute code from a USB stick: the USB stick pretends it's a USB keyboard and starts sending input to access a terminal. As long as the computer is unlocked, this will work, and will easily get full local user access, even defeating UAC or similar measures. It can then make itself persistent by "typing in" a malicious script, and going further from there.
> there is a very simple way to execute code from a USB stick: the USB stick pretends it's a USB keyboard and starts sending input to access a terminal
Good systems these days won't accept such a "keyboard" until it's approved by the user.
Which systems ask before allowing you to use a keyboard you just plugged in over USB? Windows, Ubuntu, Fedora certainly don't, at least not by default.
Mine. Not asking whoever happens to have local physical access interactively, strictly speaking, as that just papers over one of the problems; but controlling what Human Input Devices are allowed when plugged in, by applying rules (keyable on various device parameters) set up by the administrator.
Working thus far on NetBSD, FreeBSD, and Linux. OpenBSD to come when I can actually get it to successfully install on the hardware that I have.
In principle there's no reason that X11 servers or Wayland systems cannot similarly provide fine-grained control over auto-configuration instead of a just-automatically-merge-all-input-devices approach.
It's not an interactive approval process, remember. It's a ruleset-matching process. There's not really a chicken-and-egg problem where one builds up from nothing by interactively approving things at device insertion time using a keyboard, here. One does not have to begin with nothing, and one does not necessarily need to have any keyboard plugged in to the machine to adjust the ruleset.
The first possible approach is to start off with a non-empty ruleset that simply uses the "old model" (q.v.) and then switch to "opt-in" before commissioning the machine.
The second possible approach is to configure the rules from empty having logged in via the network (or a serial terminal).
The third possible approach is actually the same answer that you are envisaging for the laptop. On the laptop you "know" where the USB builtin keyboard will appear, and you start off having a rule that exactly matches it. If there's a "known" keyboard that comes "in the box" with some other type of machine, you preconfigure for that whatever it is. You can loosen it to matching everything on one specific bus, or the specific vendor/product of the supplied keyboard wherever it may be plugged in, or some such, according to what is "known" about the system; and then tighten the ruleset before commissioning the machine, as before.
The fourth possible approach is to take the boot DASD out, add it to another machine, and change the rules with that machine.
The fifth possible approach is for there to be a step that is part of installation that enumerates what is present at installation time and sets up appropriate rules for it.
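For the Linux case, one way to approximate this kind of ruleset (not necessarily how the parent's system does it; USBGuard is an existing tool built around the same mechanism) is the kernel's USB authorization interface: set authorized_default to 0 so new devices attach inert, then have a small privileged helper authorize only devices that match your rules. A rough sketch, with a purely hypothetical allowlist:

    import glob

    # Hypothetical allowlist of (idVendor, idProduct) pairs permitted to attach.
    ALLOWED = {("046d", "c31c")}

    def read(path: str) -> str:
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return ""

    # Assumes /sys/bus/usb/devices/usb*/authorized_default was set to 0 at
    # boot, so newly plugged devices stay unconfigured until approved here.
    for dev in glob.glob("/sys/bus/usb/devices/*"):
        vid, pid = read(f"{dev}/idVendor"), read(f"{dev}/idProduct")
        if (vid, pid) in ALLOWED and read(f"{dev}/authorized") == "0":
            with open(f"{dev}/authorized", "w") as f:
                f.write("1")

Matching only on vendor:product is of course spoofable, which is why per-port rules or unique serials (as mentioned upthread) tighten this up.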
Out of curiosity, how does that work if this is the only input method connected? Or is this only shown if you have another keyboard (and/or mouse) already connected?
What I heard about the Stuxnet attack was different from what you are saying:
The enrichment facility had an air-gapped network, and just like our air-gapped networks, they had security requirements that mandated continuous anti-virus definition updates. The AV updates were brought in on a USB thumb drive that had been infected, because it WASN'T air-gapped when the updates were loaded. Obviously their AV tools didn't detect Stuxnet, because it was a state-sponsored, targeted attack, and not in the AV definition database.
So they were a victim of their own security policies, which were very effectively exploited.
A USB device can pretend to be just about any type of device to get the appropriate driver installed and loaded. It can then send malformed packets to that driver to trigger some vulnerability and take over the system.
There are a _lot_ of drivers for devices on a default windows install. There are a _lot more_ if you allow for Windows Update to install drivers for devices (which it does by default). I would not trust all of them to be secure against a malicious device.
I know this is not how Stuxnet worked (instead using a vulnerability in how LNK files were shown in explorer.exe as the exploit), but that just goes to show how much surface there is to attack using this kind of USB stick.
And yeah, people still routinely plug random USBs in their computers. The average person is simultaneously curious and oblivious to this kind of threat (and I don't blame them - this kind of threat is hard to explain to a lay person).
Sure, but who's going to pick up a random USB-to-SD adapter from the parking lot and plug that into a computer? The point of the USB key experiment is that the "key" form factor advertises "there is potentially interesting data here and your only chance to recover it is to plug this entire thing in wholesale".
You're moving your own goalposts, by now restricting this to a storage device that is fitted into an adapter to make it USB. There is no requirement to limit this to USB, however.
They'll pick up the SD/TF card and put it into a card reader that they already have, and end up running something just by opening things out of curiosity to see what's on the card.
One could pull this same trick back in the days of floppy discs. Indeed, it was a standard caution three decades ago to reformat found or (someone else's) used floppy discs. Hell, at the time the truly cautious even reformatted bought-new pre-formatted floppy discs.
This isn't a USB-specific risk. It didn't come into being because of USB, and it doesn't go away when the storage medium becomes SD/TF cards.
> You're moving your own goalposts... This isn't a USB-specific risk
I'm not, because I am talking about a USB-specific risk that has been described repeatedly throughout the thread. In fact, my initial response was to a comment describing that risk:
> A USB device can pretend to be just about any type of device to get the appropriate driver installed and loaded. It can then send malformed packets to that driver to trigger some vulnerability and take over the system.
The discussion is not simply about people running malware voluntarily because they have mystery data available to them. It is about the fact that the hardware itself can behave maliciously, causing malware to run without any interaction from the user beyond being plugged in.
The most commonly described mechanism is that the USB device represents itself to the computer as a keyboard rather than as mass storage; then sends data as if the user had typed keyboard shortcuts to open a command prompt, terminal commands etc. Because of common controller hardware on USB keys, it's even possible for a compromised computer to infect other keys plugged into it, causing them to behave in the same way. This is called https://en.wikipedia.org/wiki/BadUSB and the exploit technique has been publicly known for over a decade.
A MicroSD card cannot represent anything other than storage, by design.
Stuxnet deployment wasn't just a USB stick, though. It was a USB stick w/ a zero-day in the Windows shell for handling LNK files to get arbitrary code execution. That's not to say that random thumb drives being plugged-in by users is good, but Stuxnet deployment was a more sophisticated attack than just relying on the user to run a program.
Versions of it work without that, though. The gist is that the USB device presents as a keyboard and is pre-programmed to pop a shell and start blasting. No exploits required. See: https://en.wikipedia.org/wiki/BadUSB.
I've seen someone do a live, on stage demo of phishing audit software, where they phished a real company, and showed what happens when someone falls for it.
Live. On stage. In minutes. People fall for it so reliably that you can do that.
When we ran it we got fake vouchers for "cost coffee" with a redeem link, new negative reviews of the company on "trustplot" with a reply link, and abnormal activity on your "whatapp" with a map of Russia, and a report link. They were exceptionally successful even despite the silly names.
A head of security once worked for me (I was CTO), and she was great, great, great. She did the same, putting USB sticks on the printers for example and seeing who would plug one into their computer.
These audits are infuriating. At one company I was at it got so bad that I eventually stopped reading email and told people "If it's important, ping me on Slack"
> One of the most memorable things they shared is they'd throw USB sticks in the parking lot of the company they were pentesting and somebody would always put the thing into a workstation to see what was on it and get p0wned.
Phishing isn't really that different.
Great reminder to set up Passkeys: https://help.x.com/en/managing-your-account/how-to-use-passk...