
If you are getting pwned by running random executables found on USB drives, passkeys aren't going to save you. Same if the social engineering is going to get you to install random executables.


If you're getting pwned, a physical Security Key still means the bad guys don't have the actual credential (there's no way to get that), and they have to work relatively hard to even create a situation where maybe you let them use the credential you do have (inside the Security Key) while they're in position to exploit you.

These devices want a physical interaction (this is called "User present") for most operations, typically signified by having a push button or contact sensor, so the attacker needs to have a proof of identity ready to sign, send that over, then persuade the user to push the button or whatever. It's not that difficult, but it's one more step, and if that doesn't work you've wasted your shot.
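For the curious, here is roughly what that "User present" check looks like on the relying-party side. A minimal Python sketch, not a full WebAuthn verifier; the byte offsets follow the spec's authenticator-data layout, and the function name is mine.

    def check_user_presence(auth_data: bytes) -> bool:
        # Authenticator data: 32-byte RP ID hash, 1 flags byte, 4-byte counter.
        if len(auth_data) < 37:
            raise ValueError("authenticator data too short")
        flags = auth_data[32]
        if not flags & 0x01:                # UP bit: someone touched the key
            raise ValueError("no user presence; reject the assertion")
        user_verified = bool(flags & 0x04)  # UV bit: PIN/biometric also checked
        # A real verifier also checks the RP ID hash, challenge, origin,
        # signature, and that the signature counter increased.
        return user_verified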


Malicious binary steals browser cookies, giving the attacker access to all active sessions?


It gets better. With malware on the box the attacker owns the primary refresh token, which can mint new browser tokens without needing passwords or MFA.

Definitely use FIDO2, but understand that it's not foolproof. Malware, OAuth phishing, XSS, DNS hijacking, etc. will still pwn you.


All your 2FA apps, tokens, security keys, certificates and whatnot only protect the authentication (and, in the case of online banking, a few other actions like transferring money). After that, a single bearer token authenticates each request. If your endpoint is compromised, the attackers will simply steal the bearer token after you authenticate.
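To make that concrete, a toy sketch (the URL and token are placeholders, not any real service): once the bearer token is exfiltrated, the attacker's requests are indistinguishable from yours.

    import requests

    stolen_token = "eyJhbGciOi..."  # lifted from the compromised endpoint
    resp = requests.get(
        "https://api.example.com/v1/me/messages",
        headers={"Authorization": f"Bearer {stolen_token}"},
    )
    print(resp.status_code)  # 200 -- the server only ever sees a valid token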


That's true, but in terms of system design you definitely should ask to see the proof of identity again during unusual transactions and not just that bearer token. For example, attempts to add or remove 2FA should need that extra step, as should, say, high-value financial transactions or irreversible system changes.
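A rough sketch of that step-up check, assuming a session object that records when the user last presented strong proof of identity (names and thresholds here are illustrative, not any particular framework's API):

    import time

    STEP_UP_MAX_AGE = 5 * 60   # seconds of freshness required for strong auth
    SENSITIVE_ACTIONS = {"add_2fa", "remove_2fa", "wire_transfer"}

    class StepUpRequired(Exception):
        """The bearer token alone is not enough for this action."""

    def authorize(session: dict, action: str) -> None:
        if action not in SENSITIVE_ACTIONS:
            return  # ordinary requests ride on the existing session
        last = session.get("last_strong_auth", 0)
        if time.time() - last > STEP_UP_MAX_AGE:
            # Force a fresh FIDO2/WebAuthn assertion (or equivalent) first.
            raise StepUpRequired(f"{action} requires re-authentication")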


I think the claim is that plugging in the USB device is enough. If people needed to try running an executable from the device, some devices would still be compromised, but with lower frequency. I don't know exactly what happens. Automatically-triggered 'driver' install that is actually malware? Presenting as a keyboard and typing commands? Low-level cracks in the OS USB stack?

It feels to me more like OSes ought to be more secure. But USB devices are extremely convenient.


Usually presents as a keyboard that types commands, yeah. Win+R -> powershell -> execute whatever you want.

E.g. https://shop.hak5.org/products/usb-rubber-ducky


Still fits "It feels to me more like OSes ought to be more secure."

New USB-HID keyboard? Ask it to input a sequence shown on screen to gain trust.

Though USB could be better too; having unique gadget serial numbers would help a lot. Matching by vendor:product at least means the duplicate-gadget attack would need to be targeted.
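A sketch of that type-the-sequence-shown-on-screen idea. The read_line_from_device callable is hypothetical; a real implementation would read from that specific HID device's input node, not the merged system input, and would time out on its own.

    import secrets

    def trust_new_keyboard(read_line_from_device) -> bool:
        challenge = "".join(secrets.choice("0123456789") for _ in range(6))
        print(f"New keyboard detected. Type {challenge} on it to enable it.")
        typed = read_line_from_device()   # hypothetical: reads from the new device only
        # A scripted injector firing keystrokes blind can't know the code;
        # a human at a real keyboard can read it off the screen and type it.
        return typed is not None and typed.strip() == challenge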


Sure; the fix for that is blocking unexpected USB devices on corporate devices.
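One way that can look on Linux, as a sketch only: it assumes root and the kernel's per-device 'authorized' files in sysfs, and the IDs are just example values. Real fleets would more likely use USBGuard or an MDM policy.

    import pathlib

    ALLOWED = {("046d", "c31c")}   # (idVendor, idProduct) pairs you trust

    def enforce() -> None:
        for dev in pathlib.Path("/sys/bus/usb/devices").iterdir():
            vid_path = dev / "idVendor"
            if not vid_path.exists():
                continue                   # interface nodes have no idVendor
            vid = vid_path.read_text().strip()
            pid = (dev / "idProduct").read_text().strip()
            if vid == "1d6b":
                continue                   # leave Linux root hubs enabled
            if (vid, pid) not in ALLOWED:
                (dev / "authorized").write_text("0")   # detach its drivers

    if __name__ == "__main__":
        enforce()

As noted upthread, vendor:product can be cloned, so this raises the bar rather than closing the hole.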


I don’t disagree.

But, haven't there been bugs where operating systems will auto-run some executable as soon as the USB drive is plugged in? So, just to be paranoid, I'd classify just plugging the thing in as "running random executables." At least as a non-security guy.

I wonder if anyone has tried going to a local staples or bestbuy something, and slipping the person at the register a bribe… “if anyone from so-and-so corp buys a flash drive here, put this one in their bag instead.”

Anyway, best to just put glue in the USB ports I guess.


Even if the OS doesn't have any bad security practices and doesn't do this, there is a very simple way to execute code from a USB stick: the USB stick pretends it's a USB keyboard and starts sending input to access a terminal. As long as the computer is unlocked, this will work and will easily get full local user access, even defeating UAC or similar measures. It can then make itself persistent by "typing in" a malicious script, and going further from there.


> there is a very simple way to execute code from a USB stick: the USB stick pretends it's a USB keyboard and starts sending input to access a terminal

Good systems these days won't accept such a "keyboard" until it's approved by the user.


Which systems ask before allowing you to use a keyboard you just plugged in over USB? Windows, Ubuntu, Fedora certainly don't, at least not by default.


Mine. Not asking whoever happens to have local physical access interactively, strictly speaking, as that just papers over one of the problems; but controlling what Human Input Devices are allowed when plugged in, by applying rules (keyable on various device parameters) set up by the administrator.

Working thus far on NetBSD, FreeBSD, and Linux. OpenBSD to come when I can actually get it to successfully install on the hardware that I have.

* https://jdebp.uk/Softwares/nosh/guide/user-virtual-terminal-...

In principle there's no reason that X11 servers or Wayland systems cannot similarly provide fine-grained control over auto-configuration instead of a just-automatically-merge-all-input-devices approach.


Assuming the OS isn't running on a laptop, how do you approve the first keyboard or mouse you plug in?


It's not an interactive approval process, remember. It's a ruleset-matching process. There's not really a chicken-and-egg problem where one builds up from nothing by interactively approving things at device insertion time using a keyboard, here. One does not have to begin with nothing, and one does not necessarily need to have any keyboard plugged in to the machine to adjust the ruleset.

The first possible approach is to start off with a non-empty ruleset that simply uses the "old model" (q.v.) and then switch to "opt-in" before commissioning the machine.

The second possible approach is to configure the rules from empty having logged in via the network (or a serial terminal).

The third possible approach is actually the same answer that you are envisaging for the laptop. On the laptop you "know" where the USB builtin keyboard will appear, and you start off having a rule that exactly matches it. If there's a "known" keyboard that comes "in the box" with some other type of machine, you preconfigure for that whatever it is. You can loosen it to matching everything on one specific bus, or the specific vendor/product of the supplied keyboard wherever it may be plugged in, or some such, according to what is "known" about the system; and then tighten the ruleset before commissioning the machine, as before.

The fourth possible approach is to take the boot DASD out, add it to another machine, and change the rules with that machine.

The fifth possible approach is for there to be a step that is part of installation that enumerates what is present at installation time and sets up appropriate rules for it.
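A sketch of that ruleset-matching idea (the rule fields and format here are illustrative Python, not the actual syntax of the toolset linked above): rules keyed on device parameters, first match wins, and the default flips from the permissive "old model" to opt-in once the machine is commissioned.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        action: str                    # "allow" or "deny"
        bus: Optional[str] = None      # e.g. the bus the built-in keyboard is on
        vendor: Optional[str] = None   # USB idVendor
        product: Optional[str] = None  # USB idProduct

    def decide(device: dict, rules: list, default: str) -> str:
        for rule in rules:
            if rule.bus is not None and rule.bus != device.get("bus"):
                continue
            if rule.vendor is not None and rule.vendor != device.get("vendor"):
                continue
            if rule.product is not None and rule.product != device.get("product"):
                continue
            return rule.action         # first matching rule wins
        return default

    # Commissioning: start with default "allow" (old model), then tighten to
    # default "deny" plus explicit rules for the "known" keyboard.
    rules = [Rule("allow", bus="usb1", vendor="05ac")]   # example values
    print(decide({"bus": "usb1", "vendor": "05ac", "product": "0250"}, rules, "deny"))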


I've only seen it on Macs


Out of curiosity, how does that work if this is the only input method connected? Or is this only shown if you have another keyboard (and/or mouse) already connected?


Sorry, IDK, I've only used their laptops


Good luck doing hardware development without USB ports, as the IT team at my employer recently found out.


Second-best option is to whitelist USB devices by PID and VID.


USB devices can be USB HIDs



