Synology MFA Fails if Internet is down (synology.com)
60 points by alexfromapex on Oct 26, 2023 | hide | past | favorite | 66 comments


Genuine question: does MFA conceptually make any sense for a device on your local network, accessed locally?

I understand why you'd want it if trying to access your NAS from the internet, while traveling. (And in which case, if the internet is down, you can't access it anyways.)

But I'm struggling to understand why you'd ever want to enable MFA for signing into a device on your LAN. If it's on your LAN, you probably have physical access to it, and that's basically a factor in itself. Not "something you have", but "where you are" (or I guess it is "something you have" -- the device itself).

I've only encountered MFA before as something to protect remote account access. None of my other local devices support MFA -- I don't use it to unlock my phone, or my encrypted hard drive. So supporting it for local NAS access seems a bit unexpected.


Lots of companies do in fact treat MFA this way. Microsoft's AAD^H^H^H sorry, Entra ID has conditional access rules that forgo the need for MFA when signing in from known locations, and lots of companies make use of that.

But it's becoming more and more popular, and in many cases necessary, to adopt a "zero-trust" approach to all devices no matter where they are located.

That login attempt coming from your office LAN — how do you know it isn't an automated request from a compromised device? If you are enough of a high-value target, do you think it's inconceivable that someone might try and hop on your wifi network from the parking lot?


I might have the MFA device built into the Synology, were I designing it myself.

Just a little 7-segment LCD on the front of the cabinet. Those are, what, a buck or two? And my 8-bay cost about $1000... it's not a big additional cost.

If you can input the number on that, you're provably local. I don't know if that truly solves the problem, a high-value target might have someone posing as an outside contractor to get an eyeball on it, I guess. But for me at home, it'd be sufficient protection.


Some important context: about 6-ish years ago, Synology's OS got hacked from the wild. I think this was before ransomware, but either way, they got a black eye -- only to get another black eye a year or so later when a generation of their boards was hit by an Intel Atom defect, requiring a recall. My guess is the token is there to assuage the concerns of people who might not otherwise trust them. (I owned a DS1815 and it was an awesome turn-key NAS; even with the Intel defect the RMA was smooth and fast.)


Somehow this reminds me of the famous "I can do rsync and FTP, why do I need Dropbox" comment :)

MFA is basically protecting the system when one of the factors (e.g. password) is compromised. It's not about how many network hops there are between you and the system.

A lot of folks think the house and LAN are private, but that's not always true. Your Wi-Fi signal can be picked up from outside the house, someone can unplug your outdoor camera and plug the ethernet cable into something else. When you have a guest, they may need to connect to your Wi-Fi with some random smartphone loaded with 200 random apps.

If you care about security, then you know you can't get it from network segmentation alone. The zero-trust model is getting more popular in the enterprise world. Maybe it still sounds like too much work for a homelab today, but the technologies are getting more approachable every single day. Compared to 20 years ago, I now have VMs and containers to isolate processes, Let's Encrypt to encrypt HTTP, a NAS that encrypts the whole RAID with a single click... and of course, software that does MFA effortlessly.

Having a secure authentication system helps me sleep better at night, because I don't have to worry about something bad happening just because I can't ensure my home network is 100% free of malicious humans/devices/processes...


> But I'm struggling to understand why you'd ever want to enable MFA for signing into a device on your LAN.

I concur wholeheartedly. I had it enabled on my Synology NAS and the damn thing required me to use MFA every time I logged in - zero memory of the device! Drove me crazy and my only option was to turn off MFA altogether.


> If it's on your LAN, you probably have physical access to it and that's basically a factor in itself.

Just because it's a LAN doesn't mean you have to be physically present to access the device. Another device on your network could be compromised, giving an attacker access to anything that the device can access. For example, say you get tricked into downloading something nasty on your laptop. Now they have remote access to your laptop, giving them access to your NAS. Ultimately it comes down to your threat model. For an average home user, the lack of 2FA on LAN-only devices probably isn't a huge risk. But for a business with thousands of employees who could plug who-knows-what into your network? Much higher risk.


If the laptop has access to the NAS, the attacker could wait until the user logs in with MFA and piggyback on that authentication session. MFA is a bummer, not a defense, in this case.


For a home network I think I agree with you, but in a corporate or SOHO environment there may be an open WiFi that requires at least some access to the server locked in the closet. You can't necessarily trust that everyone with access to the network should have full access to the server.

And that's before accounting for "defense in depth" approaches.


I use Tor hidden services. All incoming Tor connections come from "localhost". Every single one of them.

In order to properly secure, say, a Tor hidden service's sshd, you can't use fail2ban. You must use 2FA. Fail2ban would just nearly immediately ban you from ever logging in.

I also turn down the login delay to a few seconds, just to make brute-forcing harder.


I've never set up a Tor server, but I would have assumed incoming traffic could only be sent out to the internet, and that it would be blocked from accessing anything on your LAN, anything in 192.168.1.x. Is that not the case?

Otherwise how could anyone even have a printer connected to their network without people constantly printing junk pages as a prank?

If you've got a setup where Tor traffic can send a packet to your NAS in the first place, you're a far braver soul than I... I trust a firewall far more than I'll trust 2FA.


Yes, all onion service connections appear to come from 127.0.0.1, but the current v3 onions are not guessable or enumerable. If you don't publish your address, you get zero attacks ever. Have you put your onion address all over the Internet to get attacked?


That's not true.

When you make a hidden service, it is announced in Tor's DHT. Now, it is true that the exposure of the v3 DHT is limited, but it is not zero.

(And more specifically, Tor is a 6-4 mixing network overlay with DHT DNS.)


>However, clients still need to ask the directory for information about a specific onion address, which would again allow mass collection of onion addresses. With V3 onion services, this is prevented by using key derivation to derive a daily-rotated identifier ("blinded public key"). [0]

This is the information I have. It's not possible for a relay to learn an .onion address like it was with v2. Could you please link to something that proves the contrary?

[0] https://blog.torproject.org/v3-onion-services-usage/


What if the person that has physical access to your LAN is not you? Example, a rogue physical visitor that you were unable to fully supervise.


That's precisely the OP's point. If a rogue physical visitor is a threat vector you need to protect against, then you have a different policy for that (e.g. how they get on your network or physically enter your IT homelab). But if you have a home NAS then what's the point of MFA?


MFA for my Synology failed recently because (I suspect) I let the drive go unused for a while and it lost sync with the time server. So all the one-time passwords my password manager generated were "wrong" -- off in time by some unknown amount. There was no way to fix it, since I would need MFA to log in and fix it anyway.

I had to reset the administrative UI and lose my user settings/admin preferences, but it did hang on to the data on the drives, luckily.
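The failure mode here is just integer division of the clock: TOTP hashes the shared seed with floor(unix_time / step), so once the NAS clock drifts across enough 30-second step boundaries (beyond any grace window the server allows), it expects different codes than your password manager produces. A minimal sketch of the arithmetic -- the timestamps and the 120-second drift are examples, not Synology's actual skew:

```python
# TOTP (RFC 6238) derives a counter from the clock; both sides must land
# in the same time window (30 seconds by default) to agree on the code.
STEP = 30

def totp_counter(unix_time: int, step: int = STEP) -> int:
    return unix_time // step

server_now = 1_700_000_000          # example server time
nas_now = server_now - 120          # NAS clock two minutes behind (example)

# Different counters mean different HMAC inputs, hence different codes,
# even though both devices hold the same secret seed.
print(totp_counter(server_now) - totp_counter(nas_now))  # prints 4
```

Four windows of drift is well outside the usual one-step grace period, so every generated code is rejected.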


Experienced this too. Having SSH access enabled on the Synology saved the day. There's no 2FA prompt on SSH, so you can SSH in and manually fix the time.


Kinda defeats the whole point of MFA if you can just bypass it like that.


SSH and the web UI are two different interfaces running on separate ports that can be firewalled differently. You might, for instance, expose the web UI on an external port on your router while restricting SSH access to the NAS's subnet. In that case, the MFA is a critical extra layer of security.


If the SSH key is password-protected, then SSH access is MFA.


Not sure that's true... If the key gets compromised, having it password protected does nothing.


If your private SSH key is password protected, it is encrypted symmetrically with that password.

If somebody steals your password-protected private key file, the password protection means they have to brute-force the password. It does not 'do nothing'. It's an extra layer. If your password is strong enough, it can protect the SSH private key from being decrypted.
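To make "encrypted symmetrically" concrete, here is a toy sketch of the idea: derive a keystream from the passphrase with a KDF and apply it over the key bytes. This is not OpenSSH's real on-disk format (new-style keys use bcrypt-KDF plus AES); the function names, KDF choice, and parameters are illustrative only:

```python
import hashlib, os

def protect(key_material: bytes, passphrase: str, salt: bytes) -> bytes:
    """Toy symmetric protection: an scrypt-derived keystream XORed over the
    key bytes. (Real OpenSSH uses bcrypt-KDF + AES; this only shows the idea.)"""
    stream = hashlib.scrypt(passphrase.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=len(key_material))
    return bytes(a ^ b for a, b in zip(key_material, stream))

salt = os.urandom(16)
secret = b"-----BEGIN FAKE PRIVATE KEY-----"
blob = protect(secret, "correct horse battery", salt)   # what a thief steals

# XOR is its own inverse: the right passphrase recovers the key, a wrong
# guess yields garbage -- so the thief is forced to brute-force the KDF.
assert protect(blob, "correct horse battery", salt) == secret
assert protect(blob, "wrong guess", salt) != secret
```

The KDF's work factor (the `n`, `r`, `p` cost parameters here) is what makes each brute-force guess expensive.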


It's an extra layer, but is that really another 'factor'? MFA would prevent someone with a compromised key from logging in. Password protected keys do not.


Okay I see your point now


> If the key gets compromised, having it password protected does nothing

I apologize for my ignorance in advance: having a private key file password-protected does nothing?

I guess I'm not understanding what you mean by "compromised"?


I think the point is about parallel vs. serial layers of security. In a typical website account protected by a password and an SMS OTP, both need to be compromised for a bad actor to gain access: with just the password, they get stuck at the SMS token, and if they intercept an SMS OTP, they can't reach the form where they could enter it. In contrast, a password-protected SSH key isn't pure MFA. If an attacker has the password, they still need the key file before they can recover the private key. But if they have the decrypted private key, they don't need the password at all. The password only protects you from someone stealing the file, not from the key itself being stolen.


Compromised, meaning someone has the key in an unprotected format, or they somehow got your password. Say someone manages to MITM you somehow and get your password to the file, or they manage to crack it, or phish it out of you. Then they can just take the key and use it freely to log into your things. With MFA, there's no way that any key can be used to log in as long as the other factor exists. If you have to push OK on your cell phone to log in for example, the key is useless without physical access to your phone.

I'm not saying the password protection does nothing; it makes the key harder to crack, but it's not another factor. It's simply an extension of the existing key -- in other words, just a longer password.


My SSH key is on a YubiKey. How many factors is that?


1 if it isn't password protected and 2 if it is


Which is why you have to manually enable SSH and it warns you that it's a big security risk.

You're entirely right -- the "proper" way is to log in with MFA, enable SSH, do your thing, and then re-disable SSH.


Well, maybe; if you use an SSH key instead of an SSH password, there's a lot less surface area there.


Old tricks are the best tricks!


Bank of New Zealand and the Canadian government both issue a 4x4 grid of three-character codes and ask for some of them when I log in to online services; this proves the "something I physically have" component of the authentication. I have the grid either on a card or printed off. I like that it doesn't depend on a phone.


I've seen some investment banks provide a small device that looks like a calculator and generates MFA codes. Not sure what they're called.


> I've seen some investment banks provide a small device that looks like a calculator and generate MFA codes. Not sure what they are called.

They're more than MFA codes though: they're actually performing cryptographic challenge/response.

I mean: the RSA tokens just generate an MFA code. But the bank "card readers", whether they actually read a card or not, do perform challenge/response.

I've got several banks using the same type of card reader: I put my bank card in and respond to a challenge: I enter the digits that the card reader shows me and I can log in.

I've got another bank which provided me a reader: it's not a card reader, as it's standalone (I don't need any bank card to use it), but it still performs challenge/response.

For example, for any wire transfer to a new recipient or above a certain amount, I need to enter the amount and part of the recipient's bank account number and sign the transaction.

It's more than just authentication.
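The flow described above can be sketched as a MAC over the challenge plus the transaction details. Everything here (function names, digit counts, the use of HMAC-SHA256) is an assumption for illustration -- real EMV/CAP readers run a different cryptogram scheme inside the card's secure element:

```python
import hashlib, hmac, os

def respond(card_secret: bytes, challenge: str,
            amount: str = "", account: str = "") -> str:
    """Device MACs the bank's challenge plus the transaction details with the
    card's secret; the bank recomputes the MAC with its copy and compares.
    (Illustrative sketch only, not the real EMV CAP cryptogram format.)"""
    mac = hmac.new(card_secret, (challenge + amount + account).encode(),
                   hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10**8).zfill(8)

secret = os.urandom(32)                      # shared by card and bank
code = respond(secret, "73920154", amount="250.00", account="12345678")

# Because the amount and recipient are signed, a tampered transaction
# produces a different response and fails the bank's verification.
assert code == respond(secret, "73920154", "250.00", "12345678")
assert code != respond(secret, "73920154", "999.99", "12345678")
```

This is why it's more than authentication: the response is bound to one specific transaction, so a man-in-the-middle can't reuse it to send money elsewhere.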


RSA SecurID tokens?


Barclays calls it a PINsentry card reader


Cold wallet


I got a list of codes when I opened my first online bank account in the early 2000s.

In between, there was a special key generator given to each customer, which was abandoned about 10 years ago. (edit: it was the RSA SecurIDs mentioned in this thread)

Since then we've had apps that need to be activated using a letter with a special QR code, plus the bank account number (username) & password and another QR code from the website. I assume it generates a private key in the app which will not be transferred to a new device (but maybe somehow is). To log in you use your account number (username) and password; after that you have to open the app (log in with a face scan), scan a QR code, and enter the generated code.

Getting a printed-out card or page with something on it to verify your identity seems like 20-year-old tech to me.


Unfortunately the thing they gave you is static and easily reproducible. If infinitely many copies can be made by someone snapping a photo, then the "something you physically have" is no more secure than a second password you have written on a piece of paper.


If someone breaks into the locked filing cabinet in my house, I have bigger problems than one factor being compromised.


Meanwhile, I put the grid in my password manager because I'll never find that paper.

Defeats the purpose? Only kind of.


I can think of many reasons not to use a Synology router, and this is one of them.

Other ones:

* Depending on your settings, they broadcast an IPv6 DNS server when they shouldn't. I told them about the bug and they scoffed.

* Wrong settings when you DO broadcast an IPv6 DNS server with just SLAAC: they send it via DHCPv6 without setting the right bits.

* Setting up 2 networks with 2 domains, like iot.wirelessgigabit.com & home.wirelessgigabit.com, does not work. They don't attach the domain to the network; the last one wins.

* No way of redirecting traffic, like sending all outgoing :53 to a local port. All you can do is block, which breaks things like a Google Mini.

* No WireGuard VPN

* No OIDC authentication. (DSM has it, but for some arcane reason you need to join the domain before OIDC works...)


MFA always feels like such a kludge... Do we really have no better ideas to solve this?


There's an RFC for the standard that I believe Google Authenticator uses (and other authenticators too?).

Basically (if I've understood it correctly), you get a random seed that you initialize the authenticator with; then for each time step (30 seconds by default) it combines the unix timestamp and the seed to calculate the time-based one-time password.

So combining a user-specific password with the TOTP (whose seed is on a device initialized for the user), you have 2 factors that are hard for an attacker to retrieve both of.

https://datatracker.ietf.org/doc/html/rfc6238

(Kludgy? Maybe a bit but feels fairly secure in practice)


HOTP (counter-based one-time passwords) and TOTP are typically used in conjunction with one another; HOTP provides backup codes should client or server time be out of sync.

https://www.rfc-editor.org/rfc/rfc4226

FIDO2 with a hardware key source provides a much stronger and more secure guarantee than a 6 digit hash and an unencrypted symmetric secret stored in an app.

https://fidoalliance.org/fido2/

Any and all 2FA approaches demand backup codes (and backup code management with confidentiality and durability) to protect against 2FA loss or inability of the 2FA to function. For example, there are some apps that insist on performing FIDO2 on a non-NFC platform. While I can use a Lightning to USB-C adapter to work around this limitation, it's possible that I might not have it and would need some other "sufficient" 2FA mechanism.

By the way, there is "HSM"-like passkey functionality embedded in most modern Apple and Samsung devices that doesn't require a USB token. This has the downside of not being a dedicated hardware token, so it cannot be physically isolated offline and requires an additional piece of software to act as a FIDO2 authenticator.


What sucks is that passwords don't need to be so insecure.

If you took WebAuthn and, instead of the private key, used one password, it’d be nearly as strong. Assuming that one password is sufficiently strong, and the password input could not be intercepted, and no one ever looked over your shoulder, or used a camera, and you never wrote it down somewhere others can find it, and you never typed it where someone had installed a key logger…

Actually, let’s bring on the passkeys.


Passkeys, theoretically.

Passwords are so terrible with the way they're typically used and deployed (and now there is enough widespread value in compromising accounts at scale), that MFA went from a niche 'high security' edge case to, well, our current UX disaster.

For high security needs, you'd still want MFA + passkey.


Passkey is MFA isn’t it? Or at least, the site can demand it? You have to do the “fingerprint” or “pin” as the second factor when authenticating…


No. It’s an authentication method, any pin or whatever you enter is you unlocking your local auth store to allow the passkey to be used.

That pin or whatever is not part of the actual authentication.

Someone could have an auth flow with a password + passkey + say SMS mfa if they wanted to be a jerk. Or just use a passkey.


WebAuthn has the concept of “User Verification” (optional demand). https://developers.yubico.com/WebAuthn/WebAuthn_Developer_Gu...

Looks like multi-factor to me.



Or as a counterpoint: https://support.apple.com/en-au/102195#:~:text=Credential%20....

We are now literally arguing semantics.


WebAuthn is one way to use passkeys, and as you note it can also, optionally, request other information. But that's WebAuthn, not passkeys; their scope is more limited.

Technically correct is the best kind of correct, no?

As I noted, if someone wanted to build a more complex flow they can - but it would be more of a jerk move than anything probably.


Passkey does not appear to be a well defined term - more a marketing term - but you cannot disconnect the concept from FIDO/WebAuthn. There is no way to use a “passkey” without using the WebAuthn protocol, and hence “User Verification” is an available option and hence it’s reasonable to assert that passkeys support MFA.

So, if I am a site that needs additional verification (previously used MFA), I use "User Verification" == "required". Then I know two factors, "something the user has" and "something the user knows/is", have been used.


Except it actually is well defined, and has a specific standard of what it actually means?

https://fidoalliance.org/passkeys/

But whatever.


But you agree with the rest of my statement and therefore that you were in error?


Hahaha, you need to learn to read man. There is literally nothing you added that wasn’t already there.

So no. But you’ve been very verbose in looking like an idiot though. So have fun with that?


Well, your "feeling" is misleading. TOTP relies on correct time within a certain interval of grace period. The problem is the fault of Synology lacking a correct internal timekeeping source and the fault of paying money for other people's closed-source, faulty hardware. I have a Linux NAS running on a type-1 VM with passthrough to the actual disks. It also has TOTP when connecting over ssh but never has a problem because the hypervisor maintains the clock for all VMs and the on-board hardware clock is operable and has a battery.


I had this router and it had issues passing data through the ethernet ports at gigabit speed; it would cap out at 500-600 Mbps. I turned off all of the intrusion detection, monitoring, parental control, etc. services and it didn't fix anything.

I had (2 PCs)<->DumbSwitch<->rt2600ac<->(3rd PC)

The PCs connected to the dumb switch could communicate at gigabit, but anything that went through the rt2600ac was limited -- very frustrating. Running a long cable from the 3rd PC to the $20 dumb switch fixed it. Eventually I just gave up and use Google Fiber's stupid dot router/WAP, which I hate, but at least it works.

The dual WAN on the synology was nice though.


> The dual WAN on the synology was nice though.

If you only need the dual WAN for a lowish-bandwidth backup connection, Eero’s implementation works well enough, though does have some limitations. (Requires a subscription, only does backup over wi-fi.)


Huh -- normally there should be a set of "one use" codes for exactly this scenario.


Yeah it's weird they don't:

https://kb.synology.com/en-ca/DSM/help/DSM/SecureSignIn/2fac...

They support a backup e-mail address that it will send a verification code to if you lose your 2FA device. But, whoops -- that's not gonna work if it can't connect to the internet.


This needs a 2021 in the title.


tldr: TOTP doesn't work if one device doesn't know what time it is.



