Otherwise, if the method of intercepting the traffic only manipulates the browser (for example a rogue extension or proxy setting you were not aware of), a standalone tool could not detect it. Right?
Also, I would generally avoid any security tools that do not come as source code. Mentioning that you are an infosec guy with 15 years of experience only makes this point hit even harder.
Right, banking trojans for example inject code into browser processes so the trojan can sniff decrypted traffic. Corporate snooping is also done by pushing root certs onto both system-wide and browser trust stores.
Does anybody remember a project called "Perspectives"? It was a Firefox plugin that would verify your certificates and handshakes were the same as the ones other users were getting. Cool idea.
The addon can no longer work: since the XUL-to-WebExtensions transition, Firefox no longer allows addons to make decisions about the validity of HTTPS certificates. This killed off Perspectives and a number of other addons implementing alternate trust models.
I hope the functionality could be added back with an API.
Honestly I had forgotten all about that addon until Mizza mentioned it, but I remember being so impressed by the idea back when I first learned about it. I wish we could bring it back.
On the off-chance that this was said unironically, you should never trust someone just because they say you should. Especially if they aren't sharing the code.
So what that means is that this software expected to see some other certificate but instead it saw this one. Huawei has had a considerable number of wildcard (*.huawei.com) certificates issued for whatever reason (configuration screw-up, somebody pressed the button too many times, different teams doing the same job, this happens) and you can see a bunch of them here:
The software's assumption is that (for some sites at least) the author can check what the "right" certificate is and if you see a different one that's wrong.
That clearly won't work for some sites any of the time: they use a CDN that presents different behaviour, including different certificates, in different places, and presumably the author weeds those out. But as we see here it can't work for _any_ site all the time; it will be inconsistent.
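For what it's worth, the basic comparison such a tool presumably makes is easy to sketch. Here's a rough Python illustration, assuming you already have a reference fingerprint recorded for a site; the hostname and the reference value are placeholders, not anything the actual tool ships:

    # Sketch: fetch the leaf certificate a host presents and compare its
    # SHA-256 fingerprint against a previously recorded reference value.
    # Hostname and reference fingerprint below are placeholders.
    import hashlib
    import socket
    import ssl

    def leaf_fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)  # leaf cert, DER-encoded
        return hashlib.sha256(der).hexdigest()

    EXPECTED = {  # reference values recorded earlier (placeholders)
        "example.com": "0000placeholder0000",
    }

    host = "example.com"
    seen = leaf_fingerprint(host)
    if EXPECTED.get(host) and seen != EXPECTED[host]:
        print("ALERT: %s presented an unexpected certificate (%s)" % (host, seen))
    else:
        print("%s: fingerprint matches (or no reference recorded)" % host)

And of course this is exactly where the CDN problem bites: a perfectly legitimate edge node can hand you a different, equally valid certificate, and the naive comparison screams "ALERT".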
> a considerable number of wildcard (*.huawei.com) certificates issued for whatever reason...
Is there any downside to this? I mean, I have several wildcard certs issued for each of my personal domains mainly because it's more convenient to get separate certs on each host with certbot than trying to sync certs from one host to another. Is there any reason I shouldn't do this?
A bad guy who gets any of the private keys associated with any of these certificates can use that to impersonate any service with the corresponding name, even a quite different one.
So say you've got mail.oefrha.example that's a mail server using a *.oefrha.example cert, and the Dread Pirate Roberts breaks into it, they can use that when impersonating your web server www.oefrha.example or your Q&A site faq.oefrha.example even if those are on totally different hardware that Roberts wasn't able to penetrate.
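To make the "one wildcard covers everything" point concrete, here's a tiny illustrative matcher (the oefrha.example hostnames are just the made-up ones from above): a single *.name matches every first-level subdomain, so a key stolen from one box vouches for all the others.

    # Simplified RFC 6125-style wildcard check: '*' covers exactly one DNS label.
    # Hostnames are the illustrative ones from the comment above.
    def wildcard_matches(pattern, hostname):
        if not pattern.startswith("*."):
            return pattern.lower() == hostname.lower()
        suffix = pattern[1:].lower()          # ".oefrha.example"
        host = hostname.lower()
        return host.endswith(suffix) and "." not in host[:-len(suffix)]

    for name in ["mail.oefrha.example", "www.oefrha.example",
                 "faq.oefrha.example", "deep.sub.oefrha.example"]:
        print(name, wildcard_matches("*.oefrha.example", name))

The first three all match the same wildcard, which is precisely why a private key lifted from the mail server is enough to impersonate the web server.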
For older TLS (or SSL) versions there's a trick called implied authentication used with RSA. After showing the certificate, instead of your server signing something to prove it knows the corresponding private key, the client sends something across which your server decrypts. Only the real server could decrypt it with the private key to continue the conversation so authentication is implied. However, in doing this your server has to be _extremely careful_, because it's easy to give away information when things go wrong. If it's not careful enough, a bad guy doesn't learn the key but they can use your answers to work out how you'd sign RSA messages.
This means if you've got old-crap.oefrha.example which does TLS 1.0 with crappy RSA implied auth enabled so as to make it work with some rotten turn of the century tech, and it has a wildcard certificate, some bad guys can maybe exploit that to pretend they are www.oefrha.example even though your actual www.oefrha.example web server only speaks TLS 1.2 or newer with elliptic curves.
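If it helps, here's a toy sketch of that static-RSA "implied authentication" flow, using the Python cryptography package. It's just the shape of the exchange, not real TLS, and the key size and secret length are arbitrary.

    # Toy illustration of static-RSA key exchange ("implied authentication"):
    # the client encrypts a secret to the server's certified public key; only
    # a server holding the private key can recover it and derive session keys.
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Server's long-lived key pair (in reality, the one behind the certificate).
    server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    server_pub = server_key.public_key()

    # Client side: pick a pre-master secret and encrypt it to the server.
    pre_master = os.urandom(48)
    ciphertext = server_pub.encrypt(pre_master, padding.PKCS1v15())

    # Server side: decrypt. The error handling around bad padding here is
    # exactly where a careless implementation leaks an oracle that lets an
    # attacker abuse the same key without ever learning it.
    recovered = server_key.decrypt(ciphertext, padding.PKCS1v15())
    assert recovered == pre_master  # both sides now derive the same keys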
You say a "personal domain", and I don't recognise your name, so chances are that this just doesn't matter. We're not talking about something a bored teenager can do, but if real bad guys with resources are attacking you, then it's probably not a smart idea to have so many wildcards.
Edited: Repeatedly, to try to get HN's half-arsed parser to stop ruining everything. Gave up. HN: either use a parser with working escapes, or remove the parser and just say the site is plain text only. Too bad.
Wildcard certificates are a greater risk since they can be used for things you didn’t intend, so a lot of it comes down to how scoped they are (a subdomain like *.app.eng.example.com is way less effective for phishing) and how hard it would be for an attacker to reuse them (e.g. there’s less risk if it’s generated on an HSM or something like AWS ACM which doesn’t allow the private key to be transferred).
For a large organization, this probably just says that they have a lot of different systems and groups operating relatively independently with poor practices, which isn’t an immediate problem but suggests that they’re an easier target than some.
Server-side MitM detection doesn't work. It tries to compare the attributes of the TLS connection (ciphersuites, etc.) with the expected attributes of the client software as determined by the User-Agent header.
So you'll get false positives if the server's database of TLS connection attributes is out-of-date, as is happening to several commenters here.
And you'll get false negatives if the MitM mimics the purported client software, which is easy for a malicious MitM to do.
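Roughly, the server-side check amounts to a lookup like the sketch below; the fingerprints and User-Agent mapping are invented placeholders, since the real databases (JA3-style handshake hashes) are much bigger and constantly go stale, which is exactly where the false positives come from.

    # Sketch of the server-side check: compare the TLS fingerprint of the
    # incoming connection against fingerprints normally produced by the
    # browser named in the User-Agent header. All values are placeholders.
    KNOWN_FINGERPRINTS = {
        "Firefox/115": {"aaaa1111", "aaaa2222"},
        "Chrome/120":  {"bbbb1111"},
    }

    def looks_intercepted(user_agent, tls_fingerprint):
        for browser, fps in KNOWN_FINGERPRINTS.items():
            if browser.split("/")[0] in user_agent:
                if tls_fingerprint in fps:
                    return "ok"
                # Could be a MitM proxy, or just a browser version the
                # database doesn't know yet (the false-positive case).
                return "mismatch"
        return "unknown client"

    print(looks_intercepted("Mozilla/5.0 ... Firefox/115.0", "cccc9999"))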
It should be made to work better. A MITM attack changes the enciphered bits, because it re-encrypts with a different key. So the enciphered bits sent and the enciphered bits received are different. If you can compare a few of those bits somehow, you can detect MITM attacks.
The early STU-III secure phone displayed a 2-digit number at each end. You were supposed to verify by voice that those numbers were the same. That prevented most MITM attacks.
A web site could send something that says "The first N crypto bytes were 0xa34f", and the browser could check that. An attacker would have to know to fake that to evade the check.
It's possible to make the attacker work very hard to do such a fake. A nice trick would be to have the server send an MD5-type hash of the entire page plus the first encrypted bits early in the web page. Then send almost all of the web page, but wait a few seconds before sending the last few bytes, which could just be a random HTML comment so rendering doesn't have to wait. To fake that, the attacker not only has to know how to fake it, it also has to wait for the entire page to be transmitted before it can send any of the page. So the browser sees a substantial extra delay before the page starts if a MITM attack tries to fake the "first N crypto bytes" check. That's detectable automatically.
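For concreteness, the client-side check being proposed might look something like this sketch. The field names and values are invented, and I've used SHA-256 in place of the MD5-type hash; none of this exists in any real protocol.

    # Sketch of the proposed check, client side: the page announces (early)
    # a hash over its own full contents plus the first N ciphertext bytes
    # the server sent; the client recomputes over what it actually received.
    import hashlib

    def expected_digest(page_bytes, first_cipher_bytes):
        return hashlib.sha256(page_bytes + first_cipher_bytes).hexdigest()

    # What the server announced near the top of the page (placeholder).
    announced = "placeholder-digest"

    # What the client observed: the page body it received and the first
    # ciphertext bytes of the TLS records seen on the wire.
    page_bytes = b"<html>...full page...</html>"
    first_cipher_bytes = b"\xa3\x4f\x00\x10"

    if expected_digest(page_bytes, first_cipher_bytes) != announced:
        print("ALERT: page or ciphertext differs from what the server sent")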
I get the red page with Firefox Developer Edition with no extensions, Chrome and Safari are green on same machine. I have all of the anti-fingerprinting stuff turned on in FF though.
Would these be expected to change by country? I’m getting different results from grc for Facebook, Wikipedia and others in the UK, but the results are consistent between different connections. Results match for grc and paypal.
I get matching results for everything but wikipedia. I double-checked with my VPS and got the same results. Could it be using region-based routing to sites with different certs or something?
"Smaller web sites, like this one (GRC) and those others listed above, deploy only one security certificate on one or more web servers (For example, our wonderful certificate provider, DigiCert, specifically allows us to use the same single certificate on as many servers as necessary.)
"But companies with a massive and widely distributed web presence, such as Amazon or Google, may deploy many different security certificates across their many globally distributed servers and web sites. Multiple certificates may be easier for them to obtain and manage, and their security is not reduced. But it does mean that not every user of their servers (like you and this GRC page) would necessarily obtain the same security certificate.
"This means that a simple comparison of certificate fingerprints could erroneously lead people wishing to test these huge websites to conclude that their connections were being intercepted, when they have simply received a different valid certificate than the one received and shown by this web page.
"The best solution is to test smaller sites that are known to be using single certificates, or sites using the completely unspoofable extended validation (EV) certificates with an EV-honoring web browser such as Firefox or Chrome (but not Internet Explorer, which doesn't properly verify EV certificates)."
There is also another project that uses not just certificates but full TLS handshake fingerprints. The name escapes me, but it allows the server to determine whether it is talking to a browser or to a MitM proxy forwarding the connection. (You could of course employ a similar technique on the client side.)
Both approaches have advantages and disadvantages (e.g. this one reports false positives if the certificates change, the other either reports false positives if the fingerprints change unexpectedly, or false negatives/inconclusive results if it encounters an unknown fingerprint).
If it's a tool to verify that there's not an active MITM, how do you detect if a MITM used a forged cert in lying that nobody is in the middle? In theory the MITM would see both the outbound request and the response, putting them in position to pull off a forgery like that.
That’s one answer of many and it’s wrong. A correct answer would discuss the various active and passive detection methods and their weaknesses, and especially how it’s easy to detect an unskilled attack but progressively harder to foil a sophisticated one.
Simple examples:
1. Analysis of TCP details could detect an intermediary proxy
2. TLS conflicts tell you an unskilled attacker is trying; use of a certificate from a different CA, or an old one, tells you someone has been compromised.
3. Attempts to block or throttle TLS, downgrade protocols, or block/degrade access to security updates tell you someone is trying to encourage you to act in an insecure manner.
4. If you send unique canary hostnames or URLs which are accessed, you know something has compromised your traffic.
5. Timing analysis can tell you that some target sites are being treated differently, which could be a sign that traffic is being more tightly monitored (IIRC this has been noticed with the great firewall).
6. HTTP pages can be requested from multiple sources and compared for modifications (a minimal sketch follows this list). I once learned about some JavaScript being injected into pages on Iranian college computers when their code triggered errors this way.
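Here is the minimal sketch of number 6, for the plain-HTTP case: fetch the same resource directly and through some out-of-band vantage point and compare digests. The URL and proxy address are placeholders, and dynamic pages will of course differ for innocent reasons.

    # Sketch: fetch the same plain-HTTP resource directly and via an
    # out-of-band path (a hypothetical trusted proxy) and compare digests.
    # A mismatch suggests in-path modification such as injected JavaScript.
    import hashlib
    import urllib.request

    URL = "http://example.com/"
    TRUSTED_PROXY = "http://proxy.example.net:3128"  # hypothetical vantage point

    def digest(url, proxy=None):
        handlers = [urllib.request.ProxyHandler({"http": proxy})] if proxy else []
        opener = urllib.request.build_opener(*handlers)
        with opener.open(url, timeout=10) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    direct = digest(URL)
    via_proxy = digest(URL, TRUSTED_PROXY)
    print("match" if direct == via_proxy
          else "ALERT: page content differs between paths")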
I find it somewhat ironic that the tool is served over HTTPS on some random domain. Certainly, if an attacker has the resources to forge a certificate for e.g. telegram.org, doing the same for trustprobe.com won't be much harder.
I think the idea is that you already have this tool before you are MITMed. Also, you don't really think that attackers are gods, right? As in, that they know about every single website that could alert their victims to their attack?
bbc.co.uk and huawei.com gave "ALERT"s for me. Manual verification shows the cert chains to be sane. The application should give me more details about the error conditions it hit.