The weAudit VSCode extension [1] works pretty well. It's designed for security work, but there's no reason why you couldn't use it for general note-keeping.
This is great. I feel like E2EE has slowly fallen out of focus in recent years as the tech has stabilized, but important developments like this and the MLS standardization still continue to happen.
One specific area where I'd love to see more focus and attention is the web as a platform for E2EE applications. Currently, because of the inherent problems with application delivery and trust relationships on the web, every step forward in E2EE adoption is a step away from webapps being first-class citizens -- even as PWAs keep becoming more viable for a wider range of use-cases otherwise. Even though an increasing number of companies maintain web implementations of their E2EE apps, these are always the fallback option when nothing "better" is available; the tech to make E2EE secure in webapps doesn't exist yet, and companies also have unrelated incentives to push users to native apps. There are no serious efforts to remedy the situation and develop tech that would make it possible to deliver secure E2EE through the web.
The post mentions a couple of relevant goals:
> 3. Control over endpoints
> 8. Third-party scrutiny
They also mention the Code Verify extension[1], which may seem like a solution but does not stand up to scrutiny: it only notifies the user of unexpected changes in the app, it does not prevent them. The detection logic it implements also seems trivially bypassable, in more ways than one. Even if it did enforce application integrity, an extension like Code Verify is unlikely to ever become widely adopted enough to make a dent. And of course it's not even available in all browsers on all host platforms.
There are other extensions in the same vein, and they suffer from similar shortcomings.
Browser vendors could solve the problem by providing APIs that allow the kind of integrity enforcement needed, akin to SRI[2], but that would mean first agreeing on a standard, then implementing it consistently everywhere, and only then could webapps slowly start adopting it. And given past failures like HPKP[3], browser vendors would probably be hesitant to even start considering anything like it.
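For comparison, this is roughly all SRI gives you today: a per-file digest that the browser checks before executing a script. A minimal sketch (assuming a browser or Node 18+ environment with Web Crypto available) of producing that digest:

```ts
// Sketch only: compute the digest that goes into a script tag's integrity
// attribute. SRI covers a single file at a single point in time; nothing
// comparable covers a whole application or its future updates.
async function sriDigest(resource: Uint8Array): Promise<string> {
  const hash = await crypto.subtle.digest("SHA-384", resource);
  const b64 = btoa(
    Array.from(new Uint8Array(hash), (b) => String.fromCharCode(b)).join(""),
  );
  return `sha384-${b64}`;
}

// The result is embedded at build time, e.g.:
//   <script src="/app.js" integrity="sha384-..." crossorigin="anonymous"></script>
```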
I believe a solution is possible using only the currently available web APIs, however, and for the past few months I've been prototyping something that's now at a stage where I can call it functional. The general idea is that using service worker APIs and a little bit of cryptography, a server and a client application can mutually agree to harden the application instance in a way that the server can no longer push new updates to it. After that, the client application can be inspected manually with no risk of it changing unannounced, and new versions of the app can be delivered in a controlled way. While my prototype is nowhere near production-grade at this point, it's nearing a stage where I'll be able to publish it for public scrutiny and fully validate the concept. Until then I'll be implementing tests and examples, documenting the API and threat model, and smoothing out the rough parts of the code.
If anyone's interested in collaborating on this or just hearing more details, feel free to reach out. I'd love some early feedback before going fully public.
I've actually been thinking quite a bit about this very issue. As it stands, it's not really possible to do E2E encryption on the web in a secure way, since the server can always just silently update the client-side code for a particular user to steal any encrypted data. I'm kind of curious what you're doing with service workers to lock the server out of updating its own client-side application. That sounds almost like a bug.
My ideal solution to this problem would be Web Bundles[1] signed by the server's TLS key[2], combined with Binary Transparency[3] to make targeted attacks impossible to hide (and maybe independent Static Analysis[4] to make attacks impossible to carry out in the first place), but work on many of those standards seems to have died out in the last few years.
I've looked at web bundles and a variety of other solutions myself, but the service worker approach feels like a winner so far. There's no magic, nor any bug being abused, but the client does have to trust the server to behave nicely during initial setup. After the initial setup is done, the client never has to trust the server again as long as the browser's local storage isn't purged manually; so if the server is compromised after the initial setup, the compromised server cannot compromise established clients. It's not perfect -- there's still the need for initial point-in-time trust -- but it's a significant improvement on the standard way of serving webapps, where a server can compromise any client at any time.
The way it works is the server returns a unique service worker script every time, and the script file itself contains an AES key. The user trusts the server not to store this key and the server never sees it again. This AES key is then used to encrypt all persisted local state and sign all cached source files. If the server replaces the service worker, the key is lost and local state cannot be accessed. If the server somehow replaces a source file, its integrity check will fail and the webapp will refuse to load it. If the server manages to skip the service worker and serve a malicious file directly (e.g. because the user did Shift+F5), the malicious file won't have access to any local state because the service worker will refuse to give it access. The server can destroy all local state and then serve a malicious application, but the user will immediately notice, hopefully before interacting with the app, because suddenly all their data is gone.
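To make that a bit more concrete, here's a rough sketch of the fetch-time check. The storage layout and the use of HMAC for the "sign all cached source files" part are illustrative assumptions, not the actual prototype code:

```ts
// sw.ts -- sketch of the fetch-time integrity check described above.
// Assumptions (not the actual prototype): the per-install secret baked into
// this script is used as an HMAC key, and every cached file is stored next to
// a second cache entry holding its MAC under "<url>.mac".

declare const self: ServiceWorkerGlobalScope;

const EMBEDDED_SECRET = "unique-per-install"; // placeholder; server forgets it

async function macKey(): Promise<CryptoKey> {
  return crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(EMBEDDED_SECRET),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["verify"],
  );
}

async function serveVerified(request: Request): Promise<Response> {
  const cache = await caches.open("app-cache");
  const body = await cache.match(request);
  const mac = await cache.match(request.url + ".mac");
  if (!body || !mac) {
    // Never fall back to the network: the server controls it.
    return new Response("not in verified cache", { status: 404 });
  }
  const ok = await crypto.subtle.verify(
    "HMAC",
    await macKey(),
    await mac.arrayBuffer(),
    await body.clone().arrayBuffer(),
  );
  // A tampered cache entry is refused outright instead of being passed on.
  return ok ? body : new Response("integrity check failed", { status: 503 });
}

self.addEventListener("fetch", (event) => {
  event.respondWith(serveVerified(event.request));
});
```

The corresponding install step would cache each source file together with a MAC computed under the same embedded secret, so only files written during the trusted setup (or a later, explicitly verified update) ever pass this check.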
That's really clever! It fixes the "silently" part at least, though given that most applications require frequent updates and that this doesn't prevent targeted attacks, I'm not sure how useful it is in practice, at least for mainstream applications.
Signed web bundles with binary transparency and independent review would be far superior, if they actually existed. (Which sadly, they don't right now.)
Thanks! Automatic updates are still possible; you can implement a code signing-based flow on top of this, or fetch hashes from GitHub releases, or anything, really. Attacks are only possible during setup, and targeting at that point in time is difficult because the client won't have authenticated yet. Anything else (attacks that rely on clearing the local state) can be mitigated using careful UI design.
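As a sketch of what such an update flow could look like (the signature scheme and names are assumptions, not a finished design): a new bundle only makes it into the verified cache if a detached signature over it checks out against a key pinned at setup time.

```ts
// Illustrative update step: verify a downloaded bundle against a public key
// that was pinned inside the hardened app during initial setup, and only then
// stage it for activation. ECDSA P-256 is an arbitrary choice for the sketch.
async function applyUpdate(
  pinnedKey: CryptoKey,   // imported with usage "verify" during setup
  bundle: ArrayBuffer,    // e.g. a release artifact fetched by the user
  signature: ArrayBuffer, // detached signature published alongside it
): Promise<boolean> {
  const ok = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    pinnedKey,
    signature,
    bundle,
  );
  if (!ok) return false; // refuse the update, keep serving the current version

  const staging = await caches.open("app-staging");
  await staging.put(
    "/app.js",
    new Response(bundle, { headers: { "content-type": "text/javascript" } }),
  );
  // ...then re-MAC the staged files with the local secret and atomically
  // switch the cache the service worker's fetch handler reads from.
  return true;
}
```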
The big problem with transparency logs is that they can't prevent attacks in real time because of the merge delay. You'll only find out afterwards if you've been attacked. It significantly raises the bar for an attack, but can't stop one from happening.
I'm trying to solve the problem of "how can I trust an e2ee messaging app on the web". Basically, the issue is that while e2ee messaging apps (think WhatsApp, Signal) assume no trust in the server, the user still has to trust the client -- and on the web, the server controls the client. Desktop and mobile platforms solve the trust issue in multiple ways: code signing, app stores, reproducible builds, publicly available hashes etc. On the web, none of that's possible. That's why Signal doesn't have a web client. WhatsApp does, but using it defeats the point of e2ee.
My proposed solution is to use Service Workers to cache a web app in the browser and employ clever tricks to prevent the server from pushing updates to either the Service Worker or the caches. This way the user controls any updates and can verify new versions using means that are already familiar from other ecosystems: comparing hashes, trusting only signed code, etc.
The goal isn't to develop a new e2ee messaging app. Instead I'm prototyping something that resembles an auto-update framework like Squirrel [1], only for web apps. Ideally it will be largely "plug and play", i.e. you could take the existing WhatsApp web app, serve it using the updater framework, and your users would now have a trustworthy version of WhatsApp.
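To be clear about the level of abstraction I'm aiming for, integration would ideally look something like this -- the package and options below are hypothetical, just sketching the intended developer experience:

```ts
// Hypothetical integration sketch; this package does not exist (yet).
import { serveHardened } from "hardened-updater";

serveHardened({
  // The existing app's files, served unchanged.
  files: ["/index.html", "/app.js", "/app.css"],
  // How future versions are authenticated -- the same means users already
  // know from other ecosystems: a pinned signing key and published hashes.
  updatePolicy: {
    signingKey: "...",                        // pinned at first install
    releases: "https://example.com/releases", // published hashes live here
  },
});
```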
So far I have a small amount of PoC-level code validating several small parts of the larger concept. For instance, I'm fairly confident that I will be able to reliably prevent forceful server-controlled updates, which is a core requirement. Right now I'm in the process of formalizing a threat model, hoping to spot any gaps before I move forward with the implementation.
Feedback on the idea in general would be highly appreciated, but I'd also love to hear any more specific concerns regarding technical solutions, UX, etc.
There's an even more ubiquitous app that also usually has mic and camera permissions and suffers from a similar (but technically unrelated) local code injection issue: Chrome. The bug is described here [0] and was closed as WontFix because "if your machine is compromised, it's beyond the scope of anything Chrome can do about it".
Even if you don't use Chrome, you probably have at least a few Electron apps installed; they all suffer from the same issue.
The only logical conclusion is that the macOS privacy model, TCC, is doomed. There's always some app that has non-default TCC permissions and is vulnerable to some type of local code injection, and at that point any malicious app can also access those TCC-protected features.
Noteworthy in this security release: 7 out of the 9 issues fixed are stack exhaustion bugs, meaning something in the stdlib recurses too deeply, and with a large enough input the goroutine blows through the runtime's stack limit (1 GB by default on 64-bit systems). Contrary to what the announcement says, though, the resulting crashes are not actual panics, but fatal errors that you can't recover from.
Most of these are pretty easy to hit, too: App taking in XML files larger than a couple of megabytes? Probably affected. Decompressing untrusted gzip files? Yeah pretty likely also affected. Doing static analysis or linting on Go source code? Definitely affected.
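To give an idea of how cheap these inputs are to produce, here's a sketch (just string building; the depth chosen is arbitrary, since how deep you need to go depends on the parser's per-level stack usage) of the kind of document that drives the recursion:

```ts
// Illustrative only: a few megabytes of nested elements is the shape of input
// that pushes a recursive-descent parser (like the affected encoding/xml code
// paths) through one stack frame, or more, per level of nesting.
function nestedXmlPayload(depth: number): string {
  return "<a>".repeat(depth) + "</a>".repeat(depth);
}

const payload = nestedXmlPayload(1_000_000); // ~7 MB; adjust depth as needed
```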
Blog author here; Russell's implementation is backed by github.com/beevik/etree, but like you said, it's just an interface. The tokenizer is still encoding/xml.
Adding better support for namespaces and providing APIs compatible with dsig doesn't remove the underlying vulnerabilities.
Ugh. That's disappointing. I loathe SAML, but also think the right thing to do here is to make sure nobody uses encoding/xml as part of their SAML stack.
I don't know about that. libxml certainly doesn't round-trip XML documents in general (though I don't think it breaks namespaces at least); whether that breaks SAML or not, I have no idea.
Anyway, from tptacek's other comments it looks like general-purpose XML libraries should not be assumed suitable for SAML; instead, SAML libraries should include a purpose-built implementation for the SAML bits, and once the document has been properly validated and the SAML bits stripped off, I guess the rest can be passed on to a general-purpose library:
> SAML libraries should include purpose-built, locked-down, SAML-only XMLDSIGs, and those XMLDSIGs should include purpose-built, stripped-down XMLs.
I would go out of my way to avoid libxmlsec1 and libxml. I honestly don't understand why it's so hard for a SAML implementation to just bring its own hardened stripped-down XML.
If I had to hazard a guess why, it's that bespoke implementations are usually recommended against, especially for complex formats. That a bespoke parser would be the best practice for SAML does sound counter-intuitive.
This is like saying that variable name scoping is a semantic convention on top of the C language grammar and that a lexer can't really implement it. In the case of C, it turns out that the lexer must implement it. In the case of XML, processing namespace directives during lexing is the right thing to do in nearly all cases. But it's not what these SAML libraries needed.
In Finland, most online stores allow you to pay for your shopping directly using your online bank. The way it works is the online store calls the bank's e-payment API, which in turn lets the user authenticate using their normal online bank credentials and accept the payment.
A few months back I did some research [1] on these e-payment APIs and noticed that one of the major banks had a serious flaw in their API implementation. It was possible for the end user to manipulate the signed API calls to change the payment amount, effectively paying less than the actual price for the products they bought.
I reported the issue to the bank and got a swift response where they acknowledged my report and said they were looking into it more closely. A few days later I got another email where they basically said "ok, this looks bad, and we can see it's pretty trivial to exploit, but... it's too expensive to fix, so we won't do anything".
I wasn't comfortable with this, so next I reported it to NCSC-FI/CERT-FI. They also agreed that it looked bad, but said that they had no way of forcing the bank to take action. So that got me nowhere either. I haven't heard from either NCSC-FI or the bank since, but the issue does appear to be partially mitigated now.
I've since found several other issues in the same bank's systems but haven't bothered to report them since they don't really seem to care.
Unless you think this would actually lead to banks taking such vulnerabilities more seriously in general--which I don't believe is the case--taking an action like that is pure spite. Consider the possible outcomes for this particular vulnerability: (1) nothing happens; (2) it gets heavily exploited, customers lose money, and it doesn't get fixed; (3) the same thing happens and it does get fixed. In all three cases the outcome is at least as bad as if you had done nothing, just possibly earlier and worse.
I really take issue with the notion that because security is important, you're fully justified in screwing people and companies over as much as possible to prove a point. That seems to be a common attitude in the security community. I get the frustration people have with the intransigence of corporations and programmers, and with people's general stubborn unwillingness to understand the severe impact of vulnerabilities, but if security-shaming companies into fixing bugs actually worked, we would have a much more secure internet today than we actually do. Unless you can get regulatory agencies to start holding companies and individuals legally accountable for security issues (that is, making it more expensive not to fix than to fix), nothing will change, even if you have all the technical solutions and social pressure in the world.
Another big issue here, as with many software vulnerabilities, is that the people public disclosure would actually damage are the users, not the company making the vulnerable software. The bank would only start losing money if the users (personal customers, business customers using their APIs) noticed the hack and started demanding their money back.
It would be very nice if your security disclosure report included a section stating that you have provided good-faith upfront notice to the vendor, and that based on research and belief it would be negligent for the company not to fix the issue by X date.
The wording you choose should take your state's laws and the company's user agreement into account, in such a way that the company is actually at risk if they ignore you.
When talking to people, "Reason is, and ought only to be the slave of the passions".
When talking to companies it is only necessary to discuss the impact on their profit.
Just to be clear, I haven't really disclosed anything publicly, not regarding the e-payment API issue or any other issues for that matter. The SlideShare from my comment references the e-payment API vulnerability but doesn't disclose any technical details. It's not possible to reproduce the attack based on the slides alone.
My credit card can be used for an online payment by anyone who knows a few pieces of information about it (number, CVV, etc.). This is obviously a security problem. Nobody cares: not me, since any payments that aren't mine are immediately reverted (and then, maybe, the bank investigates), and not the bank, for whom it is cheaper to write off this money than to fix the system.
So no, publicly exposing an issue does not always work if there are no incentives for anyone to fix it.
You have reached zugzwang in game theory parlance.
The correct move before this would have been to make an announcement:
"Here is the announcement I have made disclosing the problem. It is in both our best interest that it get fixed before publication. I have irrevocably given it to a blind drop that will publish it on DATE. And I believe that is a reasonable DATE that you could fix the problem. Let's work together to fix the problem."
What do you think about this type of approach? There is probably a name for it in Art of the Deal. (Whatever you think of the man, the book is worth reading.)
The thing about setting deadlines like that (blind drop or not) is that it's very easy to look at it as some form of extortion. "This guy has cyberweapons, and unless we do what he tells us, he's going to release them on DATE. Better call the lawyers."
[1] https://blog.trailofbits.com/2024/03/19/read-code-like-a-pro...