Yeah, I think it is a feature, and a very beneficial one for the people this system was designed for: those who want a lot of privacy-conscious users to settle on an encryption solution which isn't too difficult to circumvent.
This you need to prove somehow.
I said "i think" because, unlike many of the other things I'm saying here which are statements of fact, my suggestion that ProtonMail specifically is designed for this attack to be possible is merely well-informed speculation.
Has there been any attack that happened like this?
See the links in my earlier comments for evidence of this kind of attack happening against all three of the other largest email providers with architectures similar to ProtonMail's (Tuta, Hushmail, and Lavabit).
Also, I mentioned the potential to use the bridge. That is a fully client-side tool which does not run in the browser; does that satisfy your risk appetite?
If both users are using the bridge (assuming it is designed how I think it is), they would certainly be better off than if one or both of them were using the webmail e2ee. However, I would never use or recommend ProtonMail, even with the bridge: it is very likely that the people I'm writing to would often not be using the bridge, ProtonMail's e2ee doesn't interoperate with anything else, and by using it I'd be endorsing it and encouraging others to use it ("it" being ProtonMail, which for most users means this webmail snakeoil).
Also, I don't know in detail how the bridge actually works, and, like most of the people I know who sometimes audit things like this, I don't find the open source bits from Proton, such as their bridge, interesting enough to be worth auditing for free (except perhaps for a security company doing it for its own marketing purposes): even if the bridge turns out to be soundly implemented itself, it is a component of a non-interoperable proprietary snakeoil platform.
Yep. But no matter how tight their processes are, there are still single points of failure that can be coerced to gain access to anyone’s email.
They are a point of failure, not necessarily a single point of failure (as in a single person).
From your earlier comments I think you're working from a mental model where an employee performing the attack would need to check something in to git, or something like that; but don't you think anyone with root on, say, one of the caching frontend webservers could do this? I suggest that you try to think about how you would design their system to prevent a single person from unilaterally doing it, and then figure out how you could break your own design.
I am saying that particular vector does not apply, because your browser will actually refuse to serve Proton without a valid certificate due to HSTS.
Yes, I get that you are saying that, but it's because you haven't been hearing me say that HTTPS has been circumvented in numerous ways over the years and will continue to be. Do you think we've seen the last rogue certificate authority? Or the last HSM where (oops!) the key can actually be extracted?
Don't you think there is a reason why most modern software update mechanisms don't rely solely on HTTPS for the authenticity of their updates?
🤔 I actually wonder why ProtonMail lists DigiCert and Comodo alongside Let's Encrypt in their CAA DNS records. (Fwiw, they currently have a cert from Let's Encrypt, from my network perspective at least.) Doesn't that mean that, even against a browser that validated CAA with DNSSEC, a rogue employee at any of those three companies could issue a cert that would allow this attack to be performed? (Of course, against a browser that isn't validating CAA with DNSSEC, anybody at any one of thousands of sub-CAs can also do it...)
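For anyone who wants to look at this themselves, here's roughly how you can query a domain's CAA records (just my own sketch; the dnspython library and the exact domain queried are my choices, not anything Proton documents):

```python
# Sketch: print a domain's CAA records so you can see which CAs it authorizes.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def print_caa(domain: str) -> None:
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{domain}: no CAA records (any CA may issue)")
        return
    for rdata in answers:
        # tag is typically b"issue" or b"issuewild"; value names the permitted CA
        print(f'{domain}: CAA {rdata.flags} {rdata.tag.decode()} "{rdata.value.decode()}"')

print_caa("protonmail.com")
```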
at-risk people have technical consultants and are (hopefully?) aware of the risks, and can apply additional controls
As someone who has been one of those technical consultants, let me tell you: arguing with at-risk people about the veracity of posts on privacy forums singing the praises of things like ProtonMail is part of the job. 😭
If the NSA goes to https://www.gnupg.org and says “you know what, the next time you serve your software to IP x.x.x.x, you serve this package”, you will never know and your encryption is toast. Would you say that the folks behind GnuPG “have the ability to read your emails”? I wouldn’t, because they are not backdooring the software, although the possibility for them, contributors and national actors to do that exists.
This is a false equivalence in several ways:
- Targeting an IP address is much less useful than targeting a user by their username and password
- Careful users have the ability to (and many do) verify hashes and signatures of a downloaded program before they run it, unlike javascript on a web page (see the sketch after this list)
- Users retain a copy of the program after downloading it and so often have evidence if an attack took place
- Many users obtain their GPG binaries from some distribution rather than the GnuPG website (read on about that...)
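To make that "verify before you run it" point concrete, here's a minimal sketch (the filename and digest are placeholders; the real digest would come from a checksums file whose signature you've already verified, e.g. with `gpg --verify`):

```python
# Sketch: refuse to use a downloaded artifact unless its SHA256 matches the
# value published in a (separately signature-verified) checksums file.
import hashlib
import sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

downloaded = "gnupg-2.4.5.tar.bz2"                                # hypothetical filename
expected = "<hex digest copied from the signed SHA256SUMS file>"  # placeholder

if sha256_of(downloaded) != expected:
    sys.exit("checksum mismatch -- do not run this")
print("checksum matches the signed value")
```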
Again, these software distribution channels (e.g., Linux distros, etc.) have many of their own problems, but they are in a different league from javascript in a browser. Ways they're better include:
- These days, in many/most cases, at least two keys/people are required to compromise them. This isn't nearly enough but it is better than one.
- Other than by IP, users aren't identifying themselves before downloading things
- Users can access them from many different mirrors; there isn't a single server from which to target all users of a given distribution (a crude illustration of this follows below)
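A crude illustration of why that mirror point matters: if someone serves a targeted substitution from the one mirror you happen to use, simply comparing digests across a few mirrors exposes it (the URLs here are made up; this is a technique sketch, not something distributions ship):

```python
# Sketch: fetch the same artifact from several mirrors and compare digests.
# A targeted substitution on one mirror shows up as a mismatch.
import hashlib
import urllib.request

MIRRORS = [
    "https://mirror-a.example.org/pool/foo_1.0.tar.gz",
    "https://mirror-b.example.org/pool/foo_1.0.tar.gz",
]

def digest(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

digests = {url: digest(url) for url in MIRRORS}
if len(set(digests.values())) != 1:
    print("mirrors disagree -- someone may be serving you something special:")
    for url, d in digests.items():
        print(f"  {d}  {url}")
else:
    print("all mirrors serve an identical artifact")
```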
rather you are just saying that you think it is very unlikely that they would ever abuse that capability and that you assume their procedures make it so that one rogue employee couldn’t do it alone. You do seem to understand that, contrary to what they’ve written in the screenshot above, ProtonMail as a company technically could decide to.
I do believe that they have no interest whatsoever in abusing this architectural feature, but I agree that they could be coerced to.
But, do you think most of their customers understand that?
No, I think most people don’t.
Isn't that because their web page says something to the contrary?
I have no idea. I would say 1 or 3 are the most likely.
Really? Scenario 1 is possible? You think a privacy-touting email service with 100M users might have never had a request to circumvent their encryption, despite being able to?
It seems a very unnecessary way (if I were a certain 3-letter agency) to gain access to a small set of data, when I can compromise the whole device and maintain persistence much more conveniently (for example by coercing the ISP to give me access to the router and going from there, or asking Apple and Microsoft directly, etc.).
Again, I'm not just talking about 3-letter agencies, but about anyone who wants to read someone's mail. And often there is a point where the email address is all that is known about the target.
Do you think that it’s possible that any of the 3-letter agencies could coerce a software author (or some collaborator) into producing a malicious release of the code that is served only to you (for example, by IP, fingerprint or other identifier)
I use some mitigations I won't go into, but, yeah, on the system I'm typing this on I do sadly use a distribution which relies on a single archive signing key, so, if you compromise that key (or the people with access to it), and obtain a valid HTTPS certificate for the particular mirror I use, and you know the IP address I'm using at the moment I'm doing an OS update, you can serve me a targeted (by IP) malicious software update. 😢
that it activates only for you (device ID etc.)? For example, go to Kevin McCarthy and force him to produce a backdoored version of Mutt (http://www.mutt.org/download.html) which leaks your keys.
I think the vast majority of Mutt users don't get their Mutt binaries from Kevin McCarthy, and having him put a targeted backdoor in the source code would be foolish, as it would likely be noticed by one of the Mutt distributors who build it before it gets distributed. Since reproducible builds still aren't ubiquitous, the best place to insert a widely-distributed-but-targeted-in-code backdoor would be at the victim's distributor's buildserver.
Do you think that, alternatively, GitHub/Bitbucket, for example, could be coerced by said agencies into backdooring the version (and signature) you get for a given piece of code, say https://bitbucket.org/mutt/mutt/downloads/mutt-2.2.12.tar.gz (maybe after graciously “asking” Kevin for his key to sign the software)?
Yes, but unlike the ProtonMail case there is a chance of being caught, so it is a much higher risk for the attacker.
If you think the above is possible, do you think there is any software distributor that could not be coerced? And how is this vector actually different from Proton being forced to break their own encryption?
There are a wide variety of software distribution paradigms, on a spectrum of difficulty to attack. At one end of the spectrum you have things like Bitcoin Core, where binaries are deterministically built and signed by multiple people, and many users actually verify the signatures to confirm that multiple builders (with strong reputations) have independently built an identical binary artifact. At the other end of the spectrum you have things like ProtonMail with zero auditability, users identifying themselves and re-downloading the software at each use, and numerous single points of failure that can be exploited to attack a specific user. Things like mainstream free software operating system distributions, macOS, Windows Update, etc sit somewhere in the middle of that spectrum.
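To make the better end of that spectrum concrete, here's roughly what the user-side check looks like: require several independent, already-signature-verified attestations to name the same digest for the binary you downloaded. (The filenames, the "digest filename" line format, and the threshold here are illustrative, not any project's actual release layout.)

```python
# Sketch: require at least THRESHOLD independent attestation files (each one
# assumed to be signature-verified beforehand, e.g. with gpg) to agree on the
# SHA256 of the binary you actually downloaded.
import hashlib

BINARY = "bitcoin-27.0-x86_64-linux-gnu.tar.gz"   # hypothetical artifact
ATTESTATIONS = ["builder1.SHA256SUMS", "builder2.SHA256SUMS", "builder3.SHA256SUMS"]
THRESHOLD = 2

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def attested_digest(attestation_path: str, artifact: str) -> str | None:
    # Each attestation file is assumed to hold "<hex digest>  <filename>" lines.
    with open(attestation_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2 and parts[1] == artifact:
                return parts[0]
    return None

actual = sha256_of(BINARY)
agreeing = [a for a in ATTESTATIONS if attested_digest(a, BINARY) == actual]
print(f"{len(agreeing)} of {len(ATTESTATIONS)} builders attest this exact binary")
if len(agreeing) < THRESHOLD:
    raise SystemExit("not enough independent attestations -- don't trust this build")
```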
If you agree that the above is possible, would you say that any claim about Mutt using PGP to e2e encrypt/decrypt your emails is snakeoil?
No. See previous answers for the massive differences.