XZ backdoor in a nutshell
Don't forget all of this was discovered because ssh was running 0.5 seconds slower
It's toooo much bloat. There must be malware XD Linux users at their peak!
Tbf 500ms latency on - IIRC - a loopback network connection in a test environment is a lot. It's not hugely surprising that a curious engineer dug into that.
Half a second is a really, really long time.
reminds of Data after the Borg Queen incident
If this exploit was more performant, I wonder how much longer it would have taken to get noticed.
Technically that wasn't the initial entrypoint, paraphrasing from https://mastodon.social/@AndresFreundTec/112180406142695845 :
It started with ssh using an unreasonable amount of CPU, which interfered with benchmarks. Then profiling showed that CPU time being spent in lzma, without it being attributable to anything. And he remembered earlier Valgrind issues. Those Valgrind issues only came up because he had set a build flag he doesn't even remember why he set anymore. On top of that, he ran all of this on Debian unstable to catch (unrelated) issues early. If any of these factors had been missing, he wouldn't have caught it. All of this is so nuts.
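For the curious, this is roughly the kind of measurement involved. A generic sketch only, not Freund's actual benchmark setup; it assumes a local sshd, key-based auth to localhost, and the perf tool installed:

time ssh -o BatchMode=yes localhost true      # wall-clock time for one full login
# In another shell, as root, sample where sshd is burning CPU during those logins:
perf top -p "$(pgrep -xo sshd)"               # CPU time landing in liblzma with no
                                              # recognizable symbols was the red flag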
Postgres sort of saved the day
RIP Simon Riggs
Is that from the Microsoft engineer or did he start from this observation?
From what I read it was this observation that led him to investigate the cause. But this is the first time I read that he's employed by Microsoft.
I know this is being treated as a social engineering attack, but having unreadable binary blobs as part of your build/dev pipeline is fucking insane.
Is it, really? If the whole point of the library is dealing with binary files, how are you even going to have automated tests of the library?
The scary thing is that there are people still using autotools, or any other hyper-complicated build system where this is easy to hide, because who the hell wants to learn Makefiles, autoconf, automake, M4 and shell scripting all at once just to compile a few C files? I think hiding this in any other build system would have been definitely harder. Check out this mess:
dnl Define somedir_c_make.
[$1]_c_make=`printf '%s\n' "$[$1]_c" | sed -e "$gl_sed_escape_for_make_1" -e "$gl_sed_escape_for_make_2" | tr -d "$gl_tr_cr"`
dnl Use the substituted somedir variable, when possible, so that the user
dnl may adjust somedir a posteriori when there are no special characters.
if test "$[$1]_c_make" = '\"'"${gl_final_[$1]}"'\"'; then
  [$1]_c_make='\"$([$1])\"'
fi
if test "x$gl_am_configmake" != "x"; then
  gl_[$1]_config='sed \"r\n\" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2>/dev/null'
else
  gl_[$1]_config=''
fi
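For contrast, here's a toy sketch (not the real payload, file name made up) of the core trick this enables: ship a byte-swapped .xz blob that looks like a corrupt test fixture, then have generated build code undo the swap and feed the result to a shell. Public analyses report the real first stage used essentially this tr substitution on tests/files/bad-3-corrupt_lzma2.xz:

# Hide a script inside a "corrupt" compressed file (the tr mapping swaps tab/space and -/_):
printf 'echo "hello from the hidden stage"\n' | xz -c | tr "\t \-_" " \t_\-" > bad-test.bin

# To everyone else, bad-test.bin is just another opaque blob sitting in tests/.
# The substitution is its own inverse, so the extraction step is a single pipeline:
tr "\t \-_" " \t_\-" < bad-test.bin | xz -d | sh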
It's not uncommon to keep example bad data around for regression tests to run against, and I imagine that's not the only example in a compression library, but I'd definitely consider that a level of testing above unit tests, and would not include it in the main repo. Tests that verify behavior at run time, whether interacting with the user, integrating with other software or services, or after being packaged, belong elsewhere. In summary, this is lazy.
and would not include it in the main repo
Tests that verify behavior at run time belong elsewhere
The test blobs belong in whatever repository they're used.
It's comically dumb to think that a repository won't include tests. So binary blobs like this absolutely do belong in the repository.
I agree that in most cases it's more of an E2E or integration test. I'm not sure of the need to split it into a different repo, and in the end I'm not sure that would have offered much protection anyhow.
As mentioned, binary test files make sense for this utility. In the future, though, maintainers should be expected to demonstrate how and why such binary files were constructed, kinda like how encryption algorithms explain how they derived any arbitrary or magic numbers. This would bring more trust and transparency to these files without having to eliminate them.
You mean that instead of having a binary blob you have a generator for the data?
Yep, I consider it a failure of the build/dev pipeline.
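For what that could look like in practice, a minimal sketch: instead of committing an opaque blob, commit the recipe that produces it. File names here are made up, and the exact bytes depend on your xz version and settings:

# gen-bad-fixture.sh: build the corrupt test fixture from documented steps
printf 'known test payload\n' | xz -c > fixture.xz
# Truncate at a documented offset so every decoder is guaranteed to reject it:
dd if=fixture.xz of=bad-truncated.xz bs=1 count=32 2>/dev/null
rm fixture.xz

A reviewer can rerun the script and diff its output against the committed blob, which is much harder to hide things in than a raw binary.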
Thank you open source for the transparency.
And thank you Microsoft.
Shocking, but true.
They just pay some dude that is doing good work
This is informative, but unfortunately it doesn't explain how the actual payload works - how does it compromise SSH exactly?
It allows a patched SSH client to bypass SSH authentication and gain access to a compromised computer
From what I've heard so far, it's NOT an authentication bypass, but a gated remote code execution.
There's some discussion on that here: https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b
But it would be nice to have a diagram like OP's to understand exactly how it does the RCE and implements the SSH backdoor. If we understand how, maybe we can take measures to prevent similar exploits in the future.
There is Red Hat's patch for OpenSSH that adds systemd notification support, which pulls in libsystemd as a dependency, which in turn has liblzma as its own dependency.
I do believe it does
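If you want to check whether your own sshd actually picks up liblzma through that libsystemd chain, a quick look; the binary path assumes a Debian/Fedora-style layout:

ldd /usr/sbin/sshd | grep -E 'libsystemd|liblzma'
# And check the installed xz/liblzma version; 5.6.0 and 5.6.1 were the compromised releases:
xz --version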
If this was done by multiple people, I'm sure the person that designed this delivery mechanism is really annoyed with the person that made the sloppy payload, since that made it all get detected right away.
I hope they are all extremely annoyed and frustrated
Inconvenienced, even.
I like to imagine this was thought up by some ambitious product manager who enthusiastically pitched this idea during their first week on the job.
Then they carefully and meticulously implemented their plan over 3 years, always promising the executives it would be a huge payoff. Eventually the product manager saw the writing on the wall that this project was gonna fail, bailed while they could, and got a better position at a different company.
The new product manager overseeing this project didn't care about it at all. New PM said fuck it and shipped the exploit before it was ready so the team could focus their work on a new project that would make new PM look good.
The new project will be ready in just 6-12 months, and it is totally going to disrupt the industry!
I see a dark room of shady, hoody-wearing, code-projected-on-their-faces, typing-on-two-keyboards-at-once 90's movie style hackers. The tables are littered with empty energy drink cans and empty pill bottles.
A man walks in. Smoking a thin cigarette, covered in tattoos and dressed in the flashiest interpretation of "Yakuza Gangster" imaginable, he grunts with disgust and mutters something in Japanese as he throws the cigarette to the floor, grinding it into the carpet with his thousand dollar shoes.
Flipping on the lights with an angry flourish, he yells at the room to gather for standup.
I have been reading about this since the news broke and still can't fully wrap my head around how it works. What an impressive level of sophistication.
And due to open source, it was still caught within a month. Nothing could ever convince me more than that how secure FOSS can be.
Idk if that's the right takeaway, more like 'oh shit there's probably many of these long con contributors out there, and we just happened to catch this one because it was a little sloppy due to the 0.5s thing'
This shit got merged. Binary blobs and hex digit replacements. Into low level code that many things use. Just imagine how often there's no oversight at all
Can be, but isn't necessarily.
Yeah, but then Heartbleed was a thing for how long before anyone noticed?
The value of FOSS is that so many people with a wide range of skills can look at the same problematic code and dissect it.
In a nutshell you say...
Coconut at least...
I'm going to read it later, but if I don't find a little red Saddam Hussein hidden in there I'll be disappointed
edit: eh my day wasn't good anyway
I think going forward we need to look at packages with a single or few maintainers as target candidates. Especially if they are as widespread as this one was.
In addition I think security needs to be a higher priority too, no more patching fuzzers to allow that one program to compile. Fix the program.
I'd also love to see systems hardened by default.
In the words of the devs in that security email, and I'm paraphrasing -
"Lots of people giving next steps, not a lot people lending a hand."
I say this as a person not lending a hand. This stuff is over my head and outside my industry knowledge and experience, even after I spent the whole weekend piecing everything together.
You are right, as you note this requires a set of skills that many don't possess.
I have been looking for ways I can help going forward too where time permits. I was just thinking having a list of possible targets would be helpful as we could crowdsource the effort on gitlab or something.
I know the folks in the lists are up to their necks going through this and they will communicate to us in good time when the investigations have concluded.
Packages or dependencies with only one maintainer that are this popular have always been an issue, and not just a security one.
What happens when that person can't afford to or doesn't want to run the project anymore? What if they become malicious? What if they sell out? Etc.
What if the registry does something stupid and takes a package away from a developer, and said developer deletes his other packages? See left-pad.
no more patching fuzzers to allow that one program to compile. Fix the program
Agreed.
Remember Debian's OpenSSL fiasco? The one that affected all the other derivatives as well, including Ubuntu.
It all started because OpenSSL added a bunch of uninitialized memory and the PID to the entropy pool. Who the hell relies on uninitialized memory, ever? The Debian maintainer wanted to fix the Valgrind errors and submitted a patch. It wasn't properly reviewed, nor accepted into OpenSSL. The maintainer added it to the Debian package patches, and everything after that is history.
Everyone blamed Debian "because it only happened there", and mistakes were definitely made on that side, but I put much more of the blame on the OpenSSL developers.
OpenSSL added a bunch of uninitialized memory and the PID to the entropy pool.
Did they have a comment above the code explaining why it was doing it that way? If not, I'd blame OpenSSL for it.
The OpenSSL codebase has a bunch of issues, which is why somewhat-API-compatible forks like LibreSSL and BoringSSL exist.
This has always been the case. Maybe I work in a unique field but we spend a lot of time duplicating functionality from open source and not linking to it directly for specifically this reason, at least in some cases. It's a good compromise between rolling your own software and doing a formal security audit. Plus you develop institutional knowledge for that area.
And yes, we always contribute code back where we can.
We run our forks not because of security, but because pretty much nothing seems to work for production use without some source code level mods.
There's gotta be a better way to verify programs than just relying on what the devs do. For example, patching the fuzzer: that should be seen as a clear separation-of-duties problem.
That constant issue of low-developer/high-use dependencies is awful, and no one I've met on the business end seems able to figure out the need to support those kinds of people, or to accept what should frankly be legal liability for what goes wrong. This isn't news, it's just a cover song. And it's not an open source problem, it's just a software problem.
A small blurb from The Guardian on why Andres Freund went looking in the first place.
So how was it spotted? A single Microsoft developer was annoyed that a system was running slowly. That’s it. The developer, Andres Freund, was trying to uncover why a system running a beta version of Debian, a Linux distribution, was lagging when making encrypted connections. That lag was all of half a second, for logins. That’s it: before, it took Freund 0.3s to login, and after, it took 0.8s. That annoyance was enough to cause him to break out the metaphorical spanner and pull his system apart to find the cause of the problem.
The post on the oss-security list is more detailed and informative.
Give this guy a medal and a mastodon account
He already has a mastodon account : https://infosec.exchange/@fr0gger/112189232773640259
Give him another one!
Hopefully the latter.
Why not both.
The scary thing about this is thinking about potential undetected backdoors similar to this existing in the wild. Hopefully the lessons learned from the xz backdoor will help us to prevent similar backdoors in the future.
I think we need to focus on zero trust when it comes to upstream software.
exactly, stop depending on esoteric libraries
this was one hell of an april fools joke i tell you what.
Imagine
i mean, to some degree, it is.
I have heard multiple times from different sources that building from git source instead of using tarballs invalidates this exploit, but I do not understand how. Is anyone able to explain that?
If malicious code is in the source, and therefore in the tarball, what's the difference?
Because m4/build-to-host.m4, the entry point, is not in the git repo, but was included by the malicious maintainer into the tarballs.
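That also suggests one way to catch this class of divergence, at least in principle: diff the release tarball against the git tag it claims to correspond to. A rough sketch, assuming you've downloaded the release tarball; the repo URL and tag are the real ones, though availability has varied since the repos were taken down:

git clone https://github.com/tukaani-project/xz.git
git -C xz checkout v5.6.1
tar xf xz-5.6.1.tar.gz
diff -r xz xz-5.6.1 | grep '^Only in xz-5.6.1'
# Generated autotools output is expected to show up here; an m4/build-to-host.m4
# that doesn't match upstream gnulib is the kind of thing that should stand out.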
Tarballs are not built from source?
I don’t understand the actual mechanics of it, but my understanding is that it’s essentially like what happened with Volkswagen and their diesel emissions testing scheme, where it had a way to know it was being emissions tested and so it adapted to that.
The malicious actor had a mechanism that left the malicious code out when building straight from the git source, presumably because it would be more likely to be noticed there when people build from or examine the source.
Edit: a bit of grammar. Also, this is my best understanding based on what I’ve read and videos I’ve watched, but a lot of it is over my head.
it had a way to know it was being emissions tested and so it adapted to that.
Not sure why you got downvoted. This is a good analogy. It does a lot of checks to try to disable itself in testing environments. For example, setting TERM will turn it off.
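Conceptually the gating is along these lines. This is a generic shell illustration only; the real checks live in the compiled object, and the specific conditions are taken from public analyses:

# Stay dormant if this looks like an interactive or instrumented run:
if [ -n "${TERM-}" ] || [ -n "${LD_DEBUG-}" ] || [ -n "${LD_PROFILE-}" ]; then
    exit 0
fi
# ...otherwise carry on with the malicious behaviour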
The malicious code is not in the source itself, it's in test files and other files. The build process hijacks the code and inserts the malicious content while the source itself stays clean, so the co-maintainer was able to keep it hidden in plain sight.
The malicious code wasn’t in the source code people typically read (the GitHub repo) but was in the code people typically build for official releases (the tarball). It was also hidden in files that are supposed to be used for testing, which get run as part of the official building process.
The malicious code was written and debugged at their convenience and saved as an object file that had been stripped of debug symbols (this is one of the features that made Freund suspicious enough to keep digging when he profiled his backdoored ssh looking for that 500ms delay: there were no symbols to attribute the CPU cycles to).
It was then further obfuscated by being chopped up and hidden inside binary files shipped as test fixtures for the xz test suite, ostensibly examples of bad/corrupt compressed files.
Those test blobs actually do live in the git repo, but the piece that kicks off extracting them, a modified m4/build-to-host.m4, was only added to the release tarballs (generated m4 files aren't tracked in git), so the malicious build logic is nowhere to be seen on github in any form. It's nowhere until you get the tarball.
When you build from one of those tarballs, that m4 file generates some highly obfuscated shell during configure that checks its conditions, deobfuscates the test blobs, reassembles the object module, and splices it into the build, basically replacing code that looks clean in the repo. Build from a git checkout and autoreconf pulls in the clean upstream build-to-host.m4 instead, so nothing gets injected.
That's a simplified version of why there's no code to see, and that's just one aspect of this thing. It's sneaky.
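To make the shape of that gating concrete, a very loose sketch with placeholder names; the conditions are paraphrased from public write-ups (reportedly x86-64 Linux with GCC and GNU ld, during distro package builds), not the actual injected script:

should_activate() {
    # reportedly the real logic also checks for GCC/GNU ld and deb/rpm package builds
    case "$(uname -m)-$(uname -s)" in
        x86_64-Linux) return 0 ;;        # reportedly x86-64 Linux only
        *)            return 1 ;;
    esac
}
should_activate && echo "only now would the hidden object be extracted and linked in"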
I think it is the other way around. If you build from the tarball then you get pwned.
did we find out who was that guy and why was he doing that?
We probably never will.
If we ever do, it'll be 40 or 50 years from now.
It was Spez trying to collect more user data to make Reddit profitable
Probably a state actor
The CIA will know, we will most likely not.
Any additional information been found on the user?
as long as you're up to date on everything here: https://boehs.org/node/everything-i-know-about-the-xz-backdoor
the only additional thing i've seen noted is a possibility that they were using Arch, based on investigation of the tarball that they provided to distro maintainers
Probably Chinese?
Can't confirm but unlikely.
Via https://boehs.org/node/everything-i-know-about-the-xz-backdoor
They found this particularly interesting as Cheong is new information. I’ve now learned from another source that Cheong isn’t Mandarin, it’s Cantonese. This source theorizes that Cheong is a variant of the 張 surname, as “eong” matches Jyutping (a Cantonese romanisation standard) and “Cheung” is pretty common in Hong Kong as an official surname romanisation. A third source has alerted me that “Jia” is Mandarin (as Cantonese rarely uses J and especially not Ji). The Tan last name is possible in Mandarin, but is most common for the Hokkien Chinese dialect pronunciation of the character 陳 (Cantonese: Chan, Mandarin: Chen). It’s most likely our actor simply mashed plausible sounding Chinese names together.
They're more likely to be located in Eastern Europe, judging by the times of their commits (during working hours in Eastern European Time) and the fact that while most commits used a UTC+8 time zone, some of them used UTC+2 and UTC+3: https://rheaeve.substack.com/p/xz-backdoor-times-damned-times-and
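That kind of analysis is easy to reproduce from any clone of the repo history, since git commit dates carry the author's UTC offset (the author name is as it appears in the xz history):

git log --author='Jia Tan' --date=iso --pretty='%ad' | awk '{print $3}' | sort | uniq -c
# prints how many commits were made from each UTC offset (+0800, +0200, +0300, ...)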
Just because somebody picked a vaguely Chinese-sounding handle doesn't mean much about who or where.
This whole situation just emphasizes the fact that rebasing >>>>>>>>>> merge squashing.
The tukaani github repos are gone, is there a mirror somewhere?
Tukaani main website
Though unfortunately (or I guess for most use-cases fortunately) you can't find the malicious m4/build-to-host.m4 file on there afaik. The best way to find it now, should you really want to, is by looking through the commit history of the salsa.debian.org/debian/xz-utils repository, which is, as far as I understand it, the repository that the Debian packages are built from and consequently also what the compromised packages were built from.
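If you do go digging for it there, something along these lines should work; the commit id is a placeholder, pick one from the log output:

git clone https://salsa.debian.org/debian/xz-utils.git
git -C xz-utils log --all --oneline -- m4/build-to-host.m4
git -C xz-utils show '<commit-id>:m4/build-to-host.m4'   # substitute an id from the log above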