From what I've read, it sounds like the update file that was causing the problems was entirely filled with zeros; the patched file was the same size but had data in it.
My entirely speculative theory is that the update file they intended to deploy was okay (and possibly passed internal testing), but when it was being deployed to customers some error caused the file to be written incorrectly (or somehow a blank dummy file was used). In other words, the original update could have gone through testing but wasn't what actually ended up reaching customers.
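To make the gap that theory implies a bit more concrete: an all-zeros file of the correct size would slip past any check that only looks at file size or existence, but comparing a content hash against the artifact that actually passed testing would catch it. The sketch below is entirely my own hypothetical illustration (the function name, paths, and the hash check are all assumptions, not anything the vendor has described):

```python
import hashlib
from pathlib import Path

def verify_deployed_artifact(deployed_path: str, expected_sha256: str) -> bool:
    """Return True only if the file on disk matches the artifact that passed testing.

    Hypothetical post-write integrity check; the kind of step that would catch a
    file written out as zeros (or otherwise corrupted) during deployment.
    """
    data = Path(deployed_path).read_bytes()

    # An all-zeros file of the right size passes a naive "file exists and has the
    # expected length" check, so compare content, not size.
    if not any(data):
        return False

    return hashlib.sha256(data).hexdigest() == expected_sha256
```

Whether they already do something like this internally, and it simply wasn't applied at the right point in the pipeline, I obviously have no idea.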
I also assume that it's very difficult for them to conduct UAT, given that a core part of their protection comes from being able to fix potential security issues before they are exploited. Extensive UAT prior to deploying updates would both slow down how quickly they can fix those issues (and therefore give malicious actors more time to exploit them) and give malicious parties time to adapt their attacks in response to the upcoming changes, which may become public knowledge once they are released for UAT.
There's also just an issue of scale; they apparently release several updates like this per day, so I'm not sure how UAT could even be conducted at that pace. Granted, I've only ever personally been involved with UAT for applications that had quarterly (major) updates, so there might be ways to get it done several times a day that I'm not aware of.
None of that is to take away from the fact that this was an enormous cock-up, and that whatever processes they have in place are clearly not sufficient. I completely agree that whatever they do for testing these updates has failed in a monumental way. My work was relatively unaffected by this, but I imagine there are lots of angry customers who are rightly demanding answers about how exactly this happened and how they intend to prevent something like it from happening again.