I agree with the sentiment regarding being woken up, but I used to look forward to being on call. I could go to bed happily, knowing I was earning a significant premium and I'd still get a good night's sleep, because the systems just didn't go down. I had the advantage that most of the customers I supported had similar requirements, so I had their systems locked down pretty well. Minor problems (disk space. Why is it always disk space?) would self-heal, and catastrophic failures (hardware faults, or the engineer who was supposed to replace a component unplugging the wrong server) would fail over to the rest of the cluster. I never had much trouble with logging either; it was typically one of the first things set up, and I had most of the setup automated to avoid missing anything. I suppose the real point is that I was supporting systems I'd built, and I'd built them to ensure I didn't have to be woken up.
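For flavour, the disk space self-healing was nothing clever; a cron job along these lines did most of it (the mount point, the 90% threshold and the cleanup targets here are illustrative, the real jobs were tuned per system):

    #!/bin/sh
    # Clear out the usual suspects before anyone gets paged.
    usage=$(df --output=pcent /var | tail -n1 | tr -dc '0-9')
    if [ "$usage" -gt 90 ]; then
        # Rotated logs older than a week were almost always the culprit.
        find /var/log -name '*.gz' -mtime +7 -delete
        # Only make noise if cleanup didn't bring us back under the line.
        usage=$(df --output=pcent /var | tail -n1 | tr -dc '0-9')
        [ "$usage" -gt 90 ] && logger -p daemon.warning "/var still at ${usage}% after cleanup"
    fi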
I do a lot more troubleshooting and rescue-type work nowadays, and the number of times I run into systemd components just not doing what they should is frustrating, to say the least. Being able to pull the logs by knowing the service name would be nice, but a) you could already do that, because you'd set up different services to log to different places, and b) you don't always know the service name in question. Being able to just grep the log directory is a lifesaver. You can still do that, but only because distros set systemd up to log to plain files as well as its binary format.

I loathe the way systemd ends up spreading its unit files over about a dozen different directories, with overrides increasing that even further. I just want to know what services I've got and what will start up, in exactly what order, on the next reboot, dammit! The last one is particularly tricky: because services are started in parallel, you can't predict exactly what order things will actually start in between targets. That shouldn't matter, since units should have all their dependencies properly listed, but it's no fun tracking down a race condition that only happens once every x reboots, when a particular network service takes a few hundred milliseconds longer than usual to come up. Give me sequential boot any day. It might take a few tens of seconds longer, but it happens the same way each time, and I only need to look in one place to know what that order is.
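For anyone stuck doing the same archaeology, these are the invocations I end up leaning on. They're all stock systemd tools; nginx.service is just a stand-in for whatever unit you're chasing:

    # Logs for a unit, if you know its name:
    journalctl -u nginx.service

    # Print a unit file plus every drop-in override, each with its full path:
    systemctl cat nginx.service

    # List unit files that have been overridden or extended:
    systemd-delta

    # What's enabled to start on the next boot:
    systemctl list-unit-files --state=enabled

    # After the fact, what actually gated a unit's start time:
    systemd-analyze critical-chain nginx.service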
As to systemd's dominance: once Red Hat, where Mr Poettering worked, chose it, it became hard for other distros not to. Derivative distros obviously went with it, and if you look back through the various mailing list discussions, it was far from a unanimous choice for distros like Debian. When they did eventually adopt it, it was mostly, as far as I can see, because it would theoretically make packaging easier. Fortunately they still support sysvinit, so all is not lost for those of us who want a mainstream distro without the systemd bloat.
Shifting stuff to kube is definitely good for making things more robust, so long as you've got the underlying clustering working, and I quite like working with it too. Once you realise it's basically just a database and a message queue with a bunch of controllers for managing storage, networking, containers and the like, plus the ability to extend all of that, you can do all sorts of fun things with it.
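As a toy illustration of that controller idea (nothing you'd run in production, and the reconcile step here is just an echo): watch the API for changes and react. Real controllers use informers and work queues, but the basic shape is the same.

    # Watch pods and react to each change; a stand-in for a reconcile loop.
    kubectl get pods --watch -o name | while read -r pod; do
        echo "change observed on ${pod}; a real controller would reconcile here"
    done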
Anyway, I've gone on for long enough. If you're a sysadmin and the number of trouble calls is going down, then you probably don't hear this often enough: well done, you're doing a great job.