If you stay on X, you can keep using the same window manager for longer. My XMonad config is over a decade old, and I bet my old dwm config.h still compiles.
The relative size of the double handling is the potential problem. I think Nvidia is just trying to extend the gold rush for a bit longer.
Agreed, it’s not perfect, especially not with regards to drivers from some of them. But:
I expect that being able to keep their code somewhat closed is what keeps B2C products from moving to other platforms, while they simultaneously pump money upstream to their suppliers, expecting them to contribute to development. The linked list is dominated by hardware vendors, cloud vendors and B2B vendors.
Linux didn’t win on technical merit, it won on licensing flexibility. Devs and maintainers are very happy with GPL2. Does it suck if you own a Tivo? Yes. Don’t buy one. On the consumer side, we can do some voting with our wallets, and some B2C vendors are starting to notice.
Do this:
- Calculate the total power cost of running it at 100% load since 2014
- Calculate Flops/Watt and compare with modern hardware
- Calculate MTTF when running at 100% load. Remember that commercial support agreements are 4-5 years for a GPU, and if it dies after that, it stays dead.
- In AI, consider the full failure domain (1 broken GPU = 7+ GPUs out of commission) for the above calculation.
You’ll probably end up with 4-6 years as the usable lifetime of your billion dollar investment. This entire industry is insane. (GTX 1080 here. Was considering an upgrade until the RAM prices hit.)
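If you want to sanity-check that, here is a rough back-of-the-envelope sketch in Python. Every constant in it (wattage, TFLOPS, electricity price) is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope sketch of the calculation above. Assumptions only:
# a ~2014-era card at ~250 W and ~5 TFLOPS FP32, a modern card at ~350 W
# and ~80 TFLOPS, electricity at $0.15/kWh.

HOURS_PER_YEAR = 24 * 365
USD_PER_KWH = 0.15  # assumed average electricity price

def power_cost(watts: float, years: float) -> float:
    """Electricity cost of running at 100% load for the given number of years."""
    kwh = watts / 1000 * HOURS_PER_YEAR * years
    return kwh * USD_PER_KWH

def flops_per_watt(tflops: float, watts: float) -> float:
    return tflops * 1e12 / watts

old_cost = power_cost(watts=250, years=10)       # running flat out since ~2014
old_eff = flops_per_watt(tflops=5, watts=250)    # assumed 2014-era card
new_eff = flops_per_watt(tflops=80, watts=350)   # assumed current card

print(f"Power cost since 2014: ${old_cost:,.0f} per card")
print(f"Old card: {old_eff:.2e} FLOPS/W")
print(f"New card: {new_eff:.2e} FLOPS/W ({new_eff / old_eff:.0f}x better)")
```

The MTTF and failure-domain parts are harder to put numbers on, but they only make the old hardware look worse.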
Nvidia sells plenty of GPUs for actual money, they are good for it.
No, the real issue is the depreciation for the people owning GPUs. Your GPU will be usable for 4-6 years, and 2-4 of those years will be spent as ”the cheap old GPU”. After that time, you need new GPUs. (And as the models are larger by then, you need moahr GPU)
How the actual fuck do these people expect to get any ROI on that scale with those timeframes? With training, maybe the trained model can be an asset (lol), but for inference there are basically no residual benefits.
I agree with your morals and your end goal.
How do you want to fund the development of Open Source? Because currently most of it is funded by corporations, in turn funded by ”corporatist simping”. The expectations of the average user simply can’t be fulfilled by hobbyist developers, so we need funding. How do we get the Windows user ”John Smith” to personally fork over money to the correct developers?
Proton/Wine/KDE would not be in their current state unless they got that sweet proprietary Valve money. In our current world we need to use corporate money to further open source, not fight it. Follow the stream and steer the flow. Given time, we can diversify funding and control.
Yes. Kinda.
How do you think Linux devs get paid? The devices are locked down, sure, but there are strong incentives to upstream code and fund further development upstream. Linux ”won” because of this. You can’t build and develop Linux for such a wide audience and such a variety of hardware with a bunch of hobbyists.
As Linus himself said plenty of times - GPL2 was the correct choice. Roku, Tizen, Chromebooks and Amazon garbage are absolutely within what the developers intended, and the devs are doing the work after all.
From a consumer standpoint, I absolutely agree with you, open everything is wonderful. However - commercial interests currently fund most OSS development. Without those funds, development stops and developers must take other paying jobs (probably closed source). Would be nice to change this, but then we need to completely pivot our funding model. You need to pay devs, either directly or indirectly (taxes, foundations, etc).
So far, the open source community hasn’t been very good at figuring out funding models for consumer products. It usually ends with the development team needing to put food on the table, so they add a subscription and close down parts of the project. About two seconds later, the project has ten forks and the original author can’t buy groceries.
”Buy me a beer” simply isn’t a viable mechanism to fund open source. How should we do it?
Personal preference: Slowly move the public sector towards open source, and require them to provide financial aid to products they use. Not perfect, but something that could happen gradually, without shocking the system.
tl;dr: yes, but also no.
How else would you be webscale?
Look, I’m not saying BitLocker isn’t flawed. I’m saying the alternatives on Linux are shit. All the primitives are there, and you can do it on Linux, with lots of work, testing and QC of all software updates on all your hardware (or else you’ll do manual entry of disaster recovery keys for the next decade). But on Windows it’s a checkbox to encrypt the entire fleet, along with management of recovery keys.
Also, on audits: for people doing checkbox security (i.e. most regulated industries), this is very easy to audit. You just smack in ”BitLocker” and you are done. For some, the threat isn’t really information loss, it’s loss of compliance (and therefore revenue). Stupid, but here we are. If you mean actual security, then you are probably correct.
A smart card only authenticates and identifies the user - it can’t do attestation of the boot chain. If we use a smart card for disk encryption, a malicious or compromised user can just pop out the SSD, mount and decrypt it (using the smart card) on a separate machine and extract/modify data without a trace. If you use Secure Boot, the TPM and disk encryption as intended, you can trust both the user (via smart card) and the machine (probably via a Kerberos machine key). Basically, this method prevents the user from accessing or modifying data on their own machine outside the managed OS.
Again, on Windows this is basic shit any Windows sysadmin can roll out easily following a youtube tutorial or something. Providing those same security controls on Linux will yield a world of pain.
We really need to make this easy on Linux. systemd-boot and UKIs are trying, but are not even close to enough.
You need to have secure boot in order to have the disk decrypt without user input, otherwise the chain is untrusted. You can (and probably should) load your own keys into the firmware and sign everything yourself. MS has nothing to do with it, except that BitLocker is much better than anything any Linux distro has to offer today.
You need to have the disk decrypt without user input, and you can’t have the secret with the user. (As the user is untrusted - could be someone stealing the laptop.) The normal Linux user mantra of ”I own the machine” does not apply here. In this threat model, the corporation owns the machine, and in particular any information on it.
As for sudo, this is why we have polkit. (Yes, technically root, but you get my point)
And as for number 7 - this is why most Windows fleets use ”Software Center” or similar. No reason you can’t do the same on Linux, just that no one has done it yet. (I mean, you can, with pull requests into a puppet repo, but that’s not very user friendly)
Hate RHEL all you want, but first take a look at which distros have any kind of commercial support at all from software vendors. This is the complete list: RHEL, sometimes Rocky, sometimes Ubuntu. Go ask your vendor about Fedora Silverblue and see what happens. The primary reason to run Linux like this is usually to use a specific (and probably very expensive) piece of software that works best on Linux, so the distro choice is usually limited to whatever that software vendor supports. (And when they say Linux, they are really saying ”the oldest still supported RHEL”.)
Basically, corporate requirements go completely against the requirements of enthusiasts and power users. You don’t need Secure Boot to protect your machine from thieves, but a corporation needs Secure Boot to protect the machine from you.
I’ve managed Linux desktop fleets in enterprise-like environments. I’ll modify your list a bit:
- Use Rocky or RHEL (because the commercial software you want to use only has support for RHEL and/or Ubuntu)
- Disallow root completely, without exception
- Do additional hardening
- Don’t allow sudo for fucking anything
- Run centrally controlled configuration management (most likely Puppet)
- Ironically - disallow any use of Flatpak, Snap and AppImage. They don’t play that well with Kerberized NFS-mounted home directories, which you absofuckinglutely will be required to use. (Might have improved since I tried last time, but probably not. Kerberos and network mounted directories, home or otherwise, are usually a hard requirement.)
- Install and manage all software via configuration management (again, somewhat ironically, this works very well with RPMs and DEBs, but not with Flatpak/Snap/Appimage). Update religiously, but controlled (i.e. Snap is out).
- A full reprovision of everything fairly regularly.
- You most likely want TPM-based unlocking of your LUKS encrypted drives, with SecureBoot turned on. This is very fun to get working properly in a Linux environment, but super simple to do on Windows.
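For that last point, here is a minimal sketch of how the enrollment step might be scripted on a systemd-based distro, assuming systemd-cryptenroll is available and the root partition is a LUKS2 volume. The device path, the PCR selection and the crypttab handling are all assumptions; a real fleet would push this through configuration management and handle recovery keys properly:

```python
# Hedged sketch: bind a LUKS keyslot to the TPM so the disk unlocks without
# user input, sealed against the Secure Boot state (PCR 7). Device path and
# crypttab handling are illustrative assumptions only.
import subprocess

LUKS_DEVICE = "/dev/nvme0n1p3"  # assumption: the encrypted root partition

def enroll_tpm2(device: str, pcrs: str = "7") -> None:
    """Add a TPM2-bound keyslot; it won't unseal if the Secure Boot state changes."""
    subprocess.run(
        ["systemd-cryptenroll", "--tpm2-device=auto", f"--tpm2-pcrs={pcrs}", device],
        check=True,
    )

def ensure_crypttab(device: str, name: str = "luks-root") -> None:
    """Tell the initrd to try the TPM at boot instead of prompting for a passphrase."""
    entry = f"{name} {device} none tpm2-device=auto\n"
    with open("/etc/crypttab", "a+") as f:
        f.seek(0)
        if entry not in f.read():
            f.write(entry)

if __name__ == "__main__":
    enroll_tpm2(LUKS_DEVICE)
    ensure_crypttab(LUKS_DEVICE)
    # Rebuild the initramfs afterwards and reboot to test.
```

Getting this to survive every kernel, bootloader and firmware update without the whole fleet falling back to recovery-key prompts is exactly the part that is painful on Linux and a checkbox on Windows.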
And as you have guessed, on Windows this requires a bit of point and click in SCCM to do decently.
On Linux, you’ll wanna start by getting a few really good sysadmins to write a bunch of Puppet for a year or so.
(If we include remote desktop capabilities in the discussion, I’ll do my yearly Wayland-rant.)
X11 still works fine, despite the FUD.
Xfce4 is one option, several others exist.
Software compatibility is a problem on X as well, so I’m extrapolating. I don’t expect the situation to get better though. I’ve managed software that caused fucking kernel panics unless it ran on Gnome. The support window for this type of software is extremely narrow and some vendors will tell you to go pound sand unless you run exactly what they want.
I’m no longer working with either educational or research IT, so at least it’s someone else’s problem.
As for ThinLinc, their customers have been asking for the past decade what their plan is, but to quote them: ”Fundamentally, Wayland is not compatible with remote desktops in its core design.” (And that was made clear by everyone back in 2008)
Edit: tangentially related, the only reasonable way to run VNC now against Wayland is to use the tightly coupled VNC-server within the compositor (as you want intel on window placements and redraws and such, encoding the framebuffer is just bad). If you want to build a system on top of that, you need to integrate with every compositor separately, even though they all support ”VNC” in some capacity. The result is that vendors will go for the common denominator, which is running in a VM and grabbing the framebuffer from the hypervisor. The user experience is absolute hot garbage compared to TigerVNC on X.
It’s great that most showstoppers are fixed now. Seventeen years later.
But I’ll bite: Viable software-rendered and/or hardware-accelerated remote desktop support with load balancing and multiple users per server (headless and GPU-less). So far - maybe possible. But then you need to allow different users to select different desktop environments (due to either user preferences or actual business requirements). All this may be technically possible, but the architecture of Wayland makes this very hard to implement and support in practice. And if you get it going, the hard focus on GPU acceleration yields an extreme cost increase, as you now need to buy expensive Nvidia GPUs for VDI with even more expensive licenses. Every frame can’t be perfect over a WAN link.
This is trivial with X, multiple commercially supported solutions exist, see for example Thinlinc. This is deployable in literally ten minutes. Battle tested and works well. I know of multiple institutional users actively selecting X in current greenfield deployments due to this, rolling out to thousands of users in well funded high profile projects.
As for the KDE showstopper list - that’s exactly my point. I can’t put my showstoppers in a single place, I need to report to KDE, Gnome and wlroots and then track all of them, that’s the huge architectural flaw here. We can barely get commercial vendors to interact with a single project, and the Wayland architecture requires commercial vendors to interact with a shitton of issue trackers and different APIs (apparently also dbus). Suddenly you have a CAD suite that only works on KDE and some FEM software that only runs on a particular version of Gnome, with a user that wants both running at the same time. I don’t care about how well KDE works. I care that users can run the software they need, the desktop environment is just a tool to do that. The fragmentation between compositors really fucks this up by coupling applications to a specific compositor. Eventually, this will focus commercial efforts on the biggest commercial desktop environment (i.e. whatever RHEL uses), leaving the rest behind.
(Fun story, one of my colleagues using Wayland had a post-it with ”DO NOT TURN OFF” on his monitor for the entire pandemic - his VNC session died if the DisplayPort link went down.)
In principle I agree with you. But have you seen the state of the rest of the industry? Framework stands out as a bastion of repairability, the rest is mostly garbage.
I’d honestly expect a longer lifetime from a Macbook than almost anything else on the market at this point, especially if we are talking about high performance laptops for ”creative” work. You know, apart from an old Thinkpad, those machines are invincible.
Oh no, people wrote lists like that 17 years ago. That’s the fun part. We have been complaining all along.
It’s hilarious that all of this was foreseen 17 years ago by basically everyone, and here is a nice list providing just those exact points. I’ve never seen a better structured ”told ya so” in my life.
The point isn’t whether the features are there or not, but how horrendously fragmented the ecosystem is. Implementing anything against that mess of an API surface would be insane for any open source project to attempt, even ignoring that the compositors are still moving targets.
(Also, holy shit the Gnome people really want everyone to use dbus for everything.)
Edit: 17 years. Seventeen years. This is what we got. The list may just be the status quo now, but it’s telling that it took 17 years to implement most of the features expected of a display server back in the last millennium. Most features, but not all.