
  • They most likely run smaller pools and have their redundancy and replication provided by the application layers on top, replicating everything globally. The larger you go in scale, the further up in the stack you can move your redundancy and the less you need to care about resilience at the lower levels of abstraction.

    ZFS is fairly slow on SSDs and BTRFS will probably beat it in a drag race. But ZFS won’t lose your data. Basically, if you want between a handful of TB and a few PB stored with high reliability on a single system, along with ”modest” performance requirements, ZFS is king.

    As for the defaults - BTRFS isn’t licence-encumbered like ZFS, so BTRFS can be more easily integrated. Additionally, ZFS performs best when it can use a fairly large chunk of RAM for caching - not ideal for most people. One GB of RAM per TB of usable disk is the usual recommendation here, but less usually works fine. It also doesn’t use the ”normal” page cache, so the cache doesn’t behave in the manner people are used to.
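    That rule of thumb is easy to sketch as a quick calculation. The `arc_hint` helper and the 8 GB floor below are my own illustrative assumptions, not an official ZFS formula:

    ```python
    def arc_hint(usable_tb: float, floor_gb: float = 8.0) -> float:
        """Rough ARC sizing hint: ~1 GB of RAM per TB of usable disk,
        with a small floor left over for the rest of the system.
        Purely a rule of thumb, not an official recommendation."""
        return max(floor_gb, usable_tb * 1.0)

    print(arc_hint(16))  # a 16 TB pool -> ~16 GB of RAM for caching
    ```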

    ZFS is a filesystem for when you actually care about your data, not something you use as a boot drive, so something else makes sense as a default. Most ZFS deployments I’ve seen just boot from any old ext4 drive. As I said, BTRFS plays in the same league as Ext4 and XFS - boot drives and small deployments. ZFS meanwhile will happily swallow a few enclosures of SAS drives into a single filesystem and never lose a bit.

    tl;dr If you want reasonable data resilience and want raid 1 - BTRFS should work fine. You get checksumming and other modern features. As soon as you go above two drives and want to run raid5/6, you really want ZFS.

  • Look, there is a reason everyone who actually knows this stuff uses ZFS. A good reason. ZFS is really fucking good and BTRFS has absolutely nothing on it. It’s a toy in comparison. ZFS is the gold standard in this class.

    You have four sane options:

    • mdraid raid5 with BTRFS on top. Raid5 on BTRFS still isn’t stable as far as I know, not even in 2026.
    • Mirror or triple mirror with mdraid. Keep the third drive in the pool as extra redundancy, or outside the pool as a separate, un-raided filesystem.
    • Same as above, but BTRFS. Raid1 is stable.
    • ZFS RaidZ1 (=raid5)

    (Not sure about bit rot recovery when running BTRFS on mdraid. All variants should at least have bit rot detection.)

    To reiterate, every storage professional I know has a ZFS pool at home (and probably everywhere else they can have one, including production pools). They group BTRFS with Ext3, if they even know about it. When I built my home server, the distro and hardware were selected around running ZFS. Distros without good ZFS support were disregarded right away.

  • I started experimenting with the spice the past week. Went ahead and tried to vibe code a small toy project in C++. It’s weird. I’ve got some experience teaching programming, and this is exactly like teaching beginners - except that the syntax is almost flawless and it writes fast. The reasoning and design capabilities, on the other hand - ”like a child” is actually an apt description.

    I don’t really know what to think yet. The ability to automate refactoring across a project in a more ”free” way than an IDE is kinda nice. While I enjoy programming, data structures and algorithms, I kinda get bored at the ”write code”-part, so really spicy autocomplete is getting me far more progress than usual for my hobby projects so far.

    On the other hand, holy spaghetti monster, the code you get if you let it run free. All the people prompting based on what feature they want the thing to add will create absolutely horrible piles of garbage. On the other hand, if I prompt with a decent specification of the code I want, I get code somewhat close to what I want, and given an iteration or two I’m usually fairly happy. I think I can get used to the spicy autocomplete.

  • Ahh, good old /opt/

  • I wonder how much that high cost could be reduced by modern manufacturing. Same/similar designs, but modern tooling and logistics.

    I mean, they did not have CNC mills back then.

  • Fairly significant factor when building really large systems. If we do the math, there end up being some relationships between

    • disk speed
    • targets for ”resilver” time / risk acceptance
    • disk size
    • failure domain size (how many drives do you have per server)
    • network speed

    Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.

    Say you want 16TB of usable space and you want to be able to lose 2 drives from your array (a fairly common requirement in small systems). These are some of your options:

    • 3x16TB triple mirror
    • 4x8TB Raid6/RaidZ2
    • 6x4TB Raid6/RaidZ2

    The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).
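    The capacity math for the three example layouts is quick to sketch (idealized RAID arithmetic, ignoring filesystem overhead and spare space):

    ```python
    # Usable capacity and storage efficiency for the example layouts.
    def usable_tb(drives: int, size_tb: float, parity: int) -> float:
        """Idealized usable capacity: total minus parity/mirror drives."""
        return (drives - parity) * size_tb

    layouts = [
        ("3x16TB triple mirror", 3, 16, 2),  # only one copy of the data is usable
        ("4x8TB RaidZ2/Raid6",   4,  8, 2),
        ("6x4TB RaidZ2/Raid6",   6,  4, 2),
    ]

    for name, n, size, parity in layouts:
        usable, raw = usable_tb(n, size, parity), n * size
        print(f"{name}: {usable:.0f} TB usable of {raw} TB raw ({usable / raw:.0%})")
    ```

    All three survive two drive failures, but the usable share of raw capacity climbs from 33% to 67% as the drive count rises.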

    This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB), with low performance requirements (archives), but there we already have tape robots dominating.

    The other interesting use case is huge systems - many petabytes, up into exabytes. More modern schemes for redundancy and caching mitigate some of the issues described above, but they are usually only relevant when building really large systems.

    tl;dr: arrays of 6-8 drives at 4-12TB are probably the sweet spot for most data hoarders.

  • Oh, I fully agree that the tech behind X is absolute garbage. Still works reasonably well a decade after abandonment.

    I’m not saying we shouldn’t move on, I’m saying the architecture and fundamental design of Wayland was broken from the beginning. The threads online when they announced the project were very indicative of the following decade. We are replacing one big unmaintainable pile of garbage with 15 separate piles of hardware accelerated, soon-to-be unmaintainable tech debt.

    Oh, and a modern server doesn’t usually have a graphics card (or rather, the VM you want to host users in doesn’t). I won’t bother doing the pricing calculations, but you are easily looking at 2-5x cost per seat once you price in GPU hardware, vGPU licensing and hypervisors.

    With Xorg I can easily reach a few hundred active users per standard 1U server. If you make that work on Wayland I know some people happy to dump money on you.

  • I can only offer you garbage specs along with really shitty build quality for 1kEUR. It’s certified as ”not a direct fire hazard”. Take it or leave it. (/s)

    I have high hopes for framework, but last time I was in the market they still had some things to resolve.

  • The fundamental architectural issue with Wayland is expecting everyone to implement a compositor for a half baked, changing, protocol instead of implementing a common platform to develop on. Wayland doesn’t really exist, it’s just a few distinct developer teams playing catch-up, pretending to be compatible with each other.

    Implementing the hard part once and allowing someone to write a window manager in 100 lines of C is what X did right. Plenty of other things that are bad with X, but not that.

  • Tell me you never deployed remote Linux desktops in an enterprise environment without telling me you never deployed remote Linux desktops in an enterprise environment.

    After these decades of Wayland prosperity, I still can’t get a commercially supported remote desktop solution that works properly for a few hundred users. Why? Because on X, you could hijack the display server itself and feed that into your nice TigerVNC server, regardless of desktop environment. Nowadays, you need to implement this in each separate compositor to do it correctly (i.e. with damage tracking). Also, unlike X, Wayland generally expects a GPU in your remote desktop servers, and have you seen the prices on those lately?

  • Programmers use butterflies.

    Real sysadmins use programmers.

  • The M-series hardware is absofuckinglutely proprietary and locked down, and most likely horrible to repair.

    But holy shit, every other laptop I’ve ever used looks and feels like a cheap toy in comparison. Buggy firmware that can barely sleep, with shitty drivers for the cheapest components they could find. Battery life in the low single digits. The old ThinkPads are kinda up there in perceived ”build quality”, but I haven’t seen any other laptop that comes even close to a modern MacBook. Please HP, Dell, Lenovo, Framework or whoever, just give me a functional high quality laptop. I’ll pay.

  • Moving people from closed commercial offerings onto something self hosted is enough work without gatekeeping US open source projects, even if they are flawed. If we want to move normal people away from the commercial offerings onto something better, we can’t do things like that. Better save such warnings for when they are actually needed (”Project X has been dead for five years and is full of security holes, you should migrate to project Y instead”). Keep the experience positive regardless.

    You do you, but different people have differing requirements and preferences. Don’t scare them away please.

  • Because Docker’s record with regards to security is questionable, and some people like to get automatic updates from their distro. Personally, I think the design of Docker is absolute garbage. Containers are fine, but Docker is not the right mechanism for them. (They’re also nothing new - see BSD jails and Solaris zones.)

    Immich on NixOS works perfectly, and I also get automatic updates.

  • If you stay on X, you can keep using the same window manager for longer. My XMonad config is over a decade old, and I bet my old dwm config.h still compiles.

  • The relative size of the double handling is the potential problem. I think Nvidia is just trying to extend the gold rush for a bit longer.

  • Agreed, it’s not perfect, especially not with regards to drivers from some of them. But:

    https://insights.linuxfoundation.org/project/korg/contributors?timeRange=past365days&start=2024-12-31&end=2025-12-31

    I expect that the ability of B2C-products to keep their code somewhat closed keeps them from moving to other platforms, while simultaneously pumping money upstream to their suppliers, expecting them to contribute to development. The linked list is dominated by hardware vendors, cloud vendors and B2B-vendors.

    Linux didn’t win on technical merit, it won on licensing flexibility. Devs and maintainers are very happy with GPL2. Does it suck if you own a Tivo? Yes. Don’t buy one. On the consumer side, we can do some voting with our wallets, and some B2C vendors are starting to notice.

  • Do this:

    • Calculate the total power cost of running it at 100% load since 2014
    • Calculate Flops/Watt and compare with modern hardware
    • Calculate MTTF when running at 100% load. Remember that commercial support agreements are 4-5 years for a GPU, and if it dies after that, it stays dead.
    • In AI, consider the full failure domain (1 broken GPU = 7+ GPUs out of commission) for the above calculation.
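    A back-of-the-envelope version of the power calculation, with every number an illustrative assumption (card wattage, electricity price, years of service), not a measured figure:

    ```python
    # Rough electricity cost of running a card at 100% load for years.
    # All inputs are illustrative assumptions.
    def power_cost_usd(watts: float, years: float, usd_per_kwh: float) -> float:
        hours = years * 365 * 24
        return watts / 1000 * hours * usd_per_kwh

    # e.g. a 250 W card running flat out since 2014 (~11 years) at $0.15/kWh
    print(round(power_cost_usd(250, 11, 0.15)))  # roughly 3600 USD
    ```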

    You’ll probably end up with 4-6 years as the usable lifetime of your billion dollar investment. This entire industry is insane. (GTX 1080 here. Was considering an upgrade until the RAM prices hit.)

  • Nvidia sells plenty of GPUs for actual money, they are good for it.

    No, the real issue is the depreciation for the people owning the GPUs. Your GPU will be usable for 4-6 years, and 2-4 of those will be spent as ”the cheap old GPU”. After that, you need new GPUs. (And as the models will be larger by then, you need moar GPU.)

    How the actual fuck do these people expect to get any ROI on that scale with those timeframes? With training, maybe the trained model can be an asset (lol), but for inference there are basically no residual benefits.
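    As a sketch of why the timeframes hurt (the $30k price and the 5-year life below are illustrative assumptions, not vendor figures), straight-line depreciation alone sets a floor on what each card has to earn:

    ```python
    def breakeven_per_month(capex_usd: float, useful_years: float) -> float:
        """Revenue needed per month just to recover the purchase price,
        ignoring power, cooling, failures and any actual profit."""
        return capex_usd / (useful_years * 12)

    # A hypothetical $30k accelerator written off over 5 years:
    print(breakeven_per_month(30_000, 5))  # 500.0 USD/month, before any costs
    ```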

  • I agree with your morals and your end goal.

    How do you want to fund the development of Open Source? Because currently most of it is funded by corporations, in turn funded by ”corporatist simping”. The expectations of the average user simply can’t be fulfilled by hobbyist developers, and then we need funding. How do we get the Windows user ”John Smith” to personally fork over money to the correct developers?

    Proton/Wine/KDE would not be in their current state unless they got that sweet proprietary Valve money. In our current world we need to use corporate money to further open source, not fight it. Follow the stream and steer the flow. Given time, we can diversify funding and control.