enumerator4829 @sh.itjust.works
Posts 0
Comments 27
Framework’s first desktop is a strange—but unique—mini ITX gaming PC
  • The thing is, consumers didn’t push Nvidia’s stock sky-high, AI did. Microsoft isn’t pushing anything sane to consumers; Microsoft is pushing AI. AMD, Intel, Nvidia and Qualcomm are all pushing AI to consumers. Additionally, on the graphics side of things, AMD is pushing APUs to consumers. They are all pushing things that require higher memory bandwidth.

    Consumers will get ”trickle-down silicon”, like it or not. Out-of-package memory will die. Maybe not with your next gaming rig, but maybe the one after that.

  • Framework’s first desktop is a strange—but unique—mini ITX gaming PC
  • Wrote a longer reply to someone else, but briefly, yes, you are correct. Kinda.

    Caches won’t help with bandwidth-bound compute (read: ”AI”) if the streamed dataset is significantly larger than the cache. A cache will only speed up repeated access to a limited set of data.
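
    A toy illustration of that point, as a minimal sketch only: the same number of additions runs much faster when the working set is cache-resident than when it streams a dataset far larger than any cache. The 1 GiB and 256 KiB sizes are assumptions about a typical desktop cache hierarchy; compile with something like `gcc -O2`.

```c
/* Toy illustration, not a rigorous benchmark: sum 1 GiB once (streaming,
 * mostly cache misses, bandwidth-bound) versus summing a 256 KiB
 * cache-resident slice the same total number of times. Sizes are
 * assumptions about a typical desktop cache hierarchy. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now(void)                      /* monotonic wall-clock seconds */
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Four accumulators keep the loop limited by loads, not by the add
 * dependency chain. n must be a multiple of 4 (both sizes below are). */
static unsigned long long sum(const unsigned long long *a, size_t n, size_t reps)
{
    unsigned long long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t r = 0; r < reps; r++)
        for (size_t i = 0; i < n; i += 4) {
            s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
        }
    return s0 + s1 + s2 + s3;
}

int main(void)
{
    size_t big = 1ul << 27, small = 1ul << 15;        /* 1 GiB vs 256 KiB of u64 */
    unsigned long long *a = malloc(big * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < big; i++) a[i] = 1;

    double t0 = now();
    unsigned long long s1 = sum(a, big, 1);             /* stream the dataset once   */
    double t1 = now();
    unsigned long long s2 = sum(a, small, big / small); /* same work, cache-resident */
    double t2 = now();

    printf("streaming: %.3f s   cache-resident: %.3f s   (%llu == %llu)\n",
           t1 - t0, t2 - t1, s1, s2);
    free(a);
    return 0;
}
```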

  • Framework’s first desktop is a strange—but unique—mini ITX gaming PC
  • Yeah, the cache hierarchy is behaving kinda wonky lately. Many AI workloads (and that’s what’s driving development these days) are constrained by bandwidth, and cache will only help you with part of that. Cache will help with repeated access, not as much with streaming access to datasets much larger than the cache (i.e. many current AI models).

    Intel already tried selling CPUs with both on-package HBM and slotted DDR RAM. No one wanted it, as the performance gains from the expensive HBM evaporated completely as soon as you touched memory out of package (assuming workloads bound by memory bandwidth, which currently dominate the compute market).

    To get good performance out of that, you may need to explicitly code the memory transfers to enable prefetch (preferably asynchronous) from the slower memory into the faster, à la classic GPU programming (rough sketch below). YMMV.
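
    A rough sketch of that pattern, under stated assumptions: plain heap buffers stand in for the slow and fast tiers, and a pthread doing memcpy stands in for whatever asynchronous copy engine the real hardware offers. While chunk c is being processed out of one staging buffer, chunk c+1 is copied into the other, which is the classic GPU double-buffering trick. Build with `gcc -O2 -pthread`.

```c
/* Double-buffered staging sketch: the "slow"/"fast" split is simulated with
 * ordinary heap buffers, so this only illustrates the transfer pattern. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK  (1u << 18)             /* elements per staged chunk (2 MiB)   */
#define CHUNKS 64                     /* chunks in the large "slow" array    */

struct copy_job { double *dst; const double *src; size_t n; };

static void *copy_worker(void *arg)   /* stands in for an async DMA/prefetch */
{
    struct copy_job *j = arg;
    memcpy(j->dst, j->src, j->n * sizeof(double));
    return NULL;
}

int main(void)
{
    double *slow = malloc((size_t)CHUNKS * CHUNK * sizeof(double));
    double *fast[2] = { malloc(CHUNK * sizeof(double)),
                        malloc(CHUNK * sizeof(double)) };
    if (!slow || !fast[0] || !fast[1]) return 1;
    for (size_t i = 0; i < (size_t)CHUNKS * CHUNK; i++) slow[i] = 1.0;

    struct copy_job job = { fast[0], slow, CHUNK };
    copy_worker(&job);                               /* prime the first buffer */

    double sum = 0.0;
    for (int c = 0; c < CHUNKS; c++) {
        pthread_t t;
        int have_next = c + 1 < CHUNKS;
        if (have_next) {                             /* start staging chunk c+1 ... */
            job.dst = fast[(c + 1) % 2];
            job.src = slow + (size_t)(c + 1) * CHUNK;
            pthread_create(&t, NULL, copy_worker, &job);
        }
        const double *cur = fast[c % 2];             /* ... while computing on chunk c */
        for (size_t i = 0; i < CHUNK; i++) sum += cur[i];
        if (have_next) pthread_join(t, NULL);        /* copy must land before buffer reuse */
    }

    printf("sum = %.0f (expected %d)\n", sum, CHUNKS * (int)CHUNK);
    free(slow); free(fast[0]); free(fast[1]);
    return 0;
}
```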

  • Framework’s first desktop is a strange—but unique—mini ITX gaming PC
  • All your RAM needs to be the same speed unless you want to open up a rabbit hole. All attempts at mixing speeds thus far have kinda flopped. You can make very good use of such systems, but I’ve only seen it succeed with software specifically tailored for that use case (say databases or simulations; see the sketch below).

    The way I see it, RAM in the future will be on package and non-expandable. CXL might get some traction, but naah.
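
    For a sense of what "software specifically tailored for that use case" looks like in practice, here is a minimal sketch using libnuma, assuming a Linux machine where the fast tier (e.g. the on-package HBM on Xeon Max) shows up as its own NUMA node. The node numbers are assumptions; check the real topology with `numactl --hardware`, and link with `-lnuma`.

```c
/* Minimal sketch of explicit tiered-memory placement via libnuma.
 * Node numbers are hypothetical; query the real topology before use. */
#include <numa.h>
#include <stdio.h>

#define FAST_NODE 1    /* assumption: HBM exposed as NUMA node 1 */
#define SLOW_NODE 0    /* assumption: plain DDR on NUMA node 0   */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    size_t hot_n  = 1ul << 20;   /* small, bandwidth-hungry working set */
    size_t cold_n = 1ul << 24;   /* large, rarely-touched bulk data     */

    /* Explicit placement: hot data in the fast tier, bulk data in the slow tier. */
    double *hot  = numa_alloc_onnode(hot_n  * sizeof(double), FAST_NODE);
    double *cold = numa_alloc_onnode(cold_n * sizeof(double), SLOW_NODE);
    if (!hot || !cold) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    /* ... run the bandwidth-bound kernel against hot, page through cold ... */

    numa_free(hot,  hot_n  * sizeof(double));
    numa_free(cold, cold_n * sizeof(double));
    return 0;
}
```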

  • Framework’s first desktop is a strange—but unique—mini ITX gaming PC
  • I don’t think you are wrong, but I don’t think you go far enough. In a few generations, the only option for top performance will be a SoC. You’ll get to pick which SoC you want and what box you want to put it in.

  • Framework’s first desktop is a strange—but unique—mini ITX gaming PC
  • Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (source: LTT video with Framework CEO)

    IMHO: Up until now, using soldered RAM was lazy and cheap bullshit. But I do think we are at the limit of what’s reasonable to do over socketed RAM. In high-performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with onboard memory gaining ground. I think we’ll see the same trend on consumer stuff as well. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI-slop, and socketed RAM simply won’t keep up.

    It’s sad, but in a few generations I think only the lower-end consumer CPUs will still be usable with socketed RAM. I’m betting the high-performance consumer CPUs will require not only soldered but on-board RAM.

    Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg

  • Immich: opinion revised
  • If you’ve taken care to properly isolate that service, sure. You know, on a dedicated VM in a DMZ, without access to the rest of your network. Personally, I’d avoid using containers as the only barrier, but your risk acceptance is yours to manage.

  • Immich: opinion revised
  • Well, I’d just go for a reverse proxy, I guess. If you are lazy, just expose it as an IP without any DNS. For working DNS, you can just add a public A record pointing at the Pi’s local IP. For certs, you can’t rely on the default HTTP challenge that Let’s Encrypt uses; you’ll need to do it via DNS validation or wildcards or something.

    But the thing is, as your traffic is on a VPN, you can fuck up DNS and TLS and Auth all you want without getting pwnd.

  • Immich: opinion revised
  • Then you expose your service on your local network as well. You can even do fancy stuff to get DNS and certs working if you want to bother. If the SO lives elsewhere, you get to deploy a Raspberry Pi to project services into their local network.

  • Immich: opinion revised
  • I’d recommend setting up a VPN, like tailscale. The internet is an evil place where everyone hates you and a single tiny mistake will mess you up. Remove risk and enjoy the hobby more.

    Some people will argue that serving stuff on open ports to the public internet is fine. They are not wrong, but don’t do it until you know, understand and accept the risks. (’normal_distribution_meme.pbm’)

    Remember, risk is ’probability’ times ’shitshow’, and other people can, in general, only help you determine the probability.

  • Linux royalty backs adoption of Rust for kernel code
  • ”I don’t believe those MBA types should be in the discussion at this level at all.”

    That’s the thing. They are in the discussion. It doesn’t matter what we think about it. If touching Rust risks yielding lower profits this quarter, it’s an automatic ”fuck off you filthy hobbyists”. Even having the discussion costs money.

    Rust in the kernel isn’t about technology, it’s about economics and risk management. I’d like to see the discussion move on from ”C bad unsafe rust gud typesaf” to a level where the suggested benefits of Rust are made clear to the people holding the bags of money, preferably presenting some actual monetary benefits. (Oh, and to make things worse, there are thousands of different stakeholders, with different interests, many of which are in conflict. Good luck!)

    So yeah, I get that you don’t care about it. But you probably should.

  • Linux royalty backs adoption of Rust for kernel code
  • I’m still kind of on the fence about Rust in the kernel. Linux isn’t some random hobby project; there are serious people working for serious companies in the project. Rust has a clear value proposition w.r.t. its qualities as a language, but I don’t think it’s as clear on a system level.

    Say I’m working for a large company as a dev, maintaining a subsystem (let’s say a driver). Letting other people (filthy casual hobbyists) mess around with their filthy type safety will eventually spill into my subsystem and cause extra work. I don’t want the extra work, I just want to have my driver working and then go home. And even if I’m okay with the extra work, my boss won’t be. Even the risk of extra costs down the line will be enough for some to shut it down completely.

    There are boring people working for huge corporations with huge stakes in the Linux kernel. I don’t think they see that much value in Rust at the moment, and I think the Rust crowd might need to hire some MBAs if they want to expand their presence in the kernel.