
  • But 22301 isn't prime? It's 29*769.

  • A piece of plastic broke off from my laptop once. It was supposed to hold one of the two screws fixing the cover of the RAM & drive section, and now there was just a larger round hole. I measured the hole and the screw, designed a replacement in Blender (not identical - I wanted something more solid and reliable), and printed it; it took two attempts to get the shape exactly right. I've had zero issues with it in all these years.

  • Thanks! I now see that Tai Chi, unlike yoga, is mentioned frequently online in the context of the film, so that should be right; it narrows things down.

  • What is this thing? @lemmy.world

    What is this gesture?

  • Those are the ones, the 0414 release.

  • QwQ-32B for most questions, llama-3.1-8B for agents. I'm looking for new models to replace them though, especially the agent one.

    Want to test the new GLM models, but I'd rather wait for llama.cpp to definitively fix the bugs with them first.

  • What I've ultimately converged on, without any rigorous testing:

    • using Q6 if it fits in VRAM+RAM (anything higher is a waste of memory and compute for barely any gain); otherwise either some small quant (rarely) or skipping the model altogether;
    • not really using IQ quants - as far as I remember, they depend on a calibration dataset, and I don't want the model's behaviour to be affected by some additional dataset;
    • other than the Q6 rule, in any trade-off between speed and quality I choose quality - my usage volumes are low and I'd rather wait for a good result;
    • loading as much as I can into VRAM, leaving 1-3 GB for the system and context (see the sketch below).
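
    A minimal sketch of what that looks like with llama.cpp (the paths, model name, and layer count are placeholders; -ngl sets how many layers are offloaded to the GPU):

    ```bash
    # Hypothetical Q6_K model; raise -ngl until VRAM usage leaves ~1-3 GB of headroom.
    llama-cli -m ~/models/some-32b-q6_k.gguf \
      -ngl 40 \
      -c 8192 \
      -p "Hello"
    ```
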
  • Haven't heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription:
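
    Something like this, for instance (the model path is a placeholder; the binary is whisper-cli in recent builds, main in older ones, and it expects 16 kHz mono WAV input):

    ```bash
    # Convert the recording, then transcribe it; -otxt writes recording.wav.txt.
    ffmpeg -i recording.mp3 -ar 16000 -ac 1 recording.wav
    ./whisper-cli -m models/ggml-base.en.bin -f recording.wav -otxt
    ```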

    The underlying Whisper models are MIT-licensed.

    Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
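
    For example (the model name is a placeholder, and the exact binary depends on your llama.cpp build):

    ```bash
    # Pass the transcript to an instruct model and ask for a summary.
    llama-cli -m ~/models/your-model.gguf \
      -p "Summarise the following transcript:
    $(cat recording.wav.txt)"
    ```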

    You can also write a small bash/python script to make the process a bit more automatic.
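
    A minimal sketch of such a script (all paths and model names are placeholders):

    ```bash
    #!/usr/bin/env bash
    # Usage: ./summarise.sh recording.mp3
    set -e
    wav="${1%.*}.wav"
    ffmpeg -i "$1" -ar 16000 -ac 1 "$wav"   # whisper.cpp wants 16 kHz mono WAV
    ./whisper-cli -m models/ggml-base.en.bin -f "$wav" -otxt
    llama-cli -m ~/models/your-model.gguf \
      -p "Summarise the following transcript:
    $(cat "$wav.txt")"
    ```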

  • Because we have tons of ground-level sensors, but not a lot in the upper layers of the atmosphere, I think?

    Why is this important? Weather processes are usually modelled as a set of differential equations, and you need to know the boundary conditions in order to solve them and obtain the state of the entire atmosphere. The atmosphere has two boundaries: the lower one, which is the planet's surface, and the upper one, which is where the atmosphere ends. Since we don't seem to have a lot of data from the upper layers, that reduces the quality of all predictions.
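
    As a toy illustration (a one-dimensional caricature, not a real weather model): take a single column of atmosphere with height $z$ from the surface ($z = 0$) to the top ($z = H$), and some quantity $u(z, t)$ evolving as

    $$\frac{\partial u}{\partial t} = \kappa \frac{\partial^2 u}{\partial z^2}, \qquad u(0, t) = u_{\text{surface}}(t), \qquad u(H, t) = u_{\text{top}}(t).$$

    The equation alone doesn't determine the solution; you need both boundary values. We measure $u_{\text{surface}}$ densely, but if $u_{\text{top}}$ is poorly known, its error leaks into the whole column no matter how good the solver is.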

  • If by config prompt you mean the system prompt, then yes: hijacking it works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) found that system/user roles don't matter much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.

  • xkcd.com is best viewed with Netscape Navigator 4.0 or below on a Pentium 3±1 emulated in Javascript on an Apple IIGS at a screen resolution of 1024x1. Please enable your ad blockers, disable high-heat drying, and remove your device from Airplane Mode and set it to Boat Mode. For security reasons, please leave caps lock on while browsing.

  • CVEs are constantly found in complex software; that's why security updates are important. If not these, it would have been others a couple of weeks or months later. And government users can't exactly opt out of security updates, even if they come with feature regressions.

    You also shouldn't keep using software with known vulnerabilities. You can find a maintained fork of Chromium with continued Manifest V2 support or choose another browser like Firefox.

  • Very cool and impressive, but I'd rather be able to share arbitrary files.

    And it looks like you can only send images in DMs, not in groups/forums.

  • If your CPU isn't ancient, it's mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
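
    A rough back-of-the-envelope way to see it (round numbers, not benchmarks): generating each token reads essentially all the model weights once, so

    $$\text{tok/s} \lesssim \frac{\text{memory bandwidth}}{\text{model size in bytes}}.$$

    An 8B model at Q5 is roughly 5-6 GB: at ~300 GB/s of VRAM bandwidth that caps out around 50-60 tok/s, on ~60 GB/s dual-channel DDR5 around 10 tok/s, and on swap far lower still.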

    8x7B is Mixtral, yeah.

  • Mostly via terminal, yeah. It's convenient when you're used to it - I am.

    Let's see, my inference speed now is:

    • ~60-65 tok/s for an 8B model in Q5_K/Q6_K (entirely in VRAM);
    • ~36 tok/s for a 14B model in Q6_K (entirely in VRAM);
    • ~4.5 tok/s for a 35B model in Q5_K_M (16/41 layers in VRAM);
    • ~12.5 tok/s for an 8x7B model in Q4_K_M (18/33 layers in VRAM);
    • ~4.5 tok/s for a 70B model in Q2_K (44/81 layers in VRAM);
    • ~2.5 tok/s for a 70B model in Q3_K_L (28/81 layers in VRAM).

    As for quality, I try to avoid quantisation below Q5, or at the very least below Q4. I also don't see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.

  • Have been using llama.cpp, whisper.cpp, and Stable Diffusion for a long while (most often the first one). My "hub" is a collection of bash scripts and a running SSH server.
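
    For example, a remote one-off generation can be as simple as (the hostname, path, and prompt are placeholders):

    ```bash
    # Run inference on the home machine from anywhere, over SSH.
    ssh homeserver 'llama-cli -m ~/models/your-model.gguf -p "Translate to English: ..."'
    ```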

    I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.

    I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don't rent hardware - don't want any data to leave my machine.

    My use isn't intensive enough to warrant measuring energy costs.

  • I see!

    And it was a stable OS version, not a beta or something? Those are the worst kind of bugs. Hopefully manufacturers will start formally verifying hardware and firmware as standard practice in the future.

  • Other than what I said in the other reply:

    > I live in the USA so getting one would be problematic but I hear perhaps not entirely impossible for me.

    Looks like it has a US release? If you're unsure or getting a European version, double-check it's compatible with American wireless network frequencies &c. Specific operators might also have their own shenanigans.

    > Do you know how it compares to e.g. Fairphone?

    Nope, never tried Fairphone.

  • Very solid, I think (except for water protection, but my previous OnePlus didn't have good water protection either, and I'm careful enough).

    I don't tend to use the glyphs or the default launcher (and therefore its special widgets that only work there; the ability to keep apps in folders on my main screen while hiding them from the app menu matters more to me than a handful of widgets, so Neo Launcher it is).

    A recent OS update added configurable swap (up to 8GB), calling it "RAM booster". I don't use it, but if you want to run a local LLM (or rather an SLM), you could try making use of it - as long as you figure out how to keep the model itself in main RAM rather than in the swap.
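
    If you go the llama.cpp-in-Termux route, the RAM side might look like this (the model path is a placeholder; the --mlock flag is real, but whether Android lets you pin that much memory is another question):

    ```bash
    # --mlock asks the OS to keep the weights resident in RAM
    # instead of letting them be paged out to swap.
    ./llama-cli -m models/some-3b-q5_k_m.gguf --mlock -p "Hello"
    ```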

    I like the battery life (or maybe it's just because it's the first phone where I started charging at 20% and stopping at 80% semi-consistently).

    Termux still works, despite newer Android versions becoming more hostile to apps executing binaries they didn't ship with.

    One thing I miss from OnePlus is the ability to deny some apps network access entirely. (I think it was removed in later versions of OxygenOS?)

  • I was also a OnePlus user - now switched to the Nothing Phone (2).