  • Google, Facebook, etc. have been burning money to gain market share and "good will" from users, knowing that once the money faucet stopped, or once they found a way to make money, they'd abuse that market share and squeeze their users for profit.

    Once interest rates increased and the VC infinite money glitch went away (borrow at low interest rates, gamble on companies, repeat), the masks came off and the screws started turning, hard. Anything they can do to monetize everyone involved, they're trying.

    The same story has been playing out with AI, but without the infinite money glitch - just investors desperate for a good bet getting hyped to hell and back. They need adoption, and they need businesses to become dependent on their product. Each of these companies is basically billions in the hole on AI.

    Users, especially technical users, should know not only that the product is failing to live up to the hype, but also that embracing AI is basically turning the other cheek for these companies to have their way with your wallet even faster and more aggressively than they already do with everything else they've given away.

  • Oh hey, I got one of those buttons on my new laptop, which has literally never booted into Windows. Pressing it, Linux says it's "Meta + CTRL" (I think), which is pretty useful; a quick way to check what it actually sends is sketched below. Got the laptop for the good price/performance/build-quality ratio.

    Haven't found a good use for that fancy NPU yet; the XDNA driver only arrived a month or so ago. Perhaps for use with Upscayl or something actually useful.
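
    For the key question, a minimal sketch of how to check what the button actually emits, assuming python-evdev is installed and that /dev/input/event3 happens to be the keyboard device (both are assumptions - list your devices with "python -m evdev.evtest" first):

        # Read raw key events to see which keycodes the "AI" button sends.
        # Assumes python-evdev (pip install evdev) and that /dev/input/event3
        # is the keyboard -- adjust the path for your machine.
        # Usually needs root or membership in the 'input' group.
        from evdev import InputDevice, categorize, ecodes

        dev = InputDevice("/dev/input/event3")  # hypothetical device node
        print(f"Listening on {dev.name} -- press the button, Ctrl+C to stop")

        for event in dev.read_loop():
            if event.type == ecodes.EV_KEY:
                # A Copilot-style button typically shows up as a combination
                # of ordinary KEY_* codes (e.g. a modifier pair), not a
                # single dedicated keycode.
                print(categorize(event))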

  • Gen AI should be private, secure, local, and easy for its users to train to fit their own needs. The closest thing to this at the moment seems to be Kobold.
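
    As a concrete illustration of "private and local", here is a minimal sketch assuming llama-cpp-python (the same llama.cpp family that KoboldCpp builds on) and a GGUF model already downloaded to disk; the file name and parameters are placeholders, not recommendations:

        # Fully local text generation: nothing leaves the machine.
        # Assumes llama-cpp-python (pip install llama-cpp-python) and a GGUF
        # model file on disk; the path below is a placeholder.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/some-7b-instruct.Q4_K_M.gguf",  # hypothetical file
            n_ctx=4096,   # context window
            n_threads=8,  # CPU threads; tune to your hardware
        )

        out = llm(
            "In one sentence, why does local inference keep prompts private?",
            max_tokens=64,
        )
        print(out["choices"][0]["text"])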

  • AI bro here. The reason their shit ain't selling is that it's useless for any actual AI application. AI runs on GPUs; even an "AI CPU" will be so much slower than what an Nvidia GPU can do. Of course no one buys it. Nvidia's GPUs still sell very well, and not just because of the gamers.

    • ah yes the only way to make LLMs, a technology built on plagiarism with no known use case, “useful for any actual ai application” is to throw a shitload of money at nvidia. weird how that works!

    • A lot of these systems are silly because they don't have much RAM, and things don't begin to get interesting with LLMs until you can run 70B models and above.

      The Mac Studio has seemed like an affordable way to run 200B+ models, mainly due to its unified memory architecture (compare getting 512GB of RAM in a Mac Studio to building a machine with enough GPUs to get there).

      If you look around, the industry in general is starting to move towards that sort of design now:

      https://frame.work/desktop

      The Framework Desktop, for instance, can be configured with 128GB of RAM (~$2k) and should be good for handling 70B models while maintaining something that looks like efficiency (some napkin math on the memory requirements follows at the end of this thread).

      You will not train or fine-tune models with these setups (I think you would still benefit from the raw power GPUs offer), but the main sticking point in running local models has been VRAM and how much it costs to get it from AMD / Nvidia.

      That said, I only care about all of this because I mess around with a lot of RAG things. I am not a typical consumer.

      • ah yes the only way to make LLMs, a technology built on plagiarism with no known use case, “not silly” is to throw a shitload of money at Apple or framework or whichever vendor decided to sell pickaxes this time for more RAM. yes, very interesting, thank you, fuck off
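
      The rough napkin math behind the RAM claims above - a sketch only, counting weights alone (KV cache, activations and runtime overhead come on top), with ~4.5 bits/weight as an approximation for Q4-style quantization:

          # Back-of-the-envelope RAM needed just to hold model weights at
          # common precisions. Weights only: KV cache and runtime overhead
          # are extra.
          def weight_gb(params_billions: float, bits_per_weight: float) -> float:
              return params_billions * 1e9 * bits_per_weight / 8 / 1e9  # decimal GB

          for params in (70, 200):
              for label, bits in (("fp16", 16), ("q8", 8), ("~q4", 4.5)):
                  print(f"{params}B @ {label:>4}: ~{weight_gb(params, bits):5.0f} GB")

          # Roughly: 70B at ~4.5 bits/weight is ~40 GB of weights, which is why
          # a 128 GB unified-memory box is plausible for 70B, while 200B+ at
          # fp16 (~400 GB) is what pushes you toward a 512 GB Mac Studio.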
