Survey shows most people wouldn't pay extra for AI-enhanced hardware | 84% of people said no

Companies are going all-in on artificial intelligence right now, investing millions or even billions in the field while slapping the AI initialism on their products, even when doing so seems strange and pointless.

Heavy investment and increasingly powerful hardware tend to mean more expensive products. To find out whether people would be willing to pay extra for hardware with AI capabilities, TechPowerUp put the question to its forum users.

The results show that over 22,000 people, a massive 84% of the overall vote, said no, they would not pay more. More than 2,200 participants said they didn't know, while just under 2,000 voters said yes.

162 comments
  • That's kind of abstract. Like, nobody pays purely for hardware. They pay for the ability to run software.

    The real question is, would you pay $N to run software package X?

    Like, go back to 2000. If I say "would you pay $N for a parallel matrix math processing card", most people are going to say "no". If I say "would you pay $N to play Quake 2 at resolution X and fps Y and with nice smooth textures," then it's another story.

    I paid $1k for a fast GPU so that I could run Stable Diffusion quickly. If you asked me "would you pay $1k for an AI-processing card" and I had no idea what software would use it, I'd probably say "no" too.

    • Yup, the answer is going to change real fast when the next Oblivion with NPCs you can talk to needs this kind of hardware to run.

      • I'm still not sold that dynamic text generation is going to be the major near-term application for LLMs, much less in games. Like, don't get me wrong, it's impressive what they've done. But I've also found it to be the least practically useful of the LLM model categories.

        Like, you can make real, honest-to-God solid usable graphics with Stable Diffusion. You can do pretty impressive speech generation in TortoiseTTS. I imagine that someone will make a locally-runnable music-generation model and software at some point if they haven't yet; I'm pretty impressed with what the online services do there.

        I think that there are a lot of neat applications for image recognition; the other day I wanted to identify a tree and seedpod. Someone hasn't built software to do that yet (that I'm aware of), but I'm sure that they will; the ability to map images back to text is pretty impressive (sketch below). I'm also amazed by the AI image upscaling that Stable Diffusion can do, and I suspect that there's still room for a lot of improvement there, as that's not the main goal of Stable Diffusion. And once someone has done a good job of building a bunch of annotated 3D models, I think that there's a whole new world of 3D.

        I will bet that before we see that becoming the norm in games, we'll see LLMs regularly used for either pre-generated or in-game speech synthesis, so that characters can speak lines that might be procedurally generated and aren't just static pre-recorded samples, but aren't necessarily written by an LLM. Like, it's not practical to have a human voice actor cover, in static recorded speech, every possible phrase you might want an in-game character to say.
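
        As a concrete version of the "map images back to text" point: zero-shot image classification with a pretrained CLIP model already gets close to the tree-and-seedpod use case. This is a minimal sketch, assuming the Hugging Face transformers library; the candidate labels and the mystery_tree.jpg filename are made up for illustration.

            # Zero-shot "what is in this picture?" with CLIP: score one image
            # against a handful of free-text labels and print the probabilities.
            from PIL import Image
            from transformers import CLIPModel, CLIPProcessor

            model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
            processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

            labels = ["an oak tree", "a maple tree", "a pine tree", "a sweetgum seedpod"]
            image = Image.open("mystery_tree.jpg")  # hypothetical photo

            inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
            probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
            for label, p in zip(labels, probs.tolist()):
                print(f"{label}: {p:.2f}")

        Whether that counts as "software to identify a tree" is debatable, but the image-to-text building block is already sitting on the shelf.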

    • This. Apple is doing it the right way, avoiding the term AI and instead focusing on what benefits it brings in iOS 18. Other companies need to figure out what problem people need solved and what AI would do to solve it. Instead they're trying to cram it into everything, and people are largely indifferent to it.

  • Depends on what kind of AI enhancement. If it's just more features nobody needs that solve no problem, it's a no-brainer. But for computer graphics, for example, DLSS is a feature people do appreciate, because it makes sense to apply AI there. Who doesn't want faster and perhaps better graphics by using AI rather than brute force, which also saves on electricity costs?

    But that isn't the kind of thing most survey respondents would even think of, since the benefit is readily apparent and doesn't even need to be explicitly sold as "AI". They're most likely thinking of the kind of product where the manufacturer put an "AI powered" sticker on it because their stakeholders told them it would increase sales, or because it let them overstate the product's value.

    Of course people are going to reject white-collar scams if they think that's what "AI enhanced" means. If legitimate use cases with clear advantages appear, they will speak for themselves, and I don't think people would be opposed. But obviously there are a lot more companies that want to ride the AI wave than there are legitimate use cases, so there will be quite a bit of snake oil sold.

    • Well, I think a lot of these CPUs come with a dedicated NPU. I don't know if it would be more efficient than the Tensor cores on an Nvidia GPU, for example, though.

      Edit: whatever NPU they put in does have the advantage of being able to access your full CPU RAM, though, so I could see it being kinda useful for things other than custom Zoom background effects.

      • But isn't RAM slower than a GPU's VRAM? Last year people were complaining that local models had suddenly become very slow on the same GPU, and it turned out a new Nvidia driver had automatically enabled a setting that lets the GPU spill into system RAM when VRAM fills up. That annoyed people trying to run bigger models, since a crash you can retry with lower settings is preferable to the longer generation times that regular RAM adds.
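
        A rough sketch of that gap, assuming PyTorch on a CUDA GPU (the matrix size and iteration count are arbitrary): the same matmul is timed once with the weight resident in VRAM and once with the weight copied over from pinned host RAM on every step, which roughly mimics what happens when a model spills out of VRAM.

            # Time one matmul step with the weight in VRAM vs. fetched from
            # system RAM over PCIe on every step.
            import time
            import torch

            assert torch.cuda.is_available(), "needs a CUDA GPU"

            n = 8192  # arbitrary size, roughly 256 MB per fp32 matrix
            x = torch.randn(n, n, device="cuda")
            w_vram = torch.randn(n, n, device="cuda")  # weight kept in VRAM
            w_ram = w_vram.cpu().pin_memory()          # same weight in host RAM

            def bench(fn, iters=20):
                torch.cuda.synchronize()
                start = time.perf_counter()
                for _ in range(iters):
                    fn()
                torch.cuda.synchronize()
                return (time.perf_counter() - start) / iters

            t_vram = bench(lambda: x @ w_vram)
            t_ram = bench(lambda: x @ w_ram.to("cuda", non_blocking=True))
            print(f"weight in VRAM:          {t_vram * 1e3:.1f} ms/step")
            print(f"weight pulled from RAM:  {t_ram * 1e3:.1f} ms/step")

        The extra PCIe copy shows up directly in the per-step time, which is the same kind of overhead people were seeing once the driver started spilling into system RAM instead of failing with an out-of-memory error.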

  • Personally, I would choose a processor with AI capabilities over one without, but I would not pay more for it.

  • They'll pay for it. When the tech companies decide it's a thing to make money off and advertise it, all the good ants will buy, buy, buy, and the rest of the time they will work, work, work for it.
