38 comments
  • Yes, I agree, and if it must run a neural network it could do it on the GPU; an NPU is not necessary.

    • Someone with the expertise should correct me if I am wrong; it's been 4-5 years since I learnt about NPUs during my internship, so I am very rusty:

      You don't even need a GPU if all you want to do is to run - i.e. perform inference with - a neural network (abbreviating it to NN). Just a CPU would do if the NN is sufficiently lightweight. The GPU is only needed to speed up the training of NNs.

      The thing is, the CPU is a general-purpose processor, so it won't be able to run the NN optimally / as efficiently as possible. Imagine you want to do something that requires the NN and, as a result, you can't do anything else on your phone / laptop (it won't be a problem for desktops with GPUs, though).

      Where the NPU really shines is when there are performance constraints on the model: when it has to be fast (specifically, run in real time), lightweight and memory-efficient. Use cases include mobile computing and IoT.

      In fact, there's news about live translation on Apple AirPods. I think this may be the perfect scenario for using NPUs - ideally housed within the earphones directly, but if not, within a phone.

      Disclaimer: I am only familiar with NPUs in the context of "old-school" convolutional neural networks (boy, tech moves so quickly). I am not familiar with NPUs for transformers - and LLMs by extension - but I won't be surprised if NPUs have been adapted to work with them.
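      To make the "just a CPU would do" point concrete, here's a toy sketch (my own illustration, not any real NPU workload): inference with a small NN is nothing more than a few matrix multiplies plus nonlinearities, which plain NumPy on a CPU handles fine.

      ```python
      import numpy as np

      # Toy two-layer network with random weights -- inference is just
      # matmuls and an activation, no GPU/NPU required for something this small.
      rng = np.random.default_rng(0)
      W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
      W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)

      def relu(x):
          return np.maximum(x, 0)

      def forward(x):
          # One hidden layer with ReLU, then a linear output layer.
          return relu(x @ W1 + b1) @ W2 + b2

      x = rng.standard_normal((1, 4))   # a single 4-feature input
      y = forward(x)                    # shape (1, 3): three output scores
      ```

      Where the CPU falls over is doing this millions of times per second for a big model while also running everything else - that's the gap GPUs and NPUs fill.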

  • I mean, even if the NPU's die area can't be replaced by more useful components easily or cheaply, just removing it is sure to save a small amount of power, which equates to a possibly not-so-small amount of heat that needs to be dissipated - and dissipating it takes a not-insignificant amount of power and/or requires slowing the system down. Additionally, the signal pathways could likely be routed to create less interference with each other and less direct heat transfer, which likely means more stability overall.

    Of course, without an otherwise-identical processor minus the NPU to compare against, these things are really difficult to quantify, but they hold for nearly all compact chips on power-sensitive platforms.
