374 comments
  • As I mentioned in another post, about the same topic:

    Slapping the words “artificial intelligence” onto your product makes you look like one of those shady used-car salesmen: at best it’s misleading, at worst it’s actually true but poorly done.

  • "AI" is certainly a turn-off for me. I would ask a salesman, "do you have one that doesn't have that?" Here's why:

    1. LLMs are wrongness machines. They have an almost miraculous ability to string words together into coherent sentences, but when those sentences have no basis in truth, it's nothing but an extremely elaborate and expensive party trick. I don't want actual services like web search replaced with elaborate party tricks.
    2. In a lot of cases it's being used as a buzzword to mean basically anything computer-controlled or networked. Last I looked, they were using the word "smart" for that. A clothes dryer that senses the humidity of the exhaust air to know when the clothes are dry isn't any more "AI" than my 90's microwave that senses the puff of steam from a bag of popcorn. This is the kind of outright dishonest marketing I'd like to see fail so spectacularly that people in the advertising business go missing over it.
    3. I already avoided "smart" appliances and will avoid "AI" appliances for the same reasons: The "smart" functionality doesn't actually run locally, it has to connect to a server out on the internet to work, which means that while that server is still up and offering support to my device, I have a hole in my firewall. And then they'll stop support ten minutes after the warranty expires and the device will no longer work. For many of these devices there's no reason the "smart" functionality couldn't run locally on some embedded ARM chip or talk to some application running on a PC that I own inside my firewall, other than "then we don't get your data."
    4. AI is apparently consuming more electricity than air conditioning. In fact, I'm not convinced that power consumption isn't the selling point they're pushing at board meetings. "It'll keep our friends in the pollution industry in business."
  • In your own words, tell me why you're calling today.

    My medication is in the wrong dosage.

    You need to refill your medication, is that right?

    No, my medication is in the wrong dosage. It's supposed to be 10s and it came as 20s.

    You need to change the pharmacy where you're picking up your medication?

    I need to speak to a human please.

    I understand that you want to speak to an agent, is that right?

    Yes.

    Chorus, 5x. (Please give me your group number, or dial it in at the keypad. For this letter press that number, for that letter press this number. No, I'm driving, just connect me with an agent so I can verify over the phone.)

    I'm sorry, I can't verify your identity. Please collect all your paperwork and try calling again. Click.

    Why ever would we be mad?

    • I went through a McDonald’s drive-thru the other day and had the most insane experience. For the context of this anecdote, I don’t do that often, so, what I experienced was just weird.

      While not quite “AI,” the first thing that happened was an automated voice yelling at me, “are you ordering using your mobile app today?”

      There’s like three menu-speaker boxes, and due to where the car in front of me stopped, I’m like in between the last two. The other speaker begins to yell, “Are you ordering using your mobile app today?”

      The person running drive-thru mumbles something about pull around. I do. Pass by the other menu “Are you ordering using your mobile app today?”

      Dude walks out with a headset and starts taking orders from each car using a tablet.

      I have no idea what is happening. I can’t even see a menu when the guy gets around to me. Turns the tablet around at me.

      I realized that I was indeed ordering using the mobile app today.

    • To be fair, this is not new, unless you're counting all answering machines as AI.

      • Hardly. It used to be natural-language dictation and a decision tree. Now they're trying to use LLM training to automatically pick up more edge cases, and it's pretty much b*******.

  • Unsurprisingly. I have uses for LLMs and find them helpful, but even I don't see why we should have a Copilot button on new keyboards and mice, or on LinkedIn's post input form.

  • I don't know anyone who is actively looking for products that have "AI".

    It's like companies drank their own Kool-Aid and think that because they want AI, consumers do too. I have no need for AI. My parents don't even understand what it is. I can't imagine Gen Z gives a hoot.

  • AI in consumer devices at this point stands for data harvesting, wonky functionality, and questionable usefulness. No wonder nobody wants that crap.

  • AI is garbage.

    • AI is just an excuse to replace your employees with an objectively less reliable computer program, one that somehow statistically beats us at logic.

      • I've used LLMs a lot over the past couple of years. Pro tip: use them a lot and learn the models; they look much more intelligent as you, the user, get better. Obviously, if you prompt "Write me a shell script to calculate the meaning of life, make my coffee, and scratch my nuts before 9AM," it will be a grave disappointment.

        If you first design a ball fondling/scratching robot, use multiple instances of LLMs to help you plan it out, etc. then you may be impressed.

        I think one of the biggest problems is that most people interacting with LLMs forget they are running on computers: they are digital and not like us. You can't make assumptions the way you can with humans. Even with humans, doing that usually gets you something you didn't want because you weren't clear enough. We are horrible at giving instructions, and this is something I hope AI will help us learn to do better, because ultimately bad instructions or incomplete information can't lead to determining anything real. Computers are logic machines. If you tell a computer to go ride a bike, at best it'll do all the work to embody itself in a robot, buy a bike, and ride it. But wait, you don't even know it did, because you never specified that it should record the ride...

        Very few of us are good at giving computers clear instructions, some of the time. I have also found that just forcing models to reason in context is powerful. You have to know to tell it to "use a drill-down, tree-style approach to problem solving; use reflection and discussion to explore and find the optimal solution to reasoning through the problem." It might still give you bad results; that is why you have to experiment. It's a lot of fun if you really let your thoughts run wild, and it takes a lot of creative thinking right now to get the most out of these models. They should all be 110% open source and free for all. BTW, Gemini 1.5, Claude, and Llama 3.1 are all great, and Llama you can run locally or on a rented GPU VM. OpenAI I'm on the fence about, but given who's involved over there, I wouldn't say I trust them, especially since they want to pursue regulatory capture.
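        The "force it to reason in context" trick above amounts to prepending a fixed reasoning instruction to every concrete task. A minimal sketch of that idea, assuming nothing beyond the instruction wording quoted in the comment (the `build_prompt` helper and the example task are hypothetical, and sending the result to a local Llama or any other model is left out):

```python
# Sketch of the "force the model to reason in context" prompting style
# described above. Only builds the prompt string; which model you send
# it to (e.g. a locally run Llama 3.1) is up to you.

REASONING_PREAMBLE = (
    "Use a drill-down, tree-style approach to problem solving. "
    "Use reflection and discussion to explore and find the optimal "
    "solution to reasoning through the problem."
)

def build_prompt(task: str) -> str:
    """Prepend the fixed reasoning instructions to a concrete task."""
    return f"{REASONING_PREAMBLE}\n\nTask: {task}"

prompt = build_prompt("Plan the components of a humidity-sensing dryer controller.")
print(prompt)
```

        The point is that the reasoning instruction is boilerplate you reuse, while the task itself still has to be specific and complete; experimenting with the preamble wording per model is where the real work is.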
