We hate AI because it's everything we hate

Tech companies wanted "AI" to represent a bright future. Now it represents every annoyance in our daily lives.

I don't hate AI. AI didn't do anything. The people who use it wrong are the ones I hate. You don't sue the knife that stabbed you; the human behind it was the problem.
While true to a degree, I think the fact is that AI is just much more complex than a knife, and clearly has perverse incentives, which cause people to use it "wrong" more often than not.
Sure, you can use a knife to cook just as you can use a knife to kill. But whereas society encourages cooking and legally and morally discourages murder, society also encourages any shortcut that gets you to an end goal for the sake of profit, without caring about personal growth or about the overall state of the world if everyone takes that same shortcut. And AI technology is designed with the intent of being a shortcut rather than just a tool.
The reason people use AI in so many damaging ways is not just that the tool can be used that way, or that some people don't care about others; it's that the tool is made with the intention of offloading your cognitive burden, doing things for you, and creating what can be used as a final product.
It's like if generative AI models for image generation could only fill in colors on line art, nothing more. The scope of the harm they could cause is very limited, because you'd always require line art of the final product, which would require human labor, and thus prevent a lot of slop content from people not even willing to do that, and it would be tailored as an assistance tool for artists, rather than an entire creation tool for anyone.
Contrast that with GenAI models that can generate entire images, or even videos, and they come with the explicit premise and design of creating the final content, with all line art, colors, shading, etc, with just a prompt. This directly encourages slop content, because to have it only do something like coloring in lines will require a much more complex setup to prevent it from simply creating the end product all at once on its own.
We can even see how the cultural shifts around AI happened in line with how UX changed for AI tools. The original design for OpenAI's models was on "OpenAI Playground," where you'd have this large box with a bunch of sliders you could tweak, and the model would just continue the previous sentence you typed if you didn't word it like a conversation. It was designed to look like a tool, a research demo, and a mindless machine.
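For context, the sliders in that Playground interface mapped directly to sampling parameters of the completions-style API. A minimal sketch of the kind of request the old interface built up (the parameter names are the real API knobs; the prompt, model name, and values here are illustrative only):

```python
# Sketch of the knobs the old OpenAI Playground exposed as sliders.
# The parameter names are real completions-API parameters; the prompt
# and the specific values are illustrative, not from the source.
playground_request = {
    "model": "text-davinci-003",              # a completion (not chat) model
    "prompt": "The knife is a tool because",  # the model just continues this text
    "temperature": 0.7,        # randomness of sampling
    "top_p": 1.0,              # nucleus-sampling cutoff
    "max_tokens": 64,          # length cap on the continuation
    "frequency_penalty": 0.0,  # discourage repeating the same tokens
    "presence_penalty": 0.0,   # discourage repeating the same topics
}

# A chat UI hides every one of these behind defaults; the user only
# ever sees the prompt box.
hidden_knobs = [k for k in playground_request if k not in ("model", "prompt")]
print(hidden_knobs)
```

The point being: none of those knobs went away when the interface became a chat box; they were just buried behind defaults.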
Then, they released ChatGPT, and made it look more like a chat, and almost immediately, people began to humanize it, treating it as its own entity, a sort of semi-conscious figure, because it was "chatting" with them in an interface similar to how they might text with a friend.
And now, ChatGPT's homepage is presented as just a simple search box, and lo and behold, suddenly the marketing has shifted to using ChatGPT not as a companion, but as a research tool (e.g. "deep research") and people have begun treating it more like a source of truth rather than just a thing talking to them.
And even for models with extreme complexity in how you could manipulate them, and many possible use cases, interfaces are made as sleek and minimalistic as possible, hiding away any ability you might have to influence the result with real, human creativity.
The tools might not be "evil" on their own, but when interfaces are designed the way they are, marketing language is used the way it is, and the profit motive incentivizes using them in the laziest way possible, bad outcomes are not just a side effect; they are a result by design.
This is a fantastic description of Dark Patterns. Basically all the major AI products people use today are rife with them, in insidiously subtle ways. Your point about minimal UX is a great example. Just because the interface is minimal does not mean it should be, and OpenAI ditched its slider-driven interface even though it gave the user far more control over the product.
But when you promote the knife as medicine rather than as a weapon, that's when the shit turns sideways.
Scalpel: Am I a joke to you?
[Allegorical confusion]
Can your confusion be treated with a scalpel?
I dunno, is it heavy and blunt?
It's AI... So.... Yeah.
I dunno, I like AI for what it's good for. The luddite argument doesn't particularly sway me; my clothes, furniture, car, etc. are all machine-made. Machine-made stuff is everywhere; the handmade hill to die on was centuries back, during the industrial revolution.
The anti-capitalist arguments don't sway me when specifically applied to AI. The corporations are going to do bad things? Well, yeah! It's not "AI bad", it's "corporate bad".
The ethical arguments kinda work. Deep fakes are bad, and I don't think the curios AI provides tip the scales when weighed against the harm of deepfakes.
TL;DR: AI is a heavy, blunt tool.
I think my point is, the consumer versions of AI, like chat bots, are pretty shit, and they're making us dumber. They're also kind of dangerous, of which we've already seen numerous examples.
I'm also not interested as a programmer. I'm not looking to bug hunt as a profession. I want to make my own bugs, dammit! That's the fun part! To create something! Not fix something a machine made until it's ready to ship. How boring.
Ok, but there's a distinction between "you don't see the value in it" and "there is no value in it." The first means: congrats, don't use it, and leave everyone else alone, unless you want to sound like Ben Shapiro claiming hip-hop isn't music. The second is much harder to demonstrate, particularly as its value has already been demonstrated to many people. Just as an example, it turned a blank page into a covering letter that I could edit into what I wanted; breaking through blank-page paralysis = value. Maybe it's very little value, but it's still value. And that's not the only use case for generative AI, or the best one.
Back in my day, calculators were making us dumber. And to be clear, I would accept that mental numeracy is lower now, but not that we're dumber for having them. Luddite arguments are not convincing; I suppose I'm still hearing "calculators are making us dumber."
My use of a calculator is not making me dumber, just faster. I can do the calculations, just not as fast. And I have to know what needs to be calculated in order to get the correct results. AI is more like "give me a budget that works for me". I fear AI will allow us to bypass having to learn anything and prevent us from thinking at all. The leap is too great.
It could/should definitely be used as a tool, but IMO not used to create whole solutions.
Scaffolding = fine, I guess, but I think it can be a slippery slope if abused.
I have an example of actual people using AI to generate a vacation itinerary for them, and they just printed it and followed it on their vacation. Could potentially be dangerous, first of all. And it's literally following the will of a computer and experiencing its version of your vacation instead of doing a little bit of research into what you want to do and see and experience.
I don't know. I just have a bad gut feeling about it.
It's not Luddite either; I am usually one to adopt tech very quickly and be optimistic about new shit. For example, I trust my car to keep pace with other cars and automatically brake when the car in front slows down. This is new tech for me as of this year. But what I'm seeing right now with AI induces strong skepticism in me. I know actual people who have lost whole projects to vibe coding.
And there are numerous examples of AI being blatantly yet confidently incorrect, being racist, coercing people toward suicide, all kinds of shit. Having its input be based on human output just doesn't feel good. That's not a good feedback loop. It's not conducive to independent thought, at its very core.
Correct, calculators can make you quicker... Just like they made me quicker with my cover letter. A pocket calculator would make my writing a cover letter slower though. Correct tool, correct job. I will accept for some jobs there isn't an appropriate calculator yet.
Let's reframe the issue with your car braking for you. You don't see potential dangers in trusting a machine with acceleration and braking? Tesla is screaming that you should.
But for cruise control you have accepted certain dangers and for AI you haven't. That's fine, don't use it. For my own experience, the car can accelerate but the brakes are mine always, for if it does weird things with the power.
It is luddite though. "Tech is potentially dangerous" is luddite. I agree, it is potentially dangerous, so are knives, cars, etc. but we accept potential dangers in society, I would like them better regulated (deep fakes are bad yo) but I wouldn't throw away scalpels because knife crime is on the rise.
The thing they created hates you. Trust me, it does.
The thing they created was a mathematical algorithm.
Trust me, it has no feelings.
Why do you say that? I'm not disagreeing. Even if you're just being rhetorical/trolling, where's that coming from? Because...actually yeah, I do get that impression sometimes and it's weird as hell.