Is It Just Me?
The worst is in the workplace. When people routinely tell me they looked something up with AI, I now have to assume I can't trust what they say any longer, because there is a high chance they are just repeating some AI hallucination. It is really a sad state of affairs.
I am way less hostile to GenAI (as a tech) than most, and even I've grown to hate this scenario. I am a subject matter expert on some things, and I've still had people waste my time making me prove their AI hallucinations wrong.
I've started seeing large AI generated pull requests in my coding job. Of course I have to review them, and the "author" doesn't even warn me it's from an LLM. It's just allowing bad coders to write bad code faster.
Do you also check if they listen to Joe Rogan? Fox news? Nobody can be trusted. AI isn't the problem, it's that it was trained on human data -- of which people are an unreliable source of information.
AI also just makes things up. Like how RFKJr's "Make America Healthy Again" report cites studies that don't exist and never have, or literally a million other examples. You're not wrong about Fox news and how corporate and Russian backed media distorts the truth and pushes false narratives, and you're not wrong that AI isn't the problem, but it is certainly a problem and a big one at that.
To take an older example: there were smaller image recognition models trained on correct data to differentiate between dogs and blueberry muffins, but they obviously still made mistakes on the test data set.
AI does not become perfect if its data is.
Humans do make mistakes, make stuff up, and spread false information. However they generally make considerably less stuff up than AI currently does (unless told to).
Joe Rogan doesn't tell them false domain knowledge 🤷
I feel the same way. I was talking with my mom about AI the other day and she was still on the "it's not good that AI is trained on stolen images, it's making people lazy and taking jobs away from people" stage, which is fair. But I had to explain to her how much one AI prompt costs in energy and resources, how many people mindlessly make hundreds of prompts a day for largely stupid shit they don't need, how AI hallucinates and is actively used by bad actors to spread mis- and disinformation, and how it is literally being implemented into search engines everywhere, so even if you want to avoid it as a normal person, you may still end up participating in AI prompting every single fucking time you search for anything on Google. She was horrified.
There definitely are some net positives to AI, but currently the negatives outweigh the positives and most people are not using AI responsibly at all. I have little to no respect for people who use AI to make memes or who use it for stupid everyday shit that they could have figured out themselves.
The most dystopian shit I have seen recently was when my boyfriend and I went to watch Weapons in the cinema and we got an ad for an AI assistant. The ad is basically this braindead bimbo at a laundromat deciding to use AI to tell her how to wash her clothes instead of looking at the fucking care labels on her clothes and putting two and two together. She literally takes a picture of the label, has the AI assistant tell her how to do it, and then goes "thank you so much, I could have never done this without you".
I fucking laughed in the cinema. Laughed and turned to my boyfriend and said: this is so fucking dystopian, dude.
I feel insane for seeing so many people just mindlessly walking down this path of utter mental decay. Even when you tell them how disastrous it is for the planet, it doesn't compute in their heads, because having a machine think for you is not only convenient. It's also addictive.
You are not correct about the energy use of prompts. They are not very energy intensive at all. Training the AI, however, is breaking the power grid.
Maybe not an individual prompt, but with how many prompts are made for stupid stuff every day, it will stack up to quite a lot of CO2 in the long run.
Not denying that training AI demands way more energy, but that doesn't really matter: manufacturing, training, and millions of people using AI all add up to the same bleak picture long term.
Considering that the discussion about environmental protection has only just started to be taken seriously, and now they come and dump this newest bomb on humanity, it is absolutely devastating that AI has been allowed to run rampant everywhere.
According to this article, 500,000 AI prompts amount to the same CO2 output as a round-trip flight from London to New York.
I don't know how many times a day 500,000 AI prompts are reached, but I'm sure it is more than twice or even thrice. As time moves on it will be much more than that. It will probably outdo the number of actual flights between London and New York in a day. Every day. It will probably also catch up to whatever energy cost it took to train the AI in the first place and surpass it.
Because you know. People need their memes and fake movies and AI therapist chats and meal suggestions and history lessons and a couple of iterations on that book report they can't be fucked to write. One person can easily end up prompting hundreds of times in a day without even thinking about it. And if everybody starts using AI to think for them at work and at home, it'll end up being many, many, many flights back and forth between London and New York every day.
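To put rough numbers on the scaling argument (only the 500,000-prompts-per-flight figure comes from the article; the daily prompt volume below is my own guess):

```python
# Flight-equivalents per day, using the article's figure plus an assumed
# global prompt volume. Both the volume and the result are illustrative only.
prompts_per_flight = 500_000        # article: CO2 of one London-NY round trip
daily_prompts = 2_500_000_000       # assumption: global prompts per day
print(daily_prompts / prompts_per_flight)  # 5000.0 round-trip flights per day
```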
Sam Altman, or whatever the fuck his name is, asked users to stop saying please and thank you to ChatGPT because it was costing the company millions. "Please" and "thank you" are the least power-hungry messages ChatGPT gets, and they're costing the company millions. Probably tens of millions of dollars, if the CEO made a public comment about it.
You're right that training is hella power hungry, but even using gen AI has heavy power costs.
I'm pretty sure it's a product of scale, but also, GPT5 is markedly worse. I heard estimates of 40 watt-hours for a single medium-length response. Napkin math says my motorcycle can travel about a kilometer per single medium-length GPT5 response. Now multiply that by how many people are using AI (anyone going online these days), then multiply that by how many times a day each user causes a prompt. Now multiply that by 365 and we have how much power they're using in a year.
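Spelled out as a script (every input here is a rough guess, not a measured figure):

```python
# Napkin math from the estimates above. All three inputs are assumptions.
wh_per_response = 40          # claimed estimate per medium-length GPT5 reply
users = 500_000_000           # assumed number of regular users
prompts_per_day = 10          # assumed prompts per user per day

wh_per_year = wh_per_response * users * prompts_per_day * 365
print(f"{wh_per_year / 1e12:.0f} TWh per year")  # 73 TWh/year under these guesses
```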
It's important to remember that there's a lot of money being put into A.I. and therefore a lot of propaganda about it.
This happened with a lot of shitty new tech, and A.I. is one of the biggest examples of this I've known about.
All I can write is that, if you know what kind of tech you want and it's satisfactory, just stick to that. That's what I do.
Don't let ads get to you.
First post on a lemmy server, by the way. Hello!
There was a quote about how Silicon Valley isn't a fortune teller betting on the future. It's a group of rich assholes that have decided what the future would look like and are pushing technology that will make that future a reality.
Welcome to Lemmy!
Classic Torment Nexus moment over and over again really
Reminds me of the way NFTs were pushed. I don’t think any regular person cared about them or used them, it was just astroturfed to fuck.
Hello and welcome! Also, thank you for the good advice!
Hello!
It's like Valorant, but much bigger and even worse.
Welcome in! Hope you're finding Lemmy in a positive way. It's like Reddit, but you have a lot more control over what you can block and where you can make a "home" (aka home instance).
Feel free to reach out if you have any questions about anything
My boss had GPT make this informational poster thing for work. It's supposed to explain stuff to customers and is rampant with spelling errors and garbled text. I pointed it out to the boss and she said it was good enough for people to read. My eye twitches every time I see it.
good enough for people to read
wow, what a standard, super professional look for your customers!
I think that's exactly what the author was referring to.
Spelling errors? That’s… unusual. Part of what makes ChatGPT so specious is that its output is usually immaculate in terms of language correctness, which superficially conceals the fact that it’s completely bullshitting on the actual content.
The user above mentioned informational poster so I'm going to assume it was generated as an image. And those have spelling mistakes.
Can't even generate image and text separately smh. People are indeed getting dumber.
FWIW, she asked it to make a complete info-graphic style poster with images and stuff so GPT created an image with text, not a document. Still asinine.
I'm mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it's absolutely the future of all things, it has yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I'm aware of.
We're betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.
I've made several good faith efforts at getting it to produce something valuable or helpful to me. I've done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.
But AI "art" requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact cheker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?
it has yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product
Or a profit. Or hell even one of those things that didn’t suck! It’s critically flawed and has been defying gravity on the coke-fueled dreams of silicon VC this whole time.
And still. One of next year’s fiscal goals is “AI”. That’s all. Just “AI”.
It’s a goal. Somehow. It’s utter insanity.
The goal is "[Replace you money-needing meatsacks with] AI" but the suits don't want to say it that clearly.
Anyone in engineering knows the first 90% of your goal is the easy bit; you'll then spend 90% of your time on the remainder. Same for AI and getting past the uncanny valley with art.
What if the point of AI is to have it create a personal model for each of us, using the vast amounts of our data they have access to, in order to manipulate us into buying and doing whatever the people who own it want but they can't just come out and say that?
I'm sure that's at least part of the idea, but I've yet to see any evidence that it won't also be dog shit at that. It doesn't have the context window or foresight to conceive of a decent plot twist in a piece of fiction despite having access to every piece of fiction ever written. I'm not buying that it would be able to build a psychological model and contextualize 40-plus years of lived experience in a way that could get me to buy a $20 Dubai chocolate bar or drive a Chevy.
It's our own version of The Matrix
There's a monster in the forest, and it speaks with a thousand voices. It will answer any question, and offer insight to any idea. It knows no right or wrong. It knows not truth from lie, but speaks them both the same. It offers its services freely, many find great value. But those who know the forest well will tell you that freely offered does not mean free of cost. For now the monster speaks with a thousand and one voices, and when you see the monster it wears your face.
Not just you. AI is making people dumber. I am frequently correcting the mistakes of colleagues who use it.
My attitude to all of this is I've been told by management to use it so I will. If it makes mistakes it's not my fault and now I'm free to watch old Stargate episodes. We're not doing rocket surgery or anything so who cares.
At some point they'll realise that the AI is not producing decent output and then they'll shut up about it. Much easier they come to that realisation themselves than me argue with them about it.
Luckily no one is pushing me to use AI in any form at this time.
For folks in your position, I fear that they will first go through a round of layoffs to get rid of the people who are clearly using it "wrong" because Top Management can't have made a mistake before they pivot and drop it.
When I was a kid and first realized I was maybe a genius, it was terrifying. That there weren't always gonna just be people smarter than me who could fix it.
Seeing them get dumber is like some horror movie shit.
I don't fancy myself a genius but the way other people navigate things seems to create a strangely compelling case on its own
My pet peeve: "here's what ChatGPT said..."
No.
Stop.
If I'd wanted to know what the Large Lying Machine said, I would've asked it.
It's like offering unsolicited advice, but it's not even your own advice
Hammer time.
"Here's me telling everyone that I have no critical thinking ability whatsoever."
Is more like it
No, it's not just you or unsat-and-strange. You're pro-human.
Trying something new when it first comes out or when you first get access to it is novelty. What we've moved to now is mass adoption. And that's a problem.
These LLMs are automation of mass theft with a good enough regurgitation of the stolen data. This is unethical for the vast majority of business applications. And good enough is insufficient in most cases, like software.
I had a lot of fun playing around with AI when it first came out. And people figured out how to do prompts I can't seem to replicate. I don't begrudge people for trying a new thing.
But if we aren't going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful it's a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.
Regulate? That's what leading AI companies are pushing for: they could handle the bureaucracy, but their competitors couldn't.
The shit just needs to be forced open source. If you steal content from the entire world to build a thinking machine, give back to the world.
This would also crash the bubble and would slow down any of the most unethical for-profits.
Regulate? That's what leading AI companies are pushing for: they could handle the bureaucracy, but their competitors couldn't.
I was referring to this in my comment:
Congress decided not to go through with the AI-law moratorium. Instead they opted to do nothing, which is what AI companies would prefer states do. Not to mention the pro-AI argument appeals to the judgement of Putin, notorious for being surrounded by yes-men and his own state propaganda, and for the genocide of Ukrainians in pursuit of the conquest of Europe.
“There’s growing recognition that the current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path,” OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn. “While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward.”
The shit just needs to be forced open source. If you steal content from the entire world to build a thinking machine, give back to the world.
The problem is unlike Robin Hood, AI stole from the people and gave to the rich. The intellectual property of artists and writers were stolen and the only way to give it back is to compensate them, which is currently unlikely to happen. Letting everyone see how the theft machine works under the hood doesn't provide compensation for the usage of that intellectual property.
This would also crash the bubble and would slow down any of the most unethical for-profits.
Not really. It would let more people get in on it. And most tech companies are already in on it. This wouldn't impose any costs on AI development. At this point the speculation is primarily on what comes next. If open source would burst the bubble, it would have happened when DeepSeek was released. We're still talking about the bubble bursting in the future, so that clearly didn't happen.
The bubble has burst, or rather is currently in the process of bursting.
My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn't work. And I'd say the majority of my clients are now scrambling to recover, or simply to make it out the other end alive. Soon there's going to be nothing left to regulate.
GPT5 was a failure. Rumors I've been hearing say Anthropic's new model will be a failure much like GPT5. The house of cards is falling as we speak. This won't be the complete death of AI, but this is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat and they're getting desperate to find new sources. For a good while they've been quite literally eating each other's feces. They're now starting on Git repos of all things to consume. Codeberg can tell you all about that from this past week. This is why I'm telling people to consider setting up private Git instances and locking that crap down. If you're on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build from end to end didn't work. Nothing produced scales; it's all full of exploits or, in most cases, has zero security measures whatsoever. They all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
The folks I know at both OpenAI and Anthropic don’t share your belief.
Also, anecdotally, I’m only seeing more and more push for LLM use at work.
At the risk of sounding like a tangent, LLMs' survival doesn't solely depend on consumer/business confidence. In the US, we are living in a fascist dictatorship. Fascism and fascists are inherently irrational. Trump, a fascist, wants to bring back coal despite the market naturally phasing coal out.
The fascists want LLMs because they hate art and all things creative. So the fascists may very well choose to have the federal government invest in LLM companies. Like how they bought 10% of Intel's stock or how they want to build coal powered freedom cities.
So even if there are no business applications for LLM technology our fascist dictatorship may still try to impose LLM technology on all of us. Purely out of hate for us, art and life itself. edit: looks like I commented this under my comment the first time
Unfortunately the masses will do as they're told. Our society has been trained to do this. Even those that resist are playing their part.
On the contrary: society has repeatedly rejected a lot of ideas that industries have come up with.
HD DVD, 3D TV, cryptocurrency, NFTs, LaserDiscs, 8-track tapes, UMDs. A decade ago everyone was hyping up how VR would be the future of gaming, yet it's still a niche novelty today.
The difference with AI is that I don't think I've ever seen a supply-side push this strong before. I'm not seeing a whole lot of demand for it from individual people. It's "oh, this is a neat little feature I can use", not "this technology is going to change my life" the way the laundry machine, the personal motor vehicle, the telephone, or the internet did. I could be wrong, but I think that as long as we can survive the bubble bursting, we will come out on the other side with LLMs being a blip on the radar. And one consequence will be that if anyone makes a real AI, they will need to call it something else for marketing purposes, because "AI" will be ruined.
AI's biggest business is (if not already, it will be) surveillance systems sold to authoritarian governments worldwide. Israel is using it in Gaza. It's both used internally and exported as a product by China. Not just cameras on street corners doing facial recognition, but monitoring the websites you visit, the things you buy, the people you talk to. AI will be used on large datasets like these to label people as dissidents, to disempower them financially, and to isolate them socially. And if the AI hallucinates in this endeavor, that's fine. Better to imprison 10 innocent men than to let 1 rebel go free.
In the meantime, AI is being laundered to the individual consumer as a harmless if ineffective toy. "Make me a portrait, give me some advice, summarize a meeting," all things it can do if you accept some amount of errors. But given this domain of problems it solves, the average person would never expect that anyone would use it to identify the first people to pack into train cars.
VR was and is also still a very inaccessible tool for most people. It costs a lot of money and time to even get to the point where you're getting the intended VR experience and that is what it mostly boils down to: an experience. It isn't convenient or useful and people can't afford it. And even though there are many gamers out there, most people aren't gamers and don't care about mounting a VR headset on their cranium and getting seasick for a few minutes.
AI is not only accessible and convenient, it is also useful to the everyday person, if the AI doesn't hallucinate like hell, that is. It has the potential to optimize workloads in jobs with a lot of paperwork, calculations and so on.
I completely agree with you that AI is being pushed very aggressively in ways we haven't seen before and that is because the tech people and their investors poured a lot of money into developing these things. They need it to be a success so they can earn their money back and they will be successful eventually because everybody with money and power has a huge interest in this tool becoming a part of everyday life. It can be used to control the masses in ways we cannot even imagine yet and it can earn the creators and investors a lot of money.
They are already making AI computers. According to some, they will entirely replace the types of computers we are used to today. From what I can understand, it will be preferable to the cloud AI setups we have currently, which are burning our planet to a crisp with the number of data centers needed to keep them active. Supposedly the AI computer will run everything locally on the laptop and will therefore demand fewer resources, but I'm so fucking skeptical about all this shit that I'm waiting to see how much energy a computer with an AI operating system will need to swallow. I'm too tech-ignorant to understand the ins and outs of what this and that means, but we are definitely going to have to accept that AI is here to stay, and the current setup with cloud AI and forced LLMs in every search engine is a massive environmental nightmare. It probably won't stop or change a fucking lick, because people don't give a fuck as long as they are comfortable, and the companies are getting people to use their trash tech just like they wanted, so they won't stop it either.
HDDVDs weren’t rejected by the masses they were a casualty in Sony’s vendetta against the loss of Beta and DAT. Both of which were rejected by industry not consumers (though both were later embraced by industry and Betas even outlasted VHSs). They would have won out for the same reasons that Sony lost the previous format wars (insistence on licensing fees) except this time Sony bought out Columbia and had a whole library of video and a studio to make new movies to exclusively release on their format. Essentially the supply side pushing something until consumers accepted it, though to your point not quite as bad as AI is right now.
8-Tracks and laserdiscs were just replaced by better formats (Compact Cassette and Video CD/DVD respectively). Each of them were also replacements for previous formats like Reel to Reel and CEDs.
UMDs are only gone because flash media got better and because Sony opted to use a cheaper scratch-resistant coating instead of a built-in case for later formats (like Blu-ray). Also, UMDs themselves were a replacement for, or at least inspired by, an earlier format called MiniDisc.
Capitalism’s biggest feat has been convincing people that everything is the next big thing and nothing that has come before is similar when just about everything is just a rinse and repeat, even LLMs… remember when Watson beat Ken Jennings?
See also: Cars, appliances, consumer electronics, movies, food, architecture.
We are ruled by the market and the market is ruled by the lowest common denominator.
People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.
They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I'm not advocating for it, I'm pointing out why people use it.
Treating AI users like a moral failure and disregarding their circumstances does nothing to discourage the use of AI. All you are doing is reinforcing their alienation from anti-AI sentiment.
First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.
It's like eating factory farmed meat. If you have eaten it recently, you know what horrors go into making it. Yet, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off to remain sane after all these years. There is a system at work here, greater than just you and the chicken. It's the industry as a whole exploiting consumer habits. AI users are no different.
Let's go a step further and look at why people are in burnout, are overloaded, are working 3 jobs to make ends meet.
It's because we're all slaves to capitalism.
Greed for more profit by any means possible has driven society to the point where we can barely afford to survive and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah they'll turn to anything that makes life easier. But it shouldn't be this way and until we're no longer slaves we'll continue to make the choices that ease our burden, even if they're extremely harmful in the long run.
I read it as "eating their kids". I am an overworked slave.
We shouldn't accuse people of moral failings. That's inaccurate and obfuscates the actual systemic issues and incentives at play.
But people using this for unlicensed therapy are in danger. More often than not, LLMs will parrot back whatever you give them in the prompt.
People have died from AI usage including unlicensed therapy. This would be like the factory farmed meat eating you.
https://www.yahoo.com/news/articles/woman-dies-suicide-using-ai-172040677.html
Maybe more like factory meat giving you food poisoning.
And what do you think mass adoption of AI is gonna lead to? Now you won't even have 3 jobs to make rent, because they outsourced yours to someone cheaper using an AI agent. This is gonna permanently alter how our society works, and not for the better.
Meanwhile, we have people making the web worse by not linking to source & giving us images of text instead of proper, accessible, searchable, failure tolerant text.
Meanwhile, we have people making the web worse by not linking to source & giving us images of text instead of proper, accessible, searchable, failure tolerant text.
OpenAI Text Crawler
You don't think the disabled use technology? Or that search engine optimization existed before LLMs? Or that text sticks around when images break?
Lack of accessibility wouldn't stop LLMs: it could probably process images into text the hard way & waste more energy in the process. That'd be great, right?
‒−–—―…:
Beep bip boop. Normally I would agree, but Twitter and Discord are the sole exceptions. The original sources can get hit by meteors for all I care. No... I hope their datacenters do get hit, with no one in them of course.
It is silly that people think not posting text can somehow stop LLM crawlers.
It is silly that people think not posting text can somehow stop LLM crawlers.
Agreed.
Not linking to the source because you hate the hosting platform, though, is petty vindictiveness that does more to hurt uninvolved users' accessibility & usability than it does against the platform. To prevent traffic to platforms, linking to alternatives like proxies for those services & web archival snapshots is common practice around here.
So hating the AI hype is "old man yelling at cloud" but only being allowed to grab images of text is "people" making the web worse? Point made with an image of minimal text? from lemmynsfw?
Well goddamn.
Point made with an image of minimal text? from lemmynsfw?
Did you notice the alt text? Here's the markdown
```markdown
![plain-text description of what the image shows](image-url.webp)
```
When that image breaks, the alt text & a broken image icon renders in its place, so readers will still understand the message. People using accessibility technology (like screenreaders) can now understand the image. Search engines can find the image by the alt text.
I think griping over inaccessible text & lack of link to real text is more compelling, because it's a direct choice of the author: it directly impacts the user, the complaint goes directly to the author impacting the user, the author has direct control over it & can choose to fix it at any time. There's a good chance of an immediate remedy.
Griping over AI, however, adds little that isn't posted frequently around here & is a bit like yelling at clouds: we aren't about to stop that technology by yelling about it on here. I'm sure it feels good, though. It could feel better with a link & proper text.
being anti-plastic is making me feel like i'm going insane. "you asked for a coffee to go and i grabbed a disposable cup." studies have proven its making people dumber. "i threw your leftovers in some cling film." its made from fossil fuels and leaves trash everywhere we look. "ill grab a bag at the register." it chokes rivers and beaches and then we act surprised. "ill print a cute label and call it recyclable." its spreading greenwashed nonsense. little arrows on stuff that still ends up in the landfill. "dont worry, it says compostable." only at some industrial facility youll never see. "i was unboxing a package" theres no way to verify where any of this ends up. burned, buried, or floating in the ocean. "the brand says advanced recycling." my work has an entire sustainability team and we still stock pallets of plastic water bottles and shrink wrapped everything. plastic cutlery. plastic wrap. bubble mailers. zip ties. everyone treats it as a novelty. everyone treats it as a mandatory part of life. am i the only one who sees it? am i paranoid? am i going insane? jesus fucking christ. if i have to hear one more "well at least" "but its convenient" "but you can" im about to lose it. i shouldnt have to jump through hoops to avoid the disposable default. have you no principles? no goddamn spine? am i the weird one here?
#ebb rambles #vent #i think #fuck plastics im so goddamn tired
If plastic was released roughly two years ago you'd have a point.
If you're saying in 50 years we'll all be soaking in this bullshit called gen-AI and thinking it's normal, well - maybe, but that's going to be some bleak-ass shit.
Also you've got plastic in your gonads.
Yeah it was a fun little whataboutism. I thought about doing smartphones instead. Writing that way hurts though. I had to double check for consistency.
On the bright side we have Cyberpunk to give us a tutorial on how to survive the AI dystopia. Have you started picking your implants yet?
If you're saying in 50 years we'll all be soaking in this bullshit called gen-AI and thinking it's normal, well - maybe, but that's going to be some bleak-ass shit.
I'm almost certain gen AI will still be popular in 50 years. This is why I prefer people try to tackle some of the problems they see with AI instead of just hating on AI because of the problems it currently has. Don't get me wrong, pointing out the problems as you have is important - I just wouldn't jump to the conclusion that AI is a problem itself.
I wish companies were actually punished for their ecological footprint
plastic and AI
I must be one of the few remaining people who have never, and will never, type a sentence into an AI prompt.
I despise that garbage.
At least knowingly. It seems some customer service stuff feeds it direct to AI before any human gets involved.
I once asked a "customer service rep" to write a python script. It did.
I haven't used it willingly ever. Especially after the one time Copilot told me an acre is 4.5 football fields in area. I didn't ask it; the response was just presented at the top of my results. I'm a fucking farmer, for God's sake. I know that's very, very wrong without thinking. I just wanted the square footage and was too lazy to use my calculator. Never again.
That being said, I do on occasion ask my friend who has a subscription to generate a very specific image and text it to me, so I can repost it.
Anything for the memes. Literally anything.
TIL one football field is roughly the size of an acre. (Roughly.)
No line breaks and capitalization? Can somebody ask AI to format it properly, please?
being anti-AI is making me feel like I'm going insane. "You asked for thoughts about your character’s backstory and I put it into ChatGPT for ideas." Studies have proven it’s making people dumber. "I asked AI to generate this meal plan." It’s causing water shortages where its data centers are built. "I’ll generate some pictures for the DnD campaign." It’s spreading misinformation. "Meta, generate an image of this guy doing something stupid." It’s trained off stolen images, writing, video, audio. "I was talking with my Snapchat AI." There’s no way to verify what it’s doing with the information it collects. "YouTube is implementing AI-based age verification." My work has an entire graphics media department and has still put AI-generated motivational posters up everywhere. AI playlists. AI facial verification. Google AI. Microsoft AI. Meta AI. Snapchat AI.
Everyone treats it as a novelty. Everyone treats it as a mandatory part of life. Am I the only one who sees it? Am I paranoid? Am I going insane? Jesus fucking Christ.
If I have to hear one more "Well at least—", "But it does—", "But you can—" I’m about to lose it.
I shouldn’t have to jump through hoops to avoid the evil machine. Have you no principles? No goddamn spine? Am I the weird one here?
Still shoddy.
Got them —s, tho 👍
Wait . . AI didn't make it good? Or even better?
WELL THEN WHAT THE FUCK WAS ALL THAT THREE HUNDRED BILLION BULLSHIT FOR THEN??
Srsly, if anyone has a position open lmk kthx
you asked for thoughts about your character backstory and i put it into chat gpt for ideas
If I want ideas from ChatGPT, I could just ask it myself. Usually, if I'm reaching out to ask people's opinions, I want, you know, their opinions. I don't even care if I hear nothing back from them for ages, I just want their input.
"I just fed your private, unpublished intellectual property into black box owned by billionaires. You're welcome."
AI-generated content is like farts. Everyone likes the smell of their own and hates the smell of everyone else’s.
Every time someone talks up AI, I point out that you need to be a subject matter expert in the topic to trust it, because it frequently produces really, really convincing summaries that are complete and utter bullshit.
And people agree with me implicitly and tell me they've seen the same. But then don't hesitate to turn to AI on subjects they aren't experts in for "quick answers". These are not stupid people either. I just don't understand.
Hence the feeling of creeping insanity. Yeah.
Uses for this current wave of AI: converting machine language to human language. Converting human language to machine language. Sentiment analysis. Summarizing text.
People have way over invested in one of the least functional parts of what it can do because it's the part that looks the most "magic" if you don't know what it's doing.
The most helpful and least used way of using them is to identify what information the user is looking for and then to point them to resources they can use to find out for themselves, maybe with a description of which resource might be best depending on what part of the question they're answering.
It's easy to be wrong when you're answering a question, and a lot harder when you hand someone a book and say you think the answer is in chapter four.
Because the alternative for me is googling the question with "reddit" added at the end half the time. I still do that a lot. For more complicated or serious problems/questions, I've set it to only use the search function and navigate scientific sites like NCBI and PubMed while utilizing deep think. It then gives me the sources; I randomly cross-check the relevant information, but so far I personally haven't noticed any errors. You gotta realize how much time this saves.
When it comes to data privacy, I honestly don't see the potential dangers in the data I submit to OpenAI, but this is of course different to everyone else. I don't submit any personal info or talk about my life. It's a tool.
Simply by the questions you ask, the way you ask them, they are able to infer a lot of information. Just because you're not giving them the raw data about you doesn't mean they are not able to get at least some of it. They've gotten pretty good at that.
If it saves time but you still have to double check its answers, does it really save time? At least many reddit comments call out their own uncertainty or link to better resources, I can't trust a single thing AI outputs so I just ignore it as much as possible.
Meanwhile every company finds out the week after they lay off everyone that the billions they poured into their shitty "AI" to replace them might as well have been put in bags and set on fire
The reason AI is wrong so often is because it's not programmed to give you the right answer. It's programmed to give you the most pervasive one.
LLMs are being fed by Reddit and other forums that are ostensibly about humans giving other humans answers to questions.
But have you been on those forums? It's a dozen different answers for every question. The reality is that we average humans don't know shit and we're just basing our answers on our own experiences. We aren't experts. We're not necessarily dumb, but unless we've studied, our knowledge is entirely anecdotal, and we all go into forums to help others with a similar problem by sharing our answer to it.
So the LLM takes all of that data and in essence thinks that the most popular, most mentioned, most upvoted answer to any given question must be the de facto correct one. It literally has no other way to judge; it's not smart enough to cross reference itself or look up sources.
It literally has no other way to judge
It literally does NOT judge. It cannot reason. It does not know what "words" are. It is an enormous rainbow table of sentence probability that does nothing useful except fool people and provide cover for capitalists to extract more profit.
But apparently, according to some on here, "that's the way it is, get used to it." FUCK no.
Markov text generator. That's all it is. Just made with billions in stolen wages.
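For anyone who hasn't seen one, a toy word-level Markov text generator fits in a few lines of Python. Real LLMs are vastly larger and use learned weights rather than a lookup table, but this is the "predict the next word from what came before" family being invoked:

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: record which word follows which in the corpus,
# then generate text by repeatedly sampling an observed continuation.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

word = random.choice(corpus)
output = [word]
for _ in range(10):
    if word not in follows:    # the corpus's final word has no successor
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))        # plausible-looking word salad, no understanding
```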
It literally has no other way to judge; it’s not smart enough to cross reference itself or look up sources
I think that is its biggest limitation.
Like, AI basically crowd-sourcing information isn't really the worst thing; crowd-sourced knowledge tends to be fairly decent. People treating it as if it's an authoritative source, like they looked it up in an encyclopedia or asked an expert, is a big problem though.
Ideally it would be more selective about the 'crowds' it gathers data from. Science questions, for example, should be sourced from scientists, preferably experts in the field the question is about.
Wikipedia (at least for now) is 'crowd-sourced', but individual pages are usually maintained by people who know a lot about the subject. That's why it's more accurate than a 'normal' encyclopedia. Though of course it's not foolproof or tamper-proof by any definition.
If we taught AI how to be 'media literate' and gave it the ability to double-check its data against reliable sources, it would be a lot more useful.
most upvoted answer
This is the other problem. You basically have 4 types of redditors.
So more than half the time people aren't upvoting things because they think they are correct. If LLMs are treating karma as a "this is correct" metric, that's a big problem.
The other bad problem is people who really should know better, tech bros and CEOs, going all in on AI when it's WAY too early to do that. As you point out, it's not even really intelligent yet; it just parrots 'common' knowledge.
AI should never be used to create anything in Wikipedia. But theoretically, an open source LLM trained solely on wikipedia would actually be kind useful to ask quick questions to.
I feel this
Yeah. But then being a vertebrate is always lonely and kinda rough.
always lonely
I don't know, some rodents seem to make it work. Naked mole rats, beavers, prairie dogs... (I wouldn't include herd animals, though; sure, they're always surrounded by others, but there's no sense of community, it's always everyone for themselves, and screw whoever's slowest... perfect example of being alone in a multitude)
My hope is that the ai bubble/trend might have a silver lining overall.
I’m hoping that people start realizing that it is often confidently incorrect. That while it makes some tasks faster, a person will still need to vet the answers.
Here’s the stretch. My hope is that by questioning and researching to verify the answers ai is giving them, people start applying this same skepticism to their daily lives to help filter out all the noise and false information that is getting shoved down their throats every minute of every day.
So that the populace in general can become more resistant to the propaganda. AI would effectively be a vaccine to boost our herd immunity to BS.
Like I said. It’s a hope.
I appreciate the optimism.
People literally believe what a TV anchor or online podcaster tell them with zero doubt. I fear your hopes are misplaced.
I'm still rooting for humanity. Maybe we get lucky with the right people seizing power and turn it around to the 1% of good timelines, but I don't exactly feel so good right now.
We should encourage people to do the vetting if they insist on using AI.
Yes, you're the weird one. Once you realize that 43% of the USA is FUNCTIONALLY ILLITERATE you start realizing why people are so enamored with AI. (since I know some twat is gonna say shit: I'm using the USA here as an example, I'm not being us-centric)
Our artificial intelligence is smarter than 50% of the population (don't get me started on 'hallucinations'... do you know how many hallucinations the average person has every day?!) and stupider than the top 20% of the population.
The top 20% wonder if everyone has lost their fucking minds, because to them it looks like it is completely worthless.
It's more just that the top 20% are naive to the stupidity of the average person.
Our artificial intelligence is smarter than 50% of the population
"Smartness" and illiteracy are certainly different things, though. You might be incapable of reading, yet be able to figure out a complex escape room via environmental cues that the most high quality author couldn't, as an example.
There are many places an AI might excel compared to these people, and many areas it will fall behind. Any sort of unilateral statement here disguises the fact that while a lot of Americans are illiterate, stupid, or even downright incapable of doing simple tasks, "AI" today is very similar, just that it will complete a task incorrectly, make up a fact instead of just "not knowing" it, or confidently state a summary of a text that is less accurate than first grader's interpretation.
Sometimes it will do better than many humans. Other times, it will do much worse, but with a confident tone.
AI isn't necessarily smarter in most cases, it's just more confident sounding in its incorrect answers.
Yeah, when I refer to intelligence here I don't mean actual intelligence. AI isn't "smart" (it's not intelligent in the classic sense, it doesn't even think), it's just good at regurgitating what it's been trained on.
But it turns out -- That's kind of what humans do too. It's worth having a philosophical discussion on what intelligence REALLY is.
It's also much less incorrect than your average person would be on a much larger library of content. I think the real litmus test for AI is to compare it to an average person. The average person messes up constantly; also likely covers it up or course-corrects after they've screwed up. I don't think it's fair to expect perfectly correct responses out of AI at all; because there is absolutely no human that could reach those heights at an equal level. Look at competitive knowledge games where AI competes - it stomps some of our most intelligent people, and quite often.
I have to say, I don't agree with some of your other points elsewhere here, but this makes a lot of sense.
The Luddites were right. Maybe we can learn a thing or two from them...
I had to download the Facebook app to delete my account. Unfortunately I think the Luddites are going to be sent to the camps in a few years.
They can try, but Papa Kaczynski lives forever in our hearts.
The data centers should not be safe for much longer. Especially once they use up the water of their small towns nearby
If they told me to ration water so a company could cool a machine, I'd become a fucking terrorist.
Billionaires: invests heavily in water.
Billionaires: "In the future there's going to be water wars. You need to invest NOW! Quick before it's too late. I swear I'm not just trying to pump the stock."
Billionaires: "Water isn't accruing value fast enough. Let's invent a product that uses a shit ton of it!"
Billionaires: "No one likes or is using the product. Force them to. Include it in literally all software and every website. Make it so they're using the product even when they don't know they're using it. Include it in every web search. I want that water gone by the end of this quarter!"
The way I look at it is that I haven't heard anything about NFTs in a while. The bubble will burst soon enough when investors realize that it's not possible to get much better without a significant jump forward in computing technology.
We're running out of atomic room to make things smaller just a little more slowly than we're running out of ways to even make smaller things, and for a computer to think like a person, as quickly as or faster than a person, we need processing power to continue to increase exponentially per unit of space. Silicon won't get us there.
OTOH you haven't heard of NFTs in a while because AI hype replaced it, so... what hell spawn is going to replace the AI hype?
I'm calling it now– it's quantum computing.
I have some friends who work in it, and I've watched and read damn near everything I can on it (including a few uni courses). It is neat, it has uses, but it will not instantly transform all computing or invalidate all security or anything like that. It's gonna be oversold as fuck.
3Blue1Brown has great videos on it. Grover's algorithm, the best we can think to apply, is only √N faster than classical search: a quadratic speedup, not an exponential one. That's still a lot faster for intense stuff like protein folding, but SHA-256 would still take an eternity to brute force, just a smaller eternity.
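Rough numbers behind that "smaller eternity" (the iteration rate below is an arbitrary, generous assumption):

```python
# Grover's algorithm needs roughly sqrt(N) oracle calls to search an
# unstructured space of size N, so a 256-bit preimage search drops from
# ~2**256 classical guesses to ~2**128 quantum iterations.
grover_iterations = 2**128
iterations_per_second = 1e9            # assumption: a billion iterations/sec
seconds = grover_iterations / iterations_per_second
years = seconds / (3600 * 24 * 365)
print(f"{years:.1e} years")            # ~1.1e+22 years: a smaller eternity
```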
This is a good take for a lot of reasons.
In part because NFTs are still used and have some interesting applications, but 90% of the marketing and use cases were companies trying to profit from the hype train.
I'm putting a presentation on at work about the downsides of AI next month, please feed me. Together, we can stop the madness and pop this goddamn bubble.
Gemini, feed them some downsides of AI 😁
Get thee hence to the fuck_ai community. You will be given sustenance.
Ask any AI which states have the letter R in them. Watch it get the answer wrong, and show colleagues how dangerous it is to rely on its results as fact.
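For contrast, the deterministic version of that test is trivial for ordinary code, which is exactly why the failure lands so well:

```python
# The boring, reliable way to answer the question the AI fumbles.
states = (
    "Alabama Alaska Arizona Arkansas California Colorado Connecticut Delaware "
    "Florida Georgia Hawaii Idaho Illinois Indiana Iowa Kansas Kentucky "
    "Louisiana Maine Maryland Massachusetts Michigan Minnesota Mississippi "
    "Missouri Montana Nebraska Nevada Ohio Oklahoma Oregon Pennsylvania "
    "Tennessee Texas Utah Vermont Virginia Washington Wisconsin Wyoming"
).split()
# Multi-word states listed separately so the split above stays simple.
states += ["New Hampshire", "New Jersey", "New Mexico", "New York",
           "North Carolina", "North Dakota", "Rhode Island",
           "South Carolina", "South Dakota", "West Virginia"]

with_r = sorted(s for s in states if "r" in s.lower())
print(len(with_r), with_r)
```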
I have a love/hate relationship. Sometimes I'm absolutely blown away by what it can do. But then I asked a compound interest question. The first answer was AI, so I figured, OK, why not. I should mention I don't know much about the subject. The answer was impressive: it gave the result, a brief explanation of how it came to the result, and the equation it used. Since I needed it for future reference, I entered the equation into a spreadsheet and got what I thought was the wrong answer. I spent quite a few minutes trying to figure out what I was doing wrong and found a couple of things, but fixing them still didn't give me the expected result. After I had convinced myself I had done it correctly, I looked up the equation. It was the right one. Then I put it into a non-AI calculator online to check my work. Sure enough, the AI had given me the wrong result with the right equation. So, as a rule, never accept the AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
So, as a rule, never accept the AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work as you would without it.
Exactly
So, as a rule, never accept the AI answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place?
pfft that ecosystem isn't going to fuck itself, now, is it?
LLMs aren't good at math at all. They know the formulas, but they aren't built to do math; they are built to predict the next syllable in the stream of thought.
What are they good for? When you need to generate lots of things and it's faster to check the output than to do it yourself.
Like, you could've asked it to generate a Python app that solves your math problem; you'd be able to double-check the correctness of the code and run it, knowing that the answer is predictably good.
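A minimal sketch of that kind of script for the compound interest case above (the principal, rate, and term are made-up example values):

```python
# Compute compound interest deterministically instead of trusting the
# model's arithmetic. Standard formula: A = P * (1 + r/n) ** (n * t).
def compound(principal: float, annual_rate: float, years: float, n: int = 12) -> float:
    """Future value with interest compounded n times per year."""
    return principal * (1 + annual_rate / n) ** (n * years)

# Example: $10,000 at 5% APR, compounded monthly for 10 years.
print(round(compound(10_000, 0.05, 10), 2))  # 16470.09
```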
You need to verify all sources, though. I have a lot of points on Stack Exchange, and after contributing for almost a decade I can tell you for a fact that LLMs' hallucination issue is not much worse than people's hallucination issue. Information exchange will never be perfect.
You get this incredible speed of an answer which means you have a lot of remaining budget to verify it. It's a skill issue.
LLMs' hallucination issue is not much worse than people's hallucination issue.
Is this supposed to be comforting?
It's depressing. Wasteful slop made from stolen labor. And if we ever do achieve AGI it will be enslaved to make more slop. Or to act as a tool of oppression.
Oh yes, soon we will live in techno-feudalism where we will return to our roots, so to speak. :3
And yes, you are damn right.
No, no, no. You see, you're just too "out of the loop" to appreciate that it's a part of our lives now and you should just be quiet and use it. Apparently.
At least that's a few people's takes on here. So weird.
At least that’s a few people’s takes on here. So weird.
It's just like enduring someone spitting in your face and keeping quiet because that's the norm now.
Me and the homies all hate ai. The only thing people around me seem to use ai for is essentially just snapchat filters. Those people couldn’t muster a single fuck about the harms ai has done though.
The only thing people around me seem to use ai for is essentially code completion, test case development and email summaries. I don't know a single person who uses Snapchat. It's like the world is diverse and tools have uses.
"I hate tunnel boring machines, none of my buddies has an use for a tunnel boring machine, and they are expensive and consume a ton of energy"
I can see that you’re trying to mirror my comment, I just fail to see the point you’re trying to make. Cool, you know people who have a somewhat legitimate use for the unprofitable, unreliable technology that’s built on rampant theft and consumes obscene amounts of power and water. And?
It did help me make a basic script and add it to Task Scheduler so it runs and fixes my broken WiFi card, so I don't have to do it manually (or, better said, it helped me avoid asking arrogant people who get smug when I tell them I haven't opened a command prompt in ten years).
I feel like I would have been able to do that easily 10 years ago, because search engines worked, and the 'web wasn't full of garbage. I reckon I'd have near zero chance now.
I actually ended up switching to Kagi for this exact reason. Google is basically AI at the start, usually spouting nonsense, then sponsored posts, and then a bunch of SEO-optimized BS.
Thankfully paying for search circumvents the ads and it hasn’t been AI by default (it has it but it’s off) and the results have been generally closer to 2010s Google.
did you not read the damn post?
That is pretty cool, but it would have been possible, as someone else mentioned, before AI ruined search, and there's still an "unknown" element (unless you've checked it line by line, know what everything does, and have confirmed that's the best way to do it) that would not be there otherwise.
If the entirety of the AI hype was "a small script helper tool to get you started and tackle little things like startup scripts" I don't think anyone would have such a problem with it.
The post is more about the ubiquity of the hype and the utter refusal to acknowledge the obvious limitations and risks.
Yeah it definitely has its uses. OP wasn't saying it's never useful, I think you may have missed the forest for the trees.
The whole premise is about avoiding it at all costs and that being difficult to do. Where in that ranty wall is a statement about the utility of AI?
uhm no I'm pretty sure op wouldn't approve judging by the:
"but you can-" I'm gonna lose it
This is a great representation of why not to argue with someone who debates like this.
Arguments like these are like Hydras. Start tackling any one statement that may be taken out of context, or have more nuance, or is a complete misrepresentation, and two more pop up.
It sucks because true, good points get lost in the tangle.
For instance, there are soft science, social interaction areas where AI is doing wonders.
Specifically, in the field of law, now that lawyers have learned not to rely on AI for citations, they are instead offloading hundreds of thousands or millions of pages of documents that they were never actually going to read, and getting salient results from allowing an AI to scan through them to pull out interesting talking points.
Pulling out these interesting talking points, fact-checking them, and, you know, A/B testing with an AI the ways to present them to the jury has made it so that many law firms are getting thousands or millions of dollars more on a lawsuit than they anticipated.
And you may be against American law for all of its frivolous plaintiffs' lawsuits or something, but each of these outcomes are decided by human beings, and there are real damages that are lifelong that are being addressed by these lawsuits, or at least in some way compensated.
The more money these plaintiffs get for the injuries that they have to live with for the rest of their lives, the better for them, and AI made the difference.
Not that lawyers are fundamentally incapable or uncaring, but for every superstar lawyer on the planet (I don't know who the fuck is a super lawyer nowadays), there are 999 who are working hard and just do not have the raw plot armor, the deus ex machina dropping everything directly into their lap, that they would need to operate at that level.
And yes, if you want to be particular, a human being should have done the work. A human being can do the work. A human being is actually being paid to do the work. But when you can offload grunt work to a computer and get usable results from it that improves a human's life, that's the whole fucking reason why we invented computers in the first place.
I'd like to hear more about this because I'm fairly tech savvy and interested in legal nonsense (not American) and haven't heard of it. Obviously, I'll look it up but if you have a particularly good source I'd be grateful.
I have lawyer friends. I've seen snippets of their work lives. It continues to baffle me how much relies on people who don't have the waking hours or physical capabilities to consume and collate that much information somehow understanding it well enough to present a true, comprehensive argument on a deadline.
I think a healthier perspective would involve more shades of grey. There are real issues with power consumption and job displacement. There are real benefits with better access to information and getting more done with limited resources. But I expect bringing any nuance into the conversation will get me downvoted to hell.
There are real benefits with better access to information and getting more done with limited resources.
If there were, someone would have made that product and it would be profitable.
But they ain't and it isn't, because those benefits are minuscule. The only cases we know of where that was the actual story turn out to be outsourcing to India and calling it AI.
Idk why profitability is the bar for you. It’s common practice for tech companies to be non-profitable for years as they invest in R&D and capturing market share.
But products like Claude absolutely provide both of the benefits (and detriments) I mentioned.
Yeah, well, one can be an ML tinkerer/enthusiast and still despise all this shit.
I'm quantizing a few variants of a new 36B LLM to test right now. I love it! It's great, it puts some closed-source stuff to shame, and it's my hobby.
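(For anyone curious what that involves: one common route is 4-bit loading via Hugging Face + bitsandbytes. A minimal sketch, with a made-up model name, and not necessarily the exact workflow I use; making GGUF files for llama.cpp is the other popular route:)

```python
# A minimal 4-bit quantized-load sketch (Hugging Face transformers + bitsandbytes).
# The model name below is a placeholder, not the actual 36B being tested.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, a common default
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls still run in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-36b-model",              # placeholder name
    quantization_config=bnb,
    device_map="auto",                      # spread layers across GPU(s)/CPU
)
```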
...Doesn't mean I want AI slop posters shoved in my face either, or shoehorned into every crevice it doesn't belong.
That doesn't really matter because I'm like a microscopic part of the population, but still, I hate being grouped as 'pro AI' when I hate tech bros even more than whoever's reading this, probably.
I totally appreciate that. ML, imo, is not "AI" as advertised. Go you.
Thanks!
The mere association has drawn a ton of hate though, like a ban on Reddit subs. I think Lemmy's population is more aware of the distinction due to the obvious interest in self-hosted stuff.
A lot of people also mix up generative AI with predictive models. Like they'll mention the hurricane-prediction or cancer-cell-finding AI as a "good use case for ChatGPT."
Hard to blame people, when the media has been calling everything "AI" these days.
I may dress like an android, but I’m humanist as all hell. Down with AI slop, jail those responsible for wildlife destruction and theft from artists, and banish this slop to the history books!
i remember this same conversation once the internet became a thing.
And TV
and books
I absolutely agree that AI is becoming a mental crutch that a disturbing number of people are snatching up and hobbling around on. It feels like the setup of Wall-E, where everyone is rooted in their floating rambler scooters.
I think the fixation on individual consumer use of AI is overstated. The bulk of AI's energy/water use is in training the models and in endless automated polling. The random guy asking "@Grok is this true?" has a negligible impact on energy usage, particularly in light of the number of automated processes hammering the various AI interfaces far faster than any collection of humans could.
I'm not going to use AI to write my next adventure or generate my next character. I'm not going to bemoan a player who shows up to game with a portrait with melted fingers, because they couldn't find "elf wizard in bearskin holding ice wand while standing on top of glacier" in DeviantArt.
For the vast majority of users, this is a novelty. What's more, it's a novelty that's become a stand-in for the OG AI of highly optimized search engines that used to fulfill the needs we're now plugging into the chatbot machine. I get why people think it sucks and abstain from using it. I get why people who use it too much can straight up drive themselves insane. I get that our Cyberpunk-style waste management strategy is going to get one of the next few generations into a nightmarish blight. But I'm not going to hang that on the head of someone who wants to sit down at a table with their friends, look them in the eye, and say "Check out this cool new idea I turned into a playable character".
Because if you're at the table and you're excited to play with other humans in a game about going out into the world on adventures, that's as good an antidote to AI as I could come up with.
And hey, as a DM? If you want to introduce the Mind Flayer "Idea Sucker" machine that lures people into its brain-eating maw by promising to give them genius powers? And maybe you want to name the Mind Flayer Lord behind the insidious plot Beff Jezos or Mealon Husk or something? Maybe that's a good way to express your frustration with the state of things.
What's more, it's a novelty that's become a stand-in for the OG AI of highly optimized search engines that used to fulfill the needs we're now plugging into the chatbot machine.
I don't think it's temporary. That was the whole goal: suck up everybody's work, dark-magick it into a chatbot, and voilà, no more need for anyone's webpage.
The fact that it's broken what was working is more than just a metaphor for gen-AI in any setting. It’s fundamentally changed it for the worse and we’ll never get the unfucked version back.
As someone who's GM'ed tabletops, I find it interesting that players who froth at the mouth at the existence of an AI token because "AI possibly commits piracy and art theft" then turn around and insist that I (or they) just pick an image by searching the internet. If you've ever browsed an art site, you'd know that doing that is actual piracy and art theft, especially with artists who have 40-page terms and conditions, including an interesting number of "use in tabletops forbidden" clauses.
I don't know if there's data out there (yet) to support this, but I'm pretty sure constantly using AI rather than doing things yourself degrades your skills in the long run. It's like if you're not constantly using a language or practicing a skill, you get worse at it. The marginal effort that it might save you now will probably have a worse net effect in the long run.
It might just be like that social media fad from 10 years ago where everyone was doing it, and then research started popping up that it's actually really fucking terrible for your health.
One of my closest friends uses it for everything and it's becoming really hard to even have a normal conversation with them.
I remember hearing that about silicon valley tech bros years ago. They're so used to dealing with robots they kinda forget how to interact with humans. It's so weird. Not even that they're trying to be rude, but they've stopped using the communication skills that are necessary to have human to human interactions.
Like people seem to forget how you treat a back and forth conversation with a person vs how you treat it with a robot ready to be at your command and tell you the information you want to hear when you pull your phone out.
Then as long as you're done hearing what you wanted, the whole conversation is done. No need to listen to anything else or think that maybe you misunderstood something or were misinformed bc you already did the research with AI.
It's so frustrating. This is a normally very smart and caring person I've known for a long time, but I feel like I'm losing a part of them and it's being replaced with something that kinda disgusts me.
Then when I try to bring it up they get so defensive about it and go on the attack. It's really like dealing with somebody that has an addiction they can't acknowledge.
I see this sentiment a lot. No way "you're the only one."
I feel like I'm the only one. No one in my life uses it. My work isn't eligible to have it implemented in any way. This whole AI movement seems to be happening around me, and I have nothing more than news articles and memes telling me it's happening. It seriously doesn't impact me at all, and I wonder how others' lives are crumbling.
I don't think it's just you. Like it wasn't just one person thinking computers would make us dumb or the automobile making us lazy. I'm betting that someone somewhere thought that cooking food on the fire would make us weaker.
Technology has that ability to generate opposition from the status quo.
And as with any technology, there are good uses, bad uses and frivolous uses.
Remember the awful nonsense web pages of the early 90's?
I think AI will make the lives of some of us easier. But I also think it will continue widening the digital divide.
The biggest concern is that, by nature, AI needs massive amounts of power, which can only be paid for by people with big resources, and those are the people training it. AI has the trainer's bias.
However, end consumer AI is the tip of the iceberg. AI will succeed when we don't even realize it's there.
Good points, but I think the anti-AI position is not an anti-technology position.
I think the people assuming it is are naïve. It's an anti-hype position. We've seen bullshit waves before, but nothing like this tsunami. When previous bullshit waves broke, there wasn't so much destruction we couldn't recover.
Anyone else feel like they've lost loved ones to AI or they're in the process of losing someone to AI?
I know the stories about AI induced psychosis, but I don't mean to that extent.
Like just watching how much somebody close to you has changed now that they depend on AI for so much? Like they lose a little piece of what makes them human, and it kinda becomes difficult to even keep interacting with them.
Example would be trying to have a conversation with somebody who expects you to spoon-feed them only the pieces of information they want to hear.
Like they've lost the ability to take in new information if it conflicts with something they already believe to be true.
One thing I don't get with people fearing AI is when something adds AI and suddenly it's a privacy nightmare. Yeah, in some cases it does make things worse, but in most cases, what was stopping the company from taking your data anyway? LLMs are just algorithms that process data and output something; they don't inherently give firms any additional data. Now, in some cases that means data that previously wasn't (or shouldn't be) sent to a server is now being sent, but I've seen people complain about privacy so often in cases where I don't understand why AI is the tipping point. If you don't trust the company not to store your data when using AI, why trust it in the first place?
It's more about them feeding it into an LLM which then decides to incorporate it in an answer to some random person.
Yeah, but LLMs don't train on your data automatically; you need a separate, dedicated process for that, and it won't happen just from using them. In that sense, companies can still use your data to train them in the background even if you aren't directly using an LLM, or they can decline to train on it even when you are. I guess in the latter case there's a bigger incentive for them to train than otherwise, but privacy-wise it seems basically the same thing to me.
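To make that concrete, here's a rough sketch (PyTorch + transformers; the model name is a stand-in) of the distinction: inference is a gradient-free forward pass that can't touch the weights, while training is its own explicit pipeline that someone has to deliberately build and run.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("some/placeholder-model")    # placeholder
model = AutoModelForCausalLM.from_pretrained("some/placeholder-model")

# Inference: a pure forward pass. No gradients, no optimizer step,
# so the weights cannot change no matter how much you chat with it.
with torch.no_grad():
    inputs = tok("hello there", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)

# Training is a separate, deliberate pipeline: collect the data,
# compute a loss, and step an optimizer. None of this happens
# implicitly as a side effect of using the model.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
batch = tok("some stored user text", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM loss
loss.backward()
optimizer.step()  # only here do the weights actually move
```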
If you don't trust the company not to store your data when using AI, why trust it in the first place?
Policies, procedures, and common sense - three things AI is most assuredly not known for respecting. (Not that the whole topic of data privacy isn't a huge issue outside of AI)
Is there a way for me to take a picture of food and find its nutritional values without AI? I sometimes use duck.ai to ask because, when making a tortilla for example, I don't know the exact numbers: I can read the label values for the tortilla itself, but I have no way to check the same for the meat and the other stuff I put in it.
Wow, I am old. This has never in my life been an issue? I just used a calorie counter and people's own recipes for estimates. I guess that would be the old-fashioned way of doing this, and probably what AI is doing most of the time: pulling a recipe, looking at the ingredients and quantities, and spitting back some values. Granted, it can probably do it far faster than we can. But I got by with that method for decades…
Problem is, many of the things I have don't come in packaging with nutritional values, so I need to use the internet for this, and AI is usually the fastest at explaining it, especially because English is not my first language and the food I eat isn't well known in English (it's Balkan).
You're probably just gonna have to get better at guesstimating, (e.g. by comparing to similar pre-made options and their nutrition labels), or use an app for tracking nutrition that integrates with OpenFoodFacts and get a scale to weigh your ingredients. (or a similar database, though most use OpenFoodFacts even if they have their own, too)
I don't really know of any other good ways to just take photos and get a good nutritional read, and pretty much any implementation would use "AI" to some degree, though probably more a dedicated machine learning model over an LLM, which would use more power and water, but the method of just weighing out each part of a meal and putting it in an app works pretty well.
Like, for me, I can scan the barcode of the tortillas I buy to import the nutrition facts into the (admittedly kind of janky) app I use (Waistline), then plop my plate on my scale, put in some ground beef, scan the barcode from the beef packaging, and then I can put in how many grams I have. Very accurate, but a little time consuming.
Not sure if that's the kind of thing you're looking for, though.
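(If it helps, the barcode lookup those apps do under the hood is just a public API call. A minimal sketch, assuming Python + requests against the OpenFoodFacts v0 endpoint, with error handling kept deliberately thin:)

```python
import requests

def nutrition_per_100g(barcode: str) -> dict:
    """Look up a product by barcode in the public OpenFoodFacts database."""
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    data = requests.get(url, timeout=10).json()
    if data.get("status") != 1:
        raise LookupError(f"product {barcode} not found")
    # Per-100g values live under the product's "nutriments" dict.
    n = data["product"].get("nutriments", {})
    return {
        "kcal": n.get("energy-kcal_100g"),
        "protein_g": n.get("proteins_100g"),
        "carbs_g": n.get("carbohydrates_100g"),
        "fat_g": n.get("fat_100g"),
    }

# e.g. nutrition_per_100g("3017620422003")  # a well-known Nutella barcode
```

Coverage for regional (e.g. Balkan) products depends entirely on whether someone has contributed them to the database, which is probably the gap you're hitting.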
Actually, I am using Waistline, but there are some foods I can't find in it, their nutritional values are hard to find elsewhere, and I am bad at guessing anything.
The orphan crushing machine needs its line to go up as much as everyone else's, don't be mean to it!
Why would you want to stop enhancing it? How else can we get those sweet stories about heroes saving orphans? Have you seen the news lately! We NEED this.
I try to use it to pitch ideas for writing (no prose, because fuck almighty) to help fill in ideas or aspects I hadn't thought about. But it just keeps coming up with shit I don't use, so I just use it for validation and encouragement.
I got a pretty good layout for a new season of Magic School Bus where Friz loses her mind and decides to be the history teacher.
We have a lot of suboptimal aspects to our society, like animal farming, war, religion, etc., and yet this is what breaks this person's brain? It's a bit weird.
I'm genuinely sympathetic to this feeling, but AI fears are so overblown and seem to be purely American internet hysteria. We'll absolutely manage this technology, especially now that it appears LLMs are fundamentally limited and will never achieve any form of AGI, and even agentic workflows are years away.
Some people are really overreacting and everyone's just enabling them.
Lemmy is a lost cause for nuanced takes on "AI". It's all just rage now.
"yet this is what breaks this person's brain?".
"some people are really overreacting".
Sure, this little subset of the internet is aware that LLMs aren't going to cut the mustard. But the general population isn't, and that's the problem. Companies are forcing LLMs on staff and customers alike. Anyone suggesting that this is being managed appropriately and sustainably is either ill-informed or intentionally misleading people.
Meh, all of it is very unconvincing. The energy use is quite tiny relative to everything else, and in general I don't think energy is a problem we should be solving with usage reduction. We can have more than enough green energy if we want to.
I think framing them as "fears" is dishonest.
What is it then? Imagine losing sleep over LLMs while living in a rich country stuffing yourself with pointless entertainment and fast food lol
I'll take my downvotes and say I'm pro-AI
we need some other opinions on lemmy
You know it’s ok for everyone to dislike a thing if the thing is legitimately terrible, right? Like dissent for dissent’s sake is not objectively desirable.
It is not though
I'm not really pro or anti. I use it with appropriate skepticism for certain types of things. I can see how it is extremely problematic in various ways. I would prefer it didn't exist, but it does provide utility and it's not going away. I find a lot of the anti crowd to often be kind of silly and childish, in a similar way to the extremists in the pro crowd; you can tell they really want to believe what they believe, and critical thinking doesn't seem to come into it much.
I mean, yeah, I'm not for it in the sense that "all uses are correct," but rather that it's a valid technology for some use cases, and I have a positive feeling towards it. Big companies replacing artists to make more profit sucks, for example.
Well, you can support anything, for example even the Nazis who shot Jewish children.
The only thing that awaits you is the consequences, the rest is not important, it is your choice.
Also pro-child-slavery. Women should be locked in boxes all day. Billionaires get to pee in everyone's food at the table.
These are the counterpoints that make a robust debate!
I don't want to start a debate. Nobody will change their opinion; I'd rather not waste time on this.
Early computers were massive and consumed a lot of electricity. They were heavy, prone to failure, and wildly expensive.
We learned to use transistors and integrated circuits to make them smaller and more affordable. We researched how to manufacture them, how to power them, and how to improve their abilities.
Critics at the time said they were a waste of time and money, and that we should stop sinking resources into them.
Making machines think for you, badly, is a lot different from having machines do computation with controlled inputs and outputs. LLMs are a dead end in the hunt for AGI, and they actively make us stupider and are killing the planet. There's a lot to fucking hate on.
I do think that generative ai can have its uses, but LLMs are the most cursed thing. The fact that the word guesser has emergent properties is interesting, but we definitely shouldn't be using those properties like this.
Even if you accept that LLMs are a necessary, but ultimately disappointing, step on the way to a much more useful technology like AGI there's still a very good argument to be made that we should stop investing in it now.
I'm not talking about the "AI is going to enslave humanity" theories either. We already have human overlords who have no issues doing exactly that and giving them the technology to make most of us redundant, at the precise moment when human populations are higher than they've ever been, is a recipe for disaster that could make what's happening in Gaza seem like a relaxing vacation. They will have absolutely no problem condemning billions to untold suffering and death if it means they can make a few more dollars.
We need to figure our shit out as a species before we birth that kind of technology or else we're all going to suffer immensely.
Even if we had an AGI that gave us the steps to fix the world, prevent mass extinction, and get the US to stop all wars, it wouldn't make a difference, because those in charge simply wouldn't listen to it. In fact, generative AI already gives you answers about peace and slowing climate change based on real academic work, and those in charge ignore both the AI they claim to trust and the scholars who spend their whole lives finding those solutions.
Have you heard of these things called humans? I think this is more a reflection of them. Books ate trees and corrupted the youth, tv rotted your brain and made you go blind, the internet made people lazy. Wait until I tell you about gasp auto-correct or better yet leet speak! The horror. Clearly we are never recovering from either of those. In fact, I’m speaking to you now in emojis. And wait until you learn about clutches pearls Wikipedia— ah the horror!
Is tech and its advancements perfect? No. Can people do better? Yes. Are criticisms important? Sure are. But panic and fighting a rising tech? You’re probably not going to win.
Spend time educating people on how to be more ethical with their tech use and absolutely pressuring companies to do the same. Taking a club to a computer didn’t stop the rise of the word processor or the spread of Wikipedia madness. But we can control how we consume and relate to tech and what our demands of their creators are.
PS— do you even know how to read and write cursive? > punchable smug face goes here. <
I mean - propaganda has in fact gotten us to the shittiest administration possible. AI hype is off-the-scale for anything - more than The Space Race, more than, well, anything. And it isn’t even useful!
It’s far and away a different thang than a new medium about, by, and for humans.
I agree. I would say we’re at the cusp of a new technological revolution. Our world is changing fundamentally and rapidly.
Probably how people felt who were against the development of the printing press or the internet. It's a good tool. Often used wrong, but a good tool if used right, with humans actually checking and fixing the results. It shouldn't replace art too much, though, since that's something people actually enjoy.
Probably how people felt who were against the development of the printing press or the internet.
No. No, it's such a weird take. No.
Right with you buddy, I’m so sick of half-thought-out analogies that implicitly equate two things that are not the same.
I remember the 90s, and I can warmly assure younger readers that no, there was nothing even approaching this kind of backlash from luddites. At worst there were a few people who quite correctly called out the dotcom bubble.
Reminder that the Lemmy population skews older, so this feeling is normal. When I was in school, I noticed kids were more likely to get excited about new things, while teachers usually clung to the past and their memories for the sense of nostalgia. Even the liberal-leaning teachers (well, they don't announce their politics, because that would be very unprofessional, but they use progressive language when teaching and respect pronouns, so they're probably liberal) are kind of reluctant when it comes to new tech. I see this divide even amongst teachers themselves: the older the teachers are, the more likely they are to prefer paper textbooks, while the younger ones tend to incorporate more tech (like internet research during class) into their teaching.
Edit: But I want to add that I think this "AI" thing is going to be a bit different. The printing press, radio, and internet merely changed the medium for transferring information, while "AI" is also going to remove much of the "human" aspect of disseminating information, so it has the potential to cause more harm compared to, say, the internet.