Microblog Memes @lemmy.world
Optional @lemmy.world

Is It Just Me?

349 comments
  • No, it's not just you or unsat-and-strange. You're pro-human.

    Trying something new when it first comes out or when you first get access to it is novelty. What we've moved to now is mass adoption. And that's a problem.

    These LLMs are automated mass theft with a just-good-enough regurgitation of the stolen data. That's unethical for the vast majority of business applications. And "good enough" is insufficient in most cases, like software.

    I had a lot of fun playing around with AI when it first came out. And people figured out how to do prompts I can't seem to replicate. I don't begrudge people for trying a new thing.

    But if we aren't going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful it's a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.

    • Regulate? That's what the leading AI companies are pushing for: they could clear the bureaucracy, but their competitors couldn't.

      The shit just needs to be forced to open source. If you steal content from the entire world to build a thinking machine, give back to the world.

      This would also crash the bubble and slow down the most unethical for-profits.

      • Regulate? That's what the leading AI companies are pushing for: they could clear the bureaucracy, but their competitors couldn't.

        I was referring to this in my comment:

        https://www.nbcnews.com/tech/tech-news/big-beautiful-bill-ai-moratorium-ted-cruz-pass-vote-rcna215111

        Congress decided not to go through with the AI-law moratorium. Instead they opted to do nothing, which is what the AI companies would prefer states do as well. Not to mention that the pro-AI argument appeals to the judgment of Putin, a man notorious for being surrounded by yes-men and his own state propaganda, and for the genocide of Ukrainians in pursuit of conquering Europe.

        “There’s growing recognition that the current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path,” OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn. “While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward.”

        The shit just needs to be forced to open source. If you steal content from the entire world to build a thinking machine, give back to the world.

        The problem is that, unlike Robin Hood, AI stole from the people and gave to the rich. The intellectual property of artists and writers was stolen, and the only way to give it back is to compensate them, which is currently unlikely to happen. Letting everyone see how the theft machine works under the hood doesn't provide compensation for the use of that intellectual property.

        This would also crash the bubble and slow down the most unethical for-profits.

        Not really. It would let more people get in on it. And most tech companies are already in on it. This wouldn't impose any costs on AI development. At this point the speculation is primarily about what comes next. If open source were going to burst the bubble, it would have happened when DeepSeek was released. We're still talking about the bubble bursting in the future, so that clearly didn't happen.

    • the bubble has burst or, rather, currently is in the process of bursting.

      My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn't work. And I'd say the majority of my clients are now scrambling to recover or simply to make it out the other end alive. Soon there's going to be nothing left to regulate.

      GPT-5 was a failure. The rumors I've been hearing are that Anthropic's new model will be a failure much like GPT-5. The house of cards is falling as we speak. This won't be the complete death of AI, but it is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat and they're getting desperate to find new sources; for a good while they've been quite literally eating each other's feces. They're now starting on Git repos of all things to consume. Codeberg can tell you all about that from this past week. This is why I'm telling people to consider setting up private Git instances and locking that crap down. If you're on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.

      But essentially the AI is starving. Companies have discovered that vibe coding and leveraging AI to build end to end didn't work. Nothing produced scales; it's all full of exploits or, in most cases, has zero security measures whatsoever. They all sank money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.

      the bubble is bursting.

  • Unfortunately the masses will do as they're told. Our society has been trained to do this. Even those that resist are playing their part.

    • On the contrary: society has repeatedly rejected a lot of ideas that industries have come up with.

      HD DVD, 3D TV, cryptocurrency, NFTs, LaserDisc, 8-track tapes, UMDs. A decade ago everyone was hyping up how VR would be the future of gaming, yet it's still a niche novelty today.

      The difference with AI is that I don't think I've ever seen a supply-side push this strong before. I'm not seeing a whole lot of demand for it from individual people. It's "oh, this is a neat little feature I can use," not "this technology is going to change my life" the way the laundry machine, the personal motor vehicle, the telephone, or the internet did. I could be wrong, but I think that as long as we can survive the bubble bursting, we will come out on the other side with LLMs being a blip on the radar. And one consequence will be that if anyone makes a real AI, they will need to call it something else for marketing purposes, because "AI" will be ruined.

      • AI's biggest business is (if not already, it will be) surveillance systems sold to authoritarian governments worldwide. Israel is using it in Gaza. It's both used internally and exported as a product by China. Not just cameras on street corners doing facial recognition, but monitoring the websites you visit, the things you buy, the people you talk to. AI will be used on large datasets like these to label people as dissidents, to disempower them financially, and to isolate them socially. And if the AI hallucinates in this endeavor, that's fine. Better to imprison 10 innocent men than to let 1 rebel go free.

        In the meantime, AI is being laundered to the individual consumer as a harmless if ineffective toy. "Make me a portrait, give me some advice, summarize a meeting," all things it can do if you accept some amount of errors. But given this domain of problems it solves, the average person would never expect that anyone would use it to identify the first people to pack into train cars.

      • VR was and is also still a very inaccessible tool for most people. It costs a lot of money and time to even get to the point where you're getting the intended VR experience and that is what it mostly boils down to: an experience. It isn't convenient or useful and people can't afford it. And even though there are many gamers out there, most people aren't gamers and don't care about mounting a VR headset on their cranium and getting seasick for a few minutes.

        AI is not only accessible and convenient, it is also useful to the everyday person, if the AI doesn't hallucinate like hell, that is. It has the potential to optimize workloads in jobs with a lot of paperwork, calculations and so on.

        I completely agree with you that AI is being pushed very aggressively in ways we haven't seen before and that is because the tech people and their investors poured a lot of money into developing these things. They need it to be a success so they can earn their money back and they will be successful eventually because everybody with money and power has a huge interest in this tool becoming a part of everyday life. It can be used to control the masses in ways we cannot even imagine yet and it can earn the creators and investors a lot of money.

        They are already making AI computers. According to some, they will entirely replace the kinds of computers we are used to today. From what I can understand, that would be preferable to the open AI setups we have currently, which are burning our planet to a crisp with the number of data centers needed to keep them active. Supposedly the AI computer will run everything locally on the laptop and will therefore demand fewer resources, but I'm so fucking skeptical about all this shit that I'm waiting to see how much energy a computer with an AI operating system will actually swallow.

        I'm too tech-ignorant to understand the ins and outs of what this and that means, but we are definitely going to have to accept that AI is here to stay, and the current setup, with open AIs and forced LLMs in every search engine, is a massive environmental nightmare. It probably won't stop or change a fucking lick, because people don't give a fuck as long as they are comfortable, and the companies are getting people to use their trash tech just like they wanted, so they won't stop it either.

      • HD DVDs weren't rejected by the masses; they were a casualty of Sony's vendetta over the loss of Beta and DAT, both of which were rejected by industry, not consumers (though both were later embraced by industry, and Beta even outlasted VHS). HD DVD would have won out for the same reasons Sony lost the previous format wars (insistence on licensing fees), except this time Sony bought Columbia and so had a whole library of video and a studio to make new movies to release exclusively on their format. Essentially the supply side pushing something until consumers accepted it, though to your point, not quite as bad as AI is right now.

        8-tracks and LaserDiscs were simply replaced by better formats (the Compact Cassette and Video CD/DVD, respectively). Each of them was also a replacement for a previous format, like reel-to-reel and CED.

        UMDs only died out because flash media got better and because Sony opted to use a cheaper scratch-resistant coating instead of a built-in case for later formats (like Blu-ray). Also, UMDs themselves were a replacement for, or at least inspired by, an earlier format called MiniDisc.

        Capitalism's biggest feat has been convincing people that everything is the next big thing and that nothing that came before is similar, when just about everything is a rinse and repeat, even LLMs… remember when Watson beat Ken Jennings?

    • See also: Cars, appliances, consumer electronics, movies, food, architecture.

      We are ruled by the market and the market is ruled by the lowest common denominator.

  • This is a great representation of why not to argue with someone who debates like this.

    Arguments like these are like Hydras. Start tackling any one statement that may be taken out of context, or have more nuance, or is a complete misrepresentation, and two more pop up.

    It sucks because true, good points get lost in the tangle.

    • For instance, there are soft science, social interaction areas where AI is doing wonders.

      Specifically, in the field of law, now that lawyers have learned not to rely on AI for citations, they are instead offloading hundreds of thousands or millions of pages of documents that they were never actually going to read, and getting salient results from allowing an AI to scan through them to pull out interesting talking points.

      Pulling out these interesting talking points, fact-checking them, and, you know, A/B testing the ways to present them to the jury with an AI has meant that many law firms are getting thousands or millions of dollars more out of a lawsuit than they anticipated.

      And you may be against American law for all of its frivolous plaintiffs' lawsuits or something, but each of these outcomes is decided by human beings, and there are real, lifelong damages that are being addressed by these lawsuits, or at least in some way compensated.

      The more money these plaintiffs get for the injuries that they have to live with for the rest of their lives, the better for them, and AI made the difference.

      Not that lawyers are fundamentally incapable or uncaring, but for every superstar lawyer on the planet, there are 999 who are working hard and just do not have the raw plot-armor Deus Ex Machina dropping everything directly into their lap that they would need to operate at that level.

      And yes, if you want to be particular, a human being should have done the work. A human being can do the work. A human being is actually being paid to do the work. But when you can offload grunt work to a computer and get usable results from it that improves a human's life, that's the whole fucking reason why we invented computers in the first place.

  • I absolutely agree that AI is becoming a mental crutch that a disturbing number of people are snatching up and hobbling around on. It feels like the setup of Wall-E, where everyone is rooted in their floating rambler scooters.

    I think the fixation on individual consumer use of AI is overstated. The bulk of the AI's energy/water use is in the modeling and endless polling. The random guy asking "@Grok is this true?" is having a negligible impact on energy usage, particularly in light of the number of automated processes that are hammering the various AI interfaces far faster than any collection of humans could.

    I'm not going to use AI to write my next adventure or generate my next character. I'm not going to bemoan a player who shows up to game with a portrait with melted fingers, because they couldn't find "elf wizard in bearskin holding ice wand while standing on top of glacier" in DeviantArt.

    For the vast majority of users, this is a novelty. What's more, it's a novelty that's become a stand-in for the OG AI of highly optimized search engines that used to fulfill the needs we're now plugging into the chatbot machine. I get why people think it sucks and abstain from using it. I get why people who use it too much can straight up drive themselves insane. I get that our cyberpunk-style waste management strategy is going to get one of the next few generations into a nightmarish blight. But I'm not going to hang that on the head of someone who wants to sit down at a table with their friends, look them in the eye, and say "Check out this cool new idea I turned into a playable character".

    Because if you're at the table and you're excited to play with other humans in a game about going out into the world on adventures, that's as good an antidote to AI as I could come up with.

    And hey, as a DM? If you want to introduce the Mind Flayer "Idea Sucker" machine that lures people into its brain-eating maw by promising to give them genius powers? And maybe you want to name the Mind Flayer Lord behind the insidious plot Beff Jezos or Mealon Husk or something? Maybe that's a good way to express your frustration with the state of things.

  • Is there a way for me to take a picture of food and find its nutritional values without AI? I sometimes ask duck.ai because, when making a tortilla for example, I don't know what the exact values would be: I can read the values for the tortilla itself, but I don't have a way to check the same for the meat and the other stuff I put in it.

    • Wow, I am old. This has never in my life been an issue? I just used a calorie counter and people’s own recipes for estimates. I guess that would be the old fashioned way of doing this and probably what AI is doing most of the time. Pulling a recipe, looking at the ingredients and quantities and spitting back some values. Granted it can probably do it far faster than we can. But, I got by with that method for decades…

      • The problem is that many of the things I have don't come in packaging with nutritional values and the like, so I need to use the internet for this, and AI is usually the fastest at explaining it, especially because English is not my first language and the food I'm eating (Balkan) is not well known in English.

    • You're probably just gonna have to get better at guesstimating (e.g. by comparing to similar pre-made options and their nutrition labels), or use an app for tracking nutrition that integrates with OpenFoodFacts (or a similar database, though most use OpenFoodFacts even if they have their own, too) and get a scale to weigh your ingredients.

      I don't really know of any other good ways to just take a photo and get a good nutritional read, and pretty much any implementation would use "AI" to some degree, though probably a dedicated machine-learning model rather than an LLM (which would use more power and water). But the method of just weighing out each part of a meal and putting it in an app works pretty well.

      Like, for me, I can scan the barcode of the tortillas I buy to import the nutrition facts into the (admittedly kind of janky) app I use (Waistline), then plop my plate on my scale, put in some ground beef, scan the barcode from the beef packaging, and then I can put in how many grams I have. Very accurate, but a little time-consuming. (The arithmetic behind it is sketched below.)

      Not sure if that's the kind of thing you're looking for, though.
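
      To make that "scale the per-100 g values by weight" step concrete, here's a minimal Python sketch of the arithmetic these apps do once they have label or database numbers. The ingredient names and values below are made-up placeholders, not real label data, and the sketch isn't tied to Waistline or OpenFoodFacts specifically.

      ```python
      # Estimate nutrition for a home-made dish by weighing each ingredient
      # and scaling its per-100 g values. All numbers are hypothetical.
      PER_100G = {
          # name:        (kcal, protein_g, carbs_g, fat_g) per 100 g
          "tortilla":    (310, 8.0, 50.0, 8.0),
          "ground_beef": (250, 26.0, 0.0, 17.0),
          "sour_cream":  (190, 2.5, 4.0, 18.0),
      }

      def totals(ingredients_grams):
          """Sum nutrition over {ingredient: grams} using per-100 g values."""
          total = [0.0, 0.0, 0.0, 0.0]
          for name, grams in ingredients_grams.items():
              for i, per_100g_value in enumerate(PER_100G[name]):
                  total[i] += per_100g_value * grams / 100.0
          return dict(zip(("kcal", "protein_g", "carbs_g", "fat_g"), total))

      # One wrap, with each part weighed on a kitchen scale:
      print(totals({"tortilla": 60, "ground_beef": 120, "sour_cream": 30}))
      ```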

      • Actually, I am using Waistline, but there are some foods I can't find, their nutritional values are hard to track down, and I am bad at guessing anything.

  • Have you heard of these things called humans? I think this is more a reflection of them. Books ate trees and corrupted the youth, TV rotted your brain and made you go blind, the internet made people lazy. Wait until I tell you about gasp auto-correct, or better yet, leet speak! The horror. Clearly we are never recovering from either of those. In fact, I'm speaking to you now in emojis. And wait until you learn about clutches pearls Wikipedia— ah the horror!

    Is tech and its advancements perfect? No. Can people do better? Yes. Are criticisms important? Sure are. But panic and fighting a rising tech? You’re probably not going to win.

    Spend time educating people on how to be more ethical with their tech use and absolutely pressuring companies to do the same. Taking a club to a computer didn’t stop the rise of the word processor or the spread of Wikipedia madness. But we can control how we consume and relate to tech and what our demands of their creators are.

    PS— do you even know how to read and write cursive? > punchable smug face goes here. <
