Do any non-corpos actually like AI slop?

I've found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?

I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.

281 comments
  • Personally I use it when I can't easily find an answer online. I still keep some skepticism about the answers given until I find other sources to corroborate, but in a pinch it works well.

    • Because of the way it's trained on internet data, large models like ChatGPT can actually work pretty well as a sort of first-line search engine. My girlfriend uses it like that all the time, especially for obscure stuff in one of her legal classes; it can bring up the right details to point you toward googling the correct document rather than muddling through really shitty library case-page searches.

  • Yes:

    • Demystifying obscure or non-existent documentation
    • Basic error checking of my configs/code: paste in the error, ask what the cause is, double check its work. In hour 6 of late-night homelab fixing, this can save my life
    • I use it to create concepts of art I later commission. Most recently I used it to concept an entirely new avatar, and I'm paying a pro to make it in their style
    • DnD/Cyberpunk character art generation; basically a this-person-does-not-exist website
    • Duplicate checking / spot-the-differences, like pastebin's "differences" feature, because the MMO I play releases prelim as well as full patch notes and I like to read the differences
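    The spot-the-differences use in that last bullet doesn't even need an AI; Python's standard library covers it. A minimal sketch, with made-up patch notes standing in for the real thing:

    ```python
    import difflib

    # Hypothetical prelim and full patch notes (game and notes invented
    # purely for this example).
    prelim = [
        "Fixed crash on login",
        "Nerfed fire mage damage",
        "New dungeon: Ember Keep",
    ]
    full = [
        "Fixed crash on login",
        "Nerfed fire mage damage by 10%",
        "New dungeon: Ember Keep",
        "Buffed healer mana regen",
    ]

    # unified_diff yields only the changed lines, like a pastebin diff view.
    diff = list(difflib.unified_diff(prelim, full, lineterm=""))
    for line in diff:
        print(line)
    ```

    Lines prefixed with `-` only appear in the prelim notes, `+` only in the full notes.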
  • Garbage in; garbage out. Using AI tools is a skillset. I've gotten great use out of both LLMs and generative AI; you just have to play to the tools' strengths.

    LLMs are language models. People run into issues when they try to use them for things that aren't language related. Conversely, they're wonderful for other tasks. I use one to tone-check things I'm unsure about, or feed it ideas and let it run with them in ways I wouldn't think to. It doesn't come up with much that's groundbreaking or new on its own, but I think of it as kind of a "shuffle" button: it takes what I have already largely put together and messes around with it till it becomes something new.

    Generative AI isn't going to make you the next Mona Lisa, but it can make some pretty good art. Once again, though, it requires a human to work with it. You can't just tell it to spit out an image and expect 100% quality, 100% of the time. Instead, it's useful to get a basic idea of what you want in place, then take it into a proper photo editor, inpainting, or some other kind of post-processing to refine it. I have some degree of aphantasia - I have a hard time forming and holding detailed mental images. This kind of AI approaches art in a way that finally kinda makes sense for my brain, so it's frustrating seeing it shot down by people who don't actually understand it.

    I think no one likes any new fad that's shoved down their throat. AI doesn't belong in everything. We already have a million chocolate chip cookie recipes, and ChatGPT doesn't have taste buds. Stop using this stuff for tasks it wasn't meant for (unless it's in a novelty "because we could" kind of way) and it becomes a lot more palatable.

    • This kind of AI approaches art in a way that finally kinda makes sense for my brain, so it’s frustrating seeing it shot down by people who don’t actually understand it. Stop using this stuff for tasks it wasn’t meant for (unless it’s a novelty “because we could” kind of way) and it becomes a lot more palatable.

      Preach! I'm surprised to hear it works for people with aphantasia too; that's awesome. I personally have a very vivid mind's eye and can often already imagine what I want something to look like, but I could never put it to paper in a satisfying way that didn't cost an excruciating amount of time. GenAI lets me do that, still with a decent amount of touch-up work, but in a much more reasonable timeframe. I'm making more creative work than I ever have because of it.

      It's crazy to me that some people at times completely refuse to even acknowledge such positives about the technology. They refuse to interact with it in a way that would reveal those positives, refuse to look at the more nuanced opinions of people who did interact with it, refuse even simple facts about how we learn from and interact with other art and material, and refuse legal realities like the freedom to analyze that allows this technology to exist (sometimes even actively fighting to restrict those legal freedoms, which would hurt more artists and creatives than it would help, and hand even more power to corporations and those with enough capital to sustain AI model creation themselves).

      It's tiring, but luckily it seems to be mostly an issue on the internet. Talking to people (including artists) in real life about it shows that it's a very tiny fraction that holds that opinion. Keep creating 👍

  • The image generators have been great for making token art for my dnd campaign. Other than that, no.

  • My primary use of AI is for programming and debugging. It's a great way to get boilerplate code blocks, bootstrap scripts, one-liner shell commands, regular expressions, etc. I've also often learned new things from it, because it ends up using a feature or approach I didn't know about.
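    As a concrete illustration of the regular-expression case, this is the kind of one-liner I'd have it draft and then verify by hand (the log format and pattern here are made up for the example):

    ```python
    import re

    # Pull a semantic version number out of a log line (hypothetical format).
    log_line = "app started, version 2.14.3, pid 4211"
    match = re.search(r"version (\d+\.\d+\.\d+)", log_line)
    version = match.group(1) if match else None
    print(version)  # 2.14.3
    ```

    The point isn't that this regex is hard; it's that an LLM gets you to a reviewable draft faster than paging through reference docs.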

    I also find it's a good tool for learning about new topics. It's very flexible at giving you a high-level summary and then digging deeper into the specifics of whatever interests you. Summarizing articles and long posts is also helpful.

    Of course, it's not always accurate, and it doesn't always work. But for me, it works more often than not and I find that valuable.

    Like every technology, it will follow the Gartner hype cycle. We are definitely in the "everything-AI" phase right now, but I'm sure things will calm down and people will find it valuable for a number of specific things.

  • I have a local instance of Stable Diffusion that I use to make art for MtG proxies. Prior to AI, my art was limited to geometric designs and edits of existing pieces. Integrating AI into my workflow has expanded my abilities greatly, and my art experience means I can do more with it than just prompt engineering.

  • An LLM (large language model, a.k.a. an AI whose output is natural-language text based on a natural-language text prompt) is useful for tasks where you're okay with 90% accuracy generated at 10% of the cost and 1,000% faster, and where the output will be used solely in-house by yourself rather than served to other people. For example, if your goal is to generate an abstract for a paper you've written, AI might be the way to go, since it turns a writing problem into a proofreading problem.

    The Google Search LLM that summarises search results is good enough for most purposes. I wouldn't rely on it for in-depth research, but like I said, it's 90% accurate and 1,000% faster. You just have to be mindful of this limitation.

    I don't personally like interacting with customer service LLMs, because they can only serve up help articles from the company's help pages. They're remarkably good at that task, but I don't need help pages; the reason I'm contacting customer service in the first place is that I couldn't find a solution in the help pages. It doesn't help me, but it will no doubt help plenty of other people whose first instinct is not to read the fing manual. Of course, I'm not going to pretend customer service LLMs are perfect. In fact, the most common problem with them seems to be that they go "off the script" and hallucinate solutions that obviously don't work, or pretend they've scheduled a callback with a human when you request it, when they actually haven't. This is a really common problem with any sort of LLM.

    At the same time, if you try to serve content generated by an LLM and then present it as anything of higher quality than it actually is, customers immediately detest it. Most LLM writing is of pretty low quality anyway and sounds formulaic, because to an extent, it was generated by a formula.

    Consumers don't like being tricked, and especially when it comes to creative content, I think that most people appreciate the human effort that goes into creating it. In that sense, serving AI content is synonymous with a lack of effort and laziness on the part of whoever decided to put that AI there.

    But yeah, for a specific subset of limited use cases, LLMs can indeed be a good tool. They aren't good enough to replace humans, but they can certainly help humans and reduce the amount of human workload needed.

  • Do I think it's generally useful? No, not at all.

    But for very specific purposes it's worth considering as an option.

    Text-to-image generation has been worth it to get a jumping-off point for a sketch, or to get a rough portrait for a D&D character.

    Regular old ChatGPT has been good on a couple of occasions for humor (again D&D related; I asked it for a "help wanted" ad in the style of newspaper personals and the result was hilariously campy).

    In terms of actual problem solving... there have been a couple of instances where, when Google or Stack Overflow hadn't helped, I asked it for troubleshooting ideas as a last resort. It did manage to pinpoint the issue once, but usually it just ends up that one of the topics or strategies it floats proves useful after further investigation. I would never trust anything factual without verifying, or copy/paste code from it directly, though.

  • There's someone I sometimes encounter in a Discord I'm in who makes a hobby of doing stuff with these models. From what I've seen, they do more with it than just entering a prompt and leaving it at that, at least partly because it doesn't generally give them something they're happy with initially, and they end up asking the thing to edit specific bits of it in different ways over and over until it does. I don't really understand what exactly this entails, since what they seem to most like making it do is code "shaders" that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they do and don't like about each, so I assume the guy must find something enjoyable in it all. That being said, using it as a sort of strange toy isn't really the most useful use case.

  • To me it's glorified autocomplete. I see LLMs as a potential way of drastically lowering the barrier to entry for coding, but I'm at a skill level where coercing a chatbot into writing code is a hindrance. What I need is good documentation and good IDE static analysis.

    I'm still waiting on a good, IDE-integrated, local model capable of more than autocompleting a line of code. I want it to generate the boilerplate parts of the code and then get out of my way while I solve problems.

    What I don't want, is a fucking chatbot.

  • If it's used in the specific niche use cases it's trained for, and as a tool rather than a final product, yes. For example: using AI to generate the background elements of a complete image. The AI elements aren't the focus and should be things that don't matter, but it might be better to use an AI element than to do a bare-minimum element by hand. This might be something like a blurred-out environment background behind a piece of hand-drawn character art - otherwise it might just be a gradient or solid colour because it isn't important, but having something low-quality is better than having effectively nothing.

    In a similar case, for multidisciplinary projects where the artists can't realistically work proficiently in every field required, AI assets may be good enough to meet the minimum requirements to at least complete the project. For example, I do a lot of game modding - I'm proficient with programming, game/level design, and 3D modeling, but not good enough to make dozens of textures and sounds that are up to snuff. I might be able to dedicate time to making a couple of the most key resources myself, or hire someone, but seeing as this is a non-commercial, non-monetized project, I can't buy resources regularly. AI can be a good-enough solution to get the project out the door.

    In the same way, LLM tools can be good if used as a way to "extend" existing work. It's generally a bad idea to rely entirely on them, but if you use one to polish a sentence you wrote, come up with phrasing ideas, or write your long if-chain for you, then it's a way of improving or speeding up your work.
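    The "long if-chain" chore is a good example of where that kind of generated boilerplate is harmless, because it's trivial to review. A toy sketch (all names invented for a hypothetical mod project):

    ```python
    # Map asset file extensions to loader categories: tedious to type,
    # trivial to verify, so a fine job for an LLM to draft.
    def loader_for(ext: str) -> str:
        if ext == ".png":
            return "texture"
        elif ext == ".ogg":
            return "sound"
        elif ext == ".obj":
            return "model"
        elif ext == ".json":
            return "config"
        else:
            return "unknown"

    print(loader_for(".ogg"))  # sound
    ```

    You stay in control of the design; the model just saves the typing.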

    Basically, AI tools as they are should be seen by those in or adjacent to the related profession as another tool in the toolbox, rather than as a way to replace the human.

  • So I'm really bad about remembering to add comments to my code, but since I started using GitHub's AI code assistant thing in VS Code, it makes contextual suggestions when you write a comment. I've even gone back to stuff I made ages ago and used it to figure out what the hell I was thinking when I wrote it back then 😆

    It's actually really helpful.

    I feel like once the tech-adoption curve settles down, it will be most useful in cases like that: contextual analysis.

  • I use LLMs for multiple things, and they're useful for things that are easy to validate, e.g. when you're trying to find or learn about something but don't know the right terminology or keywords to put into a search engine.

    I also use them for some coding tasks. They work OK for getting customized usage examples for libraries, languages, and frameworks you may not be familiar with (but will sometimes use old APIs or just hallucinate APIs that don't exist), and for "translation" tasks such as converting a MySQL query to a Postgres query. I tried out GitHub Copilot for a while, but found that it would sometimes introduce subtle bugs that I would initially overlook, so I don't use it anymore.

    I've had to create some graphics, and I'm not at all an artist, but I was able to use AUTOMATIC1111, ControlNet, Stable Diffusion, and GIMP to get usable results (an artist would obviously be much better, though). RemBG works pretty well for isolating the subject of an image and removing the background, too. Image upsampling, DLSS, DTS Neural:X, plant-identification apps, the blind-spot warnings in my car, image stabilization, and stuff like that are pretty useful as well.
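    For the MySQL-to-Postgres translation task, an LLM handles the general case; as a flavor of what's involved, here's a naive sketch covering just two common differences (backtick quoting and IFNULL vs. COALESCE), purely for illustration:

    ```python
    import re

    def mysql_to_postgres(query: str) -> str:
        # Backtick-quoted identifiers become double-quoted in Postgres.
        query = re.sub(r"`([^`]+)`", r'"\1"', query)
        # IFNULL() is MySQL-specific; COALESCE() is the standard equivalent.
        query = re.sub(r"\bIFNULL\(", "COALESCE(", query, flags=re.IGNORECASE)
        return query

    print(mysql_to_postgres("SELECT `name`, IFNULL(`nick`, 'n/a') FROM `users`"))
    ```

    A real translation needs actual SQL parsing, which is exactly the fuzzy, easy-to-validate work an LLM is decent at.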

  • To copy my own comment from another similar thread:

    I'm an idiot with no marketable skills. I put boxes on shelves for a living. I want to be an artist, a musician, a programmer, an author. I am so bad at all of these, and between having a full-time job, a significant other, and several neglected hobbies, I don't have time to get better at something I suck at. So I cheat. If I want art done, I could commission a real artist, or for the cost of one image I could pay for DALL-E and have as many images as I want (sure, none of them will be quite what I want, but they'll all be at least good). I could hire a programmer, or I could have ChatGPT whip up a script for me, since I'm already paying for it anyway because I want access to DALL-E for my art stuff. Since I have ChatGPT anyway, I might as well use it to help flesh out the lore for the book I'll never write. I haven't found a good solution for music.

    I have in my brain a vision for a thing that is so fucking cool (to me), and nobody else can see it. I need to get it out of my brain, and the only way to do that is to actualize it into reality. I don't have the skills necessary to do it myself, and I don't have the money to convince anyone else to help me do it. Generative AI is the only way I'm going to be able to make this work. Sure, I wish the creators of the content that was stolen to train the AIs were fairly compensated. I'd be okay with my ChatGPT subscription cost going up a few dollars if that meant real living artists got paid; I'm poor, but I'm not broke.

    These are the opinions of an idiot with no marketable skills.
