157 comments
  • Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff, followed by the subsequent news reports about how it told us we should eat rocks (or some variation thereof), there's been no impact whatsoever in my personal life.

    In my professional life as an ICT person with over 40 years' experience, it's helped me identify which people understand what it is and, more specifically, what it isn't (intelligent), and respond accordingly.

    The sooner the AI bubble bursts, the better.

    • I fully support AI taking over stupid, meaningless jobs if it also means the people that used to do those jobs have financial security and can go do a job they love.

      The software developer AFAS has decided to give certain employees one day a week off with pay, and let AI do their job for that day. If that's the future AI can bring, I'd be fine with that.

      The caveat is that that money has to come from somewhere, so their customers will probably foot the bill, meaning that other employees elsewhere will get paid less.

      But maybe AI can be used to optimise business models and make better predictions. Less waste means less money spent on processes, which can mean more money for people. I also hope AI can give companies a better distribution of money.

      This, of course, is exactly what stakeholders and decision makers do not want, for obvious reasons.

      • The thing that's stopping anything like that is that the AI we have today is not intelligence in any sense of the word, despite the marketing and "journalism" hype to the contrary.

        ChatGPT is predictive text on steroids.

        Type a word on your mobile phone, then keep tapping the next predicted word and you'll have some sense of what is happening behind the scenes.

        The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.

        It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and has many more texts to sample from than you or I.

        There is no understanding of the text at all, no true or false, right or wrong, none of that.
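        To make the "predictive text on steroids" analogy concrete, here is a toy next-word predictor (my own illustrative sketch, not how an LLM actually works internally: it just counts which word most often follows another in a tiny made-up corpus):

        ```shell
        # Toy "predictive text": pick the word that most often follows the
        # previous word in a tiny corpus. Real LLMs use neural networks over
        # huge corpora, but the "guess the next word" framing is the same.
        corpus="the cat sat on the mat the cat ran on the road"

        predict_next() {
            prev="$1"
            printf '%s\n' $corpus | awk -v p="$prev" '
                NR > 1 && last == p { count[$0]++ }  # tally words that follow p
                { last = $0 }
                END {
                    best = ""; bestc = 0
                    for (w in count) if (count[w] > bestc) { bestc = count[w]; best = w }
                    print best
                }'
        }

        predict_next "the"   # prints "cat": the most frequent follower of "the"
        ```

        There is no understanding anywhere in that pipeline, just counting; the point above is that scaling the statistics up does not, by itself, change that.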

        AI today is Assumed Intelligence

        Arthur C. Clarke said it best:

        "Any sufficiently advanced technology is indistinguishable from magic."

        I don't expect this to be solved in my lifetime, and I believe that the current methods of "intelligence" are too energy intensive to be scalable.

        That's not to say that machine learning algorithms are useless; there are significant positive and productive tools around, ChatGPT and its Large Language Model siblings notwithstanding.

        Source: I have 40+ years experience in ICT and have an understanding of how this works behind the scenes.

      • and let AI do their job for that day.

        What? How does that work?

  • Never explored it at all until recently, I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre's entrance, with the previously described characters reacting in their own ways.

    I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I'm short on time.

    My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it's a super valuable tool.

  • I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.

    And, even if they pull it, I don't think I'll ever go back. No more cloud drives, no more 'apps'. Webpages and local files on a file share I own and host.

  • I worked for a company that did not govern AI use. It was used for a year before they were bought.

    I stopped reading emails because they were absolute AI generated garbage.

    Clients started to complain, and one even left because they felt they were no longer a priority for the company. They were our 5th largest client, with an MRR of $300k+.

    they still did nothing to curb AI use.

    they then reduced the workforce in the call center because they implemented an AI chat bot and began to funnel all incidents through it first before giving a phone number to call.

    company was then acquired a year ago. new administration banned all AI usage under security and compliance guidelines.

    today, new company hired about 20 new call center support staff. Customers are now happy. I can read my emails again because they contain competent human thought with industry jargon and not some generated thesaurus.

    overall, I would say banning AI was the right choice.

    IMO, AI is not being used in the most effective ways and causes too much chaos. cryptobros are pushing AI to an early grave because all they want is a cash cow to replace crypto.

  • It's making the impact of bots more polarizing, turning social media into a self-radicalizing tool.

  • It's erased several tech jobs and replaced some help-forum commenters with bots to pretend their communities are alive. When you read their comments or 'suggestions' you can clearly tell this isn't someone trying to help; it's just a bot posting garbage pretending to help.

  • i've used it fairly consistently for the last year or so. i didn't actually start using it until chatgpt 4 and when openai offered the $20 membership

    i think AI is a tool. like any other tool, your results vary depending on how you use it

    i think it's really useful for specific intents

    example, as a fancy search engine. yesterday I was watching Annie from 1999 with my girlfriend and I was curious about the capitalist character. i asked chatgpt the following question

    in the 1999 hit movie annie, who was the billionaire mr warbucks supposed to represent? were there actually any billionaires in the time period? it's based around the early 1930s

    it gave me context. it showed examples of the types of capitalist the character was based on. and it informed me that the first billionaire was in 1916.

    very useful for this type of inquiry.

    other things i like using it for are to help with coding. but there's a huge caveat here. some things it's very helpful for... and some things it's abysmal for.

    for example i can't ask it "can you help me write a nice animation for a react native component using reanimated"

    because the response will be awful and won't work. and you could go back and forth with it forever and it won't make a difference. the reason is it's trained on a lot of stuff that's outdated so it'll keep giving you code that maybe would have worked 4 years ago. and even then, it can't hold too much context so complex applications just won't work

    BUT certain things it's really good at. for example, i needed to write a script for work. i use fish shell but sometimes i don't know the proper syntax or everything fish is capable of

    so I ask

    how to test, using fish, if an "images.zip" file exists in $target_dir

    it'll pump out

        if test -f "$target_dir/images.zip"
            echo "File exists."
        else
            echo "File does not exist."
        end

    which gives me what i needed in order to place it into the script i was writing.

    or for example if you want to convert a bash script to a fish script (or vice versa), it'll do a great job
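    for instance, the earlier fish snippet translated to plain bash looks like this (a hand-written equivalent for illustration; target_dir here is just a stand-in directory):

    ```shell
    # bash version of the fish file-existence check; the logic is identical,
    # only bash's if/then/fi syntax differs from fish's if/end.
    target_dir="$(mktemp -d)"   # fresh empty directory, so the answer is predictable

    if [ -f "$target_dir/images.zip" ]; then
        echo "File exists."
    else
        echo "File does not exist."
    fi
    ```

    run as-is, this prints "File does not exist.", since the temp directory starts out empty.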

    so tldr:

    it's a tool. it's how you use it. i've used it a lot. i find great value in it. but you must be realistic about its limitations. it's not as great as people say (at its core it's a fancy search engine), but it's also not as bad as people say.

    as for whether it's good or bad for society, i think good. or at least will be good eventually. was the search engine a bad thing for society? i think being able to look up stuff whenever you want is a good thing. of course you could make the argument kids don't go to libraries anymore.. and maybe that's sorta bad. but i think the trade-off is definitely worth it

  • After 2 years it's quite clear that LLMs still don't have any killer feature. The industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes them even more boring or useless.

    Personally I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for doing labs (training), like AWS policies, or to build commands from CLI flags, small things like that. For code it gets things wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others than to simply write my own. I don't use it for anything that has to do with communication; I find that unnecessary and disrespectful, since it's quite clear when the output is from an LLM.

    For these reasons, I generally think it's a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against those people in my company who want to introduce "AI" (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don't understand how obtuse people can be in thinking that email and presentations are good use cases for LLMs. The last thing we need is even longer useless communication, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people find it easier to envision shortcutting bullshit processes via LLMs than simply changing or removing them.

  • Man, so much to unpack here. It has me worried for a lot of the reasons mentioned: The people who pay money to skilled labor will think "The subscription machine can just do it." And that sucks.

    I'm a digital artist as well, and while I think genAI is a neat toy to play with for shitposting, or just "seeing what this dumb thing might look like", or generating "people that don't exist", and it's impressive tech, I'm not gonna give it ANY creative leverage over my work. Period. I still take issue with where it came from, how it was trained, and the impact it has on our culture and planet.

    We're already seeing the results of that slop pile generated from everyone who thought they could "achieve their creative dreams" by prompting a genie-product for it instead of learning an actual skill.

    As for actual usefulness? Sometimes I run a local model for funsies and just bounce ideas off of it. It's like a parrot combined with a "programmer's rubber ducky." Sometimes that gets my mind moving, in the same way "autocomplete over and over" might generate interesting thoughts.

    I also will say it's pretty decent at summarizing things. I actually find it somewhat helpful when YouTube's little "ai summary" is like "This video is about using this approach taking these steps to achieve whatever."

    When the video description itself is just like "Join my Patreon and here's my 50+ affiliate links for blinky lights and microphones" lol

    I use it to explain concepts to me in a slightly different way, or to summarize something for which there's a wealth of existing information.

    But I really wish people were more educated about how it actually works, and there's just no way I'm trusting the centralized "services" for doing so.

  • I used it once to write a polite "fuck off" letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it on a story just made it bland and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can't imagine people just using whatever it spits out.

    • just made it bland, and simplified

      Not always, but for the most part, you need to tell it more about what you're looking for. Your prompts need to be deep and clear.

      "change it to a relaxed tone, but make it make me feel emotionally invested, 10th grade reading level, add descriptive words that fit the text, throw in an allegory and some metaphors." The more you tell it, the more it'll do. It's not creative. It's just making the text fit whatever you ask it to do. If you don't give enough direction, you'll just get whatever the random noise rolls, which isn't always what you're looking for. It's not uncommon to need to write a whole paragraph about what you want from it. When I'm asking it for something creative, sometimes it takes half a dozen change requests. Once in a while, it'll be so far off base that I'll clear the conversation and just try again. The way the random works, it will likely give you something completely different on the next try.

      My favorite thing to do is give it a proper outline of what I need it to write, set the voice, tone, objective, and complexity. Whatever it gives back, I spend a good solid paragraph critiquing it. when it's > 80% how I like it, I take the text and do copy edits on it until I'm satisfied.

      It's def not a magic bullet for free work. But it can let me produce something that looks like I spent an hour on it when I spent 20 minutes, and that's not nothing.

  • It's my rubber duck/judgement-free space for homelab solutions. Have a problem? ChatGPT it, then Google its suggestions. Find a random command line? Ask ChatGPT what it does.

    I understand that I don't understand it. So I sanity-check everything going into and coming out of it. Every sensitive detail gets swapped for a placeholder, for security. Mostly, it's just a space to find out why my solutions don't work, find out what solutions might work, and a final check before implementation.

  • Been using Copilot instead of ChatGPT but I'm sure it's mostly the same.

    It adds comments and suggestions in PRs that are mostly useful and correct, I don't think it's found any actual bugs in PRs though.

    I used it to create one or two functions in golang, since I didn't want to learn its syntax.

    The most use I've gotten out of it is as a replacement for Google or Bing search. It's especially good at finding more obscure things in documentation that are hard to Google for.

    I've also started to use it personally for the same thing. Recently been wanting to startup the witcher 3 and remembered that there was something missable right at the beginning. Google results were returning videos that I didn't want to watch and lists of missable quests that I didn't want to parse through. Copilot gave me the answer without issue.

    Perhaps that's why Google and MS are so excited about AI: it fixes their shitty search results.

    • Perhaps that's why Google and MS are so excited about AI: it fixes their shitty search results.

      Google used to be fantastic for doing the same kinds of searches that AI is mediocre at now, and it went to crap because of search engine optimization and their AI search isn't any better. Even if AI eventually improves for searching, search AI optimization will end up trashing that as well.

  • It has helped tremendously with my D&D games. It remembers past conversations, so world building is a snap.

  • My broken brain thinks up a lot of dumb questions about science, history, and other topics. I use it all the time to answer those. Especially if it's a question that's a nuisance to look up on Wikipedia (though I still love Wikipedia). I like ChatGPT because of the interactive nature of it. And I often have dumb follow-up questions for it.

    It has also been a huge help when I get stuck on a coding or scripting task. Both at work and at home.

  • Work-wise, no impact so far, but I use it to write bullshit corpo-speak emails, tidy up CVs, and for things like game cheats, etc. It's banned now in my job and we have to use Copilot, but I don't, because it will send everything back to the company. So if I need it, I just use ChatGPT on my personal account and email the result to my work one.
