I really want to use AI like Llama, ChatGPT, Midjourney etc. for something productive. But over the last year, the only use I found for it was proposing places to go as a family on our Hokkaido, Japan journey. It made great proposals for places to visit.
But perhaps you guys have some great use cases for AI in your life?
I don't. Played with it a bit but as a capable writer and coder I don't find it fills a need and just shifts the effort from composition (which I enjoy) to editing and review (which I don't).
Mostly the same. I tried ChatGPT a few times to get it to generate some code, but mostly it produced code that didn't even compile and when I asked it to fix it, it created code that didn't compile in a different way. I enjoy writing code on my own a lot more than having to review some pre-generated code.
Though I use it as a glorified Google sometimes and that is not even so bad.
I don't and the energy consumption of public AI services is a stopper for "testing and playing around". I think I'll just wait until it takes over the world as advertised.
I would argue they already have. Just as cars used to be slow, inefficient, and loud compared to today, over time there will inevitably be improvements in how they run, but also improvements in dedicated hardware support. Timeline-wise, we are enjoying the hot new Model T, knowing eventually we will get to have a modern Honda Civic.
Nope, nothing. There honestly doesn't seem to be anything I'd use it for, and even then I wouldn't wanna support it as long as it uses data it's gotten by basically stealing. Maybe once that has gotten better I'll look more into it, but at the current moment I just don't have the heart to support it.
They take what we make, be it art or text, without our or anyone's consent; to me that's stealing. And yes, there are AI tools fully built on public-domain and open-source material, but those are, at the moment, few and far between.
This article by Kit Walsh, a senior staff attorney at the EFF, and this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries are a good place to start.
Summarising articles / extracting information / transforming it according to my needs. Everyone knows LLM-based summaries are great, but not many folks utilise them to their full extent. For instance, yesterday Sony published a blog piece on how a bunch of games were discounted on the PlayStation store. It was a really long list that I couldn't be bothered reading, so I asked ChatGPT to display just the genres I'm interested in, sorted by popularity. Another example is parsing changelogs for software releases. Some of them are really long (and not sorted properly - maybe just a dump of commit messages), so I'd ask it to summarise the changes, or maybe only show me new feature additions, any breaking changes etc.
Translations. I find ChatGPT excellent at translating Asian languages - especially all the esoteric terms used in badly-translated Chinese webcomics. I feed in the pinyin word and provide context, and ChatGPT tells me what it means in that context, and also provides alternate translations. This is 100 times better than just using Google Translate or whatever dumb dictionary-based translator, because context is everything in Asian languages.
Oh, that reminds me of another use last year. I had it translate some official divorce papers from Korean to German and then had a human read through it and give it a stamp of approval. Paid $5 for the stamp instead of $70 for the translation.
Naming things in programming is a solved problem now. You can just name it Thingy, then ask Copilot Chat what it should be called once you're done implementing it.
I use it to generate code documentation because I'm incapable of documenting things without sounding like a condescending ass. Paste in a function, tell it to produce docstrings and doctests, then edit the hell out of it to sound more human and use actual data in the tests.
It's also great for readmes. I have a template that I follow for that and only work on one section at a time.
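To give a sense of the docstring/doctest workflow: paste in a function, ask for docstrings and doctests, then humanize the output. A minimal made-up example of the kind of result I keep after editing (the function itself is hypothetical, just for illustration):

```python
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items.

    >>> chunk([1, 2, 3, 4, 5], 2)
    [[1, 2], [3, 4], [5]]
    >>> chunk([], 3)
    []
    """
    # Step through the list in strides of `size`, slicing out each chunk.
    return [items[i:i + size] for i in range(0, len(items), size)]
```

The model drafts the prose and the doctest cases; the edit pass is mostly about replacing its made-up example data with realistic values and toning down the wording.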
Templates in sections are where it shines. I set up a template for giving information about a song: tempo, scales used and applicable overlapping ones, and other misc stuff. It's really nice for just getting going, and it's yet to be inaccurate. It's like having a fast database that's mostly accurate. I do scrutinize it, but honestly, even if it were wrong one day, it's just music, and a "wrong" scale can only be so wrong anyhow.
I used to spend hours agonizing over documenting things because I couldn't get the tone right, or I over-explained, or some other stupid shit.
Now I give my llamafile the code, it gives me a reasonable set of documentation, I edit the documentation because the LLM isn't perfect, and I'm done in 10 minutes.
Over-explaining is my biggest issue. I'm entirely self-taught, and the trash quality of certain software with non-descriptive variable and function names steered me towards clearly (sometimes verbosely) naming things. That has the unfortunate side effect of repetition when documenting, and it comes across as sarcastic or condescending when proofreading.
It's far easier to have a machine do it than to second-guess every sentence.
You mentioned a llamafile, is that offline? I'm using GPT-4 at the moment because my partner has a subscription. If so, I maaaay have to check it out ^^
Ditto, although probably not in the same way you mean :D
I've actually noticed that I respond stronger to erotic short stories than straight up videos or images, so I use AI for basically erotic fantasy chatting. Some of them can actually generate images to show surroundings or chars during conversations and weave them into the chat.
I find them neat, but there's just too many issues I can't overlook.
The environmental impact of these technologies is immense, and growing exponentially.
A vast amount of the training data used for the big LLMs and image generators is not in the public domain, which is at best ethically grey and at worst just blatantly exploiting artists and other professionals.
If there existed some alternatives to the big names that avoided both of these issues, I'd love to use them for code autocomplete and image generation for ttrpgs, but as it stands the moral cost is too high.
Mostly for finding information that for whatever reason can be difficult to find using search engines. For example, I've used ChatGPT to ask spoiler-free questions about plot points in books I'm reading, which has worked rather well. It hasn't spoiled me yet, but rather tells me that giving more information would be a spoiler.
Last time I tried to look something up on Google, carefully, I got a massive spoiler for the end of the entire book series.
I also use it for code-related questions at times, but very rarely, and mostly when using a language I'm not used to. Such as when I wrote an expect script for the first (and perhaps only) time recently.
So many times I wanted to know the name of an actor who played a character after the first episode, and the top result was something like "[Character Name] (deceased)" or "Villain: [Character Name]."
I've found it useful for TTRPGs too. Art generators are certainly helpful for character portraits, I also find ChatGPT can be useful for lots of other things. I've had pretty mediocre results trying to get it to generate a whole adventure but if you give it tight enough parameters then it can flesh out content for you - ranging from NPC name ideas, to ideas for custom magic items, to whole sections of dialogue.
You can give it a plot hook you have in mind and ask it to generate ideas for a three-act structure and encounter summary to go with it (helpful when brainstorming the party's next adventure), or you can give it an overview of an encounter you have in mind and ask it to flesh out the encounter - GPT4 is reasonably good at a lot of this, I just wouldn't ask it to go the whole way from start to finish in adventure design as it starts to introduce inconsistencies.
You also need to be ready to take what it gives you as a starting point for editing rather than a finished product. For example, if I ask it to come up with scene descriptions in D&D, it has a disproportionate tendency to make things 'bioluminescent' - little tells like that show it's AI-generated.
Overall - you can use it as a tool for a busy DM that can free you up to focus on the more important aspects of designing your adventure. But you need to remember it's just a tool, don't think you can outsource the whole thing to it and remember it's only as helpful as how you try to use it.
One of my favorite things to do is pass my speech into it and have it rewrite with fog index "#". Really helps with speaking to varied audiences about the same topic.
I use it all the time to write Microsoft Excel and Microsoft PowerApps formulas. I use it to draft and re-write e-mails. I use it to come up with ideas and brainstorm.
I just gave it the first bit and two text input fields initially and then asked it to add the remainder for me instead of hitting copy paste and changing the numbers a dozen times.
Probably saved me 5 minutes, but I do this kind of thing fairly regularly so it's probably saving me a half-hour to an hour per week on formulas alone.
I find a ton of uses for quick Python scripts hammered out with Bing Chat to get random stuff done.
It's also super useful when brainstorming and fleshing out stuff for the tabletop roleplaying games I run. Just bounce ideas off it, have it write monologues, etc.
I use it quite a bit. I don’t trust big companies who commercialize AI so I run my AIs locally on my old retired gaming desktop that I’ve turned into a homelab/media server.
I use Kobold.AI to self-host an LLM like ChatGPT (Dolphin-Mistral7b, if you're curious). I mainly use it for low-effort knowledge searches for stuff that is easier typed out long and descriptive (since Google struggles with this). Since it's AI, I have to be careful about what I search - I've seen it make stuff up - but for the majority of what I use it for (programming syntax, Linux troubleshooting, general questions) it's pretty good.
I also have Stable Diffusion running as well using the ICBINP model (which is pretty decent for photorealistic images). I use this AI to generate social media display pictures and porn :) it’s fun because it’s a surprise at what you’re going to get but sometimes it generates absolute horrors. Anatomical horrors. Those are genuinely horrific. Other times it’s really good.
How do you set up stable diffusion to run locally? I have been trying out llama.cpp for text and was looking for a similarly easy tool to try image generation.
I’ve found it useful for getting approaches to programming projects. Rarely does it completely solve my problems, but it keeps me headed in the right direction.
I’m also partway through making my first ARG and it’s super useful for generating ideas, especially when I feed it my established lore because it can keep ideas within that universe.
I’ve found overall, it’s best to use it to fill in the gaps on ideas I have in general. I theoretically could make all of the content myself from scratch, but I’m honestly terrible at all the little details in many cases. It allows me to not dwell on the little stuff.
I've used it to make specific images for work proposals that stock sources may not have. Sometimes for fun, I vary it so it's in the style of a cartoon or a Japanese woodcut.
The only practical thing I have found I can do with AI is brainstorm ideas (or rather expand upon little ideas I have but don't know where to go after) or figure out what's wrong with a snippet of code when I can't figure it out on my own.
@jeena I only use DeepL to translate and that's it. I also started taking notes in .md files, so that could make for a good use case in the future if there were an AI I could use without connecting to the internet (e.g. one that could only tell me stuff based on the files I've got). Otherwise I'm pretty reticent about AI. Perhaps I've watched too many fiction movies, but I'm afraid it will become too sentient and somehow escape human oversight, thus creating havoc in our lives.
A lot of translation and summarisation. ChatGPT is extremely good at absorbing a whole mix of comments in different languages and summarising them in English (or whatever other language).
For programming I don't use it so much anymore because it hallucinates too much, calling APIs that don't even exist. And when I lower the temperature the output is too sparse.
I'm also trying to build an assistant that can also communicate proactively (I intend to auto-prompt it when things happen and then evaluate if it should cause a message to me). But I need to get a local LLM going for that because running that through the ChatGPT API will be too costly.
Also, a replacement for some of my web searches. Sometimes I just want to know something and it's refreshing that it can give me an answer (even though it does need to be validated, it's much easier to do that when you know what you're looking for!)
So far, there have been two interesting uses I've seen for ChatGPT.
One is I've used it to help me write regular expressions in the very rare time I need to for my job.
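The kind of thing I mean (a made-up example): asking for a pattern to pull ISO-style dates out of text, then sanity-checking what it hands back on a few real lines before trusting it.

```python
import re

# A pattern a chatbot might suggest for ISO-style dates (YYYY-MM-DD).
date_pattern = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

line = "job finished 2024-03-15, next run 2024-04-01"
# findall returns one (year, month, day) tuple per match; rejoin them.
dates = ["-".join(m) for m in date_pattern.findall(line)]
```

Since regex only comes up rarely, the review step is cheap: run the pattern over a couple of representative inputs and eyeball the matches.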
The other is kind of cool but also kind of troubling. I've come across a couple of therapy-style chat bots that are essentially just reading off a list of "here's what to do for XYZ".
I've tested them a bit, and I'm 1) concerned about who gets access to the information shared, 2) worried about if/when these kinds of bots will be used to manipulate people in a negative way, and 3) wary of the possibility of a bot replying in a bad way that could make an issue worse for someone.
Overall, I like the idea of them. I find it's hard to process information if it's coming directly from myself, or accept compassion from myself. So funny enough, these chat bots actually work really well in that respect.
In some cases, I've had better discussions than I have had with actual therapists, which is funny but also sad.
So while there's some troubling possibilities, I think there's a lot of positives that I've seen from my time with it.
Out of, say, a year, I have used it once to help put a work quote into better formatting. The rest of the time I use it solely to suggest films I would enjoy based on a previously watched list - it is actually good at that.
I'm a bit disappointed by the practical uses, but I still get some value out of AI.
I sometimes use ChatGPT to tweak existing SQL scripts at work, and as a troubleshooting assistant. I also use the tool Ultimate Vocal Remover to extract stems from songs, mainly to make myself instrumentals to practice singing over.
Those are really the only things I do regularly, despite trying different self-hosted AI tools.
Most are cool but not very useful.
I'm using Claude (subbed) to help me do qualitative coding and summarizing within a very niche academic framework. I was encouraged to try it by an LLM researcher and frankly I'm happy with the results. I am using it as a tool to assist my work, not replace it, and I'm trying to balance the bias and insights of the tool with my own position as a researcher.
On that note, if anyone has any insights or suggestions to improve prompts, tools, or check myself while I tinker, please, tell me.
I've only used DuckDuckGo's implementations of GPT and Claude. I haven't really found a use case yet. I don't trust it enough for queries related to things I don't understand (gaps in my knowledge) and would rather solve these problems or learn these skills through existing sources of information that I know have had at least some level of human refinement/vetting. Personally I enjoy the challenge of problem solving in life, particularly when the solution involves learning a new skill that I can utilise again in the future. I find it interesting that AI is advertised as being able to maximise our capabilities as humans, because it appears to be used for the complete opposite in most cases. People want to use their brains less and just be spoonfed the answers.
I've been using ChatGPT in conjunction with search engines just to find things I need. For instance, I did an April Fools presentation for a work meeting and needed humorous real-life legal stories, so the AI was able to provide suggestions.
I also use it for simple tasks, like organizing info into a table.
Mainly, though, my reason for using it is that, since I work in tech, I'm going to need to know how to use it well, and the best way to do that is being hands-on.
General-purpose LLMs are starting to replace everyday queries I used to pose to Google. Perplexity can be quite good for this.
Copilot as enhanced autocomplete when writing code. A particularly good use-case is writing tests: with a few test cases already written, a sufficiently good test name will most often generate a well-written test case.
LLMs for lazy generation of SQL queries can sometimes be quite nice.
Writing assistance for things I find myself struggling to get written by myself. A writing unblocking tool, if you will.
It's reducing the effort and time I have to put into some things, and I appreciate that. It's far from perfect, but it doesn't have to be perfect to be useful.
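On the test-writing point: with a couple of cases already in the file, a descriptive name is usually prompt enough for the autocomplete. A rough sketch of what that looks like (`slugify` is a toy function I made up for illustration):

```python
def slugify(title: str) -> str:
    # Toy function under test: lowercase the words and join with hyphens.
    return "-".join(title.lower().split())

def test_slugify_lowercases_words():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_repeated_spaces():
    # With the case above already in context, a name this specific is
    # often all the autocomplete needs to fill in the body correctly.
    assert slugify("a   b") == "a-b"
```

The existing cases give it the pattern, and the name states the behaviour, so the generated body is mostly right; it still gets a review pass like everything else.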
I've found it helpful at work for things like preparing agendas for meetings, or creating an outline of a presentation or document I need to write.
I've also found it helpful when I'm trying to Google something where I need to be pretty specific and then I can't find exactly what I mean by searching.
Asking extremely niche scientific questions: I don't depend on these answers, but the answer usually contains the specific terminology I can then search to find what I was actually looking for. I have learned a lot about the properties of metals and alloys this way, and about what the planet could look like with different compositions.
Re-phrasing things: At work when I'm drained and out of patience I can tell that what I'm writing in my emails is not really appropriate, so I have GPT re-phrase it. GPT's version is typically unusable of course but it kicks my brain in the direction of re-phrasing my email myself.
Brainstorming: The program has endless patience for my random story-related questions and gives me instant stupid or cliche answers. This is great for me because part of my creative process since I was a kid has been seeing in media something that was less than satisfying and my brain flying into all the ways I could have done it better. I ask the program for its opinion on my story question, say "no idiot, instead:" and what comes after is the idea I was looking for from my own mind. Sometimes by total chance it has a good suggestion, and I can work with that too.
Fun uses which are less common:
Comedy use: I once had it generating tweets from Karl Marx about smoking weed every day. The program mixed Marxist philosophy and language with contemporary party music to endlessly amusing results. Having historical figures with plenty of reference material from their writings opining on various silly things is very funny to me, especially when the program makes obvious mistakes.
Language Manipulation: If some philosophical text which was written to be deliberately impenetrable is getting too annoying to read, the program is decent at translating. If I plug in a block of text written by Immanuel Kant and have the program re-write it in the style of Mark Twain, the material instantly becomes significantly easier to understand. Re-writing it in the style of stereotypical gen-z is hilarious.
Nothing but have it write stories (not shared or used for anything but just for fun). That, and come up with names for things since I struggle with that.
Almost nothing. I sometimes use it to rephrase a question or answer. I refuse to become dependent on AI or contribute to it more than I already unwittingly have.