Do you guys use AI when programming? If so, how?

If so, I'd like to know the answers to these questions:

  • Do you use an AI code autocomplete, or do you type in a chat?
  • Do you consider the environmental damage that use of AI can cause?
  • What type of AI do you use?
  • Usually, what do you ask AIs to do?
72 comments
  • I don't.

    I played around with it twice, but both times it gave me nonfunctioning code. It seemed stupid to use it when I'd still have to go back and rewrite it anyway.

  • No, I don't. I often have to fix the work of my colleague and my boss, who do use it. I often have to gently point out to my boss that just because the chatbot outputs results for things doesn't mean those results are accurate or helpful.

  • I am still relatively inexperienced and only do embedded. (Electronics by trade.) I am working on an embedded project with Zephyr now.

    If I run into a problem, I roughly follow this method (e.g. trying to figure out when to use mutexes vs semaphores vs library header file booleans for checking):

    • first, look in the Zephyr docs at mutexes and see if that clears it up
    • second, search Ecosia/DDG for things like "Zephyr when to use global boolean vs mutex in thread syncing"
    • if neither of those works, I will ask AI, which often gives enough context that I can judge whether its answer is logical or not (in this case, it was better to use a semi-global boolean to check whether a specific thread had seen the next message in the queue, and to protect the boolean with a mutex so I know whether that thread is currently busy processing the data), but it also gave options like using a gate check instead of a mutex, which is dumb because that doesn't exist in Zephyr; a generic sketch of the boolean-plus-mutex pattern follows this list
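
    Roughly, that pattern looks like this (sketched in Python threading terms rather than Zephyr C, with all names invented):

    ```python
    # Hypothetical sketch: a shared flag says whether the worker thread is
    # busy with the latest queued message; a mutex guards the flag so reads
    # and writes from other threads don't race.
    import queue
    import threading

    msg_queue = queue.Queue()
    flag_lock = threading.Lock()   # the mutex protecting the flag
    worker_busy = False            # the "semi-global" boolean

    def process(msg):
        print("processing", msg)   # stand-in for the real work

    def worker():
        global worker_busy
        while True:
            msg = msg_queue.get()  # block until the next message arrives
            with flag_lock:
                worker_busy = True     # this thread has seen the message
            process(msg)
            with flag_lock:
                worker_busy = False    # done; safe to hand it more work

    def is_worker_busy() -> bool:
        # Other threads check the flag only while holding the same mutex.
        with flag_lock:
            return worker_busy

    threading.Thread(target=worker, daemon=True).start()
    msg_queue.put("hello")
    ```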

    For new topics, if I can't find a video or application note that doesn't assume too much knowledge or use jargon I'm not yet familiar with, I will use AI to get familiar with the basic concepts and terms so that I can then go on to other, better resources.

    In engineering and programming, jargon is constant and makes topic introductions quite difficult when the material doesn't explain it at the beginning.

    I never use it for code, with the exception of codebases that are ingested but have no documentation on all of the available keys, or cases like Zephyr where the macro magic is very difficult to trace to what it actually does and often isn't documented at all.

  • As for actual coding, I sometimes use ChatGPT to write SDK glue boilerplate or to learn about API semantics. For this kind of stuff it can be much more productive than scanning API docs trying to piece together how to write something simple. For example, writing a function to check whether an S3 bucket is publicly accessible would have taken me a lot longer without ChatGPT.
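
    A hedged sketch of that kind of helper (the boto3 calls are real, but the overall heuristic and the function name are my own; a thorough check would also consider account-level public access settings):

    ```python
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def bucket_looks_public(bucket: str) -> bool:
        # 1. Bucket policy: S3 itself reports whether the policy is public.
        try:
            status = s3.get_bucket_policy_status(Bucket=bucket)
            if status["PolicyStatus"]["IsPublic"]:
                return True
        except ClientError as err:
            if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
                raise  # no policy at all is fine; anything else is not
        # 2. ACL: grants to the AllUsers/AuthenticatedUsers groups are public.
        acl = s3.get_bucket_acl(Bucket=bucket)
        public_groups = (
            "http://acs.amazonaws.com/groups/global/AllUsers",
            "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
        )
        return any(
            grant["Grantee"].get("URI") in public_groups
            for grant in acl["Grants"]
        )
    ```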

    In short: it has basically replaced Google and Stack Overflow in my workflow, at least as my first information source. I still have to fall back to a real search engine sometimes.

    I do not give LLMs access to my source code tree.

    Sometimes I'll use it for ideas on how to write specific SQL queries, but I've found you have to be extremely careful with this use case because ChatGPT hallucinates some pretty bad SQL sometimes.

  • Sparingly. I use ChatGPT to help with syntax and idioms when learning new languages. Sometimes I use it to help determine the best algorithm for a general problem. Other times I feed in working code and ask for improvements, like a mini code review.

    The only time I had it code something from scratch for me was when I wanted some Vimscript and I didn't want to learn it. I tried the same thing with jq and it failed and I had to learn me some jq.

    I hate popups in editors in general (no IntelliSense for me), so I loathe AI trying to autocomplete my code.

  • Yes because I can't program.

    I ask it to construct small blocks, like if statements or for loops, with a very verbose prompt so that all variables are properly named and the code block is small enough that I can debug it myself.

    Basically it's like building Lego, where the AI prints every piece.
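
    A hypothetical example of one such "piece" (prompt and names invented): the prompt spells out every variable name, and the returned block is small enough to step through by hand.

    ```python
    # Prompt: "Write a for loop over a list named unprocessed_invoice_paths
    # that skips any path not ending in .csv and appends the rest to a list
    # named csv_invoice_paths."
    unprocessed_invoice_paths = ["a.csv", "b.txt", "c.csv"]
    csv_invoice_paths = []

    for invoice_path in unprocessed_invoice_paths:
        if not invoice_path.endswith(".csv"):
            continue
        csv_invoice_paths.append(invoice_path)

    print(csv_invoice_paths)  # ['a.csv', 'c.csv']
    ```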

    1. It's much more time-consuming than if I knew the language myself, but it's actually a fun way to learn, and it's faster than wading through forums for n amount of time.
    2. I don't get paid to do it, so I don't see it as problematic. My biggest gripe is that I used to cite the Stack Overflow (etc.) user I got the snippet of code from, and now I can't give credit to the original author.
    3. It's useful, since it has allowed me to automate a lot of tedious tasks that would otherwise be more time-consuming, making the activation energy needed to create the automation much lower.
    4. I use Mistral exclusively; GPT-4, 4o, and 5 are quite useless in comparison. The latest Mistral and Codestral tower above them in my anecdotal experience, at least for the way I use it.
    5. It works well with local models, so I don't have to feed the beast.
    6. I'm an illiterate idiot when it comes to Python, so it has resulted in someone being able to do something they otherwise couldn't.
    7. I'm not a programmer, and AI hasn't made me one. If I were a programmer, the code completion is so slow I'd probably not use it. I'm unaware of uses other than debugging, but even for its own code, debugging is hit or miss, miss, miss because of limited context; it really can't debug well.
    8. It's definitely not worth the trillions being poured into it. Especially as one uses it more and becomes painfully aware of the limitations, it becomes quite obvious that the applications lie in increasing industrial and scientific productivity rather than in creating a mass-market tool.
    9. Agentic AIs are pure cancer and a security catastrophe waiting to happen. The ease with which one can use prompt injection to exfiltrate basically any kind of data the agent has access to is probably keeping many a cybersecurity expert awake at night. I envision, ironically, Black Hat being invaded by "prompt engineers" specialized in crafting injection prompts.

    Thank you for coming to my TED talk.

  • I use a chat interface as a research tool when there's something I don't know how to do, like writing a relationship with custom conditions using SQLAlchemy, or when I want to clarify my understanding of something. First I do a Kagi search; if I don't find what I'm looking for on Stack Overflow or in library docs within a few minutes, then I turn to the AI.
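
    For what it's worth, a minimal sketch of that SQLAlchemy case (the relationship/primaryjoin API is real, but the models and the "published" condition are invented for illustration):

    ```python
    from sqlalchemy import Boolean, Column, ForeignKey, Integer
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class Author(Base):
        __tablename__ = "authors"
        id = Column(Integer, primary_key=True)
        # Custom join condition: only published posts, not every row that
        # merely matches the foreign key.
        published_posts = relationship(
            "Post",
            primaryjoin="and_(Author.id == Post.author_id, "
                        "Post.published == True)",
            viewonly=True,
        )

    class Post(Base):
        __tablename__ = "posts"
        id = Column(Integer, primary_key=True)
        author_id = Column(Integer, ForeignKey("authors.id"))
        published = Column(Boolean, default=False)
    ```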

    I don't use autocompletion - I stick with LSP completions.

    I do consider environmental damage. There are a few things I do to try to reduce it:

    1. Search first
    2. Search my chat history for a question I've already asked instead of asking it again.
    3. Start a new chat thread for each question that doesn't follow a question I've already asked.

    On the third point, my understanding is that when you write a message in an LLM chat all previous messages in the thread are processed by the LLM again so it has context to respond to the new message. (It's possible some providers are caching that context instead of replaying chat history, but I'm not counting on that.) My thinking is that by starting new threads I'm saving resources that would have been used replaying a long chat history.
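
    A back-of-the-envelope sketch of that reasoning (all numbers invented):

    ```python
    # If every new message replays the whole thread, one long thread
    # processes far more tokens than the same questions in fresh threads.
    TOKENS_PER_EXCHANGE = 500  # hypothetical question + answer size
    QUESTIONS = 10

    # Long thread: sending message n processes all n exchanges so far.
    long_thread = sum(n * TOKENS_PER_EXCHANGE for n in range(1, QUESTIONS + 1))

    # Fresh thread per question: each exchange is processed once.
    fresh_threads = QUESTIONS * TOKENS_PER_EXCHANGE

    print(long_thread)    # 27500
    print(fresh_threads)  # 5000
    ```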

    I use Claude 4.5.

    I ask general questions about how to do things. It's most helpful with languages and libraries I don't have a lot of experience with. I usually either check docs to verify what the LLM tells me, or verify by testing. Sometimes I ask for narrowly scoped code reviews, like "does this refactored function behave equivalently to the original" or "how could I rewrite this snippet to do this other thing" (with the relevant functions and types pasted into the chat).

    My company also uses Code Rabbit AI for code reviews. It doesn't replace human reviewers, and my employer doesn't expect it to. But it is quite helpful, especially with languages and libraries that I don't have a lot of experience with. But it probably consumes a lot more tokens than my chat thread research does.

  • I'm using it for some side projects. I used it as an assistant for setting up services in Kubernetes, and also used it a lot for debugging and for creating kubectl commands.

    Another side project is writing a full web app with the F# SAFE stack, which uses Fable to transpile to JavaScript and React, so I'm learning several things at once.

    At work I didn't use it as much, but it did get used to generate tests, since no one really cared enough about them. I also did some cool stuff: someone wrote a guide on how to migrate something from a V1 of a thing to a V2 of a thing, so I hooked up MCP to link the doc, asked it to migrate one instance, and it did it perfectly.

    I used it a lot to generate Mongo queries, but they put that feature in MongoDB Compass.

    We used Claude Sonnet pretty much exclusively. For my side projects I often use Auto, since Sonnet burns through the monthly budget pretty quickly. It definitely isn't as good, but it tends to be fine for template-y things, or for debugging why some strange thing is happening in F# or React.

    For the side projects, I find I'm using it less as I learn more. It's good for getting over a quick hump if you have a sense of how things generally should be.

    I've considered the lakes I've burned because I didn't copy paste those kubectl commands to a file.

    I prefer Sonnet. Anything less isn't that great, which is one reason I think people hate it.

    I tend to use it for crufty things. And certain front end things. It's been a long time since I've done web UI.

  • I don't mess with code autocomplete, Cursor, agents or any of that stuff. I've got subscriptions to two platforms that give me access to a bunch of different models, and I just ask whatever model I need directly, copy/pasting the context it needs. On that note, AI search engines like Perplexity genuinely bring zero value to my workflow. I'd rather do the searching myself and feed it the relevant context; it feels like it misleads me more often than it helps. I actually have a Perplexity sub (got it free) and haven't touched their web search in like 4 months.

    I've thought about the environmental impact and taken steps to minimize my usage. That's actually one reason I avoid Cursor, agents, and AI web search - feels super wasteful and I'm not convinced it's sustainable long-term. I guess I just like being in control, you know? I also try using smaller open source models when I can, even if they're not as powerful.

    My go-to models right now for daily use (easiest to hardest tasks): Llama 4 Scout -> DeepSeek v3.1 -> DeepSeek v3.1 (thinking) -> Gemini 2.5 Pro / Claude 4 Sonnet (thinking) -> GPT 5 (thinking). Sometimes I'll throw in other models like Gemini 2.5 Flash but mostly stick to these.

    By the way, I would recommend trying out t3.chat (that's one of the platforms I use). It costs 8 USD/month and is made by Theo; I'm pretty happy with it for the price. The UI is honestly its strongest point.

    For how I actually use AI, I wrote a more detailed answer in another thread about AI usage. Have a read.

  • I am a data scientist, and we use Databricks, which has Copilot (I think) installed by default. So with this we have an autocomplete, which I use the most because it can do some of the tedious steps of an analysis if I write good comments, which I do anyhow. It's around 50% accurate, being most accurate for simple, mindless things or for getting names right.
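
    A hypothetical illustration of that comment-driven workflow (column and variable names invented): write the comment you'd write anyway, and the completion fills in the tedious step.

    ```python
    import pandas as pd

    df = pd.DataFrame({
        "customer_id": [1, 1, 2],
        "month": ["2024-01", "2024-02", "2024-01"],
        "amount": [10.0, 20.0, 5.0],
    })

    # Total spend per customer per month
    monthly_totals = (
        df.groupby(["customer_id", "month"], as_index=False)["amount"].sum()
    )
    print(monthly_totals)
    ```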

    There is a code-generating block tool that I never use. There is also something that troubleshoots and diagnoses any error. Those are mostly useless, but they have been good at finding missing commas and other simple things. Their suggestions are sometimes terrible enough that I mostly ignore them.

    We have a Copilot bot as part of our GitHub (I don't know, is this standard now?) that I actually enjoy and that has its uses. It writes up great summaries of what code was committed, which have a nice format and seem almost 100% accurate for me. Most importantly, it has a great spellchecker as part of its suggestions. I am a terrible speller and never double-check names, so it can fix them both in the notes and in my code (it fixes them everywhere in the code, which is nice). The rest of the suggestions are okay. Some are useful, but some are way off or overengineered for what I am doing. I like this because it just comes in at the end of my process and I can choose to accept or deny.

  • When I use it, I use it to create single functions that have known inputs and outputs.
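
    A sketch of why that works well (the function here is invented): fixed input/output pairs make the generated code trivial to verify with plain asserts.

    ```python
    def normalize_whitespace(text: str) -> str:
        """Collapse runs of whitespace into single spaces and trim the ends."""
        return " ".join(text.split())

    # Known inputs and outputs double as an instant sanity check.
    assert normalize_whitespace("  a \t b\n c ") == "a b c"
    assert normalize_whitespace("") == ""
    ```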

    If absolutely needed, I use it to refactor old shitty scripts that need to look better and be used by someone else.

    I always do a line-by-line analysis of what the AI is suggesting.

    Any time I have leveraged AI to build out a full script with all desired functions all at once, I end up deleting most of the generated code. Context and "reasoning" can actually ruin the result I am trying to achieve. (Some models just love to add command-line switch handling for no reason. That can fundamentally change how an app is structured, and it isn't always desired.)

  • At work, I still use JetBrains' AI features, including the inline, local code completion. Though it (or rather the machines at work) is so slow that 99% of the time I've already written everything out before it can suggest anything.

  • Sometimes it is helpful for summarizing large, unfamiliar codebases relatively quickly: providing a high-level overview, helping me quickly understand the layout and structure, and locating the particular areas I'm interested in. But I don't really use it to write or modify code directly. It can be good at analyzing logs and data files to find problems, patterns, or areas that need closer (human) investigation. Even the documentation it produces can sometimes be tolerably decent, at least in comparison to my own, which is sometimes intolerably bad or missing completely.

    But as far as generating code? I've found the autocomplete largely useless and random. As for chat, where I can direct it more carefully, it might be able to accurately produce a well-known algorithm for something, but then it will use a mess of variables and inputs that interact with that algorithm in the stupidest ways possible. The more code you ask it to generate, the worse it gets: painfully overengineered in some aspects and horribly lacking in others, if it even compiles and runs at all. Even for relatively simple find-this/replace-it-with-this refactoring, I find I cannot fully trust it and rely on the results, so I don't. I'm proficient enough with regex and scripting that I don't find it any faster to walk a generative AI to the result I want, while analyzing the fuzzy logic it uses to get there, than it is to just write a perfectly deterministic script to do it instead.
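
    As an illustration, the deterministic-script alternative can be as small as this (the rename and the file glob are hypothetical):

    ```python
    import pathlib
    import re

    # Rename old_helper(...) calls to new_helper(...) across a source tree.
    # Unlike an LLM edit, this does exactly one thing, everywhere, every time.
    pattern = re.compile(r"\bold_helper\(")

    for path in pathlib.Path("src").rglob("*.py"):
        source = path.read_text()
        updated = pattern.sub("new_helper(", source)
        if updated != source:
            path.write_text(updated)
            print(f"rewrote {path}")
    ```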

    As a general rule, I find it is sometimes better at quickly communicating particular things to my manager or other developers than I am, but I am almost always better and quicker at communicating things to computers than it is. That is, after all, my job. Which I happen to think I'm pretty good at.

    As for the environmental aspect, that's why I don't use it in my personal life basically at all if I can avoid it. Only at work, and only because they judge my usage of it as part of my performance. I would be just as happy not using it at all for anything. And when I do use it for personal use, which is a point I haven't really reached except for a bit of experimentation and learning, I am never willingly going to use a datacenter-hosted model/service/subscription, I will run it on my own hardware where I pay the bills so I am at least aware of the consequences and in control of the choices it's making.

  • I use the JetBrains AI chat with Claude, along with the AI autocomplete. I mostly use the AI as a rubber duck when I need to work through a problem. I don't trust the AI to write my code, but I find it very useful for bouncing ideas off of and for getting suggestions on things I might have missed. I've also found it useful for checking my code quality, but it's important not to just accept everything it tells you.

  • I've used it when I've found myself completely stumped by a problem and I don't know exactly how to search for the solution. I'm building a macOS app, and unfortunately a lot of the search results are for iOS, even if I exclude iOS from the results (e.g. how to build a window with tabs, like Safari tabs, where all the results come up for iOS's TabView).

  • Visual Studio provides some kind of AI even without Copilot.

    Inline (single-line) completions - not always, but regularly, I find these quite useful.

    Repeated-edit continuations - I haven't seen them in a while, but I have used them on maybe two or three occasions. I am very selective about these because they're not deterministic like refactorings and quick actions, whose correctness I can be confident in even when applying them across many files and lines. For example, "invert if" changes many line indents; if an LLM does that change, you can't be sure it didn't alter any of those lines.

    Multi-line completions/suggestions - I disabled those because they offset/move away the code and context I want to see around them, and create noisy movement, all for (in my limited experience) marginal if any usefulness.

    In my company we're still in a selective testing phase regarding customer agreements and, after that, source-code integration with AI providers. My team is not part of that yet, so I don't have practical experience with any analysis, generation, or chat functionality that has project context. I'm skeptical but somewhat interested.

    I did do a private project (one, I guess): a Nushell plugin in Rust, a language which is largely unfamiliar to me. I tried to make use of Copilot generating methods for me and such. It felt very messy and confusing, and the generated code was often not correct or sound.

    I use Phind and, more recently, ChatGPT for research/search queries. I'm mindful of the type of queries I make and of which provider or service I use. In general, I'm a friend of reference docs, which are the only definitive source after all. I'm aware and mindful of the environmental impact of indirectly costly free AI search/chat. Often, AI can answer my questions quicker than searching via a search engine and digging through upstream docs, especially when I am familiar with the tech and can relatively quickly be reminded, or can guide the AI when it responds with bullshit or suboptimal or questionable stuff, or can relatively quickly disregard the AI entirely when it doesn't seem capable of responding to what I'm looking for.
