New development policy: code generated by a large language model or similar technology (e.g. ChatGPT, GitHub Copilot) is presumed to be tainted (i.e. of unclear copyright, not fitting NetBSD's licensing goals) and cannot be committed to NetBSD.
https://www.NetBSD.org/developers/commit-guidelines.ht...
How do they know that you wrote it yourself and didn't just steal it?
This is a rule to protect themselves. If there is ever a legal case over this, they can shift the blame onto the person who committed the code for breaking that rule.
I mean, generally, rules exist at least to strongly discourage people from doing a thing, or to lead to measures that WOULD prevent people from doing it.
A purely conceptual rule by itself would not magically stop someone from doing a thing, but that's kind of a weird way to think about it.
It’s actually simple to detect: if the code sucks or is written by a bad programmer, and the docstrings are perfect, it’s AI. I’ve seen this more than once and it never fails.
Not specific to AI, but someone flat out told me they didn't even run the code to see if it worked. They didn't understand why I would run it, or expect them to have, before accepting code. This was someone submitting code to a widely deployed open source project.
So, I would expect the answer is yes, or will very soon be yes.
Around me, most beginners who use these tools don't have the skills to understand or even test what they get. They don't want to learn, I guess; ChatGPT is easier.
I recently suspected a new guy was using ChatGPT because everything seemed perfect (grammar, code formatting, classes built with design patterns, etc.), but the code was very wrong. So I did some pair programming with him and asked if we could debug his simple application. He didn't know where the debug button was.
So your results are biased: you're never going to see the decent programmers who are just using it to take mundane tasks off their backs (like generating boilerplate functions) while staying in control of the logic. You're only ever going to catch the noobs trying to cheat without fully understanding what they're doing.
Docstrings based on the method signature and the literal contents of a method or class are completely pointless, and that's all Copilot can do. It can't intuit any of the things docstrings are actually there for.
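To illustrate what I mean (a made-up snippet, names invented): the first docstring just restates the signature, the second is the kind a human writes because they know what callers actually trip over.

    # What signature-restating generation gives you: zero new information.
    def apply_discount(price, rate):
        """Apply a discount to price using rate."""
        return price * (1 - rate)

    # What a docstring is actually there for: intent, units, and the traps.
    def apply_discount(price, rate):
        """Return the post-discount price.

        `rate` is a fraction in [0, 1], not a percentage: passing 15
        instead of 0.15 silently produces a nonsense negative price.
        Callers in the billing path assume this never raises.
        """
        return price * (1 - rate)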
Definitely not my experience. With a well-structured code base it can be pretty uncanny. I think its context is limited to the files currently open in the editor, so that may be your issue if you're coding with just one file open?
GitHub Copilot introduced a new keyword a little while ago, "@workspace", with which it can see everything in your project. The code it generates uses your own functions and variables from your libraries, and it figures out how to use them correctly.
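For example, in Copilot Chat you prefix a question with it; the question below is made up, but that's the shape of it:

    @workspace where is the config file parsed, and which function should I call to add a new option?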
There was one time where I totally went "WTF", because it spat out Python. In a C++ project. But those kinds of hallucinations are getting rarer and rarer. The more code you write, the better it gets. It really does become sort of like a "copilot", sitting there coding alongside you. The mistake people make is assuming it's going to come up with ideas and algorithms for them without their spending any mental energy at all.
I'm not trying to shill. I'm not a programmer by trade, just a hobbyist who started on QBasic in the ancient times. But I've been trying to learn it off and on for the past 30 years, and I've never learned so much or had so much fun as in the last 1.5 years with AI help. I can just think of stuff to do, and shit will just flow out now.