And to a large extent, there are automated tools that can audit things like dependencies. These tools are also largely open source because, hey, nobody's perfect. But this only works when your source is available.
My very obvious rebuttal: Shellshock was introduced into bash in 1989 and found in 2014. It was incredibly trivial to exploit, and in many setups (CGI scripts, for example) it handed remote attackers command execution as whatever user the vulnerable service ran as, which is insane.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
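# a vulnerable bash executes the trailing `echo vulnerable` while importing the function
# definition from the environment; a patched bash prints only "this is a test"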
Though one of the major issues is that people get comfortable with that idea and assume that for every open source project, some other good Samaritan is auditing it.
I would argue that even in that scenario it's still better to have the source available than have it closed.
If nobody has bothered to audit it then the number of people affected by any flaws will likely be minimal anyway. And you can be proactive and audit it yourself or hire someone to before using it in anything critical.
If nobody can audit it that's a whole different situation though. You pretty much have to assume it is compromised in that case because you have no way of knowing.
The point is not that you can audit it yourself, it's that SOMEBODY can audit it and then tell everybody about it. Only a single person needs to find an exploit and tell the community about it for that exploit to get closed.
But eventually somebody will look and if they find something, they can just fork the code and remove anything malicious.
Anyways, open source to me is not about security, but about the public "owning" the code. If code is public all can benefit from it and we don't have to redo every single crappy little program until the end of time but can instead just use what is out there.
Especially if we are talking about software paid for by taxes. That stuff has to be out in the open (with exceptions for some high-security stuff; I don't expect them to open source the software used in a damn tank, a rocket, or a fighter jet).
You can get a good look at a T-bone by sticking your head up a cow's ass but I'd rather take the butcher's word for it.
There are people that do audit open source shit quite often. That is openly documented. I'll take their fully documented word for it. Proprietary shit does not have that benefit.
And even when problems are found, like the heartbleed bug in OpenSSL, they're way more likely to just be fixed and updated rather than, oh I dunno, ignored, compromising everybody's security because fixing it would cost more and nobody knows about it anyway. Bodo Moller and Adam Langley fixed the heartbleed bug for free.
I had a discussion with a security guy about this.
For software with a small community, proprietary software is safer. For software with a large community, open source is safer.
Private companies are subject to internal politics, self-serving managers, prioritizing profit over security, etc. Open source projects need enough skilled people focused on the project to ensure security. So smaller companies are more likely to do a better job, and larger open source projects are likely to do a better job.
This is why you see highly specialized software run by really enterprise-y companies. It just works better going private, as much as I hate to say it. More general software, especially utilities like OpenSSL, makes it much easier to build large communities and ensure quality.
With all due respect, I have to strongly disagree. I would hold that all OSS is fundamentally better regardless of community size.
Small companies go under with startling frequency, and even with an ironclad contract, there's often nothing you can do but take them to court when they've gone bankrupt. Unless you've specifically contracted for source access, you're completely SOL. Profitable niche companies lose interest too, and while you may not have the same problems if they sell out, you'll eventually have very similar problems that you can't do anything about.
Consider any of my dozens of little OSS libraries that a handful of people have used, on the other hand. Maybe I lost interest a while ago, but it's still pretty well written (can't have people judging my work), and when you realize it needs to do something new, or needs updating (things like dependabot can automatically tell you, long after I'm gone), you're free and licensed to go make all the changes you need.
I think you see highly specialized software being run by enterprisey companies because that's just business, not because it's better. It's easiest to start in a niche and grow from there, but that holds true with open software and protocols too. Just look at the internet: it used to share research projects between a handful of universities, and now it has grown to petabytes of cat gifs. Or Linux: it started out as a hobby operating system for a handful of unix geeks, and now runs 96.3 percent of the top 1 million web servers.
It always starts small and gets better if it's good enough. This goes for OSS and companies.
Unfortunately that is not the case. Closed source software for small communities is not safer. My company had an incredibly embarrassing data leak because they outsourced some work and trusted software that our competitors also used. Unfortunately the issue was found by one of our customers and ended up in the newspapers.
Absolutely deserved, but still, closed source stuff is not more secure.
It never should have been anything but bcrypt/scrypt, but sha256 is so much better than many alternatives. Hopefully it's at least salted in addition to hashing.
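For illustration, here's roughly what that difference looks like on the command line (the password is a placeholder; openssl needs 1.1.1+ for -6, and htpasswd comes from apache2-utils):

echo -n 'hunter2' | sha256sum    # plain unsalted sha256: same input always gives the same digest
openssl passwd -6 'hunter2'      # salted SHA-512 crypt: a random salt makes every run differ
htpasswd -nbB user 'hunter2'     # bcrypt: salted and deliberately slow, which is the point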
You don't need to. If it's open source, it's open to billions of people. It only takes one finding a problem and reporting it to the world
There are many more benefits to open source:
a. It future-proofs the program (a lot of old software can't run on current setups without modifications). Open source makes sure you can compile a program with more recent tooling and dependencies rather than relying on existing binaries built with ancient tooling or dependencies.
b. It removes reliance on the developer for packaging. A developer may only produce binaries for Linux, but I can take the source and compile it for macOS or Windows, or for a completely different architecture like ARM (a sketch follows this list).
c. It means I can contribute features to the program if they weren't the developer's priority. I can even fork it if the developer doesn't want to merge my changes into their branch.
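To make point (b) concrete, a rough sketch, assuming a hypothetical C project with a standard Makefile and a GNU cross-toolchain installed:

git clone https://example.com/some-tool.git && cd some-tool   # hypothetical project
make CC=aarch64-linux-gnu-gcc                                 # cross-compile for 64-bit ARM
file some-tool                                                # should report an ARM aarch64 ELF binary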
Regarding point 2: I get what you're saying, but I instantly thought of Heartbleed. Arguably one of the most widely used pieces of open source in the world, but primarily maintained by one single guy, and it took two years for someone to notice the flaw.
You shouldn't automatically trust open source code just because it's open source. There have been cases where something on github contains actual malicious code, but those are typically not very well known or don't have very many eyes on them. But in general, open source code has the potential to be more trustworthy, especially if it's very popular and has a lot of eyes on it.
Here are a few things that apparently need to be stated:
- Any code that is distributed can be audited, closed or open source.
- It is easier to audit open source code because, well, you have the source code.
- Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly) or dynamic analysis (using a debugger to walk through the assembly at runtime) or both. (A sketch with common tools follows this list.)
- Examples of vulnerabilities published by independent researchers demonstrate two things: people are auditing open source software for security issues, and people are in fact auditing closed source software for security issues.
- Vulnerabilities published by independent researchers don't demonstrate any of the wild claims many of you think they do.
- No software of a reasonable size is 100% secure. Closed or open doesn't matter.
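For the reverse-engineering bullet, a minimal sketch with common tools (./closed_binary is a placeholder):

objdump -d ./closed_binary | less   # static analysis: read the disassembly
strings ./closed_binary | less      # static analysis: look for embedded paths, URLs, messages
strace -f ./closed_binary           # dynamic analysis: trace the syscalls it makes at runtime
gdb ./closed_binary                 # dynamic analysis: set breakpoints and walk the assembly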
As you increase the complexity of a system, it makes sense that your chance of a vulnerability increases. At the end of the day, open source or not, you will never beat basic algorithmic principles and good coding practice.
I would however argue that just because closed source code can possibly be reversed doesn't mean it's as easy or as reliable as having the source code. As long as corporations have an interest in possession, there will always be someone striving and spending ungodly amounts of money to keep their castle grounds heavily gated, which makes securing them en masse much harder and slower.
Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly) or dynamic analysis (using a debugger to walk through the assembly at runtime) or both.
How are you going to do that if it's software-as-a-service?
See the first bullet point. I was referring to any code that is distributed.
Yeah, there's no way to really audit code running on a remote server with the exception of fuzzing. Hell, even FOSS can't be properly audited on a remote server because you kind of have to trust that they're running the version of the source code they say they are.
Also, just because you can see the source code does not mean it has been audited, and just because you cannot see the source code does not mean it has not been audited. A company has a lot more money to spend on hiring people and external teams to audit their code (without needing to reverse engineer it). More so than some single developer does for their OSS project, even if most of the internet relies on it (see openssl).
And just because a company has the money to spend on audits doesn't mean they did, and even when they did, doesn't mean they acted on the results. Moreover, just because code was audited doesn't mean all of the security issues were identified.
That's exactly the problem with many open source projects.
I recently experienced this first hand when submitting some pull requests to Jerboa and following the devs: as long as there is no money funding the project, the devs are supporting it in their free time, which means little to no time for quality control. Mistakes happen... most of them are non-critical, but as long as there's little to no time and expertise to audit code meaningfully and systematically, there will be bugs, and some of those bugs may be critical and security-relevant.
Even when you do have time: there have been "researchers" submitting malicious PRs who, when caught, just acted like it was no big deal. An entire institution even got banned from submitting patches to the Linux kernel.
For the human-hours of work that's put into it, it's very expensive. I put in translations, highlighted bugs, put in a Jerboa fork to help mitigate issues with the 0.18 Lemmy upgrade... if I were to do this kind of thing for work I'd bill 25 CAD per hour at the very minimum.
There is a much higher chance that someone out of 7 billion people will audit open source code than that a corporation will do it, let alone make the findings publicly known and fix them.
Open source software is safe because so few people use it it's not worth a hacker's time to break into it (joking, but of course that doesn't apply to server software)
Honestly, for some software this is the answer. The other one with hackers is that it's usually easier to trick an employee into giving you the master password than finding an obscure exploit in their codebase, though it does still happen.
I really like the idea of open source software and use it as much as possible.
But another "problem" is that you don't know if the compiled program you use is actually based on the open source code or if the developer merged it with some shady code no one knows about. Sure, you can compile by yourself. But who does that 😉?
But another "problem" is that you don't know if the compiled program you use is actually based on the open source code or if the developer merged it with some shady code no one knows about.
Actually, there is a Debian project working on exactly that problem, called reproducible builds
yes and others are working on it, also! i believe some android folks are (f-droid iirc), and i've heard about it elsewhere. this stuff is super nerdy (so therefore cool to nerds such as myself). before the internet existed it would be so hard to even imagine the need for this sort of thing!
You can check it using the checksum. But who does that?
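For what it's worth, the check itself is one line (file names here are hypothetical):

sha256sum -c SHA256SUMS            # verify downloads against the project's published checksum list
sha256sum some-app-1.2.3.tar.gz    # or print a single digest and compare it by eye

Of course, a checksum only proves the download matches what the developer published, not that the binary matches the source; that's exactly the gap reproducible builds try to close.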
In all seriousness, I am running NixOS right now using flakes. The package manager compiles everything unless a trusted source already has it compiled, in which case it checks the checksum to ensure you still get the same result and downloads that instead. It also aims to be fully reproducible, and with flakes it automatically pins all dependency versions, so next time you build your configuration you get the same result. It is all really cool, but I still don't understand everything and I'm still learning it.
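A few of the commands involved, for the curious (assuming a flakes-enabled Nix; the package name is a placeholder):

nix flake metadata       # show the exact revisions and hashes pinned in flake.lock
nix build .#hello        # build from those pinned inputs; identical inputs should give an identical result
nix store verify --all   # check store contents against their recorded hashes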
I love NixOS, but I really wish it had some form of containerization by default for all packages, like Flatpak, and that I didn't have to monkey with the config to install a package or change a setting. Other than that it is literally the perfect distro; every bit of my OS config can be duplicated from a single git repo.
We trust open source apps because nobody would add malicious code to their app and then release the source code to the public. It doesn't matter if someone actually looks into it or not; having the guts to publish the source code alone brings a lot of trust in the developer. If the developer was shady, they would rather hide or try to hide the source code and make it harder for people to find out.
Since it's publicly available and used widely enough, there will be "those" people who like finding cracks in code or just have a knack for digging deep through all kinds of data.
Not everyone is malicious and that part of humanity is something we have to trust in.
What about the various NPM packages written by one guy, who then moved on to other things and gave control of the package to someone else who seemed legit, only for them to slowly add malicious code to that once-trusted package that a large number of other packages depend on?
Or someone raising a pull request for a new feature or something that looks legit on its own, but when combined with other PRs or existing code ends up creating a vulnerability that can be exploited.
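That first story sounds like the event-stream incident on npm. There's no complete fix for it, but the usual mitigations look something like this (assuming a committed lockfile; the package name is just the suspect of the day):

npm ci                   # install exactly what package-lock.json pins, nothing newer
npm audit                # check the installed tree against known advisories
npm ls event-stream      # see whether a suspect package ended up in your dependency tree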
I don't really think auditing is a compelling argument for FOSS. You can hire accredited companies to audit and statically analyse closed source code, and one could argue that marketable software legally has to meet different (and stricter) criteria due to licensing (MIT, GPL, and BSD are AS-IS licenses) that FOSS does not have to meet.
The most compelling argument for FOSS (for me) is that innovation is done in the open. When innovation is done in the open, more people can be compelled to learn to code, and redundant projects can be minimised (i.e. just contribute to an existing implementation rather than inventing a new one). It simply is the most efficient way to author software.
I'm probably wearing rose-tinted glasses, but the garage and bedroom coders of the past, who developed on completely open systems, moved the whole industry forward at a completely different pace than today.
one could argue that marketable software legally has to meet different (and stricter) criteria due to licensing (MIT, GPL, and BSD are AS-IS licenses) that FOSS does not have to meet.
LOL, only if by that weasel-word "marketable" you mean "sold for business use along with a support contract and/or SLA." Otherwise, proprietary software targeting consumers has just as many disclaimers as Free Software does.
(Also, I'm not even going to bother addressing the silly biased framing attempting to disparage Free Software as not marketable.)
Did you fabricate that CPU? Did you write that compiler? You gotta trust someone at some point. You can either trust someone because you give them money and it's theoretically not in their interest to screw you (lol) or because they make an effort to be transparent and others (maybe you, maybe not) can verify their claims about what the software is.
It usually boils down to this, something can be strictly better but not perfect.
The ability to audit the code is usually strictly better than closed source. Though I'm sure an argument could be made about exposing the code base to bad actors, I generally think it's a worthy trade-off.
I would say the best thing about open source is that if the devs don't have time to look at your request, you can make a PR, and if they won't approve it in time, you can fork it with the fix. That is what lemmy.world did, for example. I have also needed to do just that for a few packages. Also, if the docs are too simplified, you can just check out the code yourself. It has helped many times.
As a packager, I totally relate to this: we generally don't have the resources to follow the upstream development of the projects we rely on, let alone audit all the changes they make between releases.
Open source software still has security advantages — we can communicate directly with the maintainers, backport security fixes and immediately release them to users, fix bugs that affect the distribution, etc. — but I agree that it's not a silver bullet.
IDK why, but this had me imagining someone adding malicious code to a project, but then also being highly proactive with commenting his additions for future developers.
"Here we steal the user's identity and sell it on the black market for a tidy sum. Using these arguments..."
Even audited source code is not safe. Supply-chain attacks are possible. A lot of times, there's nothing guaranteeing the audited code is the code that's actually running.
is there not a way to check if the source and release aren't the same? would be cool if github / gitlab / etc. produced a version automatically or there was some instant way to check
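that's roughly what reproducible builds aim at: rebuild from the tagged source yourself and byte-compare against the official release. a rough sketch (repo, tag, and build command are all hypothetical):

git clone https://example.com/app.git && cd app && git checkout v1.2.3   # hypothetical tag
make release                                                             # rebuild from source
sha256sum build/app /path/to/official/app                                # matching digests mean the release matches the source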
Heartbleed is the only counter example anyone needs to know that open source isn't perfect. Intelligence agencies were likely sucking up encrypted traffic because nobody was paying attention to the most commonly used TLS library in the world
Sure, open source isn't perfect. No software of any reasonable size is. Anyone claiming otherwise is an idiot and should be ignored. And yeah sure, heartbleed vuln existed for 2 years before discovery. But don't forget the NSA held onto the EternalBlue vuln for over 5 years before the shadowbrokers leaked their tools.
Man we would have been so much better with plaintext communications everywhere, right?
You cite heartbleed as a negative but a) SSL would never have proliferated as it has without openssl and b) the fix was out in under a week and widely deployed soon after.
The alternative, proprietary crypto, would have all the same problems including the current laggards, but likely without everyone understanding what happened and how bad it was. In fact, it probably wouldn't have been patched because some manager would've decided it wasn't worth it vs new features.
I think the point that's more relevant to the original post is that while the speed with which fixes were rolled out was admirable, the flaw existed for years before anybody noticed it.
I don't disagree with this, but your point about automatic audits... It's always a learning curve to prevent silly shit like heartbleed from getting into the system. But the idea that there was no check against this when it was first PR'd seems almost absurd. This is why sticking hard to API and design specs and building testing around them is so important.
Ha! It's not just whether you know how but whether you actually do it.
I remember one from a few years back, a fairly large project (I don't remember the name though) with a very active community, but no one LOOKED. That's part of the problem.
I think that new 1 billion token AI paper that just came out is going to be auditing all code for us instantly before downloading it. It's going to revolutionize security in open source. Probably a business opportunity there.
Free software has only promised its users the Four Freedoms, which are the freedoms to use, share, modify, and share modified copies of the software. That is not an inherent guarantee that it is more secure.
Even if you yourself don't know how to work with code, you can always enlist the community or a trusted friend to exercise freedoms on your behalf. This is like saying right to repair is meaningless because you don't know how to repair your own stuff.
I mean, what's a "proper audit"?
Most audits my company does are a complete smoke-and-mirrors sham. But they do get certifications. Is that "proper"?
I'm pretty confident that the code quality of linux is, on average, higher than that of the windows kernel. And that is because not only do other people read and review it, the programmer also knows his shit is there for everyone to see. So by and large they are more ashamed to submit some stringy mess that barely works.
Very true. There was an issue in one of the linux communities a while back where someone got away with submitting malicious code. It was eventually discovered and corrected, but it does go to show that bad actors can do some serious damage to open source projects.
Although this is fair, those contributors were from a research group at a prestigious university. That makes them much more trustworthy by default, and it's natural that a code reviewer will give them more benefit of the doubt.
I don't know how to audit code. But I can generally get through. For example, I use Aegis for 2FA OTP. How do we know it's secure? Because I can see very clearly that it doesn't have network access on Android and that it hasn't tried to get network access.
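One way to run that kind of check yourself, assuming adb and the Android build tools are installed (I believe com.beemdevelopment.aegis is Aegis's package id, but treat that as an assumption):

adb shell dumpsys package com.beemdevelopment.aegis | grep -i permission   # list the permissions the installed app requested and was granted
aapt dump permissions Aegis.apk                                            # or inspect an APK before installing it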
I don't use the term "open source". I say free software, because giving someone else control over your computing is unjust. The proprietor of the program has absolute control over how the program works, and you cannot change it or use alternative versions of it.