  • Yeah, this is becoming a real issue.

    We need better tooling for performing static analysis. I recently updated a version of a package and the audit - which I can in no way perform with any authority - was time consuming because of the extensive dependency tree. I both feel more compelled to do audits, and have started hating them; they're the least fun part of developing OSS, and I really only do it because it's fun. When it stops being fun, I'm going to stop doing it.

    That's entirely aside from the fact that it puts a damper on the entire ecosystem for users, of which I'm also clearly one.

    The OSS community (someone smarter and more informed about infosec than me) needs to come up with a response, or this is going to kill OSS as surely as Microsoft never could.

    • I'm far from an expert, but we know it takes a village.

      As far as static analysis goes, I can think of something quite simple: run strace on your processes to see what sort of syscall and filesystem access they need (in a trusted scenario - a maintainer's burden). Once that analysis is done, applying the proper security features (on unix: seccomp filtering for syscalls, landlock for filesystem access) could minimize risk.
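      The strace-then-restrict idea can be sketched in C. Assuming an `strace -f -c` run showed the process only ever calling write and exit_group, a minimal seccomp BPF allow-list could look like this (the allow-list is a placeholder - derive the real one from your own strace output, and note that a production filter should also validate the architecture field of seccomp_data):

      ```c
      #include <stdio.h>
      #include <stddef.h>
      #include <unistd.h>
      #include <sys/prctl.h>
      #include <sys/syscall.h>
      #include <linux/seccomp.h>
      #include <linux/filter.h>

      int main(void) {
          struct sock_filter filter[] = {
              /* load the syscall number */
              BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                       offsetof(struct seccomp_data, nr)),
              /* jump to ALLOW for whitelisted syscalls, fall through to KILL */
              BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 2, 0),
              BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
              BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
              BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
          };
          struct sock_fprog prog = {
              .len = sizeof(filter) / sizeof(filter[0]),
              .filter = filter,
          };

          /* required so an unprivileged process may install a filter */
          if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
              perror("no_new_privs");
              return 1;
          }
          if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
              perror("seccomp");
              return 1;
          }

          write(1, "still allowed\n", 14);    /* on the allow-list */
          syscall(SYS_exit_group, 0);         /* also on the allow-list */
      }
      ```

      Any syscall outside the list now kills the process, so a compromised dependency that suddenly tries to open a socket dies on the spot.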

      A caveat to this, however, can be seen in the xz attack. The attacker forced the landlock feature to not compile or link, which gave them the attack surface they needed. So the maintainers were practicing good security, but it means nothing if they cannot audit their own commits. Though I believe you were going for more general static analysis of the program itself; in that case, many compilers ship verbose static analysis features - clang-tidy is one example, and Rust is already quite verbose. Perhaps with more rigid CI/CD restrictions enforced by these analysis tools, such commits would not be able to make it through?

      • I'm happy to participate, but we don't have a process yet.

        Let's say I do audit a specific version of a dependency I use. How do I communicate to others that I've done this? Why would anyone trust me anyway? I've mentioned that I'm not an infosec expert; how much is my audit worth?

        I have run programs inside firejail before and watched for network activity where there shouldn't be any, but even if that is a useful activity, how do I publish my results so that not everyone has to run the same program in firejail too? What do non-technical users do? And this active approach has three problems: 1) you'll only see the malicious activity if you hit the branch the attack lives on; looking for it this way is like doing unit tests by running the code and hoping you get 100% code coverage. 2) These supply chain attacks can be sophisticated, and I wouldn't be surprised if malicious code can tell it's running in firejail and simply not execute. 3) This approach isn't useful for programs which depend on network connections or access to secrets - and some programs need both. In the extreme example, there'd be no way to expose a supply chain attack embedded in a browser, which often has access to secrets and whose main purpose is networking.
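        Problem 1 is easy to demonstrate with a toy sketch in C. The trigger below (an environment variable check) is hypothetical, but real attacks gate payloads on similar conditions - a hostname, a date, a specific build host - so any dynamic run that doesn't satisfy the trigger observes only the benign branch:

        ```c
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            /* hypothetical trigger: a real attack might check a hostname,
             * a date, or whether it's running on a targeted build machine */
            const char *t = getenv("DEMO_TRIGGER");
            if (t && strcmp(t, "armed") == 0) {
                /* only reached in the attacker's target environment */
                puts("malicious branch: invisible to a normal firejail run");
            } else {
                /* everything an observer sees on their own machine */
                puts("benign branch");
            }
            return 0;
        }
        ```

        Running this under strace or firejail on your own machine tells you nothing about the other branch, which is exactly the code-coverage problem above.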

        The main problem is that we're in the decade of Linux, and a whole population of people are coming in who are not nerds. They're not going to be running strace or firejail. How are we going to make OSS secure for these people?
