I see this all over the place nowadays, even in communities that, I would think, should be security conscious. How is that safe? What's stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?
I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written. Don't we have something better than "sh" for this? Something with less power to do harm?
You have the option of piping it into a file instead, inspecting that file for yourself and then running it, or running it in some sandboxed environment. Ultimately though, if you are downloading software over the internet you have to place a certain amount of trust in the person you're downloading the software from. Even if you're absolutely sure that the download script doesn't wipe your home directory, you're going to have to run the program at some point, and it could just as easily wipe your home directory then instead.
You should try downloading the software from your mind brain, like us elite hackers do it. Just dump the binary from memory into a txt file and exe that shit, playa!
It is kind of cool, when you've actually written your own software and use that. But realistically, I'm still getting the compiler from the internet...
Indeed, looking at the content of the script before running it is what I do if there is no alternative. But some of these scripts are awfully complex, and manually parsing the odd bash stuff is a pain when all I want to know is: 1) what URL are you downloading stuff from? 2) where are you going to install the stuff?
As for running the program, I would trust it more than a random deployment script. People usually place more emphasis on testing the former, not so much the latter.
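When I do dig in, I usually just save the script and grep for those two things; a rough triage like this (URL is just a placeholder), knowing full well it misses anything obfuscated:

curl -fsSL https://example.com/install.sh -o install.sh
grep -nE 'https?://|curl |wget ' install.sh      # 1) what does it download, and from where?
grep -nE '\$HOME|/usr/local|/opt|PREFIX|bashrc|profile|PATH=' install.sh   # 2) where does it put things?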
You have the option of piping it into a file instead, inspecting that file for yourself and then running it, or running it in some sandboxed environment.
That's not what projects recommend though. Many recommend piping the output of an HTTP transfer over the public Internet directly into a shell interpreter. Even just
curl https://... > install.sh; sh install.sh
would be one step up. The absolute minimum recommendation IMHO should be
curl https://... > install.sh; less install.sh; sh install.sh
but this is still problematic.
Ultimately, installing software is a laborious process which requires care, attention and the informed use of GPG. It shouldn't be simplified for convenience.
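Roughly what that looks like in practice, assuming the project publishes a detached signature and you've already imported and verified their signing key (URL is a placeholder):

curl -fsSLO https://example.com/install.sh
curl -fsSLO https://example.com/install.sh.asc
# only run it if the signature checks out against a key you already trust
gpg --verify install.sh.asc install.sh && sh install.sh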
Also, FYI, the word "option" implies that I'm somehow restricted to a limited set of options in how I can use my GNU/Linux computer, which is not the case.
Showing people that are running curl piped to bash the script they are about to run doesn't really accomplish anything. If they can read bash and want to review the script then they can by just opening the URL, and the people that aren't doing that don't care what's in the script, so why waste their time with it?
Do you think most users installing software from the AUR are actually reading the PKGBUILDs? I'd guess it's a pretty small percentage that do.
I mean if you think that it's bad for Linux culture because you're teaching newbies the wrong lessons, fair enough.
My point is that most people can parse that they're essentially asking you to run some commands at a URL, and if you have even a fairly basic grasp of Linux it's easy to do that in whatever way you want. I don't know if I personally would be any happier if people took the time to lecture me on safety habits, because I can interpret the command for myself. curl https://some-url/ | sh is terse and to the point, and I know not to take it completely literally.
What's stopping the downloaded script from wiping my home directory?
What's stopping any Makefile, build script, or executable from running rm -rf ~? The correct answer is "nothing". PPAs are similarly open. Things are a little safer if you only use your distro's default package sources, but it's always possible that a program will want to be able to delete something in your home directory, so it always has permission.
Containerized apps are the only way around this, where they get their own home directory.
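If you really want that isolation for a one-off installer, a throwaway container is one way to get it; a rough sketch (image choice and URL are just placeholders):

docker run --rm -it -v "$PWD/scratch-home:/root" debian:stable bash -c '
  apt-get update -qq && apt-get install -y -qq curl ca-certificates
  # the installer only ever sees the container root and the scratch home dir
  curl -fsSL https://example.com/install.sh | sh
'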
Don't forget your package manager, running someone's installer as root
It's roughly the same state as when Windows Vista rolled out UAC in 2007 and everything still required admin rights because that's just how everything worked... but unlike Microsoft, Linux distros never did the thing of splitting installs into admin vs. unprivileged user installers.
Flatpak doesn't require any admin rights to install a new app.
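For example (the app ID is just an illustration), a per-user install stays entirely out of the system directories:

flatpak install --user flathub org.mozilla.firefox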
NixOS doesn't run any code at all on your machine just for adding a package, assuming it's already been cached; if it hasn't been cached, it's built in a sandbox. The cases other package managers use post-install configuration scripts for go through a different mechanism, which possibly has root access depending on what it is.
This is simpler than the download, ./configure, make, make install steps we had some decades ago, but not all that different in that you wind up with arbitrary, unmanaged stuff.
Preferably use the distro's native packages, or else its build system if it's easily available (e.g. the AUR on Arch).
You shouldn't install software from someone you don't trust anyway, because even if the installation process is safe, the software itself can do whatever it has permission to.
"So if you trust their software, why not their install script?" you might ask. Well, it is detectable on server side, if you download the script or pipe it into a shell. So even if the vendor it trustworthy, there could be a malicious middle man, that gives you the original and harmless script, when you download it, and serves you a malicious one when you pipe it into your shell.
it is detectable [...] server side, if you download the script [vs] pipe it into a shell
I presume you mean if you download the script in a browser, vs using curl to retrieve it, where presumably you are piping it to a shell. Because yeah, the user agent is going to reveal which tool downloaded it, of course. You can use curl to simply retrieve the file without executing it though.
Or are you suggesting that curl does something different in its request to the server for the file, depending on whether it is saving the file to disk vs streaming it into a pipe?
It is actually a passive detection based on the timing of the chunk requests. Because curl by default will only request new chunks when the buffer is freed by the shell executing the given commands, this can be used to detect that someone is not merely downloading the script but simultaneously executing it. Here's a writeup about it:
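In short (my rough paraphrase, not the writeup's code; the URL below is a placeholder): bash executes the script as it streams in, so a slow command near the top stalls curl's reads from the socket, and the server can see that stall.

curl -s https://example.com/install.sh | bash                  # streamed: execution pace feeds back into the download pace
curl -s -o install.sh https://example.com/install.sh           # saved first: the transfer finishes at full speed regardless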
it is detectable on server side, if you download the script or pipe it into a shell
Irrelevant. This is just an excuse people use to try and win the argument after it is pointed out to them that there's actually no security issue with curl | bash.
It's waaaay easier to hide malicious code in a binary than it is in a Bash script.
You can still see the "hidden" shell script that is served for Bash - just pipe it through tee and then into Bash.
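Something like this (placeholder URL), so whatever actually got executed is sitting on disk afterwards:

curl -fsSL https://example.com/install.sh | tee what-actually-ran.sh | bash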
Can anyone even find one single instance of that trick ever actually being used in the wild (not as a demo)?
I never tried to win any argument. Hell, I wasn't even aware I was participating in one. I just wanted to share the info that even if the vendor is absolutely trustworthy, and even if you validated the script by downloading and looking at it, there's still another hole that's not obvious to see.
Yes, it's unlikely, but then again, I never said it was likely. There are also arguments you can run curl with to tell it to do the download first and then push it through the pipe afterwards, though I don't know them by heart right now.
It won't cost you anything to set those parameters when you insist on using curl | bash, just on the off chance that someone's trying to do what I mentioned.
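I don't remember the exact flags either, but one way to get the "download everything first, then run it" behaviour without a temp file is command substitution, since the shell has to capture curl's entire output before it executes any of it (placeholder URL):

sh -c "$(curl -fsSL https://example.com/install.sh)"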
But I'm also someone who usually validates their downloads with a checksum so maybe I'm just weird. Who knows.
Yeah, I guess if they were being especially nefarious they could supply two different scripts based on user-agent. But I meant what you said anyway… :) I download and then read through the script. I know this is a common thing and people are wary of doing it, but has anyone ever heard of there actually being something disreputable in one of these scripts? I personally haven't yet.
It's not much different from downloading and compiling source code, in terms of risk. A typo in the code could easily wipe home or something like that.
Obviously the package manager repo for your distro is the best option because there's another layer of checking (in theory), but very often things aren't in the repos.
The solution really is just backups and snapshots, there are a million ways to lose files or corrupt them.
The security concerns are often overblown. The bigger problem for me is I don't know what kind of mess it's going to make or whether I can undo it. If it's a .deb or even a tarball to extract in /usr/local then I know how to uninstall.
I will still use them sometimes, but only for things I know and understand - e.g. rustup will put things in ~/.rustup and update the PATH in my shell profile, and because I know that's what it does, I'm happy to use the automation on a new system.
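From memory (double-check against the rustup docs before pasting), the one-liner is roughly:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh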
So tell me: if I download and run a bash script over https, or a .deb file over https and then install it, why is the former a "security nightmare" and the latter not?
Unpopular opinion: these are handy for quickly installing in a new VM or container (usually throwaway), where one doesn't have to think much unless the script breaks. People don't install things on a host or production machine multiple times, so anything installed there is usually vetted and most of the time from trusted sources like distro repos.
For a normal threat model, it is not much different from downloading a compiled binary from somewhere other than well-trusted repos. The Windows software ecosystem is infamous for exactly this, but it sticks around all the same.
On the other hand, as a software author, your options are: spend a lot of time maintaining packages for Arch, Alpine, Void, Nix, Gentoo, Gobo, RPM, Debian, and however many other distro package managers; or wait for someone else to do it, which will often be "never".
The non-rolling distros can take a year to update a package, even if they decide to include it.
Honestly, it's a mess, and I think we're in that awkward state Linux was in when everyone seemed to collectively realize sysv init sucks, and you saw dinit, runit, OpenRC, s6, systemd, upstart, and initng popping up - although many of these were started after systemd; it's just for illustration. Most distributions settled on systemd, for better or worse. Now we see something similar: the profusion of package managers really is a Problem, and people are trying to address it with solutions like Snap, AppImages, and Flatpak.
As a software developer, I'd like to see distros standardize on a package manager, but on the other hand, I really dislike systemd and feel as if everyone settling on the wrong package manager (cough Snap) would be worse than the current chaos. I don't know if they're mutually exclusive objectives.
For my money, I'd go with pacman. It's easy to write PKGBUILDs and to get packages into AUR, but requires users to intentionally use AUR. I wish it had a better migration process (AUR packages promoted to community, for instance). It's fairly trivial for a distribution to "pin" releases so that users aren't using a rolling upgrade.
Alpine's is also nice, and they have a really decent, clearly defined migration path from testing to community; but the barrier to entry for getting packages in is higher, it clearly requires much more work from a community of volunteers, and it can occasionally be frustrating for everyone: for those of us contributors who only interact with the process a couple of times a year, it's easy to forget how they require things to be run, causing more work for reviewers; and sometimes an MR will just languish until someone has time to review it. There are some real heroes over there doing some heavy lifting.
I'm about to go on a journey of contributing to Void, which I expect to be similar to Alpine.
Red Hat and deb? All I can do is build packages for them and host them myself, and hope users can figure out how to find and install stuff without it being in The Official Repos.
Oh, Nix. I tried, but the package definitions are a nightmare, and just getting enough of Nix onto your computer to where you can test and submit builds takes gigabytes of disk space. I actively dislike working with Nix. Guix is nearly as bad. I used to like Lisp - it's certainly an interesting and educational tool - but I've really started to object to it more and more as I encounter it in projects like Nyxt and Guix, where you're forced to use it if you want to do any customization.
But this is the world of OSS: you either labor in obscurity, or you self-promote your software - which I hate: if I wanted to do marketing, I'd be in marketing. Or you hope enough users in enough distributions volunteer to manage packages for their distros that people can get to it. And you still have to address the issue of making it easy for people to use your software. curl <URL> | sh is, frankly, a really elegant, easy solution for software developers... if only it weren't for the fact that the world is full of shitty, unethical people forcing us to distrust each other.
It's all sub-optimal, and needs a solution. I'm not convinced the various containerizations are the right direction; does "rg" really need to be run in a container? Maybe it makes sense for big suites with a lot of dependencies, like Gimp, but even so, what's the solution for the vast majority of OSS software, which is just little CLI or TUI tools?
Distributions aren't going to standardize on Arch's APKBUILD, or Alpine's almost identical but just slightly different enough to not be compatible PKGBUILD; and Snap, AppImage, and Flatpak don't seem to be gaining broad traction. I'm starting to think of something like a yay that installs into $HOME. Most systems are single user anyway; something that leverages Arch's huge package repository(s), but can be used by any user regardless of distribution. I know Nix can be used like this, but then, it's Nix, so I'd rather not.
The non-rolling distros can take a year to update a package, even if they decide to include it.
There is a reason why they do this. For stable release distros, particularly Debian, they refuse to update packages beyond fixing vulnerabilities, as part of ensuring that the system changes minimally. This means, for example, that if a piece of software depends on a library, it will stay working for the lifecycle of a stable release. Sometimes the latest isn't the greatest.
Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD
You swapped PKGBUILD and APKBUILD 🙃
I’m starting to think something like a yay that installs into $HOME.
Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user's home directory
As an Arch user, yeah, PKGBUILDs are a very good solution, at least for Arch Linux specifically (or other distros with the same directory-tree best practices). I have written PKGBUILDs for a dozen or so projects, and use 150 or so from the AUR. They give users a very easy way to install stuff essentially manually while still keeping control of it. And you can just put one in the AUR, so other users can either just use it, or first read through it, understand it, maybe adapt it, and then use it. It shows that packages don't have to be solely the author's or the distro maintainers' responsibility.
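For anyone who hasn't looked at one, a hypothetical minimal PKGBUILD (names, URL and checksum are placeholders) is short enough to audit in a minute:

pkgname=example-tool
pkgver=1.2.3
pkgrel=1
pkgdesc="Placeholder description"
arch=('x86_64')
url="https://example.com"
license=('MIT')
source=("https://example.com/example-tool-$pkgver.tar.gz")
sha256sums=('SKIP')    # a real package pins the checksum here instead of SKIP

package() {
  # copy the prebuilt binary from the extracted tarball into the package root
  install -Dm755 "$srcdir/example-tool-$pkgver/example-tool" "$pkgdir/usr/bin/example-tool"
}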
It isn’t more dangerous than running a binary downloaded from them by any other means. It isn’t more dangerous than downloaded installer programs common with Windows.
TBH macOS has had the more secure idea of sandboxing, by default, applications that are downloaded directly without any sort of installer. Linux is starting to head in that direction now with things like Flatpak.
Those just don't get installed. I refuse to install stuff that way. It's too reminiscent of installing stuff on Windows. "Pssst, hey bud, want to run this totally safe executable on your PC? It won't do anything bad. Pinky promise." Ain't happening.
The only exception I make is for Nix on non-NixOS machines, because that bootstraps everything and I've read that script a few times.
It's convenience over security, something that creeps in anywhere there is popularity. For those who just want x or y to work without needing to spend their day in the terminal - they're great.
You'd expect these kinds of scripts to be well tested against their targets, and for the user to have/identify the correct target. Their sources should at least point out the security issue and advise grabbing and inspecting the script before straight-up piping it, though. Some I have seen do this.
Running them like this means you put 100% trust in the author, the source and your DNS. Not a big ask for some. Unthinkable for others.
To answer the question, no - you’re not the only one. People have written and talked about this extensively.
Personally, I think there’s a lot more nuance to the answer. Also a lot has been written about this.
You mention "communities that are security conscious". I'm not sure in which ways you feel this practice to be less secure than the alternatives. I tend to be pretty security conscious, to the point of sometimes being annoying to my teammates. I still use this installation method a lot where it makes sense, without too much worry. I also skip it other times.
Without knowing a bit more about your specific worries and for what kinds of threat you feel this technique is bad, it’s difficult to respond specifically.
Feeling that way is fine, and if you're uncomfortable with something, the answer is generally either to avoid it (by reading the script and executing the relevant commands yourself, or by skipping this software altogether, for instance), or to understand why you're uncomfortable and rationally assess whether that feeling is based on reality or imagination - or to what degree of each.
You ask why I feel this is less secure: it seems like the lowest possible bar when it comes to controlling what gets installed on your system. The script may or may not give you a choice as to where things get installed. It could refuse to install, or silently overwrite stuff, if something already exists. If the install fails, it may or may not leave data behind, in directories I may or may not know about. It may or may not run a checksum on the downloaded data before installing. Because it's a completely free-form script, there is no standard I can expect. For an application, I would read the documentation to learn more, but these scripts are not normally documented (other than "use this to install"). That uncertainty, to me, is insecure/unsafe.
What's stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?
You're not wrong, but there's an element of trust in anything like this, and it's all about your comfort level. How can you truly trust any code you didn't write and compile yourself? Actually, how do you trust the compiler?
And let's be honest, even if you trust my code implicitly (Hey, I'm a bofh, what could go wrong?) then that simply means that you're trusting me not to do anything malicious to your system.
Even if your trust is well placed in that regard, I don't need to be malicious to wipe your system or introduce a configuration error that makes you vulnerable to others; it's perfectly possible to do all that by just being incompetent. Or even by being a normally competent person who was just having a bad day while writing the script you're running now. Oops.
To be fair, that's because Linux funnels you to the safeguard-free terminal, where it's much harder to visualize what's going on and there are fewer checks to make sure you're doing what you mean to be doing. I know it's been a trend for a long time where software devs think they are immune from mistakes but... they aren't. And nor is anyone else.
When I modded some subreddits I had an automod rule that would target curl-bash pipes in comments and posts, and remove them. I took a fair bit of heat over that, but I wasn't backing down.
I had a lot of respect for Tteck and had a couple of discussions with him about that and why I was doing it. I saw that eventually he put a notice up that pretty much said what I did about understanding what a script does, and how the URL you use can be pointed to something else entirely long after the command line is posted.
You could just read the script file first... Or YOLO-trust it like you trust any file downloaded from a relatively safe source... At least you can read a script.
I do, but some of these scripts are quite complex and hard to parse. When all I would really need to do this myself is a direct download URL and unzip/untar in a folder of my choice, it's a pain.
I always try to avoid these, unless the application I'm installing has its own package management functionality, like rustup or Nix. Everything else should be handled by the system package manager.
I usually just take a look at the code with a GET request. Then, if it looks good, I run it manually. Most of the time it's fine. Sometimes there's something that would break something on the system.
I haven't seen anything explicitly nefarious, but it's better to be safe than sorry.
What does curl even do? Unstraighten? Seems like any other command I’d blindly paste from an internet thread into a terminal window to try to get something on Linux to work.
curl sends requests;
curl lemmy.world would return the HTML of lemmy.world's homepage.
Piping it into bash means that you are fetching a shell script and running it.
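So the full pattern being discussed is something like (placeholder URL):

curl https://example.com/install.sh | bash

which is the same as downloading install.sh and running it, just without the script ever being saved to disk.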
I also feel incredibly uncomfortable with this. Ultimately it comes down to if you trust the application or not. If you do then this isn't really a problem as regardless they're getting code execution on your machine. If you don't, well then don't install the application. In general I don't like installing applications that aren't from my distro's official repositories but mostly because I like knowing at least they trust it and think it's safe, as opposed to any software that isn't which is more of an unknown.
Also, it's unlikely for the script to be malicious if the application is not. Further, I'm not sure a manual install really protects anyone from anything. Inexperienced users will go to great lengths and jump through some impressive hoops to try and make something work, to their own detriment sometimes. My favorite example of this is the LTT Linux challenge: apt did EVERYTHING it could think to do to warn that the Steam package was broken and he probably didn't want to install it, and instead of reading the error he just blindly typed out the confirmation statement. Nothing will save a user from ruining their system if they're bound and determined to do something.
In this case apt should have failed gracefully. There is no reason for it to continue if a package is broken. If you want to force a broken package, that can be its own argument.
I'm not sure that would've made a difference. It already makes you go out of your way to force a broken package. This has been discussed in places before, but the simple fact of the matter is that a user who doesn't understand what they're doing will persevere. Putting up barriers is a good thing to do to protect users; spending all your time and effort to cover every edge case is a waste of time, because users will find ways to shoot themselves in the foot.
Most package managers can run arbitrary code on install, upgrade, or removal. You are trusting the code you choose to run on your system no matter where you get it from. Remember the old bug in Ubuntu that ran rm -rf / usr/.. instead of rm -rf /usr/.. and wiped a load of people's systems?
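And it doesn't even take malice; a hypothetical maintainer-script bug of this shape (not the actual Ubuntu one, just a sketch of how easily it happens) is all it takes:

# hypothetical cleanup step in an install/remove script
INSTALL_DIR="$PREFIX"        # suppose PREFIX never got set on this machine
rm -rf "$INSTALL_DIR/"*      # with PREFIX empty this expands to rm -rf /* and, run as root, deletes everything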
Flatpaks, AppArmor and Snaps are better in this regard, as they are somewhat more sandboxed and can restrict what the applications have access to.
But really, if the install script is from the authors of the package then it should be just as trustworthy as the package. Still, I generally download and read the install scripts, as there is no standard they follow and I don't want them touching random system files in ways I am not aware of or cannot undo easily. Sometimes they are just detecting the OS and picking the relevant packages to install - maybe with some third-party repos. Other times they mess with your home partition and do a bunch of stuff, including messing with bashrc files to add things to your PATH, which I don't like. I would never run an install script that is not from the author of the application, though, and I'd be very wary of install scripts for a smaller package with fewer users.
Just direct it into a file, read the script, and run it if you're happy. It's just a shorthand that doesn't require saving the script that will only be used once.
Yeah, I hate this stuff too. I usually pipe it into a file, figure out what it's doing, and manually install the program from there.
FWIW I've never found anything malicious in these scripts, but my internal dialogue starts screaming when I see them in the wild. I don't want to run some script without knowing what it's touching; malicious or not, it's a PITA.
As a Linux user, I like to know what's happening under the hood as best I can, and these scripts go against that.
Am I the only one who cringes when I have to update my system?
How do I know the maintainers of the repo haven't gone rogue and are now distributing malware?
DAE get anxious when running code on computer?
I think for the sake of security we should just use rocks, stones, and such to destroy all computers, as this would prevent malicious software from being executed.
I realise you're trolling but actually yes. This is why I use Debian stable where possible - if egregious malware shows up it will probably be discovered by all the folks using rolling distros first.
I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written.
So you are concerned with security, but you understand that there aren't actually any security concerns... and actually you're worried about coding mistakes in shitty Bash?