46 comments
  • "Traditionally, the /opt directory is used for installing/storing the files of third-party applications that are not available from the distribution’s repository.

    The normal practice is to keep the software code in opt and then link the binary file in the /bin directory so that all the users can run it."

    https://linuxhandbook.com/linux-directory-structure/

  • Let's say you want to compile and install a program for yourself from its source code form. There's generally a lot of choice here:

    You could (theoretically) use / as its installation prefix, meaning its binaries would then probably go underneath /bin, its libraries underneath /lib, its asset files underneath /share, and so on. But that would be terrible because it would go against all conventions. Conventions (FHS etc.) state that the more "important" a program is, the closer it should be to the root of the filesystem ("/"). Meaning, /bin would be reserved for core system utilities, not any graphical end user applications.

    You could also use /usr as installation prefix, in which case it would go into /usr/bin, /usr/lib, /usr/share, etc. But that's also a terrible idea, because your package manager - or rather, the package maintainers of the packages you install from your distribution - uses that as its installation prefix. Everything underneath /usr (except /usr/local) is under the administration of your distro's packages and package manager, so you should never put other stuff there.

    /usr/local is the exception - it's where it's safe to put any other stuff. Then there's also /opt; both serve a similar purpose. Underneath /usr/local, a program is traditionally split up by file type - binaries go into /usr/local/bin, libraries into /usr/local/lib, and so on. As long as you made a package out of the installation, your package manager knows which files belong to the program, so that's not a big deal. It would be a big deal if you installed it without a package manager, though - then you'd probably be unable to find all of the installed files when you want to remove them. /opt is different in that regard: everything lives underneath /opt/<programname>/, so all files belonging to a program can easily be found. As a downside, you'd always have to add /opt/<programname>/ (or its bin subdirectory) to your $PATH if you want to run the program's executable directly from the command line. So /opt behaves similarly to C:\Program Files\ on Windows, while the other locations are more Unix-style and split each program's files up. But everything in the filesystem layout is a convention, not a hard and fast rule - you could always change it, though that's not recommended.
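    As a minimal sketch of the /opt approach described above (the program name "myapp" and its paths are hypothetical, not from any real package):

```shell
# Hypothetical program "myapp" installed with /opt/myapp as prefix, e.g.:
#   ./configure --prefix=/opt/myapp && make && sudo make install
# Everything then lives under one tree: /opt/myapp/bin, /opt/myapp/lib, ...

# To run it by name, prepend its bin directory to PATH:
PATH="/opt/myapp/bin:$PATH"

# ...or symlink the executable into a directory that's already on PATH:
#   sudo ln -s /opt/myapp/bin/myapp /usr/local/bin/myapp

# Sanity check: the /opt entry is now first in PATH.
echo "$PATH" | grep -q '^/opt/myapp/bin:'
```

    The symlink variant is what the linuxhandbook quote at the top refers to: code stays in /opt, only the binary is linked into a standard bin directory.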

    Another option altogether is to install it on a per-user basis into your $HOME, typically using ~/.local/ as the installation prefix. Then binaries go into ~/.local/bin/ (which is also where I place any self-written scripts and small single-file executables), and so on. Using a hidden directory like .local also means you won't visually clutter your home directory so much. And since ~/.local/share, ~/.local/state and so on are already defined by the XDG (FreeDesktop) standards anyway, ~/.local is a great choice for installing stuff for your user only.
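    For the per-user setup, the only extra step is making sure ~/.local/bin is on your PATH. A common idempotent snippet for that (many distros' default shell profiles already contain an equivalent):

```shell
# Idempotently ensure ~/.local/bin is on PATH before running per-user installs.
case ":$PATH:" in
  *":$HOME/.local/bin:"*) : ;;                 # already there, do nothing
  *) PATH="$HOME/.local/bin:$PATH" ;;
esac

# A program installed with --prefix="$HOME/.local" can now be run by name,
# and its data files land in XDG locations like ~/.local/share.
echo ":$PATH:" | grep -qF ":$HOME/.local/bin:"
```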

    Hope that helps clear up some confusion. It's still confusing overall because the FHS is a historically grown standard, and the Unix filesystem tree isn't 100% rational or well thought out. Modern Linux distributions and packaging strategies mitigate some of its problems and try to make things more consistent (e.g. by symlinking /bin to /usr/bin and so on), but several issues remain. And then you have third-party applications installed via standalone scripts doing whatever they want anyway. It's a bit messy, but if you follow some basic conventions and sane advice, it's only slightly messy.

    Always try to find and prefer packages built for your distribution when installing new software, or distro-independent packages like Flatpaks. Only as a last resort should you run "installer scripts", which do random things without your package manager knowing about anything they install - such scripts are the usual reason why things become messy or even break. And if you build software yourself, try to create a package out of it for your distribution and install that package using your package manager, so that the package manager knows about it and you can easily remove or update it later.

  • It is my understanding that /opt is short for "optional", a bit like how /etc actually means "et cetera", and that their modern use cases evolved rather than being designed. /etc became a catch-all for system settings and configuration, and /opt became a place for executable binaries other than those managed by the local package manager - which often meant "a big monolithic binary that arrives in a .tar.gz file". /opt became less popular once /usr/local/bin was established for basically the same purpose.

    In modern practice, /opt gets used as a drunk tank for irritatingly unfriendly monolithic software. A lot of the time, software that isn't managed by the system's package manager gets put in /usr/local/bin, but occasionally if it's a pain in the ass it'll get put in /opt.

  • Generally, when you can install software through your system's package manager, that's the preferred way to get it onto your system. For the most part, those applications live under /usr.

    If for some reason you prefer to install a package manually, best practice is to install it outside /usr to avoid potential conflicts with system libraries managed by the package manager. The /opt ("optional") directory is a common place for such apps, and many modern install scripts already default to /opt.

    It's also convenient for backing up those apps.
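    The backup convenience follows from the single-directory layout: the whole app is one tar command in each direction. A runnable sketch, using a temporary stand-in directory instead of a real /opt/<yourapp> so it works without root:

```shell
# Because an /opt-style app lives under a single directory, backup and
# restore are one tar command each. Stand-in directory used here; in
# practice you'd point tar at /opt and your app's directory name.
top=$(mktemp -d)
mkdir -p "$top/myapp/bin"
echo demo > "$top/myapp/bin/myapp"

tar -czf "$top/myapp-backup.tar.gz" -C "$top" myapp   # back up the whole tree
dest=$(mktemp -d)
tar -xzf "$top/myapp-backup.tar.gz" -C "$dest"        # restore it elsewhere
test -f "$dest/myapp/bin/myapp"                       # tree came back intact
```

    Doing the same for a program split across /usr/local/bin, /usr/local/lib and /usr/local/share would require knowing every installed file first.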

  • That is where you put your optional packages; everyone has their own use for it. I use it to store my Docker container configs.
