
Noob Question Thread: Ask Any Questions About Linux!

I thought I'd make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!

I'll try my best to answer any questions here, but I hope others in the community will contribute too!

412 comments
  • How do symlinks work from the point of view of software?

    Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.

    Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it's a filesystem level stuff and software (player) basically has no idea if a file I'm opening is a symlink or the original movie.mp4?

    Can I use sync software (like Dropbox, Gdrive or whatever) to sync symlinks? Can I use sync software to sync actual files, but only have symlinks in my sync folder?

    Is there a rule of thumb to predict how software behaves when dealing with symlinks?

    I just don't grok symbolic links.

    • A symlink works more closely to the first way you described it. The software opening a symlink has to actually follow it. It's possible for software not to follow the symlink (either intentionally or not).

      So your sync software has to actually be able to follow symlinks. I'm not familiar with how gdrive and similar solutions work, but I know this is possible with something like rsync.

    • A symlink is a file that contains a shortcut (a text string that is automatically interpreted and followed by the operating system) reference to another file or directory in the system. It's more or less like a Windows shortcut.

      If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink continues to point to the now non-existent file/directory. Symlinks can point to files or directories regardless of volume/partition (hardlinks can't).

      Different programs treat symlinks differently. The majority of software just treats them transparently and acts as if it's operating on a "real" file or directory. Sometimes this has unexpected results, e.g. when a program tries to determine what the previous or current directory is.

      There's also software that needs to be "symlink aware" (like shells) and identify and manipulate them directly.

      You can upload a symlink to Dropbox/Gdrive etc. and it'll appear as a normal file (probably with a very small file size), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, as it can create filename conflicts, and you need to make sure it's preserved as a symlink when restored. Most backup software is "symlink aware".

    • Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file at that path (or returns an error if it can't for some reason). But if a program needs to perform specific actions on symlinks, it is able to check the file type and resolve the symlink's path itself.

      To determine how a specific piece of software handles symlinks, read its documentation. It may have settings like "follow symlinks" or "don't follow symlinks".

    • Whenever I open the symlink, does the software (player) understand «oh this file seems like a symlink, I should go and open the original file», or it’s a filesystem level stuff and software (player) basically has no idea if a file I’m opening is a symlink or the original movie.mp4?

      Others have answered well already; I'll just add that symlinks work at the filesystem level, but the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically dereference the link, meaning it will detect the symlink and jump to the place the symlink points to.

      A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply lets the operating system dereference the file path for it.

      But some apps that work on directories and files together (like "find", "tar", "zip", or "git") do need to worry about symlinks, and will check whether a path is a symlink before deciding whether to dereference it. For example, you can ask the "find" command to list only symlinks without dereferencing them: find -type l
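
      A quick way to see both behaviours from a shell (the paths and the player are just examples, not anything from the thread):

          ln -s ~/Downloads/movie.mp4 ~/movie.mp4    # create the symlink
          mpv ~/movie.mp4              # a player just open()s the path; the kernel follows the link for it
          ls -l ~/movie.mp4            # symlink-aware: shows "movie.mp4 -> /home/you/Downloads/movie.mp4"
          readlink ~/movie.mp4         # prints the stored target path without following it
          find ~ -maxdepth 1 -type l   # lists symlinks in your home directory without dereferencing them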

    • ELI5: when a computer stores something like a file or a folder, it needs to know where it lives and where its contents are stored. Normally, where a file or folder lives is the same place as where its contents are. But there are times when a file may live in one place while its contents are elsewhere. That's a symlink.

      So for your video example, the original video is located in Downloads, so the video file says "I am movie.mp4, I live in Downloads, and my contents are in Downloads." The symlink says, "I am movie.mp4, I live in home, and my contents are in Downloads over there."

      For a video player, it doesn't care whether the file and its contents are in the same place; it just needs to know where the contents live.

      Now, there's no absolute rule for how software will treat a symlink. For example, if you have 2 PCs synced with cloud storage, and both Downloads and home are being synced between them, your cloud storage will look at the symlink, access the content from PC 1, and put your movie.mp4 in PC 2's Downloads and home. But it will put the full contents in both places on PC 2, since to it the result is the same. One could make sync software that doesn't break the symlink, but that depends on the developer and the scope of the software.

    • it's a pointer.

      E: Okay so someone downvoted "it's a pointer". Here goes. Both hard links and symbolic links are pointers.

      The hard link is a pointer to a spot on the block device, whereas the symbolic link is a pointer to a location in the filesystem's list of shit.

      That location in the filesystem's list of shit is also a pointer.

      So like if you have /var/2girls1cup.mov, and you click it, the os looks in the file system and sees that /var/2girls1cup.mov means 0x123456EF and it looks there to start reading data.

      If you make a symlink to /var/2girls1cup.mov in /bin called “ls” then when you type “ls”, the os looks at the file in /bin/ls, sees that it points to /var/2girls1cup.mov, looks in the file system and sees that it’s at 0x123456EF and starts reading data there.

      If you made a hard link in /bin called ls it would be a pointer to the location on the block device, 0x123456EF. You’d type “ls” and the os would look in the file system for /bin/ls, see that /bin/ls means 0x123456EF and start reading data from there.

      Okay but who fucking cares? This is stupid!

      If you made /bin/ls into /var/2girls1cup.mov with a symlink then you could use normal tools to work with it: looking at where it points, its attributes, etc., and like delete just the link or fully follow (dereference) the link and delete all the links in the chain, including the last one, which is the filesystem's pointer to 0x123456EF called /var/2girls1cup.mov in our example.

      If you made /bin/ls into a hardlink to 0x123456EF, then when you did stuff to it the os wouldn’t know it’s also called /var/2girls1cup.mov and when /bin/ls didn’t work as expected you’d have to diff the output of mediainfo on both files to see that it’s the same thing and then look where on the hard drive /var/2girls1cup.mov and /bin/ls point to and compare em to see oh, someone replaced my ls with a shock video using a hard link.

      When you delete the /bin/ls hardlink, the os deletes the entry in the file system pointing to 0x123456EF and you are able to put normal /bin/ls back again. Deleting the hard link wouldn't actually remove the data that comprises that file off the drive, because "deleting" a "file" is just removing the filesystem's record that there's something there to be aware of.

      If instead of deleting the /bin/ls hardlink, you opened it up and replaced the video portion of its data with the music video to never gonna give you up, then when someone tried to open /var/2girls1cup.mov they’d instead see that music video.

      If, that is, the file wasn't moved to another place on the block device when you changed it. Never Gonna Give You Up has a much longer running time than 2girls1cup, and without significant compression the os is gonna end up putting /bin/ls in a different place on the block device that can accommodate the longer data stream. If the os does that when you get done modifying your 2girls1cup /bin/ls into a rickroll, then /bin/ls will point to 0x654321EF or something and only you will experience Astley's dulcet tones when you use ls; the old 0x123456EF location will still contain the data that /var/2girls1cup.mov is meant to point to and you will have played yourself.

      Okay with all that said: how does the os know what to do when one of its standard utilities encounters a symlink? They have a standard behavior! It’s usually to “follow” (dereference) the link. What the fuck good would a symbolic link be if it didn’t get treated normally? Sometimes though, like with “ls” or “rm” you might want to see more information or just delete the link. In those cases you gotta look at how the software you’re trying to use treats links.

      Or you can just make some directories and files with touch and try what you wanna do and see what happens, that’s what I do.
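
      If you want to poke at the difference without risking anything, a throwaway sandbox shows it quickly (file names made up):

          mkdir /tmp/linkdemo && cd /tmp/linkdemo
          echo "hello" > original.txt
          ln original.txt hard.txt      # hard link: another directory entry for the same inode
          ln -s original.txt soft.txt   # symlink: a tiny file whose contents are the path "original.txt"
          ls -li                        # hard.txt shares original.txt's inode number; soft.txt has its own
          rm original.txt
          cat hard.txt                  # still prints "hello"; the data lives as long as any hard link remains
          cat soft.txt                  # fails: the path the symlink stores no longer exists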

    • Symlinks are fully transparent to any software that's just opening the file, etc.

      If the software really cares about this (like file managers do), it can simply ask the Linux kernel for additional information, like what type of file it is.

  • Why do programs install somewhere instead of asking me where to?

    EDIT: Thank you all, well explained.

    • Someone already gave an answer, but the reason it's done that way is that on Linux, programs generally don't install themselves - a package manager installs them. Windows (outside of the Windows Store) just trusts programs to install themselves and include their own uninstaller.

    • Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have in Windows or (hidden away) macOS.

      If you compile programs yourself you can choose to put things in different places. Some software is also built to be more self contained, like the Linux binaries of Firefox.

    • you install program A; it needs and installs libpotato. then later you install program B that depends on libfries, and libfries depends on libpotato. however, since you already have libpotato installed, only program B and libfries are installed. The intelligence behind this is called a package manager.

      In Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met - e.g. having to install Visual C++ 2005-202x redistributables for games, the JRE for Java programs, etc.

      instead of making you install everything that you need to run something complex, the package manager does this for you and keeps track of where files are

      and each package manager/distribution has an idea of where files should be stored
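
      as a rough sketch of what that looks like in practice (libpotato/libfries and the program names are made up; apt and pacman shown because both behave this way):

          sudo apt install program-a    # pulls in libpotato automatically
          sudo apt install program-b    # pulls in libfries; libpotato is already present, so it's reused
          apt-cache depends program-b   # inspect what a package declares as dependencies

          sudo pacman -S program-b      # same idea on Arch: missing dependencies get resolved and installed
          pactree program-b             # dependency tree (pactree ships in the pacman-contrib package)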

    • I wish every single app installed in the same directory. Would make life so much easier.

    • Expanding on the other explanations. On Windows, it's fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.

      On Linux, the package manager manages all of that. So if, say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution's package manager manages everything, mostly built from source code, you get a version of the app specifically compiled against the version of GTK the distribution provides.

      So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it's not just one big self-contained package you drop in C:\Program Files. Linux follows the FHS (Filesystem Hierarchy Standard), which roughly defines where things should be. Binaries go to /usr/bin, libraries to /usr/lib, shared files to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu so you can launch them. That said, Linux does have a location for big standalone packages: that's usually /opt.

      There are advantages and inconveniences to both methods. The Linux way has the advantage of being able to update libraries for all apps at once, and it reduces clutter; things are generally more organized. You can guess where an icon file will be located most of the time because they all go to the same place, usually with a naming convention as well.
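
      For example, the .desktop files mentioned above are just small text files; a minimal one (the app name and Exec path here are hypothetical) dropped into the right place is all it takes to get a menu entry:

          # ~/.local/share/applications/myapp.desktop  (per-user equivalent of /usr/share/applications)
          [Desktop Entry]
          Type=Application
          Name=My App
          Exec=/usr/bin/myapp
          Icon=myapp
          Categories=Utility;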

    • Because dependencies. You also should not be installing things you download off the internet, nor should you use install scripts.

      The way you install software is through your distro's package manager or Flatpak.

    • different strokes.

      windows comes from the personal computing world and retains a bunch of stuff from it to this very day for no good reason. in this case, there used to be no guarantee that a particular installation target would have the target directory mapped in a consistent way, so the installer would make a guess and give the user a chance to change it.

      if that sounds stupid, it is. no one writes in assembly anymore; they target the OS, and nowadays the OS will have a consistent set of folders to install stuff to. we all know where the program "should" be installed to already.

      but it didn't used to be like that in the PC world! used to be your computer wasn't a fixed-purpose windows computer from the jump, never to be anything else. there were different OSes that people would use regularly and even different DOS environments which a person could use to run programs under. Hard disks weren't disks inside the machine, but big beige external disks that you'd plug up, set beside the computer and access after booting. in that setup where a programmer targeted DOS (if they cared about the execution environment at all and didn't just write for the processor) it made sense to ask where someone was gonna want to install their software, and to what extent they'd even want to start dirtying up the media they paid good money for with some knucklehead's weird files from some goofy program on a stack of floppy disks.

      linux comes from the unix world, where the question of where something installs is easy and straightforward: it installs in $PATH. what is $PATH? it's where the os will look when you try to run something to see if it can run any program by that name. if a program isn't installed in $PATH then when you type its name in and hit enter the computer won't know what the hell you're talking about and you'll have to type its whole-ass location out and hit enter.

      Why didn't unix systems that linux imitates ask you where to install stuff? because usually it wasn't your choice! linux was unix for personal computers and unix was run on systems that took up whole rooms with all sorts of equipment. you might be the user of that system but never have access to the room with all the spinning disks and flashing lights, stuck on a terminal dialing in over a serial line.

      so the assumption was that you'd have a variable in your user environment that would say where things were installed but not that you'd have the ability to change it or even install things.

      so why in a linux environment would you ever install anything outside of $PATH or even want to be sure where something's installed at all?

      even under linux it can be useful to do either. installing outside of path keeps programs from being accidentally autocompleted or invoked. installing in a particular component of $PATH ($PATH can be many directories!) lets you put serious business programs that demand maximum performance on faster media.

      so why the hell won't linux systems give you the option of installing in a specific location or outside of $PATH altogether?

      they will, but unlike windows, they don't ask you. unless you specifically ask to do that unique and very abnormal operation, they just do the usual thing. when you want to install weirdly you gotta dig into your package manager and packaging system. sometimes you unzip a package and change a line in a file then zip it back up and install from your modified version.
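
      a few concrete commands for the $PATH side of this (the --prefix line assumes an autotools-style build; adjust for whatever build system the project uses):

          echo "$PATH"                   # colon-separated list of directories searched for commands
          type -a ls                     # show which directory a command actually gets found in
          export PATH="$HOME/bin:$PATH"  # put a personal directory ahead of the system ones

          ./configure --prefix="$HOME/.local" && make && make install   # build from source into a non-standard location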

  • Any word on the next generation of matrix math acceleration hardware? Is anything currently getting integrated into the kernel? Where are the gource branches looking interesting for hardware pulls and merges?

  • I'm running EndeavourOS (KDE Plasma) and ran into a weird issue with my graphics. It's like windows sometimes flicker and fight with each other, some fullscreen videos won't play and just lock to a gray screen instead (e.g. in Steam, though YouTube is oddly fine), and most 3D games are super choppy and unplayable.

    I'm not asking how to fix this, I just want to know how I start troubleshooting! I haven't done anything special with my system, and I think the issue started after a normal pacman update. My GPU is a GeForce GTX 1060.

    Any suggestions to get started? I don't even know if the issue is Nvidia drivers, X, window manager, KDE, etc.

    EDIT: The problem was Wayland. Fixed by logging in with X11 instead!

    • Start by checking what windowing system you're using, as it's a fundamental part of problem solving. It's a little confusing how to do this; the top answer in this Stack Exchange thread works well.

      If you're running the latest KDE then you've almost certainly been moved to Wayland, and that will be the source of your problems. Wayland and the Nvidia drivers don't work well together, and KDE has defaulted to Wayland in the latest release. I have had very similar issues to yours with the move to Wayland and have not been able to fix them - they're too fundamental and depend on updates to Wayland and/or the Nvidia drivers.

      I know you don't want a solution, but there isn't one at the moment, so you'd be wasting your time. The workaround is to log out, then on the login screen select Plasma (X11) as your session and log in again.

      Personally I have had to abandon KDE, as I get a different set of problems in X11. I'm on openSUSE Tumbleweed so have little choice in rolling back to the previously functioning version of KDE - I'm using Cinnamon instead and contemplating switching to a different Linux distro, probably openSUSE Leap, in favour of stability over cutting edge.

      Meanwhile I have the latest KDE running on another device with AMD GPU without issue.

      In terms of when it'll be fixed, there is a change being made to Wayland which will affect how it and the Nvidia drivers interact (something called explicit sync). It's just been merged into Wayland, so it will presumably appear downstream in rolling distributions in the next few months. There have been articles suggesting this is going to fix most problems; personally I think that's a little optimistic, but fingers crossed.
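
      For reference, two quick ways to check which session type you're currently in:

          echo "$XDG_SESSION_TYPE"                          # usually prints "wayland" or "x11"
          loginctl show-session "$XDG_SESSION_ID" -p Type   # asks systemd-logind directly, if $XDG_SESSION_ID is set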

      • Oof, yep, that's all it was. I just fired up Elden Ring and it runs great under X11. Thanks a million!

        I'd heard Wayland and Nvidia don't play nicely together, but forgot KDE had officially made the switch. I'm sure I approved the install a while back but probably assumed it was all stable and compatible now. Guess that's what I get for not reading the release notes!

    • Try switching to different versions of your graphics driver and/or kernel. Nvidia cards get really finicky about the version matchups, especially as they age. Try different combinations of the versions that are available via pacman, and maybe it’ll work. You may need to start keeping an eye on updates to your kernel and graphics driver to see if a new update fixes your issue. Welcome to life with an nvidia card. I bought an nvidia card once in 2013. By 2016 I had to start playing this game on upgrades. At one point, the graphics driver was causing kernel panics until I downgraded both and waited a few months. Very happy with AMD.

      • Thanks, I'll try that. I figured an update would fix it by now (it's been a few weeks) but maybe I do need to roll back.

        And yes my other machine has an AMD card. This will be my last one from Nvidia since I've fully switched to Linux.

    • Look in /var/log/Xorg.0.log for Xorg errors.

      Check if OpenGL is okay by running glxinfo (from the package mesa-utils) and checking in the first few lines for "direct rendering: Yes".

      Check if Vulkan is okay by running vulkaninfo (from the package vulkan-tools) and seeing... if it throws errors at you, I guess. There are probably some specific things you could look for but I'm not familiar enough with Vulkan yet.

      You could sudo dmesg and read through looking for problems, but there might be a lot of noise to sift through. I'd start by piping it through grep -i nvidia to look for driver-specific stuff.

      Might be worth running nvidia-settings and poking around to see if anything seems amiss. Not sure what you'd actually be looking for, but yeah.

      Sometimes switching from linux and nvidia to linux-lts and nvidia-lts can help if the problem is in the kernel or driver. Remember to switch both of these at the same time, since drivers need to match the kernel.

      You could also try switching from the nvidia drivers to nouveau. Might offer temporary relief and help narrow down where the problem is, at the expense of probably worse performance in heavy games. Ought to be fine for 2D gaming and general desktopping.

      Trying a different window manager is always an option. Don't know how much hassle that is when you use a full DE; I've always been the "just grab individual lightweight pieces and slap 'em together" sort so I don't have any real experience with KDE. But yeah. Find out what the right way to change WM is for your system, then try swapping over to Openbox or something minimal like that and see what happens.

      Related to WM/DE, it could be an issue with the compositor maybe. Look up whatever KDE's compositor is and see if you can turn it off and run a different one?
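
      If it helps, most of the checks above boil down to a handful of one-liners (package names as mentioned; Xorg marks errors with "(EE)" in its log):

          grep "(EE)" /var/log/Xorg.0.log        # Xorg errors, if you're on an X11 session
          glxinfo | grep "direct rendering"      # from mesa-utils; should say "direct rendering: Yes"
          vulkaninfo | head -n 40                # from vulkan-tools; obvious errors show up right at the top
          sudo dmesg | grep -i nvidia            # kernel/driver messages from the proprietary driver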

      • This looks super helpful, thanks!

        I'm a little nervous about swapping entirely over to nouveau for testing (well, moreso switching back) but I'm sure I can find a guide.

        Update: No need, the problem was just Wayland vs X11.

  • What is the practical difference between Arch and Debian based systems? Like what can you actually do on one that you can't on the other?

    • The practical difference is the package manager; Debian-based systems use dpkg/APT with the .deb package format, Arch uses Pacman with .pkg packages.

      Debian-based distros use a stable release cycle, so there are version numbers. The ecosystem is maintained for each version for an extended period of time, so if you have a workflow that requires a specific era of software, you can stick with an older version of the OS to maintain compatibility. This does not necessarily mean the software remains unpatched; security and stability patches are applied, which tends to mean the system is stable. Arch-based distros use a rolling release - basically what Microsoft said they were going to do with Windows 10 being the "last" version of Windows that they'd just keep updating. Upside: newest versions of packages all the time. Downside: newest versions of packages all the time. You get the latest features, and the latest bugs.

      Debian-based distros don't have a unified method of distributing software beyond the standard repositories. Ubuntu tried with PPAs, which kind of sucked. Arch has the Arch User Repository, or AUR.

      Arch itself is designed to be an a la carte operating system. It starts out as a fairly minimal environment, and the user installs the components they want and only the components they want, though many Arch-based distros like Manjaro and EndeavourOS offer pre-configured images. Debian was one of the earliest distros shipped ready to go as a complete OS; I know of no system on the Debian family tree that offers the "here's a shell and a package manager, install it yourself" experience.

      But given an installed and configured Debian and Arch machine, what can one do that the other can't? As in, can it run [application]? Very little.
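
      Day to day, the difference mostly shows up in which commands you type ("foo" is a placeholder package name):

          # Debian-based
          sudo apt update && sudo apt upgrade   # refresh package lists, then apply updates
          sudo apt install foo                  # install a package and its dependencies
          apt search foo                        # search the repositories

          # Arch-based
          sudo pacman -Syu                      # refresh and update in one step
          sudo pacman -S foo                    # install a package and its dependencies
          pacman -Ss foo                        # search the repositories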

    • You can "do" the same things in Debian as you can in Arch; the main difference is packaging philosophy. Debian packages are older and more stable, while in the Arch world you typically have the newest versions of software packages within a few weeks of their release (the caveat being that breakage is a bit more likely). Arch also has user repositories where the community can contribute unofficial packages.

    • You can do pretty much the same things on either. The difference is one is a rolling release with fresh fairly untested packages and the other is a fixed stable system with no major changes happening.

  • what is hyprland

    why do ppl use the CLI for things like making and moving files? i find the GUI easier and faster as well as less prone to mistakes

    what is wayland and xorg, and why does everyone argue about them

    • it's faster for me to type out cp -r source/directory destination/directory than it is to open a file manager, navigate to my source, ctrl-a, ctrl-c, navigate to my destination, ctrl-v. this is not always true. look at the work done by the plan9 people to learn more

      idk what hyprland is specifically, but it's either a window manager or compositor or something for use with wayland.

      wayland and xorg are ways to do graphical user interfaces in unix systems. wayland is supposed to fix problems that have long been solved or worked around in xorg. it's new and doesn't work everywhere or support everything. xorg is old and has problems but it works very well.

    • The CLI has many advantages over a GUI. For one, actions are recordable, repeatable, and scriptable. This saves time, as you can reuse previous commands and edit them appropriately for the current situation, and it makes it easy to look back and verify what you have done. The command line is also a much more stable interface; GUIs change all the time and it's hard to remember where things might be located. The structure of the GNU operating system accessed via the command line facilitates the discovery of installed commands/programs and documentation. You can record these actions once and repeat them on many machines. You can script common activities (e.g. bulk file renaming) that make file and data management easier.

    • Hyprland: don't know. Apparently, from reading someone else's comment, it has to do with Wayland.

      Which leads to answering out of order about Wayland and Xorg. Both are windowing systems, major components of the GUI/desktop environment. Xorg, aka X or X11, is older than Linux; it dates back to the early 80's. It just wasn't designed to handle things like multiple monitors with variable refresh rate and all the wacky stuff we have now. It's amazing it's hung on this long but the sober fact is X is old and busted.

      Wayland is the new hotness meant to replace Xorg. It works a bit differently; some old software won't work with it, so there have to be compatibility layers, and there are some issues with Nvidia compatibility. Very few people want to stubbornly stay with X just for its own sake, but Wayland still doesn't work well for some use cases, which is why there is so much discussion about it.

      I use the CLI for things like making and moving files for a lot of reasons.

      • I'm interacting with another machine through SSH
      • I'm maintaining a server that has no GUI installed
      • I'm doing something kind of weird like using scp to send a file from one computer to another via an SSH tunnel
      • I'm working on a large batch of files.
      • I'm doing something complex or multi-part to a bunch of files.

      For example, when I ripped my DVD collection, I had an issue where the software generated file names like S4D2E3.mp4, i.e. Season 4 Disc 2 Episode 3. I was able to copy-paste a list of the episode names for an entire season into a text file, and then using the CLI I iterated through the lines of that file, renaming each video file and moving it to the correct storage directory. Saved a lot of manual F2-ing.

      Of course, I didn't type those lines of bash each time, I saved it as a script and then ran that each time.
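
      Roughly, the idea looks like this (a sketch, not the exact script; it assumes names.txt holds one episode title per line, in the same order the ripped files sort in, and the destination path is made up):

          #!/usr/bin/env bash
          mapfile -t titles < names.txt        # read the pasted episode titles into an array
          files=(S4D*E*.mp4)                   # the ripper's output files, in sort order
          for i in "${!files[@]}"; do
              mv -- "${files[$i]}" "/mnt/media/Show/Season 4/${titles[$i]}.mp4"
          done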

      Learn a little bit of regex, how to use vim, how pipes work, and a bit about stuff like imagemagick or pandoc or ffmpeg and you'll see why Bash is so handy.

    • Xorg is a display server for the Linux ecosystem. Every ecosystem has a display server. It is what makes it possible for you to have graphical applications with movable windows that can talk to each other, or have a mouse cursor that can click on things.

      Wayland is a replacement for Xorg because Xorg is old and its developers said an alternative is needed. Wayland has differences that I won't discuss here, but I'll be happy to do so if you ask.

      Hyprland is a wayland compositor. A compositor is basically an implementation of wayland (there are many) and gives you a windowing system that you can run graphical applications through. It is usually a lot more minimal than having a full graphical desktop like KDE or Gnome.

      Hyprland belongs to a class of compositors called "tiling", which forces windows into a tiling layout. In other words, windows do not overlap or stack on top of each other. Hyprland stands out for having a lot of eye candy and visual effects.

      I use CLI for moving files, etc. After you use it for a while, you find out it can be more efficient, faster, and more pleasant to work with.

    • hyprland

      A wayland compositor and tiling window manager. The lead developer of the project is a Polish transphobic workaholic.

      why do ppl use the CLI for things like making and moving files? i find the GUI easier and faster as well as less prone to mistakes

      If you understand how shell scripting works you can easily automate menial tasks. CLI is also an interface shared by all operating systems so if you know how to work around in a shell you're not bound to any particular workflow/desktop GUI. Keep using GUIs though, they exist for a reason.

      what is wayland and xorg, and why does everyone argue about them

      Both are display protocols that are in charge of displaying graphics to your screen. Xorg is over 30 years old while wayland is only about 15 years old. The polemic about xorg was that the codebase was unmanageable and the design architecture of the program was inherently flawed (example: screenlocker getting access to your entire screen including apps and desktop, making writing malware for x11 a 3 line python script). X11 was designed during a time when people were using actual real life terminals and mainframes. Wayland is much more modern and akin to how modern graphics APIs are handled (for the most part)

      Wayland at its core has been and always will be design-by-committee, so a lot of the arguing is necessary (though sometimes long-winded) to make sure not to repeat xorg's mistakes. Protocols take months if not years to be merged into Wayland, and those protocols have to be implemented by Wayland compositors themselves rather than everything sharing one program like with xorg.

      Watch this video for more information; it explains things much better and is from an actual Wayland board member.

      Why YOU should write a Wayland compositor – Victoria Brekenfeld – HiP22 Berlin

    • IDK exactly what hyprland is. I think it is a Wayland DE or WM or something. Seems to be popular on unixporn.

      I like CLI 'cause I use it a lot for servers and find a file manager to usually be more than I need, though sometimes being able to see all the icons in a GUI or whatever can be nice. I also use a linux server that I manage through the terminal, so I am used to it.

      Wayland and Xorg are display servers. They basically help draw windows on the screen, and help with input to different windows and things. Xorg has been around a while and has good support, but it has security issues. The main one that I know of is that in Xorg any running app can see all keystrokes, so making keyloggers is really easy. Wayland is more secure in that regard, but has performance and stability issues, especially on Nvidia GPUs. People argue about them because people like to argue, idk.

      Edit: Wayland user in case you want to know bias.

  • How do programs that measure available space, like 'lsblk', 'df', 'zfs list', etc., see hardlinks and estimate disk space?

    If I am trying to manage disk space, does the file system correctly display disk space (for example a zfs list)? Or does it think that I have duplicate files/directories because it can't tell what is a hardlink?

    Also, during move operations, zfs dataset migrations, etc... does the hardlinked file continue tracking where the original is? I know it is almost impossible at a system level to discern which is the original.

    • I'm not super familiar with ZFS so I can't elaborate much on those bits, but hardlinks are just pointers to the same inode number (which is a filesystem's internal identifier for every file). The concept of a hardlink is basically a file-level concept. Commands like lsblk, df, etc. work at the filesystem level - they don't know or care about individual files/links; instead, they work off the metadata reported directly by the filesystem. So hardlinks or not, it makes no difference to them.

      Now this is contrary to how tools like du, ncdu etc. work - they traverse the directories and add up the actual sizes of the files. du in particular is clever about it - if it encounters more than one hardlink to the same file during a traversal, it's smart enough to count it only once. Other file-level programs may or may not take this into account, so you'll have to verify their behavior.
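
      You can see that behaviour for yourself with a throwaway file (the sizes and paths are arbitrary):

          mkdir /tmp/hldemo && cd /tmp/hldemo
          dd if=/dev/zero of=big.dat bs=1M count=100 status=none
          ln big.dat copy.dat   # hard link: a second name for the same inode, no extra data written
          ls -li                # both names show the same inode number and a link count of 2
          du -sh .              # ~100M: du notices the shared inode and counts it once
          df -h .               # filesystem-level view; it never looked at individual names at all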

      As for move operations, it depends largely on whether the move is within the same filesystem or across filesystems, and the tools or commands used to perform the move.

      When a file or directory is moved within the same filesystem, it generally doesn't affect hardlinks in a significant way. The inode remains the same, as do the data blocks on the disk. Only the directory entries pointing to the inode are updated. This means if you move a file that has hardlinks pointing to it within the same filesystem, all the links still point to the same inode, and hence, to the same content. The move operation does not affect the integrity or the accessibility of the hardlinks.

      Moving files or directories across different filesystems (including external storage) behaves differently, because each filesystem has its own set of inodes.

      • The move operation in this scenario is effectively a copy followed by a delete. The file is copied to the target filesystem, which assigns it a new inode, and then the original file is deleted from the source filesystem.
      • If the file had hardlinks within the original filesystem, these links are not copied to the new filesystem. Instead, they remain as separate names still pointing to the original content (the data isn't actually freed until every remaining link is deleted). This means that after the move, the hardlinks in the original filesystem still point to the content that was there before the move, but there's no connection between them and the newly copied file in the new filesystem.

      I believe hardlinks shouldn't affect zfs migrations as well, since it should preserve the inode and object ID information, as per my understanding.

      • This really clears things up for me, thanks! I guess I am not so "new" (been using Linux for 8 years now), but every article I read on hardlinks just confused me. This is much more of a "layman's" explanation for me!

    • I believe that zfs has its own disk usage utilities

  • Is explicit sync a good enough solution to make Wayland gaming with Nvidia a reality (and remove window flickering, like some people claim it will)? It's the last obstacle I see now in trying to move my main PC to Linux, and I don't really want to use X11.

    PS: Lesson learned, next time I'll get an AMD GPU.

  • How can I hide a pinned post without blocking the poster? It bothers me having this at the top of my list all the time, like some reminder on my phone I can't ack and make go away.
