i’ve installed opensuse tumbleweed a bunch of times in the last few years, but i always used ext4 instead of btrfs because of previous bad experiences with it nearly a decade ago. every time, with no exceptions, the partition would crap itself into an irrecoverable state

this time around i figured that, since so many years had passed since i last tried btrfs, the filesystem would be in a more reliable state, so i decided to try it again on a new opensuse installation. already, right after installation, os-prober failed to set up opensuse’s entry in grub, but maybe that’s on me, since my main system is debian (turns out the problem was due to btrfs snapshots)

anyway, after a little more than a week, the partition turned read-only in the middle of a large compilation and then, after i rebooted, the partition died and was irrecoverable. could be due to some bad block or read failure from the hdd (it is supposedly brand new, but i guess it could be busted), but shit like this never happens to me on extfs, even if the hdd is literally dying. also, i have an ext4 and a ufs partition on the same hdd without any issues.
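
for reference, this is roughly how i’d try to tell whether the drive or btrfs gave up first (the mount point and device name below are placeholders, not my actual setup, and the stats command only works while the filesystem can still be mounted):

    # error counters btrfs keeps per device (write/read/flush/corruption/generation)
    sudo btrfs device stats /mnt/point

    # kernel log around the moment the filesystem flipped read-only
    sudo dmesg | grep -i btrfs

    # the drive’s own SMART error log and reallocated-sector counts
    sudo smartctl -a /dev/sdX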

even if we suppose this is the hardware’s fault and not btrfs’s, shouldn’t a file system be a little bit more resilient than that? at this rate, i feel like a cosmic ray could set off a btrfs corruption. i hear people claim all the time how mature btrfs is and that it no longer makes sense to create new ext4 partitions, but either i’m extremely unlucky with btrfs or the system is in fucking perpetual beta state and it will never change, because it is just good enough for companies who, in the case of a partition failure, can just quickly swap the old hdd for a new one and copy the nightly backup over to it

in any case, i am never going to touch btrfs ever again and i’m always going to advise people to choose ext4 instead of btrfs

    • Atemu@lemmy.ml · 27 days ago

      the 1TB drive just magically lost 300+GB of capacity that shows up in use but there is nothing using it

      How did you verify that “nothing” is using it? That’s not a trivial task with btrfs, because any given btrfs filesystem can contain an arbitrary number of filesystem roots, and those roots can be duplicated in seconds.

      If you have ever done a snapshot or enabled automatic snapshots via e.g. snapper or btrbk, data that you have since deleted may still be present in a snapshot. Use btrfs subvolume list / to list all subvolumes and snapshots.
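
      For example, something like this (the snapper config name and the snapshot number are made up here, adjust them to your setup):

          # list every subvolume and snapshot on the filesystem mounted at /
          sudo btrfs subvolume list /

          # with snapper: list snapshots and delete ones you no longer need
          sudo snapper -c root list
          sudo snapper -c root delete 42   # 42 is a hypothetical snapshot number

          # space is only returned once deleted subvolumes have been cleaned up
          sudo btrfs subvolume sync /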

      If you ever feel lost in analysing btrfs data usage, you can use btdu to visually explore where data is located. Note that it never shows 100% accurate usage as it’s based on probabilistic sampling. It’s usually accurate enough to figure out what’s taking up your space after a little while though, so let it scan for a good minute.
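
      Something along these lines (the device and mount point are placeholders; btdu works best when pointed at a mount of the top-level subvolume):

          # overall picture: allocated vs. used space, data vs. metadata
          sudo btrfs filesystem usage /

          # mount the top-level subvolume somewhere and let btdu sample it
          sudo mount -o subvolid=5 /dev/sdX2 /mnt
          sudo btdu /mnt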