• Semi-Hemi-Lemmygod@lemmy.world · 5 months ago

    Pro tip: Defragmenting only helps on spinning drives because it rearranges files into contiguous blocks, so the head doesn’t have to seek all over the platter. Solid-state drives wear out faster if you defragment them, since every write wears the flash cells a little.

    • vocornflakes@lemmy.world · 5 months ago

      I was about to throw hands, but then I learned something new about how SSDs store data in pre-argument research. My poor SSDs. I’ve been killing them.

    • Alawami@lemmy.ml · 5 months ago

      Random reads are still slower than sequential reads on an SSD. Try torrenting for a year on an SSD, then benchmark, defragment, and benchmark again; the difference is very measurable. You may need a Linux filesystem like XFS, as I’m not sure there’s a way to defrag an SSD in Windows.
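
      The sequential-vs-random distinction above can be sketched as a toy micro-benchmark. This is a hedged illustration, not a proper benchmark: the file name, sizes, and block count are arbitrary, and on a warm page cache it mostly measures syscall overhead rather than the drive itself (real tools like fio use direct I/O and much larger files).

```python
import os
import random
import tempfile
import time

BLOCK = 4096   # bytes per read request
BLOCKS = 2048  # 8 MiB scratch file

# Build a scratch file to read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * BLOCKS))
    path = f.name

def read_blocks(order):
    """Read 4 KiB blocks in the given order; return total bytes read."""
    total = 0
    with open(path, "rb") as fh:
        for i in order:
            fh.seek(i * BLOCK)
            total += len(fh.read(BLOCK))
    return total

seq_order = list(range(BLOCKS))
rnd_order = seq_order[:]
random.shuffle(rnd_order)

t0 = time.perf_counter()
seq_bytes = read_blocks(seq_order)
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
rnd_bytes = read_blocks(rnd_order)
t_rnd = time.perf_counter() - t0

print(f"sequential: {t_seq * 1000:.1f} ms, random: {t_rnd * 1000:.1f} ms")
os.unlink(path)
```

      On a fragmented spinning disk the random ordering is dramatically slower; on a clean SSD the two orderings land much closer together, which is what the benchmark-then-defragment comparison is probing.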

      • LazerFX@sh.itjust.works · 5 months ago

        That’s because the drive was written to its limits; the defrag pass issues a TRIM command that safely releases and resets empty blocks. Random reads and sequential reads /on clean drives that are regularly TRIMmed/ are within random variance of each other.

        Source: ran large-scale data collection for a data centre when SSDs were relatively new to the company, so I focused on them a lot, plus lots of data from various sectors since.
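
        For anyone who wants to run a TRIM by hand on Linux, a minimal sketch, assuming util-linux’s fstrim and a systemd-based distro (mount points and timer setup vary between systems):

```shell
# Trim free space on all mounted filesystems that support discard.
# Needs root; harmless to re-run on a drive that is already clean.
sudo fstrim --all --verbose

# Most distros already schedule this weekly via a systemd timer:
systemctl status fstrim.timer
```

        With the timer active there is usually no need to trim manually; the point is that TRIM, not defragmentation, is the maintenance step that matters on an SSD.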