Lots of people here will teach you the 3-2-1 rule. That’s how it’s supposed to be, and you should stick to it if you’re a business or have valuable data… But it’s also not the whole picture.
I think more important than the actual number of backups is making sure they work. I’ve seen computers where the backup or cloud sync had failed and no one noticed, and only after the hard disk got damaged did they realize the last successful backup ran 9 months ago… Or people started saving things in a different directory and that location wasn’t part of the backup. Or the backup was encrypted and the key got lost together with the original data.
Personally I’m a bit cheap about the third copy: I replace it with an old external drive and copy my vacation pictures there every half a year or so. Just don’t store it next to the computer, so it doesn’t all burn down together. I’d say that’s more than enough, and your cloud backup already does 99% of the job: it’s at a (physically) different location and covers all the really important parts (for home use).
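If you’d rather script that “copy to the old external drive” step than do it by hand, here’s a minimal sketch in Python. The paths (`~/Pictures/vacation`, `/media/offsite-drive/...`) are just placeholders for illustration, adjust them to your own setup:

```python
#!/usr/bin/env python3
"""Copy new or changed vacation pictures to an external drive.

Minimal sketch: source and destination paths are placeholders.
"""
import shutil
from pathlib import Path

SOURCE = Path.home() / "Pictures" / "vacation"        # where the pictures live
DEST = Path("/media/offsite-drive/vacation-backup")   # mounted external drive


def copy_new_files(source: Path, dest: Path) -> None:
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        target = dest / src_file.relative_to(source)
        # Only copy if the file is missing or older on the external drive.
        if not target.exists() or src_file.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, target)  # copy2 keeps the timestamps
            print(f"copied {src_file} -> {target}")


if __name__ == "__main__":
    if not SOURCE.exists():
        raise SystemExit(f"Source folder {SOURCE} not found.")
    if not DEST.parent.exists():
        raise SystemExit("External drive not mounted, plug it in first.")
    copy_new_files(SOURCE, DEST)
```

Run it whenever you plug the drive in; it only copies files that are missing or newer on the source side, so repeated runs stay quick.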
Correct. What it appears to be and what it actually is are often two very different things. And people often underestimate situations like disaster recovery… Everything is fine and dandy on the day you configure the backup job. But the day you actually need it is, by definition, a disaster: everything has already gone wrong, and now you need your plan to work flawlessly. There are a lot of things that can go wrong, I’ve only highlighted a few of them, and lots of people have been burned by them before. There is only one way to make sure it works, and that is to test the whole procedure. And ideally not just once right after you configured it, because things can break later on, too.
I’d say yes, for home use that’s perfectly fine.
Monitoring whether the backup task succeeded is important, but that’s the easy part of ensuring it works.
A backup only counts as working if it can be restored. If you don’t test that you can actually restore it in case of disaster, you don’t really know whether it’s working.
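To make the “test the restore” part concrete, here’s a minimal sketch (paths are placeholders). It assumes you have already restored a copy of your data into a scratch directory with whatever backup tool you use; the script then compares checksums of the restored files against the live originals:

```python
#!/usr/bin/env python3
"""Spot-check a restore: compare checksums of restored files vs. originals.

Sketch only -- assumes the backup was already restored into RESTORE_DIR
with your backup tool of choice; both paths are placeholders.
"""
import hashlib
from pathlib import Path

ORIGINAL_DIR = Path.home() / "Documents"            # live data
RESTORE_DIR = Path("/tmp/restore-test/Documents")   # where the test restore landed


def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def compare(original: Path, restored: Path) -> bool:
    ok = True
    for src in original.rglob("*"):
        if not src.is_file():
            continue
        copy = restored / src.relative_to(original)
        if not copy.exists():
            print(f"MISSING in restore: {src}")
            ok = False
        elif sha256(src) != sha256(copy):
            print(f"MISMATCH: {src}")
            ok = False
    return ok


if __name__ == "__main__":
    raise SystemExit(0 if compare(ORIGINAL_DIR, RESTORE_DIR) else 1)
```

Files you’ve edited since the last backup will of course show up as mismatches; the interesting findings are files that are missing from the restore or silently corrupted.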
Yes, absolutely. Ideally there would be an automated check that runs periodically and alerts if things don’t work as expected.
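Something along these lines, for example. This is a rough sketch that assumes your backup tool writes its snapshots or archives into one directory (`BACKUP_DIR` and `MAX_AGE_DAYS` are placeholders); hook it into cron or a systemd timer and let a non-zero exit status trigger whatever alerting you already have:

```python
#!/usr/bin/env python3
"""Alert if the newest backup is older than expected.

Rough sketch: BACKUP_DIR and MAX_AGE_DAYS are assumptions, adjust to taste.
Exit code 1 means "stale or missing backup", so cron/systemd can alert on it.
"""
import sys
import time
from pathlib import Path

BACKUP_DIR = Path("/mnt/backups")   # wherever your backup tool writes to
MAX_AGE_DAYS = 2                    # alert if the newest backup is older than this


def newest_backup_age_days(backup_dir: Path) -> float | None:
    files = [p for p in backup_dir.rglob("*") if p.is_file()]
    if not files:
        return None
    newest = max(p.stat().st_mtime for p in files)
    return (time.time() - newest) / 86400


if __name__ == "__main__":
    age = newest_backup_age_days(BACKUP_DIR)
    if age is None:
        print("ALERT: no backup files found at all", file=sys.stderr)
        sys.exit(1)
    if age > MAX_AGE_DAYS:
        print(f"ALERT: newest backup is {age:.1f} days old", file=sys.stderr)
        sys.exit(1)
    print(f"OK: newest backup is {age:.1f} days old")
```

That only catches the “last successful backup ran 9 months ago” class of problems, of course; it’s no substitute for the occasional full restore test.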