Guest
I believe there is an issue in that Time Machine backups use hard links, so that unchanged files are not duplicated in each backup version. S3 has no concept of file links, so aws s3 sync either copies each linked file (taking huge space and bandwidth) or skips it (making it very fiddly to download a complete version), depending on the --[no-]follow-symlinks parameter; the default is to follow them. (I presume the commercial backup services get round this.)
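As a rough sanity check before syncing, you can count how much of a tree is hard-link deduplicated; this is just a sketch with a placeholder path, using the standard find -links test:

```shell
# Regular files with a hard-link count above 1 are the ones Time Machine
# has deduplicated across snapshots; a plain sync would upload each in full.
find /path/to/backup -type f -links +1 | wc -l
```

A large count here means a naive sync will transfer far more data than the snapshots actually occupy on disk.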
I just needed a backup of individual files, rather than a fully bootable, versioned dump, so I set up a bucket in S3, then used the following bash script:
until AWS_SHARED_CREDENTIALS_FILE="/path/to/credentials/in/mounted/encrypted/file" \
      AWS_MAX_ATTEMPTS=99999 \
      aws s3 sync /path/to/backup/ s3://mybucket/path/ \
          --storage-class DEEP_ARCHIVE \
          --region my-region-1 \
          --output json \
          --sse-c AES256 \
          --sse-c-key "fileb:///path/to/encryption/key/in/mounted/encrypted/file" \
          --no-follow-symlinks \
          --exclude "*.ssh/*" \
          --exclude "*.aws/*"
do
    echo "Retrying whole backup unless you ctrl+C within 2 secs"
    sleep 2
done
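Before committing to a large (and billable) upload, the sync can be previewed; --dryrun is a standard aws s3 sync flag that lists what would be transferred without touching the bucket. A sketch using the same placeholder paths as above:

```shell
# Show what would be uploaded, without transferring anything or
# creating any PutObject requests.
aws s3 sync /path/to/backup/ s3://mybucket/path/ \
    --storage-class DEEP_ARCHIVE \
    --no-follow-symlinks \
    --dryrun
```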
The encryption key file can be binary, although I kept mine as plain text so I can store a copy. I put both my AWS credentials and the encryption key on encrypted drives that I mount only while backing up.
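The key passed via fileb:// for SSE-C with AES256 must be exactly 32 bytes (a 32-character ASCII passphrase also satisfies this, which is one way to keep it as storable text). A minimal sketch for generating a random key, assuming openssl is available; the filename is just an example:

```shell
# Generate a random 256-bit (32-byte) SSE-C key.
openssl rand -out sse-c.key 32

# The CLI rejects keys that are not exactly 32 bytes, so check the length.
[ "$(wc -c < sse-c.key)" -eq 32 ] && echo "key ok"
```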
I chose Deep Archive: a request to download can take up to 24 hours, but storage is about 0.1 cents per gigabyte-month.
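Deep Archive objects have to be restored to S3 before they can be downloaded. A sketch using aws s3api restore-object (bucket and key are placeholders); as I understand the tiers, Bulk is cheapest and typically completes within 48 hours for Deep Archive, while Standard is faster:

```shell
# Ask S3 to stage a Deep Archive object for download for 7 days.
aws s3api restore-object \
    --bucket mybucket \
    --key path/somefile \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'

# Poll the object's Restore status, then download as usual once it is ready.
aws s3api head-object --bucket mybucket --key path/somefile
```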
I timed transfer rates to each AWS region, then chose the quickest of the cheap ones (prices vary slightly between regions).
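A crude way to compare regions is to probe the public S3 regional endpoints; this sketch needs network access, measures only latency rather than sustained throughput, and the region list is just an example:

```shell
# Rough round-trip timing against S3 regional endpoints.
for region in us-east-1 eu-west-1 ap-southeast-2; do
    printf '%s: ' "$region"
    curl -s -o /dev/null -w '%{time_total}s\n' "https://s3.$region.amazonaws.com/"
done
```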
The until loop retries the whole sync until it exits with status 0 (I have a very unreliable connection).
It cost me $24 to upload the files (400,000 PutObject requests), then about $6 a year to store 450 GB.
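Those figures are in the right ballpark under assumed published rates (roughly $0.05 per 1,000 PUT requests to Deep Archive and roughly $0.00099 per GB-month; both numbers are assumptions here, so check current pricing):

```shell
# Upload cost: 400,000 PutObject requests at an assumed $0.05 per 1,000.
awk 'BEGIN { printf "upload: $%.2f\n", 400000 / 1000 * 0.05 }'
# prints: upload: $20.00

# Storage cost: 450 GB for a year at an assumed $0.00099 per GB-month.
awk 'BEGIN { printf "storage/year: $%.2f\n", 450 * 0.00099 * 12 }'
# prints: storage/year: $5.35
```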
Files will not be consistent with each other, so this is no good for, e.g., backing up application state at a point in time. But it is a good last-resort backup of files should all my offline backups fail (this was inspired by a home robbery in which my laptop and disks were stolen; fortunately they missed a hidden hard disk).