
Time machine backup on S3


Is there any possibility to use Amazon S3 for Mac OS X time machine backups?

Asked by: Guest | Views: 266
Total answers/comments: 5
Guest [Entry]

"I've used arq for years and love it. It is not timemachine (bummer) but does do automatic backups to either Amazon's S3 or Glacier.

Update

As of 5/4/2015, Arq supports incremental backups to the following services:

Amazon S3 and Glacier
Google Drive
Microsoft OneDrive
Dropbox
Google Cloud Storage
SFTP to your own server
DreamObjects
Other S3 compatible services"
Guest [Entry]

"Edit: I tried this and it didn't work. (Time Machine cannot see the mounted volume/bucket.)

You may be able to use Panic's Transmit app to [mount an S3 bucket as
a local Volume][1] and then point Time Machine to that mounted volume
as the destination.

I haven't tried this yet, but I plan to.

[1]: library.panic.com/transmit/td-install/"
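For what it's worth, if the bucket did mount as a normal volume, pointing Time Machine at it could in principle be done from the command line with tmutil. A minimal sketch, assuming a hypothetical mount point /Volumes/S3Backup; the failure described in the edit above suggests Time Machine rejects this kind of volume, so treat this as untested:

# Set the mounted bucket as the Time Machine destination (requires root).
sudo tmutil setdestination /Volumes/S3Backup

# Confirm the destination was accepted, then trigger a backup manually.
tmutil destinationinfo
sudo tmutil startbackup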
Guest [Entry]

"Automatically backup your Mac to Amazon S3 *

There are some great tools already in existence that can do most of the heavy lifting for you. The primary tool for doing remote directory syncs is called s3sync, which is a script written in Ruby. Lucky for us, OS X comes with Ruby pre-installed, so there isn’t much work to get it working.
Here is my step-by-step guide to getting your machine set up to do automatic daily backups to Amazon. I developed these steps on my MacBook Air running Leopard; however, they should work for previous versions of OS X as well.
Continue Reading...

* I cannot confirm the success of this method"
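Since the linked write-up may no longer be available, here is a minimal sketch of the kind of scheduled daily sync it describes, using the modern aws CLI in place of the old Ruby s3sync script. The bucket name, paths, and schedule are placeholders, not taken from the original guide:

# Hypothetical crontab entry (edit with `crontab -e`): sync ~/Documents
# to an S3 bucket every day at 02:30 and append output to a log file.
# Assumes the AWS CLI is installed at /usr/local/bin/aws and credentials
# were already set up with `aws configure`.
30 2 * * * /usr/local/bin/aws s3 sync "$HOME/Documents" s3://my-backup-bucket/Documents >> "$HOME/Library/Logs/s3-backup.log" 2>&1

On a current Mac a launchd agent would be the more idiomatic way to schedule this, but cron still works and keeps the example short.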
Guest [Entry]

Also remember that even if it is backed by S3, S3 isn't a file system. You can't read from or append to an existing object; the whole object has to be rewritten (a new PUT) every time it changes. I'm not sure what that translates to in cost terms with the sparse bundle format used by Apple's Time Machine, where the volume is divided into 8 MB chunks.
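You can see that chunking for yourself by creating a sparse bundle by hand and listing its band files. A rough sketch (macOS only; the image name is just an example):

# Create a 1 GB sparse bundle disk image; hdiutil stores the volume as a
# directory of fixed-size "band" files rather than one monolithic file.
hdiutil create -size 1g -fs HFS+ -volname DemoTM -type SPARSEBUNDLE demo.sparsebundle

# Only bands that actually contain data exist on disk.
ls demo.sparsebundle/bands

# Any write inside the mounted volume rewrites whole band files, and every
# rewritten band would have to be re-uploaded to S3 as a full PutObject.

Whether that works out cheaper or dearer than re-uploading whole files depends on how your data changes, which is presumably the cost uncertainty this answer is pointing at.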
Guest [Entry]

"I believe there is an issue in that TimeMachine backups use file links, so that it doesn't need to duplicate the files for each backup version. S3 doesn't understand file links, so either copies the file, taking huge space and bandwidth, or doesn't, meaning downloading a complete version is very fiddly, depending on the --[no-]follow-symlinks parameter, default is to follow them. (I presume the backup services get round this.)

I just needed a backup of individual files, rather than a fully bootable, versioned dump, so I set up a bucket in S3, then used the following bash script:

until AWS_SHARED_CREDENTIALS_FILE="/path/to/credentials/in/mounted/encrypted/file" \
AWS_MAX_ATTEMPTS=99999 \
aws s3 sync /path/to/backup/ s3://mybucket/path/ \
--storage-class DEEP_ARCHIVE \
--region my-region-1 \
--output json \
--sse-c AES256 \
--sse-c-key "fileb:///path/to/encryption/key/in/mounted/encrypted/file" \
--no-follow-symlinks \
--exclude "*.ssh/*" \
--exclude "*.aws/*"
do echo "Retrying whole backup unless you ctrl+C within 2 secs"
sleep 2
done

The encryption key file can be binary, although I kept mine as Unicode text so I can store a copy. I put both my AWS credentials and the encryption key on encrypted drives that I mount while I back up.

I chose Deep Archive: a request to download can take around 24 hours, but storage is about 0.1 cents per gigabyte per month.
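One practical note for anyone copying the DEEP_ARCHIVE choice: objects in that storage class can't be downloaded directly; you first issue a restore request and wait for a temporary copy. A minimal sketch with the aws CLI (bucket and key are placeholders):

# Request a temporary restored copy, kept for 7 days, using the cheaper Bulk
# tier (restores can take up to roughly 48 hours at that tier).
aws s3api restore-object \
  --bucket mybucket \
  --key path/somefile \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'

# Poll until the Restore field reports ongoing-request="false", then download.
# Objects uploaded with SSE-C (as in the script above) also need the
# --sse-customer-algorithm and --sse-customer-key options on head/get calls.
aws s3api head-object --bucket mybucket --key path/somefile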

I timed transfer rates to each AWS location, then chose the quickest, cheapest one (prices vary slightly).

This script then loops until sync returns 0 (I've a very unreliable connection).

Cost me $24 to upload the files (400,000 PutObject requests), then $6 a year to store 450 GB.

Files will not be consistent, so this isn't any good for, e.g., backing up application state at a point in time. But it is a good last-resort backup of files, should all my offline backups fail (this was inspired by a home robbery in which my laptop and disks were stolen; fortunately they missed a hidden hard disk)."