
Re: [Duplicity-talk] mixed storage classes on S3


From: hamish-duplicity
Subject: Re: [Duplicity-talk] mixed storage classes on S3
Date: Mon, 8 Aug 2022 18:43:53 +1000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.12.0

On 8/8/22 18:24, edgar.soldin--- via Duplicity-talk wrote:
> On 08.08.2022 07:30, hamish-duplicity--- via Duplicity-talk wrote:
>> I am backing up to S3, and I set --s3-use-ia to set the storage class. I am also using an S3 life cycle rule to transition the files to Glacier (flexible retrieval) after 120 days.
>>
>> I think it would be useful to keep the metadata (manifest and signatures) in a different class than the data, so that I can purge my local cache of the metadata but get it back more readily if needed.
>
> according to the source
>  https://gitlab.com/duplicity/duplicity/-/blob/main/duplicity/backends/s3_boto3_backend.py#L117-131
> you can set multiple `--s3-use-` options.
>
> manifests currently will never use glacier, glacier_ir, deep_archive but use the other given class instead. eg.
>  --s3-use-ia --s3-use-glacier
> would put all files up as class glacier except the manifests which will be saved as standard_ia .

Thanks, that's good to know. I would guess that the manifest files are excluded because they are so small, and S3 Glacier has a minimum effective object size of 128 KB.
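
If I read that right, the combined invocation would look something like this (the bucket name, host prefix and source path below are just placeholders):

    duplicity --s3-use-ia --s3-use-glacier \
        /home/hamish \
        boto3+s3://my-bucket/myhost/

i.e. the difftar volumes go up as GLACIER while the manifests stay in STANDARD_IA, as you describe.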


>> Unfortunately the S3 life cycle rules don't let you match by filename patterns, so the only way to do this is to add tags to the files (either when uploaded, or later, possibly with a lambda), or by setting the storage class differently on the files when they are uploaded. Although then the data would go straight to glacier rather than waiting 120 days.
>>
>> Would anyone have any suggestions on this?

> afaiu others use the `--file-prefix-*` options for this as S3 is able to work with those.
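
If I understand that suggestion, it would be something like the following, with everything at the top of the bucket, a prefix on the archive volumes only, and a lifecycle rule matching on that prefix (the prefix, bucket and rule names are made up for the example):

    duplicity --s3-use-ia \
        --file-prefix-archive data- \
        /home/hamish \
        boto3+s3://my-bucket/

    aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
        --lifecycle-configuration '{
          "Rules": [{
            "ID": "data-volumes-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "data-"},
            "Transitions": [{"Days": 120, "StorageClass": "GLACIER"}]
          }]
        }'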


Alas, I'm storing my backups in subdirectories of the bucket (as I back up multiple hosts into the same bucket), so the S3 rules still won't match.
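
The tagging route I mentioned would look roughly like this: tag each data volume (as far as I can tell duplicity has no option to do that itself at upload, so it would have to happen afterwards, e.g. from a Lambda on ObjectCreated) and have the lifecycle rule filter on the tag instead of a key prefix. The tag key/value and the object key below are invented for the example:

    # tag one existing volume
    aws s3api put-object-tagging --bucket my-bucket \
        --key myhost/duplicity-full.20220808T073000Z.vol1.difftar.gpg \
        --tagging '{"TagSet": [{"Key": "duplicity", "Value": "data"}]}'

    # lifecycle rule keyed on the tag rather than a prefix
    aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
        --lifecycle-configuration '{
          "Rules": [{
            "ID": "tagged-volumes-to-glacier",
            "Status": "Enabled",
            "Filter": {"Tag": {"Key": "duplicity", "Value": "data"}},
            "Transitions": [{"Days": 120, "StorageClass": "GLACIER"}]
          }]
        }'

That still leaves the question of getting the tags applied reliably in the first place, which is why I was hoping for something simpler.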


regards

Hamish



