Move destination files to cold archive?

Move destination files to cold archive?

Frank-Ulrich Sommer via duplicity-talk mailing list
I have an existing backup solution that creates a local pool of blocks/files named by their SHA-256 hashes, which grows with each backup, and I would like to transfer only the delta to a cloud backup. Duplicity seems perfect for exactly that.

Unfortunately the upload is very slow and an initial backup will take weeks. Will duplicity still work flawlessly when interrupted multiple times per backup, and will it be able to resume the same backup at exactly the right place?

My preferred solution would be to create a local duplicity backup (this will be fast and without interruption) and use an independent simple script to transfer all resulting files to a cold (cloud) archive, then delete all files that were copied (as I already have a local backup). I will keep the local archive directory, but will duplicity work correctly if I delete files at its "destination"? With the archive it should not be necessary to read remote files, but will the missing/invisible files cause any problems?
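For concreteness, a minimal sketch of such a transfer script, assuming rclone as the copy tool; the local target directory and the "cold:" remote name are placeholders, not anything duplicity prescribes:

    # Sketch only: /mnt/backup/duplicity and the "cold:" remote are assumptions.
    # "rclone move" uploads each file and deletes the local copy only after
    # a successful transfer.
    rclone move /mnt/backup/duplicity cold:archive-bucket/duplicity \
        --include "duplicity-*"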

Kind regards,
Frank
Re: Move destination files to cold archive?

Kenneth Loafman via duplicity-talk mailing list
I'm not sure, but if the files themselves never change and only accumulate, duplicity may not be your solution. Duplicity compares files using librsync and transfers only the deltas to the remote. It really does not work well if the remote files are missing; it would treat that as needing another full backup. I'm sure that's not what you want.
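For reference, collection-status lists which backup sets duplicity can find at the target, which makes it easy to check how it sees the remote after files have been moved away (the URL here is a placeholder):

    # Placeholder URL; prints the backup chains duplicity finds at the target
    duplicity collection-status sftp://user@host/backups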

...Ken


Re: Move destination files to cold archive?

ede/duply.net via duplicity-talk mailing list
On 05.10.2019 17:32, Kenneth Loafman via Duplicity-talk wrote:
> Unfortunately the upload is very slow and an initial backup will take
> weeks. Will duplicity still work flawlessly when interrupted multiple
> times per backup, and will it be able to resume the same backup at
> exactly the right place?

It should, but I wouldn't advise it.

> My preferred solution would be to create a local duplicity backup (this
> will be fast and without interruption) and use an independent simple
> script to transfer all resulting files to a cold (cloud) archive, then
> delete all files that were copied (as I already have a local backup). I
> will keep the local archive directory, but will duplicity work correctly
> if I delete files at its "destination"? With the archive it should not be
> necessary to read remote files, but will the missing/invisible files
> cause any problems?

Not sure if this is what you mean, but the following is the workaround used by some with slow or unstable upload channels (sketched below).

1. do a backup to a local file:// target
2. sync the local backup folder to a remote target with the software of your choice (e.g. rsync)
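A minimal sketch of those two steps; the paths and the remote host are placeholders, and rsync is just one possible sync tool:

    # 1. back up to a local file:// target (placeholder paths)
    duplicity /data file:///mnt/backup/duplicity

    # 2. mirror the local backup folder to the remote; --partial lets an
    #    interrupted transfer resume instead of starting from scratch
    rsync -av --partial /mnt/backup/duplicity/ user@remote:/backups/duplicity/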

disadvantage: you need to keep the complete local backup (using additional space)
advantage: you keep a backup on two different file systems

..ede/duply.net

_______________________________________________
Duplicity-talk mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/duplicity-talk