Leverage the new chunked remotes to automatically resume downloads.
Sort of like rsync, although of course not as efficient, since this
needs to start at a chunk boundary.
But, unlike rsync, this method will work for S3, WebDAV, external
special remotes, etc, etc. Only directory special remotes support it
so far, but many more soon!
This implementation will also properly handle starting a download
from one remote, interrupting it, and resuming from another remote, and so on.
(Resuming interrupted chunked uploads is similarly doable, although
slightly more expensive.)
This commit was sponsored by Thomas Djärv.
This avoids a proliferation of hash directories when using new-style
chunking, and should improve performance, since chunks are accessed
in sequence and keeping them together gives them a common locality.
Of course, when a chunked key is encrypted, its hash directories have no
relation to the parent key.
This commit was sponsored by Christian Kellermann.
Slightly tricky as they are not normal UUIDBased logs, but are instead maps
from (uuid, chunksize) to chunkcount.
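A rough sketch of that shape (the type and function names here are
illustrative, not the actual log module): per key, the log records how many
chunks of a given size each remote is storing.

    import qualified Data.Map as M

    newtype UUID = UUID String deriving (Eq, Ord, Show)
    type ChunkSize  = Integer
    type ChunkCount = Integer

    -- For one key: how many chunks of a given size each remote
    -- (identified by uuid) is storing.
    type ChunkLog = M.Map (UUID, ChunkSize) ChunkCount

    recordChunks :: UUID -> ChunkSize -> ChunkCount -> ChunkLog -> ChunkLog
    recordChunks u sz n = M.insert (u, sz) n

    main :: IO ()
    main = print $
        recordChunks (UUID "remote-b") 10485760 2 $
        recordChunks (UUID "remote-a") 1048576 12 M.empty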
This commit was sponsored by Frank Thomas.
Added new fields for chunk number and chunk size. These will never appear
in normal keys, but will be used for chunked data stored on special
remotes.
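A rough sketch of the idea (field names are illustrative, not git-annex's
exact Key record): normal keys leave both fields unset, while a key for
chunked data on a special remote carries the chunk size and chunk number.

    data Key = Key
        { keyName      :: String
        , keyChunkSize :: Maybe Integer  -- unset for normal keys
        , keyChunkNum  :: Maybe Integer  -- which chunk this key holds
        } deriving (Show)

    plainKey :: String -> Key
    plainKey name = Key name Nothing Nothing

    -- Key for chunk n of a parent key stored with a given chunk size.
    chunkKey :: Key -> Integer -> Integer -> Key
    chunkKey k size n = k { keyChunkSize = Just size, keyChunkNum = Just n }

    main :: IO ()
    main = do
        print (plainKey "SHA256--example")
        print (chunkKey (plainKey "SHA256--example") 1048576 3)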
This commit was sponsored by Jouni K Seppanen.
Suppose A is 10 MB, and B is 20 MB, and the upload speed is the same. If B
starts first, then A will overwrite the file B is uploading for the 1st chunk.
Then A uploads the second chunk, and once A is done, B finishes the 1st chunk
and uploads its second. We now have chunk 1 (from A) and chunk 2 (from B), so
data loss.
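A toy model of that interleaving, heavily simplified (B's unfinished rewrite
of chunk 1 is omitted, and each write simply replaces the chunk file): because
both uploads share the same chunk filenames, the remote ends up holding a mix
of A's and B's chunks.

    import qualified Data.Map as M

    -- The remote modelled as a map from chunk filename to whatever
    -- content was last written there (last writer wins).
    type Remote = M.Map FilePath String

    -- The interleaving from the example above, simplified.
    events :: [(FilePath, String)]
    events =
        [ ("key/chunk1", "B's 1st chunk (partial)")  -- B starts first
        , ("key/chunk1", "A's 1st chunk")            -- A overwrites it
        , ("key/chunk2", "A's 2nd chunk")            -- A finishes
        , ("key/chunk2", "B's 2nd chunk")            -- B uploads its 2nd
        ]

    main :: IO ()
    main = mapM_ print $
        M.toList (foldl (\r (p, c) -> M.insert p c r) M.empty events)
    -- Prints chunk 1 from A and chunk 2 from B: the stored object is a
    -- mix of the two uploads, i.e. data loss.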