resume interrupted chunked downloads
Leverage the new chunked remotes to automatically resume downloads. Sort of like rsync, although of course not as efficient since this needs to start at a chunk boundary.

But, unlike rsync, this method will work for S3, WebDAV, external special remotes, etc. Only the directory special remote supports chunking so far, but many more soon!

This implementation will also properly handle starting a download from one remote, interrupting it, and resuming from another remote, and so on.

(Resuming interrupted chunked uploads is similarly doable, although slightly more expensive.)

This commit was sponsored by Thomas Djärv.
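The resume behavior described above can be sketched as a loop over a key's chunks: skip any chunk already stored locally, fetch the rest, and stop at the first failure so a rerun picks up at the first missing chunk. This is a hypothetical illustration, not git-annex's actual code; the presence and fetch callbacks are stand-ins for whatever remote happens to hold the chunks.

```haskell
type ChunkNum = Integer

-- Walk the chunks of a key in order. Chunks already present locally are
-- skipped; the rest are fetched one at a time. A failed fetch stops the
-- walk, and rerunning resumes at the first chunk that is still missing --
-- possibly fetching it from a different remote than the first attempt.
resumeDownload
    :: [ChunkNum]            -- all chunk numbers of the key, in order
    -> (ChunkNum -> IO Bool) -- is this chunk already stored locally?
    -> (ChunkNum -> IO Bool) -- fetch one chunk from some remote
    -> IO Bool
resumeDownload chunks havechunk fetchchunk = go chunks
  where
    go [] = return True
    go (c:cs) = do
        present <- havechunk c
        ok <- if present then return True else fetchchunk c
        if ok then go cs else return False
```

Because each chunk is stored under its own key, the presence check works against any chunked remote, which is why this resumes across S3, WebDAV, and external special remotes where rsync's byte-level protocol cannot be used.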
This commit is contained in:
parent 13bbb61a51
commit 9d4a766cd7

6 changed files with 88 additions and 32 deletions
Types/Key.hs (11 changes)

@@ -13,8 +13,8 @@ module Types.Key (
 	stubKey,
 	key2file,
 	file2key,
-	isChunkKey,
 	nonChunkKey,
+	chunkKeyOffset,
 
 	prop_idempotent_key_encode,
 	prop_idempotent_key_decode
@@ -49,9 +49,6 @@ stubKey = Key
 	, keyChunkNum = Nothing
 	}
 
-isChunkKey :: Key -> Bool
-isChunkKey k = isJust (keyChunkSize k) && isJust (keyChunkNum k)
-
 -- Gets the parent of a chunk key.
 nonChunkKey :: Key -> Key
 nonChunkKey k = k
@@ -59,6 +56,12 @@ nonChunkKey k = k
 	, keyChunkNum = Nothing
 	}
 
+-- Where a chunk key is offset within its parent.
+chunkKeyOffset :: Key -> Maybe Integer
+chunkKeyOffset k = (*)
+	<$> keyChunkSize k
+	<*> (pred <$> keyChunkNum k)
+
 fieldSep :: Char
 fieldSep = '-'
 
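The arithmetic behind chunkKeyOffset is easy to check in isolation. Here is a standalone copy of the same Applicative expression (renamed chunkOffset and taking the two fields directly, since the Key record isn't available outside git-annex): chunk numbers are 1-based, so chunk n of a key chunked at size s starts at byte s * (n - 1), and a key without chunk metadata has no offset.

```haskell
-- Standalone copy of the arithmetic in chunkKeyOffset above: chunk numbers
-- are 1-based, so chunk n at chunk size s begins at byte s * (n - 1).
-- Nothing in either field means the key carries no chunk metadata.
chunkOffset :: Maybe Integer -> Maybe Integer -> Maybe Integer
chunkOffset size num = (*) <$> size <*> (pred <$> num)
```

This offset is what lets a resumed download seek straight to the first missing chunk instead of restarting from byte 0.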