Fix bug that prevented uploads to remotes using new-style chunking from resuming after the last successfully uploaded chunk.

"checkPresent baser" was wrong; the baser has a dummy checkPresent action
not the real one. So, to fix this, we need to call preparecheckpresent to
get a checkpresent action that can be used to check if chunks are present.
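
For context, a minimal sketch of the preparer pattern involved (IO
standing in for the Annex monad; the names and bodies here are
illustrative, not git-annex's actual code). The point is that the base
remote's checkPresent is only a stub, and a usable checker exists only
inside the continuation that preparecheckpresent invokes:

    type Key = String
    type CheckPresent = Key -> IO Bool

    -- A preparer does setup, then hands the prepared action to a
    -- continuation; Nothing signals that preparation failed.
    type Preparer a = Key -> (Maybe a -> IO Bool) -> IO Bool

    -- The base remote's checkPresent is a dummy; calling it was the bug.
    dummyCheckPresent :: CheckPresent
    dummyCheckPresent _ = error "checkPresent not prepared"

    -- The real checker only becomes available via the continuation.
    preparecheckpresent :: Preparer CheckPresent
    preparecheckpresent _k use =
        -- setup (eg, opening a handle) would happen here
        use (Just (\k -> putStrLn ("check " ++ k) >> return False))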

Note that, for remotes like S3, this means the preparer is run, which
opens an S3 handle that is then used for each checkpresent of a chunk.
That's a good thing: if we're resuming an upload that's already many
chunks in, the same http connection is reused for each chunk it checks.
Still, it's not ideal, since this is a different http connection than the
one that will be used to upload the chunks. It would be nice to improve
the API so that both use the same http connection.
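
A sketch of the resulting connection reuse (hypothetical names; a real
S3 remote would open an http manager where this prints a message). The
checker returned by the preparer closes over the one connection opened
during preparation, so every per-chunk check reuses it:

    type Key = String
    type CheckPresent = Key -> IO Bool

    data Connection = Connection  -- stands in for an S3/http handle

    openConnection :: IO Connection
    openConnection = do
        putStrLn "opening connection"  -- runs once, in the preparer
        return Connection

    checkChunk :: Connection -> Key -> IO Bool
    checkChunk _conn k = do
        putStrLn ("HEAD " ++ k)        -- reuses the same connection
        return True

    -- Resuming an upload that is many chunks in checks each chunk
    -- without reopening the connection.
    preparedChecker :: (CheckPresent -> IO r) -> IO r
    preparedChecker use = do
        conn <- openConnection
        use (checkChunk conn)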
Author: Joey Hess
Date:   2015-07-16 15:01:10 -04:00
commit afe6a53bca
parent 5de3b4d07a
3 changed files with 37 additions and 3 deletions

@@ -184,12 +184,14 @@ specialRemote' cfg c preparestorer prepareretriever prepareremover preparecheckpresent
     -- chunk, then encrypt, then feed to the storer
     storeKeyGen k f p enc = safely $ preparestorer k $ safely . go
       where
-        go (Just storer) = sendAnnex k rollback $ \src ->
+        go (Just storer) = preparecheckpresent k $ safely . go' storer
+        go Nothing = return False
+        go' storer (Just checker) = sendAnnex k rollback $ \src ->
             displayprogress p k f $ \p' ->
                 storeChunks (uuid baser) chunkconfig k src p'
                     (storechunk enc storer)
-                    (checkPresent baser)
-        go Nothing = return False
+                    checker
+        go' _ Nothing = return False
         rollback = void $ removeKey encr k
     storechunk Nothing storer k content p = storer k content p
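
The shape of the change: storeKeyGen now chains two preparers in
continuation-passing style, so both the prepared storer and the prepared
checker are in scope by the time storeChunks runs. A standalone sketch of
that chaining (hypothetical names, IO in place of Annex), with each
failed preparation short-circuiting to False exactly as go and go' do:

    type Key = String

    storeKeyGenSketch
        :: (Key -> (Maybe s -> IO Bool) -> IO Bool)  -- like preparestorer
        -> (Key -> (Maybe c -> IO Bool) -> IO Bool)  -- like preparecheckpresent
        -> (s -> c -> Key -> IO Bool)                -- run the chunked store
        -> Key -> IO Bool
    storeKeyGenSketch preparestorer preparecheckpresent run k =
        preparestorer k $ \mstorer -> case mstorer of
            Nothing -> return False
            Just storer -> preparecheckpresent k $ \mchecker -> case mchecker of
                Nothing -> return False
                Just checker -> run storer checker k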