confirmed, and opened a todo for something mentioned in this bug

commit 24255b3c96
parent 82ee256525
Author: Joey Hess
Date: 2020-03-20 12:00:07 -04:00
GPG key ID: DB12DB0FF05F8F38

3 changed files with 24 additions and 0 deletions


@@ -9,3 +9,5 @@ Could http connections be reused?
Could multiple files be uploaded in parallel?
Apparently files are also uploaded to a temporary location and renamed after a successful upload. This adds additional latency, so parallel uploads could provide a speedup?
[[!tag confirmed]]


@@ -0,0 +1,20 @@
[[!comment format=mdwn
username="joey"
subject="""comment 1"""
date="2020-03-20T15:48:35Z"
content="""
Indeed, the regular webdav special remote uses prepareDav,
which sets up a single DAV context that is used for all stores,
but export does not, and so creates a new context each time.
S3 gets around this using an MVar that contains the S3 handle.
Webdav should be able to do the same.
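For illustration, the MVar pattern mentioned above could be sketched like
this (not git-annex's actual code; `DavContext` and `withCachedHandle` are
hypothetical names standing in for the real DAV context and plumbing):

~~~haskell
import Control.Concurrent.MVar

-- Placeholder for the real DAV context type.
data DavContext = DavContext

-- Run an action with a handle, opening it only on first use and
-- caching it in the MVar so later calls reuse the same handle.
withCachedHandle :: MVar (Maybe DavContext)
                 -> IO DavContext          -- how to open a new handle
                 -> (DavContext -> IO a)   -- action using the handle
                 -> IO a
withCachedHandle cache open use = do
    h <- modifyMVar cache $ \mh -> case mh of
        Just h' -> return (Just h', h')   -- reuse the cached handle
        Nothing -> do                     -- first use: open and cache
            h' <- open
            return (Just h', h')
    use h
~~~

Since modifyMVar takes the MVar while the decision is made, concurrent
callers won't race to open two handles.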
(The upload to a temp location is necessary, otherwise resuming an
interrupted upload would not be able to check which files had been fully
uploaded yet in some situations. Or something like that. I forget the exact
circumstances, but it's documented in a comment where storeExport is defined
in Types.Remote.)
(Opened [[todo/support_concurrency_for_export]].)
"""]]


@@ -0,0 +1,2 @@
Making git-annex export support -J should be doable and could speed up
exports a lot with some remotes. --[[Joey]]