incremental verification for web special remote
Except when the configuration makes curl be used; it did not seem worth
trying to tail the file while curl is downloading.
But when an interrupted download is resumed, it does not read the whole
existing file to hash it, for the same reason discussed in
commit 7eb3742e4b: that could take a long
time with no progress being displayed. And also there's an open http
request, which needs to be consumed; taking a long time to hash the file
might cause it to time out.
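
To make that tradeoff concrete, here is a minimal sketch of the idea in
Haskell. It is not git-annex's actual incremental verification interface,
and the names (Verifier, feedVerifier, disableVerifier, finishVerifier) are
made up for illustration; it only assumes the cryptonite package. Downloaded
chunks update a hash context as they are written to the destination, and a
resumed download marks the verifier unusable rather than re-reading the
bytes that are already on disk.

-- Minimal sketch, assuming the cryptonite package; not git-annex's
-- actual interface.
import Crypto.Hash (Context, Digest, SHA256, hashInit, hashUpdate, hashFinalize)
import qualified Data.ByteString as B
import Data.IORef (IORef, newIORef, readIORef, writeIORef, modifyIORef')

-- Nothing means incremental verification has been given up on.
newtype Verifier = Verifier (IORef (Maybe (Context SHA256)))

newVerifier :: IO Verifier
newVerifier = Verifier <$> newIORef (Just hashInit)

-- Fed each chunk as it is written to the destination file.
feedVerifier :: Verifier -> B.ByteString -> IO ()
feedVerifier (Verifier r) chunk = modifyIORef' r (fmap (`hashUpdate` chunk))

-- Called on resume: the bytes already on disk were never hashed, and
-- re-reading them now would delay consuming the open http request, so
-- give up on incremental verification for this download.
disableVerifier :: Verifier -> IO ()
disableVerifier (Verifier r) = writeIORef r Nothing

-- Nothing means the content has to be verified some other way,
-- eg by hashing the whole file after the download completes.
finishVerifier :: Verifier -> IO (Maybe (Digest SHA256))
finishVerifier (Verifier r) = fmap hashFinalize <$> readIORef r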
Also in passing implemented it for git and external special remotes when
downloading from the web. Several others like S3 are within striking
distance now as well.
Sponsored-by: Dartmouth College's DANDI project
parent 88b63a43fa
commit d154e7022e
15 changed files with 101 additions and 67 deletions
@@ -808,9 +808,9 @@ checkUrlM external url =
         mkmulti (u, s, f) = (u, s, f)

 retrieveUrl :: Retriever
-retrieveUrl = fileRetriever $ \f k p -> do
+retrieveUrl = fileRetriever' $ \f k p iv -> do
         us <- getWebUrls k
-        unlessM (withUrlOptions $ downloadUrl k p us (fromRawFilePath f)) $
+        unlessM (withUrlOptions $ downloadUrl k p iv us (fromRawFilePath f)) $
                 giveup "failed to download content"

 checkKeyUrl :: CheckPresent
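
The hunk above switches the retriever from fileRetriever to fileRetriever',
whose callback receives an extra parameter iv (presumably the incremental
verifier) and passes it along to downloadUrl. What that enables, roughly, is
hashing each chunk in the same pass that writes it to the destination file.
The following is a hypothetical, simplified illustration; the function name
writeAndHash and the list of chunks standing in for the streamed http
response body are not from git-annex.

-- Simplified, hypothetical illustration of single-pass download
-- verification: each chunk is written to the destination file and fed
-- to the hash context, so the checksum is ready when the download ends.
import Crypto.Hash (Context, Digest, SHA256, hashInit, hashUpdate, hashFinalize)
import qualified Data.ByteString as B
import System.IO (IOMode(WriteMode), withFile)

writeAndHash :: FilePath -> [B.ByteString] -> IO (Digest SHA256)
writeAndHash dest chunks = withFile dest WriteMode $ \h ->
    let go ctx [] = return (hashFinalize ctx)
        go ctx (c:cs) = do
            B.hPut h c
            go (hashUpdate ctx c) cs
    in go (hashInit :: Context SHA256) chunks

On a resume, as described in the commit message, incremental verification is
skipped entirely rather than re-hashing the bytes already present on disk.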