Fix resume of download of url when the whole file content is already actually downloaded
Don't much like that there's no way to distinguish between having the whole content and having an old version of the file that's bigger. But resuming an http transfer can always yield the wrong result if the file on the http server is changing, and git-annex will detect that when it verifies the downloaded content.

This work is supported by the NIH-funded NICEMAN (ReproNim TR&D3) project.
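The check this commit introduces can be sketched as a standalone function. A minimal sketch, not git-annex's actual code: the name `alreadyDownloaded` and the plain `String` header pairs are simplifications for illustration, assuming the 416 / Content-Range behavior described above.

```haskell
-- Minimal sketch, not git-annex's actual code: decide whether a
-- partially-downloaded file of size sz is in fact already complete,
-- based on the server's response to a resume (Range) request.
-- Header names are simplified to plain Strings here; real HTTP
-- header lookup is case-insensitive.
alreadyDownloaded :: Integer -> Int -> [(String, String)] -> Bool
alreadyDownloaded sz status headers
  -- Only a 416 (Requested Range Not Satisfiable) response can mean
  -- the requested resume offset is at or past the end of the file.
  | status /= 416 = False
  | otherwise = case lookup "Content-Range" headers of
      -- "bytes */<size>": the server's file is exactly the size
      -- of the file we already have, so nothing is left to fetch.
      Just crh -> crh == "bytes */" ++ show sz
      -- Some http servers send no Content-Range header when the
      -- range extends beyond the end of the file; assume the whole
      -- content is already present, the same as wget and curl do.
      Nothing -> True
```

As the message above notes, both branches can be fooled by a server-side file that shrank, which verification of the downloaded content catches later.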
This commit is contained in:
parent c24bdfd689
commit ff9bd9620e

4 changed files with 40 additions and 1 deletion
@@ -348,7 +348,14 @@ download' noerror meterupdate url file uo =
 			-- This could be improved by fixing
 			-- https://github.com/aristidb/http-types/issues/87
 			Just crh -> crh == B8.fromString ("bytes */" ++ show sz)
-			Nothing -> False
+			-- Some http servers send no Content-Range header when
+			-- the range extends beyond the end of the file.
+			-- There is no way to distinguish between the file
+			-- being the same size on the http server, vs
+			-- it being shorter than the file we already have.
+			-- So assume we have the whole content of the file
+			-- already, the same as wget and curl do.
+			Nothing -> True
 
 	-- Resume download from where a previous download was interrupted,
 	-- when supported by the http server. The server may also opt to
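For context, the resume request that can provoke the 416 response handled above asks the server for the byte range starting at the length of the partial file. A hypothetical helper (`resumeRange` is not part of git-annex) sketching that header:

```haskell
-- Hypothetical helper, for illustration only: build the Range
-- header used to resume a download, given how many bytes of the
-- file we already have. "bytes=N-" asks for everything from
-- offset N onward; a server that supports ranges answers 206,
-- and answers 416 when N is at or past the end of its file,
-- which is the case the diff above handles.
resumeRange :: Integer -> (String, String)
resumeRange have = ("Range", "bytes=" ++ show have ++ "-")
```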