avoid url resume from 0

When downloading a URL and the destination file exists but is empty,
avoid using an HTTP Range header to resume, since a range of "bytes=0-"
is an unusual edge case that's best not relied on to work.
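
For illustration, here is a minimal sketch of how such a resume request
might be built with http-client, using a hypothetical buildResumeRequest
helper (this is not git-annex's actual code). The safe behavior when the
offset is 0 is to send a plain GET with no Range header at all:

    {-# LANGUAGE OverloadedStrings #-}
    import qualified Data.ByteString.Char8 as B8
    import Network.HTTP.Client (Request, requestHeaders)

    -- Hypothetical helper: add a Range header only when there is
    -- actually something to resume from.
    buildResumeRequest :: Integer -> Request -> Request
    buildResumeRequest offset req
        -- "Range: bytes=0-" asks for the whole file anyway, and some
        -- servers mishandle it, so skip the header entirely.
        | offset <= 0 = req
        | otherwise = req
            { requestHeaders =
                ("Range", B8.pack ("bytes=" ++ show offset ++ "-"))
                    : requestHeaders req
            }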

This is known to fix a case where importfeed downloaded a partial feed from
such a server. Since importfeed uses withTmpFile, the destination file always
exists and is empty, so importfeed would particularly tickle such problem
servers. Resuming from 0 is otherwise possible, but unlikely to come up.
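
To see why importfeed always hit this path: a freshly created temp file
exists on disk with size 0. A small standalone demonstration, using
withSystemTempFile from the temp package as a stand-in for git-annex's
withTmpFile:

    import System.Directory (getFileSize)
    import System.IO.Temp (withSystemTempFile)

    main :: IO ()
    main = withSystemTempFile "feed" $ \path _handle -> do
        sz <- getFileSize path
        -- Prints 0: the destination file exists but is empty, so the
        -- old code would have attempted a "bytes=0-" resume.
        print sz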
Joey Hess 2019-06-20 12:26:17 -04:00
parent 06ea1c4228
commit 759fd9ea68
4 changed files with 36 additions and 2 deletions

@@ -375,13 +375,13 @@ download' noerror meterupdate url file uo =
 	ftpport = 21
 
 	downloadconduit req = catchMaybeIO (getFileSize file) >>= \case
-		Nothing -> runResourceT $ do
+		Just sz | sz > 0 -> resumeconduit req' sz
+		_ -> runResourceT $ do
 			liftIO $ debugM "url" (show req')
 			resp <- http req' (httpManager uo)
 			if responseStatus resp == ok200
 				then store zeroBytesProcessed WriteMode resp
 				else showrespfailure resp
-		Just sz -> resumeconduit req' sz
 	where
 		req' = applyRequest uo $ req
 		-- Override http-client's default decompression of gzip
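
Read outside the diff, the patched control flow amounts to: resume only
when a non-empty file already exists, otherwise download from scratch. A
self-contained sketch of that decision, with hypothetical names rather
than git-annex's own:

    {-# LANGUAGE LambdaCase #-}
    import Control.Exception (SomeException, try)
    import System.Directory (getFileSize)

    data Plan = FreshDownload | ResumeFrom Integer
        deriving Show

    planDownload :: FilePath -> IO Plan
    planDownload file =
        (try (getFileSize file) :: IO (Either SomeException Integer)) >>= \case
            -- Only a non-empty existing file justifies a Range request.
            Right sz | sz > 0 -> pure (ResumeFrom sz)
            -- Missing file, or a file that exists but is empty:
            -- start fresh instead of asking for "bytes=0-".
            _ -> pure FreshDownload

Using a catch-all pattern for the fresh-download case mirrors the patch:
both the missing-file and empty-file cases fall through to the plain GET.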