[[!comment format=mdwn
username="joey"
subject="""comment 1"""
date="2018-03-24T13:54:31Z"
content="""
Note that with --retry 10, curl will back off 10 times, with the wait
doubling each time, so it could get stuck for 20+ minutes. That seems too
long, but the right number of retries depends on how overloaded the http
server is; you may need a number that would otherwise seem excessive in
order to get a high enough probability of success.
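
To put numbers on that: a rough sketch, assuming curl's documented
default backoff of waiting 1 second before the first retry and doubling
each time.

    -- Waits implied by curl's documented backoff: 1 second before the
    -- first retry, doubling each time (curl caps the delay at 10 minutes,
    -- which 10 retries never reach).
    curlRetryWait :: Int -> Int
    curlRetryWait retries = sum [2 ^ n | n <- [0 .. retries - 1]]
    -- curlRetryWait 10 == 1023 seconds: ~17 minutes of waiting alone,
    -- before counting the time each failed attempt itself takes.
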
Also worth noting that http *has* status codes such as 503 that are
intended to be used when the client should wait and retry; 500 is not such
a code.
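
A minimal sketch of that distinction; treating anything beyond 503 as
retryable (429, 502) is an assumption, not something the spec settles.

    -- Codes that signal "wait and retry". 503 Service Unavailable is the
    -- canonical one; 429 Too Many Requests and 502 Bad Gateway are
    -- arguable additions (an assumption here). 500 is a plain server
    -- error, so retrying it is just guessing.
    shouldRetry :: Int -> Bool
    shouldRetry code = code `elem` [429, 502, 503]
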
If this is done at the wget/curl level, it will also need to be done
when using the http-client library (which does not currently retry on any
code, AFAICS).
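
For example, a hypothetical retry wrapper over http-client; httpLbsRetry
and its set of retryable codes are illustrative assumptions, not an
existing API.

    import Control.Concurrent (threadDelay)
    import qualified Data.ByteString.Lazy as L
    import Network.HTTP.Client
    import Network.HTTP.Types.Status (statusCode)

    -- http-client does not retry on any status code itself, so the
    -- caller has to. This retries only on codes signaling a temporary
    -- condition, doubling the wait each time. (Assumes the Request was
    -- built with parseRequest, so non-2xx responses are returned rather
    -- than thrown.)
    httpLbsRetry :: Int -> Request -> Manager -> IO (Response L.ByteString)
    httpLbsRetry retries req mgr = go retries 1
      where
        go 0 _ = httpLbs req mgr
        go n secs = do
            resp <- httpLbs req mgr
            if statusCode (responseStatus resp) `elem` [429, 502, 503]
                then do
                    threadDelay (secs * 1000000)  -- seconds -> microseconds
                    go (n - 1) (secs * 2)
                else return resp
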
And, it could just as easily be an S3 or webdav server that is throwing the
http retry codes, and the libraries for those will have their own retrying
behavior. (And it could even be an ssh server or other non-http protocol
whose connections fail intermittently.)

Putting all this together, I'm wondering if the http level is the right
place to put this retrying. It's not a matter of complying with the http
spec; it seems to need user configuration in order to handle the user's
particular use case.

git-annex already does generic retrying, as long as some data was
received, to recover from broken connections. That could be extended with
a config option that sets a number of retries.
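
A sketch of how that might look; the names here are illustrative, not
git-annex's actual internals. The existing rule (retry while the transfer
keeps making forward progress) stays, and the configured count allows
additional retries when no progress was made.

    -- Always retry while the transfer made forward progress (bytes
    -- received grew), and additionally allow a configured number of
    -- retries (from a hypothetical config option) when it did not.
    retryTransfer
        :: Int          -- configured extra retries
        -> IO Integer   -- bytes transferred so far
        -> IO Bool      -- run or resume the transfer
        -> IO Bool
    retryTransfer numretries getbytes transfer = go numretries 0
      where
        go n prev = do
            ok <- transfer
            if ok
                then return True
                else do
                    now <- getbytes
                    if now > prev
                        then go n now            -- progress: retry for free
                        else if n > 0
                            then go (n - 1) now  -- no progress: spend a retry
                            else return False
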
"""]]