When a uuid is not known, rescan for new repositories. Easy.
When a repository is removed, it will also get removed from the server
state on the next scan. But until a new uuid is seen, there will not be
a scan. This leaves the server trying to serve a uuid whose repository
is gone. That seems buggy. A get just fails, but a drop fails the
first time and seems to leave the server in an unusable state, so the
next drop attempt hangs. The server is still able to serve other uuids;
only the one whose repository was removed has that problem.
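For illustration, that lookup could be shaped something like this (a
minimal sketch; the map, the rescan action, and the names are
assumptions, not git-annex's actual code):

    import qualified Data.Map as M
    import Control.Concurrent.STM

    type UUID = String  -- stand-in for git-annex's UUID type

    -- When the requested uuid is not in the map, rescan the directory
    -- for new repositories and look again. A removed repository stays
    -- in the map until some unknown uuid triggers the next rescan.
    lookupRepo :: TVar (M.Map UUID FilePath) -> IO (M.Map UUID FilePath)
        -> UUID -> IO (Maybe FilePath)
    lookupRepo var rescan u = do
        m <- readTVarIO var
        case M.lookup u m of
            Just repo -> return (Just repo)
            Nothing -> do
                m' <- rescan
                atomically $ writeTVar var m'
                return (M.lookup u m')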
--jobs is usually an Annex option setter, but --directory runs in IO, so
it would not have that available. So instead, moved the option parser
into the command's Options.
Untested, but it compiles, so.
Known problems:
* --jobs is not available to startIO
* Does not notice when new repositories are added to a directory.
* Does not notice when repositories are removed from a directory.
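For illustration of the --jobs change above, moving the parser into the
command's Options might look roughly like this (a sketch using
optparse-applicative; the option and field names are assumptions):

    import Options.Applicative

    data Options = Options
        { directoryOption :: FilePath
        , jobsOption :: Maybe Int
        }

    optionsParser :: Parser Options
    optionsParser = Options
        <$> strOption
            (long "directory" <> metavar "DIR"
                <> help "directory containing the repositories to serve")
        <*> optional (option auto
            (long "jobs" <> short 'J' <> metavar "NUM"
                <> help "number of jobs"))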
Since old versions of git had a buggy git bundle command.
In particular, git 2.30.2 has a git bundle that supports --stdin, but does
not read from it, and so fails to create a bundle.
While not using --stdin would perhaps work, it limits the number of revs
that can be included in the bundle to what fits in the command line
length limit.
But the real kicker is that at the same time --stdin got fixed, a bug also
got fixed that made git bundle skip including refs when they had the same
sha as other refs it included. Which would lead to data loss. So best to
avoid that buggy thing.
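For illustration, feeding the revs to git bundle on stdin could look
something like this (a sketch using the process library; createBundle is
a made-up name):

    import System.Process

    -- Pass the revs on stdin rather than on the command line, so the
    -- number of revs is not limited by the command line length limit.
    -- This only works with a git whose bundle command actually reads
    -- from --stdin.
    createBundle :: FilePath -> [String] -> IO ()
    createBundle bundlefile revs = do
        _ <- readCreateProcess
            (proc "git" ["bundle", "create", bundlefile, "--stdin"])
            (unlines revs)
        return ()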
This fixes support for proxying after the last commit broke it.
Note that withP2PConnections is called at server startup, and so only
proxies seen at that point will appear in the map and be used. It was
already the case that a proxy added after p2phttp was running would not
be served.
I think that is possibly a bug, but at least this commit doesn't
introduce the problem, though it might make it harder to fix it.
As bugs go, it's probably not a big deal, because after all,
git configs need to be set in the local repository, followed by
git-annex updateproxy being run, to set up proxying. If someone is doing
that, they can restart their http server I suppose.
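To illustrate why a restart is needed: the proxies are gathered once,
before the server starts handling requests. (A sketch with stand-in
types, not the real withP2PConnections signature.)

    import qualified Data.Map as M

    type UUID = String          -- stand-ins for the real types
    type ProxyConnection = ()

    -- The map of proxies is built once at startup, and the handlers
    -- only ever see that snapshot, so proxies configured later are not
    -- served until the server is restarted.
    withConnections :: IO (M.Map UUID ProxyConnection)
        -> (M.Map UUID ProxyConnection -> IO a) -> IO a
    withConnections scanproxies runserver = scanproxies >>= runserver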
This is early groundwork for making p2phttp support serving multiple
repositories from a single daemon.
So far only 1 repository is served. And this commit breaks support
for proxying!
When remote.name.annexUrl is an annex+http(s) url that uses the same
hostname as remote.name.url, which is itself an http(s) url, they are
assumed to share a username and password.
This avoids unnecessary duplicate password prompts.
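The hostname check is roughly this shape (a sketch using Network.URI;
sameHost is a hypothetical helper, not the actual implementation):

    import Network.URI
    import Data.List (stripPrefix)
    import Data.Maybe (fromMaybe)

    -- Strip the "annex+" prefix from the annexUrl and check whether the
    -- two urls name the same host, in which case the username and
    -- password can be shared between them.
    sameHost :: String -> String -> Bool
    sameHost annexurl url = fromMaybe False $ do
        u1 <- parseURI (fromMaybe annexurl (stripPrefix "annex+" annexurl))
        u2 <- parseURI url
        h1 <- uriRegName <$> uriAuthority u1
        h2 <- uriRegName <$> uriAuthority u2
        return (h1 == h2)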
The NullSoftInstaller does not install git-remote-annex. For that
matter, it does not install git-annex-shell either. I don't quite know
how it would make sense to do so without hard links.
It could contain 3 copies of the same binary.
Logically, this should make it need a lot less memory when files have
been changed many times. In my tests, it didn't seem to change memory
use at all. Unsure why; it is working. It's possible the Response is not
getting garbage collected due to pinning. But as far as I can see, all
parts of it that are retained get copied in a way that won't keep the
whole thing pinned in memory.
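For instance, the usual way to avoid that kind of retention is to copy
out the small part being kept, something like this (illustrative only,
not the actual change):

    import qualified Data.ByteString as B

    -- B.copy allocates a fresh ByteString, so retaining the result does
    -- not keep the whole original (pinned) buffer alive.
    retainPart :: B.ByteString -> B.ByteString
    retainPart = B.copy . B.take 100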
Fix infinite loop and memory blowup when importing from an unversioned S3
bucket that is large enough to need pagination.
I don't think there will actually ever be a Marker element, since a
delimiter is not set.
Probably this code path was never tested with pagination! Also the aws
library's lack of any docs made it easy to mess up.
Versioned buckets seem to not have the same problem. The API docs for
ListObjectVersions say that NextKeyMarker will always be provided when
paginating.
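A minimal sketch of the pagination intended for the unversioned case
(hypothetical listObjects action and record fields, not the aws
library's actual API): since no delimiter is set, no NextMarker comes
back, so the key of the last listed object is used as the marker for the
next request, stopping once the listing is no longer truncated.

    data ListResult = ListResult
        { listedKeys :: [String]
        , isTruncated :: Bool
        }

    listAll :: (Maybe String -> IO ListResult) -> IO [String]
    listAll listObjects = go Nothing
      where
        go marker = do
            r <- listObjects marker
            if isTruncated r && not (null (listedKeys r))
                -- advance using the last key that was listed
                then (listedKeys r ++) <$> go (Just (last (listedKeys r)))
                else return (listedKeys r)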