When a uuid is not known, rescan for new repositories. Easy.
When a repository is removed, it will also get removed from the server
state on the next scan. But until a new uuid is seen, there will not be
a scan. This leaves the server trying to serve a uuid whose repository
is gone. That seems buggy. While a get simply fails, a drop fails the
first time, but seems to leave the server in an unusable state, so the
next drop attempt hangs. The server is still able to serve other uuids;
only the one whose repository was removed has that problem.
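To illustrate, with invented names (the real code differs), the lookup
behaves roughly like this minimal sketch:

    import qualified Data.Map as M
    import Control.Concurrent.STM

    type UUID = String  -- placeholder for the real UUID type

    -- rescan stands in for whatever enumerates the repositories in
    -- the directory being served.
    lookupRepo
        :: IO (M.Map UUID FilePath)
        -> TVar (M.Map UUID FilePath)
        -> UUID
        -> IO (Maybe FilePath)
    lookupRepo rescan reposvar u = do
        repos <- readTVarIO reposvar
        case M.lookup u repos of
            -- A hit can be stale: removal is only noticed on a scan,
            -- and no scan happens until an unknown uuid is requested.
            Just r -> return (Just r)
            Nothing -> do
                repos' <- rescan
                atomically $ writeTVar reposvar repos'
                return (M.lookup u repos')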
--jobs is usually an Annex option setter, but --directory runs in IO,
so it would not have that available. So instead I moved the option
parser into the command's Options (a sketch follows the list below).
Untested, but it compiles, so.
Known problems:
* --jobs is not available to startIO
* Does not notice when new repositories are added to a directory.
* Does not notice when repositories are removed from a directory.
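For illustration, here is the shape of that change as an
optparse-applicative sketch, with made-up field names:

    import Options.Applicative

    data Options = Options
        { directoryOption :: FilePath
        , jobsOption :: Maybe Int
            -- parsed here rather than as an Annex option setter,
            -- since startIO cannot run those
        }

    optParser :: Parser Options
    optParser = Options
        <$> strOption
            ( long "directory"
            <> metavar "DIR"
            <> help "serve all repositories in a directory"
            )
        <*> optional (option auto
            ( long "jobs"
            <> short 'J'
            <> metavar "N"
            <> help "number of concurrent jobs"
            ))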
This fixes support for proxying, which the last commit broke.
Note that withP2PConnections is called at server startup, and so only
proxies seen at that point will appear in the map and be used. It was
already the case that a proxy added while p2phttp was running would not
be served.
I think that is possibly a bug, but at least this commit doesn't
introduce the problem, though it might make it harder to fix it.
As bugs go, it's probably not a big deal, because after all,
git configs need to be set in the local repository, followed by
git-annex updateproxy being run, to set up proxying. If someone is doing
that, they can restart their http server, I suppose.
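Schematically (names invented; the real types differ), the startup
snapshot looks like:

    import qualified Data.Map as M

    type UUID = String
    data Proxy = Proxy

    loadProxies :: IO (M.Map UUID Proxy)
    loadProxies = return M.empty  -- stand-in for reading the git config

    withP2PConnections :: (M.Map UUID Proxy -> IO a) -> IO a
    withP2PConnections serve = do
        proxies <- loadProxies  -- snapshot taken once, at server startup
        serve proxies           -- proxies added later never enter this map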
This is early groundwork for making p2phttp support serving multiple
repositories from a single daemon.
So far, only one repository is still served. And this commit breaks
support for proxying!
When remote.name.annexUrl is an annex+http(s) url that uses the same
hostname as remote.name.url, which is itself an http(s) url, the two
urls are assumed to share a username and password.
This avoids unnecessary duplicate password prompts.
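The same-host test amounts to something like this sketch (the real
check may also consider scheme and port):

    import Data.List (stripPrefix)
    import Data.Maybe (fromMaybe)
    import Network.URI

    sameCredentialHost :: String -> String -> Bool
    sameCredentialHost annexurl url = fromMaybe False $ do
        -- strip the "annex+" prefix to get a parseable http(s) url
        u1 <- parseURI (fromMaybe annexurl (stripPrefix "annex+" annexurl))
        u2 <- parseURI url
        h1 <- uriRegName <$> uriAuthority u1
        h2 <- uriRegName <$> uriAuthority u2
        return (h1 == h2)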
The NullSoftInstaller does not install git-remote-annex. For that
matter, it does not install git-annex-shell either. Without hard links,
I don't quite know how it would make sense to do so; the installer would
end up containing 3 copies of the same binary.
Logically, this should make it need a lot less memory when files have
been changed many times. In my tests, it didn't seem to change memory
use at all. Unsure why, since it is working. It's possible the Response is not
getting garbage collected due to pinning. But as far as I can see, all
parts of it that are retained get copied in a way that won't keep the
whole thing pinned in memory.
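The copying referred to is the usual technique for avoiding pinned
retention: a slice of a strict ByteString shares (and pins) the whole
original buffer, while Data.ByteString.copy moves the retained bytes
into a fresh, minimal buffer. An illustrative example (name invented):

    import qualified Data.ByteString as B

    -- Retaining (B.take 16 response) alone would keep the entire
    -- response buffer pinned; copying frees it for collection.
    retainPrefix :: B.ByteString -> B.ByteString
    retainPrefix response = B.copy (B.take 16 response)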
Fix infinite loop and memory blowup when importing from an unversioned S3
bucket that is large enough to need pagination.
I don't think there will actually ever be a Marker element, since a
delimiter is not set.
Probably this code path was never tested with pagination! Also the aws
library's lack of any docs made it easy to mess up.
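The pagination rule being relied on, sketched with invented field
names: when no delimiter is set, an unversioned ListObjects response
carries no usable NextMarker, so the marker for the next request has to
come from the last key returned; reusing the old marker fetches the
same page forever.

    import Control.Applicative ((<|>))
    import Data.Text (Text)

    data Obj = Obj { objKey :: Text }
    data ListResponse = ListResponse
        { respIsTruncated :: Bool
        , respNextMarker :: Maybe Text
        , respContents :: [Obj]
        }

    nextMarker :: ListResponse -> Maybe Text
    nextMarker r
        | not (respIsTruncated r) = Nothing  -- no more pages
        | otherwise = respNextMarker r
            <|> (objKey <$> lastMaybe (respContents r))
      where
        lastMaybe [] = Nothing
        lastMaybe xs = Just (last xs)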
Versioned buckets seem to not have the same problem. The API docs for
ListObjectVersions say that NextKeyMarker will always be provided when
paginating.
This is to reduce user confusion when their annex.largefiles matches
the dotfile, or is not set.
Note that, when annex.dotfiles is set, but a dotfile is not matched by
annex.largefiles, the "non-large file" message will be displayed. That
makes sense because whether the file is a dotfile does not matter with that
configuration.
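So the message selection comes down to something like this (invented
names; the real logic is more involved):

    data SkipReason = Dotfile | NonLargeFile
        deriving Show

    -- isdotfile:    the file is a dotfile, or inside a dotdir
    -- dotfilescfg:  annex.dotfiles is set
    -- matcheslarge: annex.largefiles matches the file
    skipReason :: Bool -> Bool -> Bool -> Maybe SkipReason
    skipReason isdotfile dotfilescfg matcheslarge
        | isdotfile && not dotfilescfg = Just Dotfile
        | not matcheslarge = Just NonLargeFile
        | otherwise = Nothing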
Also, in passing, this slightly optimised the annex.dotfiles path by
avoiding, in that case, the slight slowdown caused by the check added
in commit 876d5b6c6f.
Assistant and smudge also updated.
This does add a small amount of extra work, getting the TopFilePath.
Not enough to be concerned about.
Also improve documentation to make clear that files inside dotdirs are
treated as dotfiles.
Sponsored-by: Eve on Patreon