Fix infinite loop and memory blowup when importing from an unversioned S3
bucket that is large enough to need pagination.
I don't think there will actually ever be a NextMarker element in the
response, since a delimiter is not set.
Probably this code path was never tested with pagination! The aws
library's lack of documentation also made it easy to get wrong.
Versioned buckets seem not to have the same problem; the API docs for
ListObjectVersions say that NextKeyMarker will always be provided when
paginating.
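
For illustration, a minimal sketch of a corrected listing loop. The
field and constructor names (getBucket, gbMarker, gbrIsTruncated,
gbrContents, objectKey) follow the aws package's Aws.S3 interface as I
understand it, and sendRequest is a stand-in for however the request is
actually issued; treat this as a sketch, not git-annex's actual code.

    import qualified Aws.S3 as S3

    -- Paginate an unversioned bucket listing. Since no delimiter is
    -- set, the response will not contain NextMarker, so the next
    -- page's marker has to be the key of the last object already seen.
    listAll :: (S3.GetBucket -> IO S3.GetBucketResponse)
            -> S3.Bucket -> IO [S3.ObjectInfo]
    listAll sendRequest bucket = go Nothing
      where
        go marker = do
            rsp <- sendRequest (S3.getBucket bucket) { S3.gbMarker = marker }
            let os = S3.gbrContents rsp
            if S3.gbrIsTruncated rsp
                then case reverse os of
                    (lastobj:_) -> (os ++) <$>
                        go (Just (S3.objectKey lastobj))
                    -- A truncated response with no contents should not
                    -- happen; stop rather than loop on the same marker.
                    [] -> return os
                else return os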

This is to reduce user confusion when their annex.largefiles matches a
dotfile, or is not set.
Note that when annex.dotfiles is set, but a dotfile is not matched by
annex.largefiles, the "non-large file" message will be displayed. That
makes sense, because with that configuration it does not matter whether
the file is a dotfile.
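
A tiny illustration of that logic (the names here are made up for the
example, not git-annex's actual internals); it also shows why the
dotfile check can be skipped entirely when annex.dotfiles is set:

    -- Which message to show for a file that will not be annexed.
    data SkipReason = NonLargeFile | Dotfile
        deriving Show

    skipReason
        :: Bool  -- ^ is annex.dotfiles set?
        -> Bool  -- ^ is the file a dotfile (or inside a dotdir)?
        -> SkipReason
    skipReason dotfilesenabled isdotfile
        -- With annex.dotfiles set, dotfile status is irrelevant, so
        -- the generic "non-large file" message is always used.
        | dotfilesenabled = NonLargeFile
        | isdotfile = Dotfile
        | otherwise = NonLargeFile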
In passing, this also slightly optimises the annex.dotfiles code path,
avoiding the small slowdown caused by the check added in commit
876d5b6c6f.
The assistant and smudge were also updated.
This does add a small amount of extra work to get the TopFilePath, but
not enough to be concerned about.
Also improve the documentation to make clear that files inside dotdirs
are treated as dotfiles.
Sponsored-by: Eve on Patreon

Fix a bug, introduced in version 10.20241031, that broke cloning from a
special remote.
In that version, retrieveKeyFile was changed to use
createAnnexDirectory, which means that the path passed to it needs to
be under .git.
git-remote-annex was probably the only thing in git-annex where that
was not the case, and there's no real reason it can't be the case for
it either: just use withOtherTmp.
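
Not git-annex's actual code, but a self-contained analogue of the fix,
using only base, directory, and filepath: a withOtherTmp-style helper
hands the action a temporary directory that is guaranteed to be under
.git, so any destination path built from it satisfies retrieveKeyFile's
new requirement.

    import Control.Exception (bracket_)
    import System.Directory (createDirectoryIfMissing, removeDirectoryRecursive)
    import System.FilePath ((</>))

    -- Analogue of withOtherTmp: run an action with a temp directory
    -- under .git. (The real withOtherTmp also handles locking and
    -- cleanup of stale files.)
    withGitTmp :: FilePath -> (FilePath -> IO a) -> IO a
    withGitTmp repo action = bracket_ setup cleanup (action tmpdir)
      where
        tmpdir = repo </> ".git" </> "annex" </> "othertmp"
        setup = createDirectoryIfMissing True tmpdir
        cleanup = removeDirectoryRecursive tmpdir

    main :: IO ()
    main = withGitTmp "." $ \tmp ->
        -- A retrieval that insists its destination is under .git can
        -- now safely be pointed at a path inside tmp:
        writeFile (tmp </> "manifest") "retrieved content"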