Added Annex.cleanup, which is a general-purpose interface for registering
actions to run at the end.
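A minimal sketch of such a cleanup registry, assuming a plain StateT-over-IO
monad; the names and types here are illustrative, not the actual Annex monad
or API:

    import qualified Data.Map as M
    import Control.Monad.State

    -- Cleanup actions are keyed, so registering the same one twice is a no-op.
    type CleanupM = StateT (M.Map String (IO ())) IO

    -- Register an action to run at the end.
    addCleanup :: String -> IO () -> CleanupM ()
    addCleanup key action = modify $ M.insertWith (\_ old -> old) key action

    -- Run all registered actions at shutdown, then clear them.
    runCleanup :: CleanupM ()
    runCleanup = do
        actions <- gets M.elems
        liftIO $ sequence_ actions
        put M.empty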
Remotes with the old git-annex-shell commit every time anyway, and have no
commit command, so stderr is hidden when running the commit command against them.
Eventually, git-annex might try running this after making changes to
a remote. I have not yet thought of a good way for it to tell which
remotes it needs to be run on, though. It can't simply be done when
shutting down a cached ssh connection, because ssh connection caching
is optional, and that would not cover local remotes, which are not
accessed over ssh at all.
To avoid commits of data to the git-annex branch after each command
is run, set annex.alwayscommit=false. Its data will then be committed
less frequently, when a merge or sync is done.
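For example, run git config annex.alwayscommit false in the repository to
enable this.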
I was able to reproduce this on Linux using the kernel's NFS server and
mounting localhost:/. Determined that removing the directory fails when
the just-deleted file in it had been locked. Considered dropping the lock
before removing the directory, but this would complicate parts of the code
that should not need to worry about locking. So instead, ignore the failure
to remove the directory in this case.
While I was at it, made it attempt to remove both levels of hash
directories, in case they're empty.
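A rough sketch of that approach, assuming an object directory with two levels
of hash directories above it; this is illustrative only, not the actual code:

    import Control.Exception (IOException, try)
    import System.Directory (removeDirectory)
    import System.FilePath (takeDirectory)

    -- Try to prune the object's directory and the two hash directory levels
    -- above it, ignoring failures such as the NFS case where a just-deleted
    -- locked file still pins the directory. removeDirectory only succeeds on
    -- empty directories, so non-empty ones are left alone.
    pruneDirs :: FilePath -> IO ()
    pruneDirs objdir = mapM_ tryRemove
        [ objdir
        , takeDirectory objdir                  -- second hash level
        , takeDirectory (takeDirectory objdir)  -- top hash level
        ]
      where
        tryRemove d = do
            _ <- try (removeDirectory d) :: IO (Either IOException ())
            return ()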
This is the last memory leak that prevents git-annex from running
in constant space, as far as I can see. I can now run git annex find
dummied up to repeatedly find the same file over and over, on millions
of files, and memory stays entirely constant.
Increasing the queue size is useful when adding hundreds of thousands of files
on a system with plenty of memory.
git add gets quite slow in such a large repository, so if the system has
more than the ~32 MB of memory the queue can use by default, it's a useful
optimisation to increase the queue size, in order to decrease the number
of times git add is run.
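Assuming the limit is exposed via the annex.queuesize git config, it can be
raised with git config annex.queuesize <size>; the meaning of the value
depends on the git-annex version.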
The list of files had to be retained until the end so it could be deleted.
Also, a list of update-index lines was generated and only then fed into
git update-index.
Now everything streams in constant space.
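A sketch of the streaming approach, relying on laziness so the update-index
lines are produced and written incrementally; illustrative, not the actual code:

    import System.IO (hPutStr, hClose)
    import System.Process (createProcess, proc, std_in, StdStream(CreatePipe), waitForProcess)

    -- Feed update-index lines to git as they are produced, instead of
    -- building the whole list in memory first. With a lazily generated
    -- input list, this runs in constant space.
    streamUpdateIndex :: [String] -> IO ()
    streamUpdateIndex ls = do
        (Just hin, _, _, pid) <- createProcess
            (proc "git" ["update-index", "-z", "--index-info"])
                { std_in = CreatePipe }
        hPutStr hin $ concatMap (++ "\0") ls
        hClose hin
        _ <- waitForProcess pid
        return ()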
When hashing the files, the entire list of shas was read strictly.
That was entirely unnecessary, since there's a cleanup action run
after they're consumed.
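The lazy pattern looks roughly like this: the shas come back as a lazily read
list, along with a cleanup action to run once they have all been consumed
(names are made up for illustration; not the actual code):

    import Control.Monad (void)
    import System.IO (hGetContents)
    import System.Process (createProcess, proc, std_out, StdStream(CreatePipe), waitForProcess)

    -- Read a hashing command's output lazily, returning the shas plus a
    -- cleanup action that reaps the process after they have been consumed.
    lazyShas :: String -> [String] -> IO ([String], IO ())
    lazyShas cmd args = do
        (_, Just hout, _, pid) <- createProcess (proc cmd args)
            { std_out = CreatePipe }
        shas <- lines <$> hGetContents hout  -- lazy; nothing forces the whole list
        return (shas, void (waitForProcess pid))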