Avoid MVar deadlock issue, which I don't understand.
Have not taken the time to debug it fully, because it turns out I don't
need to resolve merge conflicts when a new branch ref is written... I
think.
Ensure the git-annex branch is merged when doing a manual pull.
Otherwise it can get out of sync, since git-annex normally only merges it
once per run.
SampleMVar won't work; between getting the current value and changing
it, another thread could make a change, which would get lost.
TMVar works well; this update situation is handled by atomic transactions.
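For illustration, a minimal sketch of the difference, using a plain Int
where the real state is richer:

    import Control.Concurrent.STM

    -- With a sample variable, reading the current value and writing the
    -- new one are two separate steps, so another thread's update in
    -- between is silently lost. With a TMVar, the whole read-modify-write
    -- runs as one atomic transaction.
    updateAtomically :: TMVar Int -> (Int -> Int) -> IO ()
    updateAtomically var f = atomically $ do
        v <- takeTMVar var   -- read the current value...
        putTMVar var (f v)   -- ...and put back the new one, atomically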
Note that, since this always pushes branch synced/master to the remote, it
assumes that master has already gotten all the commits that are on the
remote merged in. Otherwise, fast-forward prevention may prevent the push.
That's probably ok, because the next stage is to automatically detect
incoming pushes and merge.
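For illustration, the push amounts to roughly the following; a sketch,
with the refspec written out literally where the real code builds it up:

    import System.Exit (ExitCode(..))
    import System.Process (rawSystem)

    -- Push local master to the remote's synced/master branch. If master
    -- is missing commits the remote already has, git's fast-forward
    -- check makes this push fail.
    pushSyncedMaster :: String -> IO Bool
    pushSyncedMaster remote = do
        code <- rawSystem "git" ["push", remote, "master:synced/master"]
        return (code == ExitSuccess)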
It's possible for there to be multiple queued changes all adding the same
file, and for those changes to be reordered. Maybe. This check guards
against such reordering ending up adding the wrong version of the file last.
Rethought how to keep track of pending adds that need to be retried later.
The commit thread already runs every second when there are changes,
so let's keep pending adds queued as changes until they're safe to add.
Also, the committer is now smarter about avoiding empty commits when
all the adds are currently unsafe, or in the rare case that an add event
for a symlink is not received in time. It may avoid empty commits entirely.
This seems to work as before for inotify, and is untested for kqueue.
(Actually commit batching seems to be improved for inotify, although I'm
not sure why. I'm seeing only two commits made during large batch
operations, and the first of those is the non-batch mode commit.)
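A sketch of the requeueing idea above; the Change type and commit action
here are stand-ins for the real ones:

    -- Hypothetical Change type; the real one carries more information.
    data Change = PendingAddChange FilePath | LinkChange FilePath
        deriving (Show, Eq)

    -- One pass of the commit thread: commit the adds that are safe now,
    -- and requeue the rest as changes to be retried on a later pass.
    -- When nothing is safe, make no commit at all, avoiding an empty one.
    commitPass :: (Change -> IO Bool) -> [Change] -> IO [Change]
    commitPass safeToAdd changes = do
        safety <- mapM safeToAdd changes
        let safe   = [c | (True,  c) <- zip safety changes]
            unsafe = [c | (False, c) <- zip safety changes]
        if null safe
            then return unsafe      -- nothing worth committing yet
            else do
                commit safe         -- stand-in for the real commit
                return unsafe       -- pending adds stay queued
      where
        commit cs = putStrLn ("commit of " ++ show (length cs) ++ " changes")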
Kqueue needs to remember which files failed to be added due to being open,
and retry them. This commit gets the data in place for such a retry thread.
Broke KeySource out into its own file, and added Eq and Ord instances
so it can be stored in a Set.
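Something along these lines; the field names are illustrative:

    import qualified Data.Set as S

    -- A KeySource ties the file being added to the location of its content.
    data KeySource = KeySource
        { keyFilename :: FilePath
        , contentLocation :: FilePath
        }
        deriving (Eq, Ord, Show)

    -- The derived Ord instance is what lets pending retries live in a Set.
    rememberFailedAdd :: KeySource -> S.Set KeySource -> S.Set KeySource
    rememberFailedAdd = S.insert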
There is indeed a race waiting for LinkChanges:
1. file annexed, link made
2. link deleted
3. inotify event for link creation is delivered, but as the link is gone, the handler is not run
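Step 3 comes from the dispatcher checking that the path still exists
before invoking the handler; a sketch of that shape, with hypothetical
names:

    import Control.Exception (SomeException, try)
    import System.Posix.Files (FileStatus, getSymbolicLinkStatus)

    -- The dispatcher stats the path before running the creation handler.
    -- If the link was deleted in between (step 2 above), the stat fails
    -- and the handler never runs, so the event is lost.
    dispatchCreate :: FilePath -> (FilePath -> IO ()) -> IO ()
    dispatchCreate f handler = do
        r <- try (getSymbolicLinkStatus f)
                :: IO (Either SomeException FileStatus)
        either (const $ return ()) (const $ handler f) r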
Defer adding files to the annex until commit time, when, during a batch
operation, a bundle of files will be available. This will allow checking
them all with a single lsof call.
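A minimal sketch of the batched check, assuming lsof is in PATH and
skipping the real output parsing:

    import System.Process (readProcessWithExitCode)

    -- One lsof invocation covers the whole batch. Any file named in the
    -- output is still open by some process, so not yet safe to add.
    -- (lsof exits nonzero when none of the files are open, so the exit
    -- code alone does not signal an error.)
    checkOpenFiles :: [FilePath] -> IO String
    checkOpenFiles fs = do
        (_code, out, _err) <- readProcessWithExitCode "lsof" ("--" : fs) ""
        return out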
The tricky part is that adding the file causes a symlink change inotify event.
So I made it wait for an appropriate number of symlink changes to be
received before continuing with the commit. This avoids any delay
in the commit process. It is possible that some unrelated symlink change is
made; if that happens it'll commit it and delay committing the newly added
symlink for 1 second. This seems ok. I do rely on the expected symlink
change event always being received, but only when the add succeeds.
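The wait has roughly this shape; a simplified sketch, where the real
code does not drop the unrelated changes it sees while waiting:

    import Control.Concurrent.STM
    import System.Timeout (timeout)

    data Change = LinkChange FilePath | OtherChange FilePath

    -- Block until the expected number of symlink change events arrive,
    -- or give up after 1 second if one never shows up (e.g. the add
    -- failed, so its event will never be received).
    waitForLinkChanges :: Int -> TChan Change -> IO ()
    waitForLinkChanges n chan
        | n <= 0 = return ()
        | otherwise = do
            r <- timeout 1000000 $ atomically $ readTChan chan
            case r of
                Just (LinkChange _) -> waitForLinkChanges (n - 1) chan
                Just _              -> waitForLinkChanges n chan
                Nothing             -> return ()  -- timed out; commit anyway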
Another way to do it might be to directly stage the symlink, and then
ignore the redundant symlink change event. That would involve some
redundant work, and perhaps an empty commit, but if this code turns
out to have some bug, that'd be the best way to avoid it.
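Staging the symlink directly would come down to git plumbing, roughly as
below; a sketch, where the real code would go through the existing git
command layer:

    import System.Process (callProcess, readProcess)

    -- Hash the link target as a blob, then stage it at the path with
    -- git's symlink mode (120000), with no inotify event needed.
    stageSymlink :: FilePath -> String -> IO ()
    stageSymlink file target = do
        out <- readProcess "git" ["hash-object", "-w", "--stdin"] target
        let sha = takeWhile (/= '\n') out
        callProcess "git"
            ["update-index", "--add", "--cacheinfo", "120000", sha, file]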
FWIW, this change seems to, as a bonus, have produced better grouping
of batch changes into single commits. Before, a large batch change would
result in a series of commits, with the first containing only one file,
and each of the rest bundling a number of files. Now, the added wait for
the symlink changes to arrive gives time for additional add changes to
be processed, all within the same commit.
A few places catch IO errors raised while calling runThreadState,
but since the MVar was not restored on the error path, it'd later
deadlock trying to read from it.
I'd like to catch all exceptions here, but I could not get the types
to unify.
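One way out of both problems, as a sketch, assuming the thread state is
an MVar around the Annex state: let modifyMVar do the restoring, and pin
the exception type explicitly.

    import Control.Concurrent.MVar (MVar, modifyMVar)
    import qualified Control.Exception as E

    -- modifyMVar masks exceptions and puts the original value back if
    -- the action throws, so a caught error no longer leaves the MVar
    -- empty (and later readers no longer deadlock).
    runThreadState :: MVar st -> (st -> IO (st, a)) -> IO a
    runThreadState = modifyMVar

    -- Pinning the result type to SomeException is one way to make a
    -- catch-everything handler typecheck.
    tryAnything :: IO a -> IO (Either E.SomeException a)
    tryAnything = E.try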