This keeps the expensive transfer scan from relying on its list of
remotes to scan remaining accurate throughout the scan, which it will
not when the user pauses syncing to a remote.
I feel it's ok to queue transfers to *any* known remote, not just the ones
being scanned.
Note that there are still small races where a transfer can be queued
for a remote after syncing to it has been paused. Not just in the
expensive transfer scan, but in the cheap failed transfer scan, and
elsewhere.
I noticed this while offline (so that lack of solar power is good for something).
Apparently it tries to bind multicast to lo, and that fails.
If this happens, catch it, and retry until a real network interface becomes
available.
It may be that this should tie into the NetWatcher, and rebind whenever
an interface comes up. Needs testing..
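To illustrate the catch-and-retry idea, here's a minimal Haskell sketch
(the name and the one-second delay are assumptions, not the assistant's
actual code):

    {-# LANGUAGE ScopedTypeVariables #-}
    import Control.Concurrent (threadDelay)
    import Control.Exception (IOException, try)

    -- Keep retrying an action (such as binding the multicast socket)
    -- until it stops throwing; each failure is taken to mean no usable
    -- network interface is up yet.
    retryBind :: IO a -> IO a
    retryBind a = do
        r <- try a
        case r of
            Right v -> return v
            Left (_ :: IOException) -> do
                threadDelay 1000000  -- wait a second for an interface
                retryBind a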
Both when queueing downloads, and uploads, consults the preferred content
settings.
I didn't make it check yet when requeueing failed transfers or queueing
deferred downloads; dealing with the preferred content settings (or
indeed, other settings) changing while the assistant is running still
needs work.
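The check that was added is roughly this shape (a sketch with
hypothetical names, not git-annex's real API):

    import Control.Monad (when)

    data Direction = Download | Upload

    -- Consult the preferred content check before enqueueing a transfer
    -- in either direction.
    queueWhenWanted
        :: (Direction -> FilePath -> IO Bool)  -- preferred content check
        -> (Direction -> FilePath -> IO ())    -- actually enqueue
        -> Direction -> FilePath -> IO ()
    queueWhenWanted wanted enqueue d f = do
        ok <- wanted d f
        when ok $ enqueue d f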
Makes it safe to use git annex unlock with the watcher/assistant.
And also to mix use of the watcher/assistant with regular files stored in git.
Long ago, I had avoided doing this check, except during the startup scan,
because it would be slow to run ls-files repeatedly.
But then I added the lsof check, and to make that fast, got it to detect
batch file adds. So let's move the ls-files check to also occur when it'll
have a batch, and can check them all with one call.
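The batched check might look something like this sketch (not the actual
git-annex code; one ls-files call covers the whole batch):

    import System.Process (readProcess)

    -- Ask git once which of a batch of files are already tracked;
    -- the watcher can skip those.
    alreadyTracked :: [FilePath] -> IO [FilePath]
    alreadyTracked [] = return []
    alreadyTracked fs = lines <$> readProcess "git" (["ls-files", "--"] ++ fs) ""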
This does slow down adding a single file by just a bit, but really only
a little bit. (The lsof check is probably more expensive.) It also
speeds up the startup scan, especially when there are lots of new files
found by the scan.
Also, fixed the sleep for annex.delayadd to not run while the threadstate
lock is held, so it doesn't unnecessarily freeze everything else.
Also, --force no longer makes it skip the lsof check, which was not
documented, and seems never a good idea.
This is handled differently for inotify, which can track modifications of
existing files, and kqueue, which cannot (TTBOMK). On the inotify side,
the TransferWatcher just waits for the file to be updated and reads the new
bytesComplete. On the kqueue side, the TransferPoller has to re-read
the file periodically (currently every 0.5 seconds; that might need to
increase).
I did think about working around kqueue's limitations by somehow creating
a new file each time the size changed. But cleaning up all the files that
would result seemed difficult. And really, this is not a lot worse than
the TransferWatcher's behavior for downloads, which stats a file every 0.5
seconds. As long as the OS has decent file caching behavior..
cp is used here, but we can just watch the size of the destination file.
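A sketch of the polling approach (names assumed): stat the file on a
fixed interval and report size changes, which is all kqueue leaves us
to work with.

    import Control.Concurrent (threadDelay)
    import Control.Monad (when)
    import System.Directory (doesFileExist, getFileSize)

    -- Poll a file's size every half second, reporting each change.
    -- Works for a transfer info file or a destination file growing
    -- under cp, at the cost of a stat per poll.
    pollSize :: FilePath -> (Integer -> IO ()) -> IO ()
    pollSize f report = go (-1)
      where
        go old = do
            present <- doesFileExist f
            new <- if present then getFileSize f else return 0
            when (new /= old) $ report new
            threadDelay 500000  -- 0.5 seconds, as discussed above
            go new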
This commit made from within the ruins of an old mill, overlooking a
beautiful waterfall.
This doesn't avoid it sometimes attempting to commit when there are no
changes. Typically that happens when a change is pushed in from another
repo; the watcher sees the file and tries to stage it, resulting in an
empty commit. Really fixing that would probably use more CPU than
occasionally trying to make an empty commit.
However, this does save a lot of unnecessary work, as those empty commits
had to be synced out, which no longer happens.
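For comparison, the check that was judged not worth its CPU cost would
be something like this sketch (not what the assistant does):

    import System.Process (readProcess)

    -- True if the index differs from HEAD, i.e. a commit would not
    -- be empty. Running this before every commit is the cost that
    -- was deemed too high.
    hasStagedChanges :: IO Bool
    hasStagedChanges = do
        out <- readProcess "git" ["diff", "--cached", "--name-only"] ""
        return (not (null out))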
This ensures file propagation takes place in situations such as: Usb drive A
is connected to B. A's master branch is already in sync with B, but it is
being used to sneakernet some files around, so B downloads those. There is no
master branch change, so C does not request these files. B needs to upload
the files it just downloaded on to C, etc.
On my first try at this, I saw loops happen. B uploaded to C, which
then tried to upload back to B (because it had not received the updated
git-annex branch from B yet). B already had the file, but it still
created a transfer info file for the incoming transfer, and its watcher
saw that info file being removed, and tried to upload back to C.
These loops should have been fixed by my previous commit. (They never
affected ssh remotes, only local ones, it seemed.) While C might still try
to upload to B, or to some other remote that already has the file, the
extra work dies out there.
I was seeing some interesting crashes after the previous commit,
when making file changes slightly faster than the assistant could keep
up with.
error: Ref refs/heads/master is at 7074f8e0a11110c532d06746e334f2fec6af6ab4 but expected 95ea86008d72a40d97a81cfc8fb47a0da92166bd
fatal: cannot lock HEAD ref
Committer crashed: git commit [Param "--allow-empty-message",Param "-m",Param "",Param "--allow-empty",Param "--quiet"] failed
Pusher crashed: thread blocked indefinitely in an STM transaction
Clearly the merger ended up running at the same time as the committer,
and with both modifying HEAD, the committer crashed. I fixed that by
making the Merger run its merge inside the annex monad, which avoids
it running concurrently with other git operations. Also by making
the committer not crash if git fails.
What I don't understand is why the pusher then crashed with an STM deadlock.
That must be in either the DaemonStatusHandle or the FailedPushMap,
and the latter is only used by the pusher. Did the committer's crash somehow
break STM?
The BlockedIndefinitelyOnSTM exception is described as:
-- |The thread is waiting to retry an STM transaction, but there are no
-- other references to any @TVar@s involved, so it can't ever continue.
If the Committer had a reference to a TVar and crashed, I can sort of see
this leading to that exception..
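For reference, here is a minimal standalone program that provokes that
exception (nothing to do with the assistant's code): once the only
thread that could ever write the TVar dies, the runtime knows the
blocked transaction can never continue.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM

    main :: IO ()
    main = do
        tv <- newTVarIO (0 :: Int)
        _ <- forkIO $ do
            threadDelay 100000
            error "writer crashes before writing"  -- its TVar reference dies with it
            atomically $ writeTVar tv 1
        -- With the writer gone, no other reference to the TVar remains,
        -- so this blocked retry gets BlockedIndefinitelyOnSTM:
        -- "thread blocked indefinitely in an STM transaction"
        atomically $ do
            v <- readTVar tv
            check (v /= 0)
        putStrLn "never reached"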
The crash was quite easy to reproduce after the previous commit, but
after making the above change, I have yet to see it again. Here's hoping.
Now when a download is queued and there's no known remote to get it from,
it's added to a deferred download list, which will be retried later.
The Merger thread tries to queue any deferred downloads when it receives
a push to the git-annex branch.
Note that the Merger thread now also forces an update of the git-annex
branch. The assistant was not updating this branch before, and it saw a
(mostly) correct view of state, but now that incoming pushes go to
synced/git-annex, it needs to be merged in.
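A sketch of the deferred list (hypothetical types; the real queue
carries more information):

    import Control.Concurrent.STM

    type Key = String  -- stands in for git-annex's real Key type

    newtype DeferredDownloads = DeferredDownloads (TVar [Key])

    -- No known remote has the key yet: park the download for later.
    defer :: DeferredDownloads -> Key -> IO ()
    defer (DeferredDownloads tv) k = atomically $ modifyTVar' tv (k :)

    -- Called by the Merger after a push to the git-annex branch:
    -- drain the list and requeue everything for another try.
    retryDeferred :: DeferredDownloads -> (Key -> IO ()) -> IO ()
    retryDeferred (DeferredDownloads tv) requeue = do
        ks <- atomically $ swapTVar tv []
        mapM_ requeue ks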
Don't expose these as branches in refs/heads/. Instead hide them away in
refs/synced/ where only show-ref will find them.
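Since they're not branches, finding them takes show-ref; a sketch (not
the real code) of listing them from Haskell:

    import Data.List (isPrefixOf)
    import System.Process (readProcess)

    -- show-ref prints "<sha> <ref>" lines; keep only the refs hidden
    -- under refs/synced/.
    syncedRefs :: IO [String]
    syncedRefs = do
        out <- readProcess "git" ["show-ref"] ""
        return [ ref | [_sha, ref] <- map words (lines out)
                     , "refs/synced/" `isPrefixOf` ref ]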
Make unused only look at branches and tags, not these other things,
so it won't care if some stale sync ref once referred to a file.
This means they don't need to be deleted; deleting them could have
led to an incoming sync being missed.
The names of the fallback branches that get pushed contain the uuid of
the pusher, which is ugly. That's why syncing doesn't normally use this
method.
The merger deletes fallback branches after merging them, to contain the
ugliness, and so unused doesn't look at data from these branches.
(The fallback git-annex branch is left behind for now.)
Finally.
Last bug fixes here: Send the PairResp with the same UUID as the
PairReq. Fix an off-by-one in the code that filters out our own pairing
messages.
Also reworked the pairing alerts, which are still slightly buggy.
Pair requests with the same UUID are part of the same pairing session,
which allows us to detect attempts to brute force the shared secret, as
those will result in pair requests with the same UUID that are not
verified with the right secret.
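Schematically (types and threshold invented for illustration), the
detection amounts to counting unverified messages that claim a
session's UUID:

    data PairMsg = PairMsg
        { msgUUID     :: String
        , msgVerified :: Bool  -- did it check out against the shared secret?
        }

    -- Repeated messages reusing a session's UUID that fail verification
    -- suggest someone guessing the shared secret.
    looksLikeBruteForce :: String -> [PairMsg] -> Bool
    looksLikeBruteForce session msgs = length bad > limit
      where
        bad   = [ m | m <- msgs, msgUUID m == session, not (msgVerified m) ]
        limit = 3  -- assumed cutoff, not from the commit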