Converted several threads to run in the monad.
Added a lot of useful combinators for working with the monad.
Now the monad includes the name of the thread.
Some debugging messages are disabled until the other threads are converted.
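
For illustration, here's a minimal sketch of the shape such a monad could
take, assuming a plain ReaderT over IO; ThreadData, debug, and runAssistant
are stand-in names, not git-annex's real ones.

    {- Stand-in types, not git-annex's real ones. -}
    import Control.Monad.Reader

    data ThreadData = ThreadData { threadName :: String }

    type Assistant a = ReaderT ThreadData IO a

    -- Example combinator: a debug message tagged with the thread's name.
    debug :: String -> Assistant ()
    debug msg = do
        name <- asks threadName
        liftIO $ putStrLn $ name ++ ": " ++ msg

    -- Run an Assistant action as the body of a named thread.
    runAssistant :: String -> Assistant a -> IO a
    runAssistant name a = runReaderT a (ThreadData name)
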
I now have this topology working:
assistant ---> {bare repo, special remote} <--- assistant
And, I think, also this one:
    +------------- bare repo ------------+
    v                                    v
assistant ---> special remote <--- assistant
Before, with assistant <---> assistant connections, both sides got their
location info updated after a transfer. In this topology, the bare repo
*might* get its location info updated, but the other assistant has no way
to know that it did. And a special remote doesn't record location info,
so transfers to it won't propagate out location log changes at all.
So, for these to work, after a transfer succeeds, the git-annex branch
needs to be pushed. This is done by recording that a synthetic commit has
occurred, which lets the pusher handle pushing out the change (which will
include actually committing any still journalled changes to the git-annex
branch).
Of course, this means rather a lot more syncing action than happened
before. At least the pusher somewhat bundles together pushes that arrive
close together; currently it just waits 2 seconds between each push.
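
A hedged sketch of how this mechanism could look, using a plain TChan as
the channel of recorded commits; recordCommit, pusherLoop, and CommitChan
are illustrative names, not git-annex's real internals.

    import Control.Concurrent (threadDelay)
    import Control.Concurrent.STM
    import Control.Monad (forever)

    type CommitChan = TChan ()  -- illustrative; not git-annex's real type

    -- Called after a transfer succeeds, so the git-annex branch gets pushed.
    recordCommit :: CommitChan -> IO ()
    recordCommit c = atomically $ writeTChan c ()

    -- The pusher: block until there is something to push, wait a little so
    -- commits arriving close together are bundled, drain them, push once.
    pusherLoop :: CommitChan -> IO () -> IO ()
    pusherLoop commits push = forever $ do
        atomically $ readTChan commits
        threadDelay 2000000  -- 2 seconds
        atomically $ drain commits
        push
      where
        drain c = do
            e <- isEmptyTChan c
            if e
                then return ()
                else readTChan c >> drain c
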
This ensures file propagation takes place in situations such as: USB drive
A is connected to B. A's master branch is already in sync with B, but A is
being used to sneakernet some files around, so B downloads those. There is
no master branch change, so C (another repository B syncs with) does not
request these files. B needs to upload the files it just downloaded on to
C, etc.
On my first try at this, I saw loops happen. B uploaded to C, which then
tried to upload back to B (because it had not received the updated
git-annex branch from B yet). B already had the file, but it still created
a transfer info file for the incoming transfer, and its watcher saw that
file get removed, and tried to upload back to C.
These loops should have been fixed by my previous commit. (They never
affected ssh remotes, only local ones, it seemed.) While C might still try
to upload to B, or to some other remote that already has the file, the
extra work dies out there.
When multiple downloads of a key are queued, only the first is started;
the other downloads are left in the queue. This ensures that we don't lose
a queued download if the one that got started fails.
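
A sketch of that queue behavior, with simplified stand-in types: only the
first queued download of the key is removed and started, and any other
queued downloads of the same key stay queued as fallbacks.

    import Control.Concurrent.STM

    type Key = String  -- simplified stand-in
    data QueuedTransfer = QueuedTransfer
        { transferKey :: Key
        , transferRemote :: String
        }

    -- Remove and return only the first queued download of the key;
    -- any other queued downloads of the same key stay in the queue.
    startFirstDownload :: TVar [QueuedTransfer] -> Key -> STM (Maybe QueuedTransfer)
    startFirstDownload queue k = do
        ts <- readTVar queue
        case break ((== k) . transferKey) ts of
            (before, t:rest) -> do
                writeTVar queue (before ++ rest)
                return (Just t)
            _ -> return Nothing
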
Run code that pops off the next queued transfer and adds it to the active
transfer map within an allocated transfer slot, rather than before
allocating a slot. Fixes the transfers display, which had been displaying
the next transfer as a running transfer, while the previous transfer was
still running.
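
A sketch of the shape of that fix, assuming a counting semaphore as the
pool of transfer slots; inTransferSlot, dequeueAndRecord, and
startNextTransfer are illustrative names, not git-annex's real slot code.

    import Control.Concurrent.QSemN
    import Control.Exception (finally)

    -- Claim a transfer slot for the duration of an action.
    inTransferSlot :: QSemN -> IO a -> IO a
    inTransferSlot slots a = do
        waitQSemN slots 1  -- blocks until a slot frees up
        a `finally` signalQSemN slots 1

    -- Only once the slot is held is the next transfer dequeued and
    -- recorded as running; dequeueAndRecord and runTransfer are stand-ins.
    startNextTransfer :: QSemN -> IO t -> (t -> IO ()) -> IO ()
    startNextTransfer slots dequeueAndRecord runTransfer =
        inTransferSlot slots $ dequeueAndRecord >>= runTransfer
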
Used by the assistant in place of the copy command, this is faster because
it avoids using git ls-files, avoids redundantly checking the location
log, and runs in oneshot mode, which avoids making a commit to the
git-annex branch for every file transferred.
This seems to work pretty well.
Handled the process groups like this:
- git-annex processes started by the assistant for transfers are run in their
own process groups.
- otherwise, rely on the shell to allocate a process group for git-annex
There is potentially a problem if some other program runs git-annex
directly (not using sh -c). The program and git-annex would then be in
the same process group, so if that git-annex starts a transfer and it's
canceled, the program would also get killed. That may or may not be a
desired result.
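
A minimal sketch of the first case, using System.Process's create_group to
put the transfer command in its own process group so that cancellation can
signal the whole group; the wrapper names are mine, not git-annex's.

    import System.Process

    -- Start the transfer command in a fresh process group.
    startTransferProcess :: FilePath -> [String] -> IO ProcessHandle
    startTransferProcess cmd args = do
        (_, _, _, pid) <- createProcess (proc cmd args) { create_group = True }
        return pid

    -- Cancel it by interrupting its whole process group, so helpers like
    -- rsync that it spawned are signalled too.
    cancelTransferProcess :: ProcessHandle -> IO ()
    cancelTransferProcess = interruptProcessGroupOf
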
Also, the new updateTransferInfo probably closes a race where it was
possible for the thread id to not be recorded in the transfer info, if
the transfer info file from the transfer process is read first.
This doesn't quite work, because canceling a transfer sends a signal
to git-annex, but not to rsync (etc).
Looked at making git-annex run in its own process group, which could then
be killed, and would kill child processes. But rsync checks whether its
process group is the foreground process group and doesn't show progress if
not, and when git has run git-annex, if git-annex makes a new process
group, that is not the case. Also, if git has run git-annex, ctrl-c
wouldn't be propagated to it if it made a new process group.
So this seems like a blind alley, but recording it here just in case.
Putting the transfer into currentTransfers atomically introduced a bug in
the code that checks to see if a transfer is in progress and cancels it.
Fixed by moving that check inside the STM transaction.
The fun part was making it move things from TransferQueue to
currentTransfers entirely atomically, which avoids an inconsistent display
if the WebApp renders the current status at just the wrong time. STM to
the rescue!
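
A sketch of that atomic move, with simplified types standing in for the
real TransferQueue and the map of current transfers.

    import Control.Concurrent.STM
    import qualified Data.Map as M

    -- Move the next queued transfer into the map of current transfers in
    -- one transaction; readers see it in exactly one of the two places.
    dequeueToCurrent :: Ord t => TChan (t, info) -> TVar (M.Map t info) -> STM (t, info)
    dequeueToCurrent queue current = do
        (t, info) <- readTChan queue  -- retries if the queue is empty
        modifyTVar' current (M.insert t info)
        return (t, info)
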
I've convinced myself that nothing in DaemonStatus can deadlock,
as it always keeps the TMVar full. That was the only reason it was in the
Annex monad.
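
The discipline, sketched with stand-in accessor names: every access takes
the TMVar and refills it within the same transaction, so it is never left
empty for another thread to block on.

    import Control.Concurrent.STM

    getDaemonStatus :: TMVar status -> STM status
    getDaemonStatus = readTMVar

    -- Take and immediately refill, within one transaction, so the TMVar
    -- is never observed empty by other threads.
    modifyDaemonStatus :: TMVar status -> (status -> status) -> STM ()
    modifyDaemonStatus v f = do
        s <- takeTMVar v
        putTMVar v (f s)
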
This should fix OSX/BSD issues with not noticing transfer information
files with kqueue. Now that threads are used, the thread can manage the
transfer slot allocation and deallocation by itself; much cleaner.
Check first if a transfer needs to be done, using the location log only
(for speed), and avoid occupying a slot if not. Always write a transfer
info file, and keep it open throughout the transfer process.
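
A sketch of the fast check, with stand-in types and a caller-supplied
location log query (the real git-annex interface differs): the transfer is
needed only if the location log says the receiving side lacks the key.

    type UUID = String  -- simplified stand-ins
    type Key = String
    data Direction = Upload | Download

    shouldTransfer :: (Key -> IO [UUID]) -> UUID -> UUID -> Direction -> Key -> IO Bool
    shouldTransfer loggedLocations here remote direction key = do
        locs <- loggedLocations key
        return $ case direction of
            Download -> here `notElem` locs    -- we don't have it yet
            Upload -> remote `notElem` locs    -- the remote doesn't have it yet
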
Now transfers to remotes seem reliable.
There's still a bug; if the child updates its transfer info file,
then the data from it will supersede the TransferInfo, losing the
info that we should wait on this child.