If an add failed, we should lose the KeySource, since it presumably
differs due to a change that was made to the file.
(The locked-down file is already deleted.)
Making this a TSet of lists of Changes, rather than a TSet of Changes,
makes refilling it in batch mode much more efficient. Rather than adding
every Change it has collected one at a time, it can add them all in one
fast batch operation.
It would be more efficient yet to use a Set, but that would need an Eq
instance for InodeCache.
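To illustrate the shape of this, a minimal sketch, using a plain STM
TChan as a stand-in for the TSet (the names and types here are
simplified assumptions, not git-annex's real ones):

    import Control.Concurrent.STM

    -- Simplified stand-in for the real Change type.
    data Change = Change FilePath

    -- The channel now holds batches of Changes rather than single Changes.
    type ChangeChan = TChan [Change]

    -- Draining: read every queued batch and flatten.
    getChanges :: ChangeChan -> STM [Change]
    getChanges chan = concat <$> readAll
      where
        readAll = do
            next <- tryReadTChan chan
            case next of
                Nothing -> return []
                Just xs -> (xs :) <$> readAll

    -- Refilling after a failed batch add is a single write of the whole
    -- list, rather than one write per Change.
    refillChanges :: ChangeChan -> [Change] -> STM ()
    refillChanges = writeTChan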
This cleaned up the code quite a bit; now the committer just looks at the
Change to see if it's a change that needs to have a transfer queued for it.
If I later want to add dropping keys for files that were removed, or
something like that, this should make it straightforward.
This also fixes a bug. In direct mode, moving a file out of an archive
directory failed to start a transfer to get its content. The problem
was that the file had not been committed to git yet, and so the transfer
code didn't want to touch it, since fileKey failed to get its key.
Only starting transfers after a commit avoids this problem.
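A rough sketch of the idea, with hypothetical, simplified types and
helpers (the real ones in git-annex differ):

    -- Hypothetical, simplified types; not the real git-annex Change.
    newtype Key = Key String

    data Change
        = AddChange FilePath        -- new content added to the annex
        | RmChange FilePath         -- a file was removed
        | InPlaceChange FilePath    -- an annexed file moved/renamed into place

    -- The committer just inspects the Change to see whether it may need
    -- content transferred for it.
    wantsTransfer :: Change -> Maybe FilePath
    wantsTransfer (InPlaceChange f) = Just f
    wantsTransfer _                 = Nothing

    -- Run only after the commit, so that looking up the file's key in git
    -- succeeds; this is what avoids the direct mode bug described above.
    queueTransfersAfterCommit
        :: (FilePath -> IO (Maybe Key))  -- key lookup (e.g. via git)
        -> (FilePath -> Key -> IO ())    -- queue a download
        -> [Change]
        -> IO ()
    queueTransfersAfterCommit lookupKey queueDownload changes =
        mapM_ go [ f | Just f <- map wantsTransfer changes ]
      where
        go f = maybe (return ()) (queueDownload f) =<< lookupKey f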
Noticed that, at startup or network reconnect, git push messages were
sent, often before presence info had been gathered, and so were not sent
to any buddies.
To fix this, keep track of which buddies have seen each such message,
and when new presence is received from a buddy that has not yet seen
one, resend it.
This is done only for push initiation messages, so very little data needs
to be stored.
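Roughly, the bookkeeping looks like this (names and types here are
hypothetical stand-ins, not the real git-annex ones):

    import Control.Concurrent.STM
    import qualified Data.Map.Strict as M
    import qualified Data.Set as S

    type BuddyID = String

    -- Only push initiation messages are tracked, so very little is stored.
    data PushInitiation = StartingPush | PushRequest
        deriving (Eq, Ord, Show)

    -- For each push initiation message broadcast so far, the buddies that
    -- have already been sent it.
    type SentTo = TVar (M.Map PushInitiation (S.Set BuddyID))

    -- Record that a message was sent to the buddies currently present.
    recordSent :: SentTo -> PushInitiation -> S.Set BuddyID -> STM ()
    recordSent v msg buddies = modifyTVar' v (M.insertWith S.union msg buddies)

    -- When fresh presence arrives from a buddy, resend any push initiation
    -- message that buddy has not yet seen.
    onPresence :: SentTo -> BuddyID -> (PushInitiation -> IO ()) -> IO ()
    onPresence v buddy resend = do
        missing <- atomically $ do
            m <- readTVar v
            let unseen = [ msg | (msg, seen) <- M.toList m
                               , buddy `S.notMember` seen ]
            writeTVar v (foldr (M.adjust (S.insert buddy)) m unseen)
            return unseen
        mapM_ resend missing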
This fixes the issue mentioned in the last commit.
Turns out just collecting the UUIDs of clients behind an XMPP remote is
insufficient (although I should probably still do it for other reasons),
because a single remote repo might be connected via both XMPP and local
pairing. So a way is needed to know when a push was received from any
client using a given XMPP remote over XMPP, as opposed to via ssh.
Make manualPull send push requests over XMPP.
When reconnecting with remotes, those that are XMPP remotes cannot
immediately be pulled from and scanned, so instead maintain a set of
(probably) desynced remotes, and put XMPP remotes on it. (This set could be
used in other ways later, if we can detect we're out of sync with other
types of remotes.)
The merger handles detecting when an XMPP push is received from a desynced
remote, and triggers a scan then, if the repositories have in fact diverged.
This has one known bug: A single XMPP remote can have multiple clients
behind it. When this happens, only the UUID of one client is recorded
as the UUID of the XMPP remote. Pushes from the other XMPP clients will not
trigger a scan. If the client whose UUID is expected responds to the push
request, it'll work, but when that client is offline, we're SOL.
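A rough sketch of the desynced-remotes tracking and the merger-side
check, again with hypothetical names standing in for the real ones:

    import Control.Concurrent.STM
    import Control.Monad (when)
    import qualified Data.Set as S

    newtype UUID = UUID String
        deriving (Eq, Ord, Show)

    -- Remotes that could not be pulled from and scanned at reconnect time
    -- (currently, the XMPP remotes).
    type DesyncedRemotes = TVar (S.Set UUID)

    markDesynced :: DesyncedRemotes -> UUID -> STM ()
    markDesynced v u = modifyTVar' v (S.insert u)

    -- Called by the merger when a push arrives over XMPP from a remote.
    -- If the remote was marked desynced and the branches have actually
    -- diverged, queue a full scan and clear the flag.
    onXMPPPush :: DesyncedRemotes -> UUID -> IO Bool -> IO () -> IO ()
    onXMPPPush v u checkDiverged queueScan = do
        wasDesynced <- atomically $ do
            s <- readTVar v
            writeTVar v (S.delete u s)
            return (u `S.member` s)
        when wasDesynced $ do
            diverged <- checkDiverged
            when diverged queueScan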
Noticed that when pairing, sometimes both sides start to push, the other
side sends a PushRequest, and the two deadlock, with neither doing
anything. (A timeout eventually breaks this.) So, let both run at the
same time.
My reasoning is that a StartingPush could be received after another push
has started being received, and it would be better to respond to it
afterwards than not at all.
It might even work, although nothing yet triggers XMPP pushes.
Also added a set of deferred push messages. Only one push can run at a
time, and unrelated push messages get deferred. The set will never grow
very large, because only two types of messages ever go into it, and they
vary only in the client doing the push.
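Sketch of the deferral, with hypothetical names and a simplified message
type:

    import Control.Concurrent.STM
    import qualified Data.Set as S

    type ClientID = String

    -- Only these two kinds of messages are ever deferred, and they vary
    -- only in the client doing the push, so the set stays small.
    data PushMessage = StartingPush ClientID | PushRequest ClientID
        deriving (Eq, Ord, Show)

    data PushState = PushState
        { pushRunning      :: TVar Bool
        , deferredMessages :: TVar (S.Set PushMessage)
        }

    -- If a push is already running, defer the message and report False;
    -- otherwise claim the push slot so the caller can handle it now.
    claimOrDefer :: PushState -> PushMessage -> STM Bool
    claimOrDefer st msg = do
        running <- readTVar (pushRunning st)
        if running
            then do
                modifyTVar' (deferredMessages st) (S.insert msg)
                return False
            else do
                writeTVar (pushRunning st) True
                return True

    -- When the running push finishes, release the slot and hand back the
    -- deferred messages for processing.
    finishPush :: PushState -> STM [PushMessage]
    finishPush st = do
        writeTVar (pushRunning st) False
        S.toList <$> swapTVar (deferredMessages st) S.empty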
Maybe the spec allows it, but broadcasting self-directed presence info to
all buddies is just insane.
I had to bring back the IQ messages for self-pairing, while still using
directed presence for other pairing. Ugly.