Unless the request is for a repo uuid we already know. This way, if A1 pairs
with friend B1, and B1 pairs with device B2, then B1 can request that A1 pair
with it and no confirmation is needed. (In future, we may want to try to do
that automatically, to make a more robust network.)
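
For illustration, a minimal sketch of that decision; the names and types here
are hypothetical stand-ins, not git-annex's actual code:

    type UUID = String  -- stand-in for git-annex's UUID type

    -- A pair request for a repo uuid we already know can be accepted
    -- without asking the user to confirm it.
    pairRequestNeedsConfirmation :: UUID -> [UUID] -> Bool
    pairRequestNeedsConfirmation requestuuid knownuuids =
        requestuuid `notElem` knownuuids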
Observed that the pushed refs were received, but not merged into master.
The merger never saw an add event for these refs. Either git is not writing
to a new file and renaming it into place, or the inotify code didn't notice
that. Changed it to also watch for modify events and that seems to have
fixed it!
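
A standalone sketch of the watcher change, using the hinotify package directly
rather than git-annex's own watcher layer (hinotify's addWatch takes a FilePath
in older versions and a RawFilePath in newer ones, so the path literal may need
adjusting):

    {-# LANGUAGE OverloadedStrings #-}
    import System.INotify

    main :: IO ()
    main = do
        i <- initINotify
        -- Watching Modify as well as Create/MoveIn catches writers that
        -- update a file in place instead of renaming a new file over it.
        _ <- addWatch i [Create, MoveIn, Modify] ".git/refs/remotes" print
        _ <- getLine   -- keep the watcher alive while events arrive
        return ()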
(Except for the actual streaming of receive-pack through XMPP, which
can only run once we've gotten an appropriate uuid in a push initiation
message.)
Pushes are now only initiated when the initiation message comes from a
known uuid. This allows multiple distinct repositories to use the same xmpp
address.
Note: This probably breaks initial push after xmpp pairing, because at that
point we may not know about the paired uuid, and so reject the push from
it. It won't break in simple cases, because the annex-uuid of the remote
is checked. However, when there are multiple clients behind a single xmpp
address, only the uuid of the first is recorded in annex-uuid, and so any
pushes from the others will be rejected (unless the first remote pushes their
uuids to us beforehand).
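
Roughly, the check amounts to something like this (hypothetical names, not the
real git-annex XMPP code):

    type UUID = String  -- stand-in for git-annex's UUID type

    -- Only start a push for an initiation message claiming a uuid we
    -- already know; anything else is ignored, so several distinct
    -- repositories can safely share one xmpp address.
    handlePushInitiation :: [UUID] -> UUID -> IO () -> IO ()
    handlePushInitiation knownuuids senderuuid startpush
        | senderuuid `elem` knownuuids = startpush
        | otherwise = return ()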
Without this, a very large batch add has commits of sizes approx
5000, 2500, 1250, etc down to 10, and then starts over at 5000.
This fixes it so it's 5000+ every time.
That hook updates associated file bookkeeping info for direct mode.
But, everything already called addAssociatedFile when adding/changing a
file. It only needed to also call removeAssociatedFile when deleting a file,
or a directory.
This should make bulk adds faster, by some possibly significant amount.
Bulk removals may be a little slower, since catKeyFile now has to be run
on each removed file, but they will still be faster than adds.
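
The shape of the bookkeeping, as a pure sketch with stand-in types (the real
code runs in git-annex's Annex monad and uses catKeyFile to find the deleted
file's key):

    import qualified Data.Map.Strict as M
    import Data.List (delete)

    type Key = String                      -- stand-in for git-annex's Key
    type AssocFiles = M.Map Key [FilePath]

    -- Already called whenever a file is added or changed.
    addAssociatedFile :: Key -> FilePath -> AssocFiles -> AssocFiles
    addAssociatedFile key file =
        M.insertWith (\_ old -> file : delete file old) key [file]

    -- The missing piece: also drop the file from its key's list when
    -- the file (or a directory containing it) is deleted.
    removeAssociatedFile :: Key -> FilePath -> AssocFiles -> AssocFiles
    removeAssociatedFile key file = M.adjust (delete file) key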
There's a tradeoff between making less frequent commits, and
needing to use memory to store all the changes that are coming
in. At 10 thousand, it needs 150 mb of memory. 5 thousand drops
that down to 90 mb or so.
This also turns out to have a significant impact on total run time.
I benchmarked 10k changes taking 27 minutes. But two 5k batches
took only 21 minutes.
If an add failed, we should lose the KeySource, since it, presumably,
differs due to a change that was made to the file.
(The locked down file is already deleted.)
Turns out that a lot of the time spent in a bulk add was just updating the
add alert to rotate through each file that was added. Showing one alert
makes for a significant speedup.
Also, when the webapp is open, this makes it take quite a lot less cpu
during bulk adds.
Also, it lets the user know when a bulk add happened, which is sorta
nice.
This better handles error messages formatted for console display, by
adding a <br> after each line.
Hmm, I wonder if it'd be worth pulling in a markdown formatter, and running
the messages through it?
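
The conversion itself is roughly this (a sketch, not the webapp's actual
rendering code; a real version would also HTML-escape each line):

    import Data.Text (Text)
    import qualified Data.Text as T

    -- Preserve the line breaks of a console-formatted message when it
    -- is shown as HTML, by joining its lines with <br>.
    consoleToHtml :: Text -> Text
    consoleToHtml = T.intercalate (T.pack "<br>") . T.lines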
The inotify limit warning is a particular problem: once it happens, it will
keep happening, and so combining its alerts resulted in one alert message
that was much too large, took up a lot of memory, and was too big for the
webapp to display.
Making this a TSet of lists of Changes, rather than a TSet of Changes,
makes refilling it, in batch mode, much more efficient. Rather than needing
to add every Change it's collected one at a time, it can add them in one
fast batch operation.
It would be more efficient yet to use a Set, but that would need an Eq
instance for InodeCache.
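
The idea, sketched with plain STM rather than git-annex's actual TSet type:
holding lists of changes lets a whole batch be pushed back in one transaction,
instead of re-queueing each Change individually.

    import Control.Concurrent.STM

    type ChangePool c = TVar [[c]]   -- each element is one batch of changes

    -- Refill after a deferred commit: one cheap cons, not one
    -- transaction (or traversal) per change.
    refillBatch :: ChangePool c -> [c] -> STM ()
    refillBatch pool changes = modifyTVar' pool (changes :)

    -- Drain all accumulated batches (ordering details elided; this
    -- only illustrates the batch structure).
    drainAll :: ChangePool c -> STM [c]
    drainAll pool = do
        batches <- readTVar pool
        writeTVar pool []
        return (concat batches)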