This means that anyone serving up the webapp to users as a service
(i.e., without providing any git-annex binary to the user at all) still needs
to provide a link to the source code for it, including any modifications
they may make.
This may make git-annex as a whole be covered by the AGPL when it is built
with the webapp. If in doubt, ask a lawyer.
When git-annex is built with the webapp disabled, no AGPLed code is used.
Even building in the assistant does not pull in AGPLed code.
This is handled differently for inotify, which can track modifications of
existing files, and kqueue, which cannot (to the best of my knowledge).
On the inotify side,
the TransferWatcher just waits for the file to be updated and reads the new
bytesComplete. On the kqueue side, the TransferPoller has to re-read the
file on every poll (currently every 0.5 seconds; that interval might need
to be increased).
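A minimal sketch of the polling side, with the real types elided
(readBytesComplete and publish stand in for the actual info file read and
status update, and are not the real names):

    import Control.Concurrent (threadDelay)
    import Control.Monad (when)

    -- Poll on a timer; only publish when the value actually changed.
    pollTransfer :: IO Integer -> (Integer -> IO ()) -> IO ()
    pollTransfer readBytesComplete publish = go (-1)
      where
        go old = do
            threadDelay 500000 -- 0.5 seconds
            new <- readBytesComplete
            when (new /= old) (publish new)
            go new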
I did think about working around kqueue's limitations by somehow creating
a new file each time the size changed. But cleaning up all the files that
would result seemed difficult. And really, this is not a lot worse than
the TransferWatcher's behavior for downloads, which stats a file every 0.5
seconds. As long as the OS has decent file caching behavior..
cp is used here, but we can just watch the size of the destination file
This commit was made from within the ruins of an old mill, overlooking a
beautiful waterfall.
This doesn't avoid it sometimes attempting to commit when there are no
changes. Typically that happens when a change is pushed in from another
repo; the watcher sees the file and tries to stage it, resulting in an
empty commit. Really fixing that would probably use more CPU than
occasionally trying to make an empty commit.
However, this does save a lot of unnecessary work, as those empty commits
had to be synced out, which no longer happens.
This ensures file propagation takes place in situations such as: USB drive A
is connected to B. A's master branch is already in sync with B, but it is
being used to sneakernet some files around, so B downloads those. There is no
master branch change, so C does not request these files. B needs to upload
the files it just downloaded on to C, etc.
On my first try at this, I saw loops happen. B uploaded to C, which then
tried to upload back to B (because it had not yet received the updated
git-annex branch from B). B already had the file, but it still created
a transfer info file for the incoming transfer, and its watcher saw
that file being removed, and tried to upload back to C.
These loops should have been fixed by my previous commit. (They never
affected ssh remotes, only local ones, it seemed.) While C might still try
to upload to B, or to some other remote that already has the file, the
extra work dies out there.
I was seeing some interesting crashes after the previous commit,
when making file changes slightly faster than the assistant could keep up.
error: Ref refs/heads/master is at 7074f8e0a11110c532d06746e334f2fec6af6ab4 but expected 95ea86008d72a40d97a81cfc8fb47a0da92166bd
fatal: cannot lock HEAD ref
Committer crashed: git commit [Param "--allow-empty-message",Param "-m",Param "",Param "--allow-empty",Param "--quiet"] failed
Pusher crashed: thread blocked indefinitely in an STM transaction
Clearly the merger ended up running at the same time as the committer,
and with both modifying HEAD, the committer crashed. I fixed that by
making the Merger run its merge inside the annex monad, which prevents it
from running concurrently with other git operations, and by making the
committer not crash if git fails.
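The serialization can be pictured as a simple mutex; this is only a sketch
of the idea, not the actual annex monad machinery:

    import Control.Concurrent.MVar

    type GitLock = MVar ()

    -- Any thread wanting to touch HEAD takes the lock first, so the
    -- Merger and Committer can never modify the repo at the same time.
    withGitLock :: GitLock -> IO a -> IO a
    withGitLock lock a = withMVar lock (const a)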
What I don't understand is why the pusher then crashed with a STM deadlock.
That must be in either the DaemonStatusHandle or the FailedPushMap,
and the latter is only used by the pusher. Did the committer's crash somehow
break STM?
The BlockedIndefinitelyOnSTM exception is described as:
-- |The thread is waiting to retry an STM transaction, but there are no
-- other references to any @TVar@s involved, so it can't ever continue.
If the Committer had a reference to a TVar and crashed, I can sort of see
this leading to that exception.
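For what it's worth, the exception is easy to provoke in a standalone
program (this is just an illustration, not assistant code): block on a
TVar that nothing else references, and the RTS delivers it during GC.

    import Control.Concurrent.STM
    import Control.Exception
    import Control.Monad (when)

    main :: IO ()
    main = do
        tv <- newTVarIO (0 :: Int)
        -- The only live reference to tv is this blocked transaction,
        -- so it can never be woken, and the RTS throws
        -- BlockedIndefinitelyOnSTM instead of hanging forever.
        r <- try (atomically (readTVar tv >>= \v -> when (v == 0) retry))
                :: IO (Either BlockedIndefinitelyOnSTM ())
        print r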
The crash was quite easy to reproduce after the previous commit, but
after making the above change, I have yet to see it again. Here's hoping.
Now when a download is queued and there's no known remote to get it from,
it's added to a deferred download list, which will be retried later.
The Merger thread tries to queue any deferred downloads when it receives
a push to the git-annex branch.
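A sketch of the mechanism, with a String standing in for the real Key type
and the queueing action passed in (names here are illustrative, not the
actual ones):

    import Control.Concurrent.STM

    type Key = String -- stands in for the real Key type

    deferDownload :: TVar [Key] -> Key -> STM ()
    deferDownload deferred k = modifyTVar' deferred (k:)

    -- Called by the Merger when a git-annex branch push arrives:
    -- drain the deferred list and try to queue each download again.
    retryDeferred :: TVar [Key] -> (Key -> IO ()) -> IO ()
    retryDeferred deferred queueDownload = do
        ks <- atomically (swapTVar deferred [])
        mapM_ queueDownload ks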
Note that the Merger thread now also forces an update of the git-annex
branch. The assistant was not updating this branch before, and it saw a
(mostly) correct view of state, but now that incoming pushes go to
synced/git-annex, it needs to be merged in.
Don't expose these as branches in refs/heads/. Instead hide them away in
refs/synced/ where only show-ref will find them.
Make unused only look at branches and tags, not these other refs,
so it won't care if some stale sync ref still uses a file.
This means they don't need to be deleted, which could have
led to an incoming sync being missed.
The fallback branches that get pushed to contain the UUID of the pusher,
which is ugly. That's why syncing doesn't normally use this method.
The merger deletes fallback branches after merging them, to contain the
ugliness, and so unused doesn't look at data from these branches.
(The fallback git-annex branch is left behind for now.)
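For illustration, the ref layout amounts to something like this (the exact
names are assumptions, not necessarily what's used):

    -- Normal sync refs live outside refs/heads/, invisible to branch
    -- listings; only show-ref will turn them up.
    syncRef :: String -> String
    syncRef branch = "refs/synced/" ++ branch

    -- Fallback refs embed the pusher's UUID, hence the ugliness.
    fallbackSyncRef :: String -> String -> String
    fallbackSyncRef uuid branch = "refs/synced/" ++ uuid ++ "/" ++ branch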
Now other repositories can configure special remotes, and when their
configuration has propagated out, they'll appear in the webapp's list of
repositories, with a link to enable them.
Added support for enabling rsync special remotes, and directory special
remotes that are on removable drives. However, encrypted directory special
remotes are not yet supported; the removable drive configurator doesn't
support them anyway.
Turns out sClose was working fine, but it was not being run on every
opened socket. The upstream bug is that multicastSender can crash
on an invalid (or ipv6) address and when this happens it's already
opened a socket, which just goes missing with no way to close it.
A simple fix to the library can avoid this, as I describe here:
https://github.com/audreyt/network-multicast/issues/2
In the meantime, just skipping ipv6 addresses will fix the fd leak.
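The workaround amounts to filtering addresses before the library ever sees
them; a sketch:

    import Network.Socket (SockAddr(..))

    -- Only hand IPv4 addresses to multicastSender; on an IPv6 (or
    -- otherwise unusable) address it can open a socket and then throw,
    -- leaking the fd.
    isIPv4 :: SockAddr -> Bool
    isIPv4 (SockAddrInet {}) = True
    isIPv4 _                 = False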
Finally.
Last bug fixes here: send the PairResp with the same UUID as in the
PairReq, and fix an off-by-one in the code that filters out our own
pairing messages.
Also reworked the pairing alerts, which are still slightly buggy.
Pair requests with the same UUID are part of the same pairing session,
which allows us to detect attempts to brute force the shared secret,
as that will result in pair requests with the same UUID that are
not verified with the right secret.
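The check can be sketched abstractly (names invented; verification is
whatever HMAC scheme the shared secret feeds into):

    data Verdict = Ours | OtherSession | BadSecret
        deriving Show

    -- A message carrying our session UUID but failing verification
    -- suggests someone guessing the shared secret.
    classify :: Eq uuid => uuid -> (msg -> uuid) -> (msg -> Bool) -> msg -> Verdict
    classify sessionUUID uuidOf verifies m
        | uuidOf m /= sessionUUID = OtherSession
        | verifies m              = Ours
        | otherwise               = BadSecret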
They work fine. But I had to go to a lot of trouble to get Yesod to render
routes in a pure function. It may instead make more sense to have each
alert have an associated IO action, and a single route that runs the IO
action of a given alert id. I just wish I'd realized that before the past
several hours of struggling with something Yesod really doesn't want to
allow.
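The alternative design is simple to sketch (types invented, not the
webapp's actual ones): each alert carries its action, and one route runs
it by id.

    import qualified Data.Map as M

    type AlertId = Int

    data Alert = Alert
        { alertMessage :: String
        , alertAction  :: IO () -- run when the alert's button is clicked
        }

    -- A single route handler can look the alert up and run its action,
    -- instead of rendering a distinct route per alert.
    runAlertAction :: M.Map AlertId Alert -> AlertId -> IO ()
    runAlertAction alerts i = maybe (return ()) alertAction (M.lookup i alerts)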
The remote computer may not support mDNS. Instead, pass over the hostname
from uname, and the IP address, and leave the best hostname calculation
to the remote side.
Pair requests are sent on all network interfaces, and contain the best
available hostname to use to contact the host on that interface.
Added a pairing in progress page.
Revert "reduce some boilerplate using ghc extensions", because it caused
overlapping instances for Text.
Actually three forms in one: this handles the initial passphrase entry,
and the confirmation, and also varies the wording depending on whether the
same user or a different user is confirming.
Roughed out a data type that models the whole pairing conversation,
and can be serialized to implement it. And a state machine to run
that conversation. Not yet hooked up to any transport such as multicast
UDP.
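Roughly this shape (constructor names are guesses at what it will end up
as, not the final types):

    type UUID = String

    data PairStage = PairReq | PairAck | PairDone
        deriving (Eq, Show, Read)

    data PairMsg = PairMsg PairStage UUID String -- stage, session uuid, hostname
        deriving (Show, Read) -- derived Show/Read doubles as a crude wire format

    -- The state machine: given a verified incoming message, what to send next.
    reply :: PairMsg -> Maybe PairStage
    reply (PairMsg PairReq  _ _) = Just PairAck
    reply (PairMsg PairAck  _ _) = Just PairDone
    reply (PairMsg PairDone _ _) = Nothing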
Avoid trying to git push/pull to special remotes, but still do transfer
scans of them, after git pull from any other remotes, so we know about
any values that have been placed on them.
I think this makes sense.. Unless the assistant is running on the server,
the repo won't be updated, so it might as well be bare.
Non-bare repos will be handled by the pairing configurator, later.
The code to maintain that TChan in parallel with the list was buggy; the
two were not always the same. And all that TChan was needed for was
blocking on the next transfer, which can be accomplished just as well by
checking the size and retrying, thanks to STM.
Also, this is faster, and uses less memory. Total win.
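The replacement is just STM's retry on the list itself; a minimal sketch:

    import Control.Concurrent.STM

    -- Block until the queue is non-empty, then pop the head. retry
    -- suspends the transaction and reruns it when the TVar changes,
    -- which is exactly what the TChan was being used for.
    getNextTransfer :: TVar [t] -> STM t
    getNextTransfer qv = do
        q <- readTVar qv
        case q of
            []     -> retry
            (t:ts) -> writeTVar qv ts >> return t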
I had an intuition that throwTo might be blocking because an exception was
caught and the exception handler was running. This seems to be the case,
and is avoided by using try. However, I can't really find anywhere in
throwTo's documentation that justifies this behavior.
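The workaround, sketched: run the action under try, so control returns
with a Left rather than unwinding into a catch handler. (The docs do note
an implicit mask around exception handlers, which would at least be
consistent with throwTo blocking until a handler finishes.)

    import Control.Exception

    -- try returns Left immediately, so the thread is not left sitting in
    -- a masked exception handler that a concurrent throwTo would wait on.
    safely :: IO () -> IO ()
    safely a = do
        r <- try a :: IO (Either SomeException ())
        case r of
            Left e  -> putStrLn ("caught: " ++ show e)
            Right _ -> return ()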
When multiple downloads of a key are queued, it starts the first, but leaves the
other downloads in the queue. This ensures that we don't lose a queued
download if the one that got started fails.