Now when a download is queued and there's no known remote to get it from,
it's added to a deferred download list, which will be retried later.
The Merger thread tries to queue any deferred downloads when it receives
a push to the git-annex branch.
Note that the Merger thread now also forces an update of the git-annex
branch. The assistant was not updating this branch before, and it saw a
(mostly) correct view of state, but now that incoming pushes go to
synced/git-annex, it needs to be merged in.
Don't expose these as branches in refs/heads/. Instead hide them away in
refs/synced/ where only show-ref will find them.
Make unused only look at branches and tags, not these other things,
so it won't care if some stale sync ref used to use a file.
This means they don't need to be deleted, which could have
led to an incoming sync being missed.
The fallback branches that are pushed to contain the uuid of the pusher,
which is ugly. That's why syncing doesn't normally use this method.
The merger deletes fallback branches after merging them, to contain the
ugliness, and so unused doesn't look at data from these branches.
(The fallback git-annex branch is left behind for now.)
Now other repositories can configure special remotes, and when their
configuration has propagated out, they'll appear in the webapp's list of
repositories, with a link to enable them.
Added support for enabling rsync special remotes, and directory special
remotes that are on removable drives. However, encrypted directory special
remotes are not supported yet; the removable drive configurator doesn't
support them either.
Turns out sClose was working fine.. but it was not being run on every
opened socket. The upstream bug is that multicastSender can crash
on an invalid (or ipv6) address and when this happens it's already
opened a socket, which just goes missing with no way to close it.
A simple fix to the library can avoid this, as I describe here:
https://github.com/audreyt/network-multicast/issues/2
In the meantime, just skipping ipv6 addresses will fix the fd leak.
Finally.
Last bug fixes here: Send PairResp with the same UUID as in the PairReq.
Fix off-by-one in code that filters out our own pairing messages.
Also reworked the pairing alerts, which are still slightly buggy.
Pair requests with the same UUID are part of the same pairing session,
which allows us to detect attempts to brute force the shared secret,
as that will result in pair requests with the same UUID that are
not verified with the right secret.
They work fine. But I had to go to a lot of trouble to get Yesod to render
routes in a pure function. It may instead make more sense to have each
alert have an associated IO action, and a single route that runs the IO
action of a given alert id. I just wish I'd realized that before the past
several hours of struggling with something Yesod really doesn't want to
allow.
The remote computer may not support mDNS. Instead, pass over the uname -a
hostname, and the IP address, and leave best hostname calculation to the
remote side.
Pair requests are sent on all network interfaces, and contain the best
available hostname to use to contact the host on that interface.
Added a pairing in progress page.
Revert "reduce some boilerplate using ghc extensions", because it caused
overlapping instances for Text.
Actually 3 forms in one: this handles the initial passphrase entry, the
confirmation, and also varies the wording depending on whether the same
user or a different user is confirming.
Roughed out a data type that models the whole pairing conversation,
and can be serialized to implement it. And a state machine to run
that conversation. Not yet hooked up to any transport such as multicast
UDP.
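To make that concrete, here's a rough sketch of the shape of it, with
illustrative names and fields rather than the real ones: the conversation is
a small serializable message type, and derived Read/Show keeps the
serialization trivial until a transport is hooked up.

    data PairStage = PairReq | PairAck | PairDone
        deriving (Eq, Read, Show)

    data PairMsg = PairMsg
        { pairStage :: PairStage
        , pairUUID  :: String   -- identifies the pairing session
        , pairHMAC  :: String   -- verifies the message with the shared secret
        } deriving (Read, Show)

    -- Serialization for whatever transport gets hooked up later.
    serializeMsg :: PairMsg -> String
    serializeMsg = show

    deserializeMsg :: String -> Maybe PairMsg
    deserializeMsg s = case reads s of
        [(m, "")] -> Just m
        _ -> Nothing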
Avoid trying to git push/pull to special remotes, but still do transfer
scans of them, after git pull from any other remotes, so we know about
any values that have been placed on them.
I think this makes sense.. Unless the assistant is running on the server,
the repo won't be updated, so it might as well be bare.
Non-bare repos will be handled by the pairing configurator, later.
The code to maintain that TChan in parallel with the list was buggy;
the two were not always the same. And all that TChan was needed for was
blocking on the next transfer, which can be accomplished just as well by
checking the size and retrying, thanks to STM.
Also, this is faster, and uses less memory. Total win.
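A minimal sketch of the blocking read, with stand-in names (the real queue
type differs): the transaction retries while the list is empty, so STM
provides the blocking that the TChan used to.

    import Control.Concurrent.STM

    getNextTransfer :: TVar [t] -> STM t
    getNextTransfer qv = do
        q <- readTVar qv
        case q of
            [] -> retry  -- blocks until another thread changes the TVar
            (t:ts) -> do
                writeTVar qv ts
                return t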
I had an intuition that throwTo might be blocking because an exception was
caught and the exception handler was running. This seems to be the case,
and is avoided by using try. However, I can't really find anywhere in
throwTo's documentation that justifies this behavior.
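My reading of it, sketched below (an interpretation, not something the
documentation promises): a handler installed with catch runs with async
exceptions masked, so a throwTo aimed at that thread waits for the handler
to finish, while with try the only handler is the trivial one that builds
the Either, so the window is tiny.

    import Control.Exception

    -- Run an action with try, so no long-running handler is ever executing
    -- in a masked state when another thread calls throwTo on this thread.
    runGuarded :: IO () -> IO ()
    runGuarded a = do
        r <- try a :: IO (Either SomeException ())
        either (\e -> putStrLn ("failed: " ++ show e)) return r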
When multiple downloads of a key are queued, it starts the first, but leaves the
other downloads in the queue. This ensures that we don't lose a queued
download if the one that got started failed.
Run code that pops off the next queued transfer and adds it to the active
transfer map within an allocated transfer slot, rather than before
allocating a slot. Fixes the transfers display, which had been displaying
the next transfer as a running transfer, while the previous transfer was
still running.
Currently only the web special remote is readonly, but it'd be possible to
also have readonly drives, or other remotes. These are handled in the
assistant by only downloading from them, and never trying to upload to
them.
The expensive transfer scan now scans a whole set of remotes in one pass.
So at startup, or when network comes up, it will run only once.
Note that this can result in transfers from/to higher cost remotes being
queued before other transfers of other content from/to lower cost remotes.
Before, low cost remotes were scanned first and all their transfers came
first. When multiple transfers are queued for a key, the lower cost ones
are still queued first. However, this could result in transfers from slow
remotes running for a long time while transfers of other data from faster
remotes wait.
I expect to make the transfer queue smarter about ordering
and/or make it allow multiple transfers at a time, which should eliminate
this annoyance. (Also, it was already possible to get into that situation,
for example if the network was up, lots of transfers from slow remotes
might be queued, and then a disk is mounted and its faster transfers have
to wait.)
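A sketch of just the per-key ordering, with stand-in types (Remote here is
not the real data type): for a single key, the candidate remotes are queued
cheapest first, even though the scan itself no longer proceeds remote by
remote.

    import Data.List (sortBy)
    import Data.Ord (comparing)

    data Remote = Remote { remoteName :: String, remoteCost :: Int }

    -- Queue transfers for one key, cheapest remote first.
    queueKeyTransfers :: (Remote -> IO ()) -> [Remote] -> IO ()
    queueKeyTransfers queue rs =
        mapM_ queue (sortBy (comparing remoteCost) rs)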
Also note that this means I don't need to improve the code in
Assistant.Sync that currently checks if any of the reconnected remotes
have diverged, and if so, queues scans of all of them. That had been very
inefficient, but now doesn't matter.
Used by the assistant, rather than copy, this is faster because it avoids
using git ls-files, avoids checking the location log redundantly, and
runs in oneshot mode, avoiding making a commit to the git-annex branch
for every file transferred.
There are multiple reasons to do this:
* The local network may be up solid, but a route to a networked remote
is having trouble. Any transfers to it that fail should be retried.
* Someone might have wicd running, but like to bring up new networks
by hand too. This way, it'll eventually notice them.
The problem with using it here is that, if a removable drive is scanned
and gets disconnected during the scan, testing for all the files will
indicate it doesn't have them, and the scan is logged as completed
successfully, without necessary transfers being queued.
Found a very cheap way to determine when a disconnected remote has
diverged, and has new content that needs to be transferred: Piggyback on
the git-annex branch update, which already checks for divergence.
However, this does not check if new content has appeared locally while
disconnected, that should be transferred to the remote.
Also, this does not handle cases where the two git repos are in sync,
but their content syncing has not caught up yet.
This code could have its efficiency improved:
* When multiple remotes are synced, if any one has diverged, they're
all queued for transfer scans.
* The transfer scanner could be told whether the remote has new content,
the local repo has new content, or both, and could optimise its scan
accordingly.
This deals with interruptions in network connectivity, by listening
for a new network interface coming up (using dbus to see when
network-manager or wicd do it), and forcing a rescan of the remotes.
A paused transfer's thread keeps running, keeping the slot in use.
This is intentional; pausing a transfer should not let other
queued transfers run in its place.
This seems to work pretty well.
Handled the process groups like this:
- git-annex processes started by the assistant for transfers are run in their
own process groups.
- otherwise, rely on the shell to allocate a process group for git-annex
There is potentially a problem if some other program runs git-annex
directly (not using sh -c). The program and git-annex would then be in
the same process group. If that git-annex starts a transfer and it's
canceled, the program would also get killed. May or may not be a desired
result.
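For the first case, something along these lines should do it (a sketch, not
necessarily the code the assistant ends up using), relying on
System.Process's create_group:

    import System.Process

    -- Start a transfer command in its own process group, so cancelling the
    -- transfer can signal the whole group without touching the parent's.
    startTransfer :: FilePath -> [String] -> IO ProcessHandle
    startTransfer cmd args = do
        (_, _, _, h) <- createProcess (proc cmd args) { create_group = True }
        return h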
Also, the new updateTransferInfo probably closes a race where it was
possible for the thread id to not be recorded in the transfer info, if
the transfer info file from the transfer process is read first.
This doesn't quite work, because canceling a transfer sends a signal
to git-annex, but not to rsync (etc).
Looked at making git-annex run in its own process group, which could then
be killed, and would kill child processes. But, rsync checks if its
process group is the foreground process group and doesn't show progress if
not, and when git has run git-annex, if git-annex makes a new process
group, that is not the case. Also, if git has run git-annex, ctrl-c
wouldn't be propagated to it if it made a new process group.
So this seems like a blind alley, but recording it here just in case.
Should work (untested) for transfers being run by other processes.
Not yet by transfers being run by the assistant. killThread does not
kill processes forked off by a thread. To fix this, will probably
need to make `git annex getkey` and `git annex sendkey` commands that
operate on keys, and write their own transfer info. Then the assistant
can run them, and kill them, as needed.
This commit includes a paydown on technical debt incurred two years ago,
when I didn't know that it was bad to make custom Read and Show instances
for types. As the routes need Read and Show for Transfer, which includes a
Key, and writing my own Read instance for Key was not practical,
I had to finally clean that up.
So the compact Key read and show functions are now file2key and key2file,
and Read and Show are now derived instances.
Changed all code that used the old instances, compiler checked.
(There were a few places, particularly in Command.Unused, and the test
suite, where the Show instance continues to be used for legitimate
comparisons; ie show key_x == show key_y (though really in a bloom filter))
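A much simplified sketch of the shape of the cleanup; the real Key has more
fields and a richer on-disk format, but the point is that serialization
lives in named functions while Read and Show become plain derived instances
that the routes can use.

    data Key = Key
        { keyName        :: String
        , keyBackendName :: String
        } deriving (Eq, Ord, Read, Show)  -- derived, no custom instances

    key2file :: Key -> FilePath
    key2file k = keyBackendName k ++ "-" ++ keyName k

    file2key :: FilePath -> Maybe Key
    file2key s = case break (== '-') s of
        (b, '-':n) | not (null b) && not (null n) -> Just (Key n b)
        _ -> Nothing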
The TMVar is supposed to be left empty once the map is empty, but the code
neglected to do that, so the next takeTMVar got an empty map, a case that
is not handled since it was supposed to never happen..
Also, avoid any possibility of this crash. If an empty map somehow creeps
in, just retry.
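Roughly like this (stand-in names; the real map and value types differ):
removing the last entry leaves the TMVar empty, and an unexpected empty map
just retries instead of crashing.

    import Control.Concurrent.STM
    import Control.Monad (when, unless)
    import qualified Data.Map as M

    getNext :: TMVar (M.Map k v) -> STM (k, v)
    getNext var = do
        m <- takeTMVar var
        when (M.null m) retry           -- should not happen; just retry
        let ((k, v), m') = M.deleteFindMin m
        unless (M.null m') $            -- only refill if entries remain
            putTMVar var m'
        return (k, v)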
This should work on linux (xdg-open) and OSX (open). If the program
is not in $PATH, it falls back to opening a browser window/tab with file:///
The only tricky bit is the javascript code that handles clicking on the
link. This is to avoid unnecessary page refreshes. Until I added the
return false at the end, the <a>'s normal click event also fired, so two
file browsers opened. I have not checked portability extensively.
30 characters would mostly work, but 20 is safer due to some wider letters
like 'w'. Of course this is very heuristic, based on font size anyway.
(Bootstrap does a surprisingly bad job at dealing with overlong words
in the sidebar.)
Now an alert tracks files that have recently been added. As a large file
is added, it will have its own alert, which then combines with the tracker
when done.
Also used for combining sanity checker alerts, as it could possibly want to
display a lot.
git annex assistant --autostart will start separate daemons in each
listed autostart repo.
Running the webapp outside any git-annex repo will open it on the
first listed autostart repo.
This allows me to not build-depend on blaze-markup, which was causing
me some trouble when trying to build with cabal on debian. Seems debian
ships Text.Blaze.Renderer.String in two packages.
Unifying poll results, it's "annex" in lowercase. :)
When cwd is $HOME, use ~/Desktop/annex, unless there's no Desktop directory;
then use ~/annex.
If cwd is not $HOME, use cwd.
Now the javascript does an ajax call at the start to request the url
to use to poll, and the notification id is generated then, once we know
javascript is working.
Depending on how the webapp was started up and whether the user clicked on
any links in it, window.close() may be disallowed by browser security
policy. Also if that fails, display a modal dialog that nicely blackens out
the webapp.
TODO: avoid Escape closing it. Bootstrap's docs are unclear about how to do
that.
Putting the transfer on the currentTransfers atomically introduced a bug:
It checks to see if the transfer is in progress, and cancels it.
Fixed by moving that check inside the STM transaction.
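In sketch form (types are stand-ins): the lookup and the removal happen in
one transaction, and the actual cancellation runs afterward only if the
transfer really was in the map.

    import Control.Concurrent.STM
    import qualified Data.Map as M

    -- Atomically remove a transfer from the current map, returning its info
    -- (if any) so the caller can cancel it outside the transaction.
    removeTransfer :: Ord t => TVar (M.Map t i) -> t -> IO (Maybe i)
    removeTransfer currentv t = atomically $ do
        m <- readTVar currentv
        writeTVar currentv (M.delete t m)
        return (M.lookup t m)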
This may be customised differently than the main page later on, but
for now the important thing is that this constantly refreshed page does not
allocate a new NotificationHandle each time it's loaded.
WebApp now shows changes with no delay. Comparing a running git-annex get
and the webapp side-by-side, they both show each new transfer at the same
time.
The fun part was making it move things from TransferQueue to currentTransfers
entirely atomically. Which will avoid inconsistent display if the WebApp
renders the current status at just the wrong time. STM to the rescue!
I've convinced myself that nothing in DaemonStatus can deadlock,
as it always keeps the TMVar full. That was the only reason it was in the
Annex monad.
This avoids forking another process, avoids polling, fixes a race,
and avoids a rare forkProcess thread hang that I saw one time
when starting the webapp.
Had to switch to toWaiAppPlain to avoid a seeming bug in toWaiApp;
chromium only received a partial copy of jquery. Always the same length
each time, which makes me think it's a bug in the compression, although
a bug in the autohead middleware is also a possibility.
Anyway, there's little need for compression for a local webapp. Not wasting
time compressing things is probably a net gain.
Similarly, I've not worried about minifying this yet. Although that would
avoid bloating the git-annex binary quite so much.
Very happy to have a reusable autoUpdate widget that can make any Yesod
widget automatically refresh!
Also added support for non-javascript browsers, falling back to meta
refresh.
Also, the home page is now rendered with the webapp status on it, before
any refreshing is done.
The webapp is now a constantly updating clock! I accomplished this amazing
feat using "long polling", with some jquery and a little custom javascript.
There are more modern techniques, but this one works everywhere.
Broke hamlet out into standalone files.
I don't like the favicon display; it should be served from /favicon.ico,
but I could only get the static site to serve /static/favicon.ico, so
I had to use a <link rel=icon> to pull it in. I looked at
Yesod.Default.Handlers.getFaviconR, but it doesn't seem to embed
the favicon into the binary?
Best dbus events I could find were setupDone from org.kde.Solid.Device.
There may be some spurious events, but that's ok, the code will only
check to see if new mounts are available.
It does not try to auto-start this service if it's not running.
This should fix OSX/BSD issues with not noticing transfer information
files with kqueue. Now that threads are used, the thread can manage the
transfer slot allocation and deallocation by itself; much cleaner.
Check first if a transfer needs to be done, using the location log only
(for speed), and avoid occupying a slot if not. Always write a transfer
info file, and keep it open throughout the transfer process.
Now transfers to remotes seem reliable.
There's still a bug; if the child updates its transfer info file,
then the data from it will supersede the TransferInfo, losing the
info that we should wait on this child.
Added knownRemotes to DaemonStatus. This list is not entirely trivial to
calculate, and having it here should make it easier to add/remove remotes
on the fly later on. It did require plumbing the daemonstatus through to
some more threads.
The reason the DirWatcher had to wait for program termination was because
it used withINotify, so when it finished, its watcher threads were killed.
But since I have two DirWatcher threads now, that was not good, and could
perhaps explain the MVar problem I saw yesterday. In any case, fixed this
part of the code by making the DirWatcher return a handle that can be used
to stop it, and now the main Assistant thread is the only one calling
waitForTermination.
Avoid MVar deadlock issue, which I don't understand.
Have not taken the time to debug it fully, because it turns out I don't
need to resolve merge conflicts when a new branch ref is written... I
think.
Ensure the git-annex branch is merged when doing a manual pull.
Otherwise it can get out of sync, since git-annex normally only merges it
once per run.
SampleMVar won't work; between getting the current value and changing
it, another thread could make a change, which would get lost.
TMVar works well; this update situation is handled by atomic transactions.
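The winning pattern, roughly (the concrete status type is omitted here): a
read-modify-write of the status is a single atomic transaction, so nothing
can slip in between the take and the put.

    import Control.Concurrent.STM

    modifyStatus :: TMVar s -> (s -> s) -> IO ()
    modifyStatus var f = atomically $ do
        s <- takeTMVar var
        putTMVar var (f s)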
Note that, since this always pushes branch synced/master to the remote, it
assumes that master has already gotten all the commits that are on the
remote merged in. Otherwise, fast-forward prevention may prevent the push.
That's probably ok, because the next stage is to automatically detect
incoming pushes and merge.
It's possible for there to be multiple queued changes all adding the same
file, and for those changes to be reordered. Maybe. This check will guard
against that ending up adding the wrong version of the file last.
Rethought how to keep track of pending adds that need to be retried later.
The commit thread already wakes up every second when there are changes,
so let's keep pending adds queued as changes until they're safe to add.
Also, the committer is now smarter about avoiding empty commits when
all the adds are currently unsafe, or in the rare case that an add event
for a symlink is not received in time. It may avoid them entirely.
This seems to work as before for inotify, and is untested for kqueue.
(Actually commit batching seems to be improved for inotify, although I'm
not sure why. I'm seeing only two commits made during large batch
operations, and the first of those is the non-batch mode commit.)
Kqueue needs to remember which files failed to be added due to being open,
and retry them. This commit gets the data in place for such a retry thread.
Broke KeySource out into its own file, and added Eq and Ord instances
so it can be stored in a Set.
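Roughly the shape of it (field names assumed, not copied from the file):

    import qualified Data.Set as S

    data KeySource = KeySource
        { keyFilename     :: FilePath  -- the file in the working tree
        , contentLocation :: FilePath  -- where its content currently lives
        } deriving (Eq, Ord, Show)

    -- Pending adds that need to be retried later can now live in a Set.
    type PendingAdds = S.Set KeySource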
There is indeed a race waiting for LinkChanges:
1. file annexed, link made
2. link deleted
3. inotify event for link creation runs, but as link is gone, handler is not run
Defer adding files to the annex until commit time, when during a batch
operation, a bundle of files will be available. This will allow for
checking them all with a single lsof call.
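Something like this should do for the batch check (a sketch, not the
eventual implementation): lsof -F n prints each open file on a line
prefixed with "n", so one invocation covers the whole bundle.

    import System.Process (readProcessWithExitCode)

    -- Which of these files does lsof think are still open?
    stillOpen :: [FilePath] -> IO [FilePath]
    stillOpen [] = return []
    stillOpen fs = do
        (_, out, _) <- readProcessWithExitCode "lsof" ("-Fn" : fs) ""
        return [ drop 1 l | l <- lines out, take 1 l == "n" ]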
The tricky part is that adding the file causes a symlink change inotify
event.
So I made it wait for an appropriate number of symlink changes to be
received before continuing with the commit. This avoids any delay
in the commit process. It is possible that some unrelated symlink change is
made; if that happens it'll commit it and delay committing the newly added
symlink for 1 second. This seems ok. I do rely on the expected symlink
change event always being received, but only when the add succeeds.
Another way to do it might be to directly stage the symlink, and then
ignore the redundant symlink change event. That would involve some
redundant work, and perhaps an empty commit, but if this code turns
out to have some bug, that'd be the best way to avoid it.
FWIW, this change seems to, as a bonus, have produced better grouping
of batch changes into single commits. Before, a large batch change would
result in a series of commits, with the first containing only one file,
and each of the rest bundling a number of files. Now, the added wait for
the symlink changes to arrive gives time for additional add changes to
be processed, all within the same commit.
A few places catch IO errors after calling runThreadState,
but since the MVar was not restored, it'd later deadlock trying to read
from it.
I'd like to catch all exceptions here, but I could not get the types
to unify.
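One way to make the restoration automatic (a sketch of the idea, with a
generic runner standing in for the real Annex machinery, not the actual
fix): modifyMVar puts the original state back if the action throws, so
nothing downstream can deadlock on an empty MVar.

    import Control.Concurrent.MVar

    -- Run an action against state kept in an MVar. If the action throws,
    -- modifyMVar restores the old state instead of leaving the MVar empty.
    runWithState :: MVar st -> (st -> IO (a, st)) -> IO a
    runWithState mvar run = modifyMVar mvar $ \st -> do
        (r, st') <- run st
        return (st', r)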