This fixes the issue mentioned in the last commit.
Turns out just collecting the UUIDs of clients behind an XMPP remote is
insufficient (although I should probably still do it for other reasons),
because a single remote repo might be connected via both XMPP and local
pairing. So a way is needed to know when a push was received from any
client using a given XMPP remote over XMPP, as opposed to via ssh.
Make manualPull send push requests over XMPP.
When reconnecting with remotes, those that are XMPP remotes cannot
immediately be pulled from and scanned, so instead maintain a set of
(probably) desynced remotes, and put XMPP remotes on it. (This set could be
used in other ways later, if we can detect we're out of sync with other
types of remotes.)
The merger handles detecting when an XMPP push is received from a desynced
remote, and triggers a scan then, if they have in fact diverged.
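A minimal sketch of the desynced set idea (names and types here are
simplified stand-ins, not the actual assistant code):

  import Control.Concurrent.STM
  import qualified Data.Set as S

  type UUID = String -- simplified stand-in for git-annex's UUID type

  -- Remotes suspected of being out of sync; scanned when next heard from.
  type DesyncedRemotes = TVar (S.Set UUID)

  markDesynced :: DesyncedRemotes -> UUID -> IO ()
  markDesynced v u = atomically $ modifyTVar' v (S.insert u)

  -- Called by the merger when a push arrives; returns True when the
  -- remote was on the desynced set, meaning a scan should be triggered.
  popDesynced :: DesyncedRemotes -> UUID -> IO Bool
  popDesynced v u = atomically $ do
      s <- readTVar v
      if u `S.member` s
          then writeTVar v (S.delete u s) >> return True
          else return False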
This has one known bug: A single XMPP remote can have multiple clients
behind it. When this happens, only the UUID of one client is recorded
as the UUID of the XMPP remote. Pushes from the other XMPP clients will not
trigger a scan. If the client whose UUID is expected responds to the push
request, it'll work, but when that client is offline, we're SOL.
The watcher wants to rewrite the symlink to fix it. But in direct mode, the symlink
could be replaced at any time with file content that has finished being
transferred by some other process. So, just don't touch it.
FWIW, I audited the rest of the assistant for places where it removes
files, and the rest is ok. I have not audited the rest of git-annex.
assistant: Fix bug in direct mode that could occur when a symlink is moved
out of an archive directory, and resulted in the file not being set to
direct mode when it was transferred.
The bug was that the direct mode mapping was not up-to-date when the
transferrer finished. So, finding no direct mode place to store the object,
it was put into .git/annex in indirect mode.
To fix this, just make the watcher update the direct mode mapping to
include the new file before it starts the transfer. (Seems we don't need to
update it to remove the old file if the link was moved, because the direct
mode code will notice it's not present and the mapping gets updated for its
removal later.)
The reason this was a race, and was probably not seen often is because
the committer came along and updated the direct mode mapping as part of
adding the moved symlink. But when the file was sufficiently small or
the remote sufficiently fast, this could happen after the transfer
finished.
Looking through the git sources (documentation is unclear),
it seems commit doesn't ever trigger git-gc, mostly fetching and merging
seems to. I cannot easily override the setting in all those places, so
instead set gc.auto in git config when initializing a repository with
the assistant.
This does mean that the user cannot set gc.auto=0 and completely avoid
repacks, as the assistant does it daily. But, it only does it after there
are 100x the default number of loose objects, so this is probably not going
to be too annoying.
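For reference, a sketch of the config change made at init (the real code
goes through git-annex's own config helpers; 670000 is just 100x git's
default gc.auto of 6700):

  import System.Process (callProcess)

  setGcAuto :: IO ()
  setGcAuto = callProcess "git" ["config", "gc.auto", "670000"]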
Pass subcommand as a regular param, which allows passing git parameters
like -c before it. This was already done in the piping set of functions,
but not the command-running set.
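Roughly the shape of the change, sketched with a simplified parameter type
(runGit and the example values are stand-ins, not the real API):

  data CommandParam = Param String -- simplified stand-in

  -- Before, the subcommand was a separate argument, so nothing could
  -- be passed to git ahead of it. Now it's just another parameter:
  runGit :: [CommandParam] -> IO ()
  runGit params = putStrLn $ unwords [ p | Param p <- params ]
      -- (placeholder body; the real code execs git)

  example :: IO ()
  example = runGit
      [ Param "-c", Param "core.quotepath=false"
      , Param "ls-files", Param "--modified"
      ]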
A transfer is queued, but if the file has already been transferred to the
remote before, the transfer is skipped. In this case, it needs to perform
any actions it would normally take after finishing the transfer, like
dropping the local object.
This cannot completely guard against a runaway log event, and only runs
every hour anyway, but it should avoid most problems with very
long-running, active assistants using up too much space.
Refactored annex link code into nice clean new library.
Audited and dealt with calls to createSymbolicLink.
Remaining calls are all safe, because:
Annex/Link.hs: ( liftIO $ createSymbolicLink linktarget file
only when core.symlinks=true
Assistant/WebApp/Configurators/Local.hs: createSymbolicLink link link
test if symlinks can be made
Command/Fix.hs: liftIO $ createSymbolicLink link file
command only works in indirect mode
Command/FromKey.hs: liftIO $ createSymbolicLink link file
command only works in indirect mode
Command/Indirect.hs: liftIO $ createSymbolicLink l f
refuses to run if core.symlinks=false
Init.hs: createSymbolicLink f f2
test if symlinks can be made
Remote/Directory.hs: go [file] = catchBoolIO $ createSymbolicLink file f >> return True
fast key linking; catches failure to make symlink and falls back to copy
Remote/Git.hs: liftIO $ catchBoolIO $ createSymbolicLink loc file >> return True
ditto
Upgrade/V1.hs: liftIO $ createSymbolicLink link f
v1 repos could not be on a filesystem w/o symlinks
Audited and dealt with calls to readSymbolicLink.
Remaining calls are all safe, because:
Annex/Link.hs: ( liftIO $ catchMaybeIO $ readSymbolicLink file
only when core.symlinks=true
Assistant/Threads/Watcher.hs: ifM ((==) (Just link) <$> liftIO (catchMaybeIO $ readSymbolicLink file))
code that fixes real symlinks when inotify sees them
It's ok to not fix pseudo-symlinks.
Assistant/Threads/Watcher.hs: mlink <- liftIO (catchMaybeIO $ readSymbolicLink file)
ditto
Command/Fix.hs: stopUnless ((/=) (Just link) <$> liftIO (catchMaybeIO $ readSymbolicLink file)) $ do
command only works in indirect mode
Upgrade/V1.hs: getsymlink = takeFileName <$> readSymbolicLink file
v1 repos could not be on a filesystem w/o symlinks
Audited and dealt with calls to isSymbolicLink.
(Typically used with getSymbolicLinkStatus, but that is just used because
getFileStatus is not as robust; it also works on pseudolinks.)
Remaining calls are all safe, because:
Assistant/Threads/SanityChecker.hs: | isSymbolicLink s -> addsymlink file ms
only handles staging of symlinks that were somehow not staged
(might need to be updated to support pseudolinks, but this is
only a belt-and-suspenders check anyway, and I've never seen the code run)
Command/Add.hs: if isSymbolicLink s || not (isRegularFile s)
avoids adding symlinks to the annex, so not relevant
Command/Indirect.hs: | isSymbolicLink s -> void $ flip whenAnnexed f $
only allowed on systems that support symlinks
Command/Indirect.hs: whenM (liftIO $ not . isSymbolicLink <$> getSymbolicLinkStatus f) $ do
ditto
Seek.hs:notSymlink f = liftIO $ not . isSymbolicLink <$> getSymbolicLinkStatus f
used to find unlocked files, only relevant in indirect mode
Utility/FSEvents.hs: | Files.isSymbolicLink s = runhook addSymlinkHook $ Just s
Utility/FSEvents.hs: | Files.isSymbolicLink s ->
Utility/INotify.hs: | Files.isSymbolicLink s ->
Utility/INotify.hs: checkfiletype Files.isSymbolicLink addSymlinkHook f
Utility/Kqueue.hs: | Files.isSymbolicLink s = callhook addSymlinkHook (Just s) change
all above are lower-level, not relevant
Audited and dealt with calls to isSymLink.
Remaining calls are all safe, because:
Annex/Direct.hs: | isSymLink (getmode item) =
This is looking at git diff-tree objects, not files on disk
Command/Unused.hs: | isSymLink (LsTree.mode l) = do
This is looking at git ls-tree, not file on disk
Utility/FileMode.hs:isSymLink :: FileMode -> Bool
Utility/FileMode.hs:isSymLink = checkMode symbolicLinkMode
low-level
Done!!
git annex init probes for crippled filesystems, and sets direct mode, as
well as `annex.crippledfilesystem`.
Avoid manipulating permissions of files on crippled filesystems.
That would likely cause an exception to be thrown.
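A sketch of the guard (the wrapper name is hypothetical; the real code
threads the check through its file mode helpers):

  import Control.Monad (unless)
  import System.Posix.Files (setFileMode)
  import System.Posix.Types (FileMode)

  -- chmod would throw on a crippled filesystem, so make it a no-op there.
  safeSetFileMode :: Bool -> FilePath -> FileMode -> IO ()
  safeSetFileMode crippledfilesystem f mode =
      unless crippledfilesystem $
          setFileMode f mode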
Very basic support in Command.Add for crippled filesystems; avoids the
lockdown entirely, since doing it needs both permissions and hard links.
Will make this better soon.
Making the pre-commit hook look at git diff-index to find changed direct
mode files and update the mappings works pretty well.
One case where it does not work is when a file is git annex added, and then
git rmed, and then this is committed. That's a no-op commit, so the hook
probably doesn't even run, and it certainly never notices that the file
was deleted, so the mapping will still have the original filename in it.
For this and other reasons, it's important that the mappings still be
treated as possibly inconsistent.
Also, the assistant now allows the pre-commit hook to run when in direct
mode, so the mappings also get updated there.
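The approach, sketched (updateMapping is a stand-in for the real direct
mode mapping update):

  import System.Process (readProcess)

  -- In the pre-commit hook, ask git which files the commit being made
  -- changes, and update the direct mode mapping for each of them.
  preCommitDirect :: (FilePath -> IO ()) -> IO ()
  preCommitDirect updateMapping = do
      out <- readProcess "git"
          ["diff-index", "--cached", "--name-only", "HEAD"] ""
      mapM_ updateMapping (lines out)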
New setting, can be used to disable autocommit of changed files by the
assistant, while it still does data syncing and other tasks.
Also wired into webapp UI
It used to not log to daemon.log when a repository was first created, and
when starting the webapp. Now both do. Redirecting stdout and stderr to the
log is tricky when starting the webapp, because the web browser may want to
communicate with the user. (Either a console web browser, or web.browser = echo)
This is handled by restoring the original fds when running the browser.
Since some systems may have configuration problems or other issues that
prevent web browsers from connecting to the right localhost IP for the
webapp.
Tested on both ipv4 and ipv6 localhost. Url for the latter looks like:
http://[::1]:50676
The expensive scan uses lookupFile, but in direct mode, that doesn't work
for files that are present. So the scan was not finding things that are
present that need to be uploaded. (It did find things not present that
needed to be downloaded.)
Now lookupFile also works in direct mode. Note that it still prefers
symlinks on disk to info committed to git, in direct mode. This is
necessary to make things like Assistant.Threads.Watcher.onAddSymlink
work correctly, when given a new symlink not yet checked into git (or
replacing a file checked into git).
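Sketch of the lookup order (simplified: the real code parses a Key from
the link target rather than using takeFileName, and keyFromGit stands in
for the git cat-file query):

  import Control.Exception (SomeException, try)
  import System.FilePath (takeFileName)
  import System.Posix.Files (readSymbolicLink)

  type Key = String -- simplified stand-in for git-annex's Key

  -- A symlink on disk wins over what's committed to git, so new
  -- symlinks not yet checked in are still seen.
  lookupFile :: FilePath -> IO (Maybe Key)
  lookupFile file = do
      r <- try (readSymbolicLink file)
          :: IO (Either SomeException FilePath)
      case r of
          Right target -> return (Just (takeFileName target))
          Left _ -> keyFromGit file
    where
      keyFromGit _ = return Nothing -- placeholder for git cat-file query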
Would like to also have restart UI, but that's rather harder to do,
seems it'd need to start another copy of the webapp, and redirect the
browser to its new url, but running two assistants in the same repo at
the same time isn't good.
Now there's a Config type, that's extracted from the git config at startup.
Note that laziness means that individual config values are only looked up
and parsed on demand, and so we get implicit memoization for all of them.
So this is not only prettier and more type safe, it optimises several
places that didn't have explicit memoization before. As well as getting rid
of the ugly explicit memoization code.
Not yet done for annex.<remote>.* configuration settings.
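The pattern, sketched (field names are illustrative):

  import qualified Data.Map as M

  -- The git config is read once into a map at startup. Laziness means
  -- each field below is parsed only the first time it's demanded, and
  -- the parsed value is shared afterwards: implicit memoization.
  data Config = Config
      { annexNumCopies :: Int
      , annexDelayAdd :: Maybe Int
      }

  extractConfig :: M.Map String String -> Config
  extractConfig m = Config
      { annexNumCopies = maybe 1 read (M.lookup "annex.numcopies" m)
      , annexDelayAdd = fmap read (M.lookup "annex.delayadd" m)
      }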
When a file is changed in direct mode, the old content is probably lost
(at least from the local repo), and bookkeeping needs to be updated to
reflect this.
Also, synthetic add events are generated at assistant startup, so
make it detect when the file has not really changed, and avoid re-adding
it.
This does add the overhead of querying the running git cat-file for the
key that's recorded in git for the file, each time a file is added or
modified in direct mode.
git add --update cannot be used, because it'll stage typechanged direct
mode files. Instead, use ls-files to find deleted files, and stage them
ourselves.
It seems that no commit was made before when the scan staged deleted files.
(Probably masked since if files were added, a commit happened then..)
Now that I'm doing the staging, I was also able to fix that bug.
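The staging, sketched with plain process calls (the real code uses
git-annex's git wrappers and batches better):

  import System.Process (callProcess, readProcess)

  -- Find files deleted from the work tree and stage the deletions,
  -- without touching typechanged direct mode files.
  stageDeletedFiles :: IO ()
  stageDeletedFiles = do
      deleted <- lines <$> readProcess "git" ["ls-files", "--deleted"] ""
      mapM_ stage deleted
    where
      stage f = callProcess "git" ["update-index", "--remove", "--", f]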
This allows it to use Build.SysConfig to always install the programs
configure detected. Among other fixes, this ensures the right uuid
generator and checksum programs are installed.
I also cleaned up the handling of lsof's path; configure now checks for
it in PATH, but falls back to looking for it in sbin directories.
* get/copy --auto: Transfer data even if it would exceed numcopies,
when preferred content settings want it.
* drop --auto: Fix dropping content when there are no preferred content
settings.
Noticed that when pairing, sometimes both sides start to push, and the other
side sends a PushRequest, and the two deadlock, neither doing anything.
(Timeout eventually breaks this.) So, let both run at the same time.
This should help prevent git-annex clients receiving messages that
were intended for normal clients they're sharing the account with.
Changed XMPP protocol use to always send chat messages directed at the
specific client, as the negative priority blocks less directed messages.
It might even work, although nothing yet triggers XMPP pushes.
Also added a set of deferred push messages. Only one push can run at a
time, and unrelated push messages get deferred. The set will never grow
very large, because it only puts two types of messages in there, that
can only vary in the client doing the push.
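Sketch of the deferred set (the stage names and ClientID type are
illustrative, not the actual protocol types):

  import Control.Concurrent.STM
  import qualified Data.Set as S

  type ClientID = String -- stand-in for the client's JID

  data PushStage = CanPush | PushRequest
      deriving (Eq, Ord)

  type DeferredPushes = TVar (S.Set (ClientID, PushStage))

  -- Defer a push message while another push is running. The set stays
  -- small because entries only vary in the client.
  deferPush :: DeferredPushes -> ClientID -> PushStage -> IO ()
  deferPush v c s = atomically $ modifyTVar' v (S.insert (c, s))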
Maybe the spec allows it, but broadcasting self-directed presence info to
all buddies is just insane.
I had to bring back the IQ messages for self-pairing, while still using
directed presence for other pairing. Ugly.
Testing between Google Talk and prosody, the directed IQ messages
were not received. Google Talk probably only relays them between
clients using the same account.
I first tried even more directed presence, with each client JID being sent
a separate presence, but that didn't work on Google Talk; in particular,
it was ignored when one client sent it to another client using the same
account.
So, presence directed at the user@host of the client to pair with. Tested
working between Google Talk and prosody (in both directions), as well
as between two clients with the same account on Google Talk, and
two clients with the same account on prosody.
Only problem with this form of directed presence is that if I also use it
for git pushes, more clients than are interested in a push's data will
receive it. So I may need some better approach, or a hybrid between
directed IQ and directed presence.
Still wait 1 minute after a change before waiting on the next change, but don't
wait at the start, when we might get a pull that contains config changes
right away.
Currently have three old versions of functions that more reworking is
needed to remove: getDaemonStatusOld, modifyDaemonStatusOld_, and
modifyDaemonStatusOld
This is a nice win; much less code runs in Annex, so other threads have
more chances to run concurrently.
I do notice that renaming a file has gone from 1 to 2 commits. I think this
is due to the above improvement letting the committer run more frequently,
so it commits the rm first.
Converted several threads to run in the monad.
Added a lot of useful combinators for working with the monad.
Now the monad includes the name of the thread.
Some debugging messages are disabled pending converting other threads.
I now have this topology working:
assistant ---> {bare repo, special remote} <--- assistant
And, I think, also this one:
+----------- bare repo --------+
v v
assistant ---> special remote <--- assistant
While before with assistant <---> assistant connections, both sides got
location info updated after a transfer, in this topology, the bare repo
*might* get its location info updated, but the other assistant has no way to
know that it did. And a special remote doesn't record location info,
so transfers to it won't propagate out location log changes at all.
So, for these to work, after a transfer succeeds, the git-annex branch
needs to be pushed. This is done by recording a synthetic commit has
occurred, which lets the pusher handle pushing out the change (which will
include actually committing any still journalled changes to the git-annex
branch).
Of course, this means rather a lot more syncing action than happened
before. At least the pusher somewhat bundles together pushes that come in
very close together. Currently it just waits 2 seconds between each push.
Adjust build deps to ensure that only a fixed version of the library will
be used.
Also, removed the bound thread stuff, which I now think was (probably)
a red herring.
MountWatcher can't do this, because it uses the session dbus,
and won't have access to the new DBUS_SESSION_BUS_ADDRESS if a new session
is started.
Bumped dbus library version, FD leak in it is fixed.
Currently relies on SRV being set, or the JID's hostname being the server
hostname and the port being default. Future work: Allow manual
configuration of user name, hostname, and port.
Now when the dbus connection is dropped, it'll fall back to polling.
I could make it try to reconnect, but there's a FD leak in the dbus
library, so not yet.
This *may* solve the segfault I was seeing when the XMPP library called
startTLS. My hypothesis is as follows:
* TLS is documented
(http://www.gnu.org/software/gnutls/manual/gnutls.html#Thread-safety)
to be thread safe, but only when a single thread accesses it.
* forkIO threads are not bound to an OS thread, so it was possible for
the threaded runtime to run part of the XMPP code on one thread, and
then switch to another thread later.
So, forkOS, with its bound threads, should be used for the XMPP thread.
Since the crash doesn't happen reliably, I am not yet sure about this fix.
Note that I kept all the other threads in the assistant unbound, because
bound threads have significantly higher overhead.
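The gist of the fix (requires GHC's -threaded runtime):

  import Control.Concurrent (forkOS)

  -- forkIO threads can migrate between OS threads, which can violate
  -- GnuTLS's requirement that a session be used from a single thread.
  -- forkOS makes a bound thread, pinned to one OS thread.
  startXMPPThread :: IO () -> IO ()
  startXMPPThread xmppClient = do
      _ <- forkOS xmppClient
      return ()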
Seems presence notifications are not sent to clients that have marked
themselves unavailable. (Testing with google talk.)
This is the death knell for the presence hack, because it has to stay
available, and even the toggle to unavailable and back could cause it to
miss a notification. Still, flipped it so it basically works, for now.
It lacks error handling, reconnection, and credentials configuration,
and doesn't actually do anything when it receives an incoming notification.
Other than that, it might work! :)
Hooked up everything that needs to notify on pushes. Note that
syncNewRemote does not notify. This is probably ok, and I'd need to thread
more state through to make it do so.
This is only set up to support a single push notification method; I didn't
use a NotificationBroadcaster. Partly because I don't yet know what info
about pushes needs to be communicated, so my data types are only
preliminary.
Monitors git-annex branch for changes, which are noticed by the Merger
thread whenever the branch ref is changed (either due to an incoming push,
or a local change), and refreshes cached config values for modified config
files.
Rate limited to run no more often than once per minute. This is important
because frequent git-annex branch changes happen when files are being
added, or transferred, etc.
A primary use case is that, when preferred content changes are made,
and get pushed to remotes, the remotes start honoring those settings.
Other use cases include propagating repository description and trust
changes to remotes, and learning when a remote has added a new special
remote, so the webapp can present the GUI to enable that special remote
locally.
Also added a uuid.log cache. All other config files already had caches.
This can result in the file being dropped, or being downloaded, or even
being dropped from some other repo.
It's even possible to create a file in a directory where content is not
wanted, which will make the assistant immediately send it elsewhere, and
then drop it.
This was complicated quite a bit by needing to check numcopies. I optimised
that, so it only looks up numcopies once per file, no matter how many
remotes it checks to drop from. Although it did just occur to me that
it might be better to first check if it wants to drop content, and only
then check numcopies..
This avoids the expensive transfer scan relying on its list of remotes
to scan being accurate throughout, which it will not be when the user
pauses syncing to a remote.
I feel it's ok to queue transfers to *any* known remote, not just the ones
being scanned.
Note that there are still small races where after syncing to a remote is
paused, a transfer can be queued for it. Not just in the expensive transfer
scan, but in the cheap failed transfer scan, and elsewhere.
I noticed this while offline (so that lack of solar power is good for something).
Apparently it tries to bind multicast to lo, and that fails.
If this happens, catch it, and retry until a real network interface becomes
available.
It may be that this should tie into the NetWatcher, and rebind whenever
an interface comes up. Needs testing..
Both when queueing downloads, and uploads, consults the preferred content
settings.
I didn't make it check yet when requeing failed transfers or queuing
deferred downloads; dealing with the preferred content settings (or indeed,
other settings) changing while the assistant is running still needs work.
Makes it safe to use git annex unlock with the watcher/assistant.
And also to mix use of the watcher/assistant with regular files stored in git.
Long ago, I had avoided doing this check, except during the startup scan,
because it would be slow to run ls-files repeatedly.
But then I added the lsof check, and to make that fast, got it to detect
batch file adds. So let's move the ls-files check to also occur when it'll
have a batch, and can check them all with one call.
This does slow down adding a single file by just a bit, but really only
a little bit. (The lsof check is probably more expensive.) It also
speeds up the startup scan, especially when there are lots of new files
found by the scan.
Also, fixed the sleep for annex.delayadd to not run while the threadstate
lock is held, so it doesn't unnecessarily freeze everything else.
Also, --force no longer makes it skip the lsof check, which was not
documented, and seems never a good idea.
This is handled differently for inotify, which can track modifications of
existing files, and kqueue, which cannot (TTBOMK). On the inotify side,
the TransferWatcher just waits for the file to be updated and reads the new
bytesComplete. On the kqueue side, the TransferPoller has to re-read the
file every update (currently 0.5 seconds, might need to increase that).
I did think about working around kqueue's limitations by somehow creating
a new file each time the size changed. But cleaning up all the files that
would result seemed difficult. And really, this is not a lot worse than
the TransferWatcher's behavior for downloads, which stats a file every 0.5
seconds. As long as the OS has decent file caching behavior..
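Sketch of the kqueue-side polling loop (the three helpers passed in are
hypothetical stand-ins for reading the transfer map and info files):

  import Control.Concurrent (threadDelay)
  import Control.Monad (forever)

  transferPoller
      :: IO [FilePath]                    -- info files of active transfers
      -> (FilePath -> IO (Maybe Integer)) -- parse bytesComplete from file
      -> (FilePath -> Integer -> IO ())   -- record updated progress
      -> IO ()
  transferPoller activeInfoFiles readBytesComplete update = forever $ do
      threadDelay 500000 -- 0.5 seconds
      mapM_ poll =<< activeInfoFiles
    where
      poll f = maybe (return ()) (update f) =<< readBytesComplete f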
cp is used here, but we can just watch the size of the destination file.
This commit made from within the ruins of an old mill, overlooking a
beautiful waterfall.
This doesn't avoid it sometimes attempting to commit when there are no
changes. Typically that happens when a change is pushed in from another
repo; the watcher sees the file and tries to stage it, resulting in an
empty commit. Really fixing that would probably use more CPU than
occasionally trying to make an empty commit.
However, this does save a lot of unnecessary work, as those empty commits
had to be synced out, which no longer happens.
This ensures file propagation takes place in situations such as: Usb drive A
is connected to B. A's master branch is already in sync with B, but it is
being used to sneakernet some files around, so B downloads those. There is no
master branch change, so C does not request these files. B needs to upload
the files it just downloaded on to C, etc.
My first try at this, I saw loops happen. B uploaded to C, which then
tried to upload back to B (because it had not received the updated
git-annex branch from B yet). B already had the file, but it still created
a transfer info file from the incoming transfer, and its watcher saw
that being removed, and tried to upload back to C.
These loops should have been fixed by my previous commit. (They never
affected ssh remotes, only local ones, it seemed.) While C might still try
to upload to B, or to some other remote that already has the file, the
extra work dies out there.
I was seeing some interesting crashes after the previous commit,
when making file changes slightly faster than the assistant could keep up.
error: Ref refs/heads/master is at 7074f8e0a11110c532d06746e334f2fec6af6ab4 but expected 95ea86008d72a40d97a81cfc8fb47a0da92166bd
fatal: cannot lock HEAD ref
Committer crashed: git commit [Param "--allow-empty-message",Param "-m",Param "",Param "--allow-empty",Param "--quiet"] failed
Pusher crashed: thread blocked indefinitely in an STM transaction
Clearly the merger ended up running at the same time as the committer,
and with both modifying HEAD, the committer crashed. I fixed that by
making the Merger run its merge inside the annex monad, which avoids
it running concurrently with other git operations. Also by making
the committer not crash if git fails.
What I don't understand is why the pusher then crashed with a STM deadlock.
That must be in either the DaemonStatusHandle or the FailedPushMap,
and the latter is only used by the pusher. Did the committer's crash somehow
break STM?
The BlockedIndefinitelyOnSTM exception is described as:
-- |The thread is waiting to retry an STM transaction, but there are no
-- other references to any @TVar@s involved, so it can't ever continue.
If the Committer had a reference to a TVar and crashed, I can sort of see
this leading to that exception..
The crash was quite easy to reproduce after the previous commit, but
after making the above change, I have yet to see it again. Here's hoping.
Now when a download is queued and there's no known remote to get it from,
it's added to a deferred download list, which will be retried later.
The Merger thread tries to queue any deferred downloads when it receives
a push to the git-annex branch.
Note that the Merger thread now also forces an update of the git-annex
branch. The assistant was not updating this branch before, and it saw a
(mostly) correct view of state, but now that incoming pushes go to
synced/git-annex, it needs to be merged in.
Don't expose these as branches in refs/heads/. Instead hide them away in
refs/synced/ where only show-ref will find them.
Make unused only look at branches and tags, not these other things,
so it won't care if some stale sync ref used to use a file.
This means they don't need to be deleted, which could have
led to an incoming sync being missed.
The fallback branches pushed to have names containing the uuid of the
pusher, which is ugly. That's why syncing doesn't normally use this method.
The merger deletes fallback branches after merging them, to contain the
ugliness, and so unused doesn't look at data from these branches.
(The fallback git-annex branch is left behind for now.)
Finally.
Last bug fixes here: Send PairResp with same UUID in the PairReq.
Fix off-by-one in code that filters out our own pairing messages.
Also reworked the pairing alerts, which are still slightly buggy.
Pair requests with the same UUID are part of the same pairing session,
which allows us to detect attempts to brute force the shared secret,
as that will result in pair requests with the same UUID that are
not verified with the right secret.
They work fine. But I had to go to a lot of trouble to get Yesod to render
routes in a pure function. It may instead make more sense to have each
alert have an associated IO action, and a single route that runs the IO
action of a given alert id. I just wish I'd realized that before the past
several hours of struggling with something Yesod really doesn't want to
allow.
Avoid trying to git push/pull to special remotes, but still do transfer
scans of them, after git pull from any other remotes, so we know about
any values that have been placed on them.
When multiple downloads of a key are queued, it starts the first, but leaves the
other downloads in the queue. This ensures that we don't lose a queued
download if the one that got started failed.
Run code that pops off the next queued transfer and adds it to the active
transfer map within an allocated transfer slot, rather than before
allocating a slot. Fixes the transfers display, which had been displaying
the next transfer as a running transfer, while the previous transfer was
still running.
Currently only the web special remote is readonly, but it'd be possible to
also have readonly drives, or other remotes. These are handled in the
assistant by only downloading from them, and never trying to upload to
them.
The expensive transfer scan now scans a whole set of remotes in one pass.
So at startup, or when network comes up, it will run only once.
Note that this can result in transfers from/to higher cost remotes being
queued before other transfers of other content from/to lower cost remotes.
Before, low cost remotes were scanned first and all their transfers came
first. When multiple transfers are queued for a key, the lower cost ones
are still queued first. However, this could result in transfers from slow
remotes running for a long time while transfers of other data from faster
remotes waits.
I expect to make the transfer queue smarter about ordering
and/or make it allow multiple transfers at a time, which should eliminate
this annoyance. (Also, it was already possible to get into that situation,
for example if the network was up, lots of transfers from slow remotes
might be queued, and then a disk is mounted and its faster transfers have
to wait.)
Also note that this means I don't need to improve the code in
Assistant.Sync that currently checks if any of the reconnected remotes
have diverged, and if so, queues scans of all of them. That had been very
inefficient, but now doesn't matter.
Used by the assistant, rather than copy, this is faster because it avoids
using git ls-files, avoids checking the location log redundantly, and
runs in oneshot mode, avoiding making a commit to the git-annex branch
for every file transferred.
There are multiple reasons to do this:
* The local network may be up solid, but a route to a networked remote
is having trouble. Any transfers to it that fail should be retried.
* Someone might have wicd running, but like to bring up new networks
by hand too. This way, it'll eventually notice them.
The problem with using it here is that, if a removable drive is scanned
and gets disconnected during the scan, testing for all the files will
indicate it doesn't have them, and the scan is logged as completed
successfully, without necessary transfers being queued.
This deals with interruptions in network connectivity, by listening
for a new network interface coming up (using dbus to see when
network-manager or wicd do it), and forcing a rescan of
This seems to work pretty well.
Handled the process groups like this:
- git-annex processes started by the assistant for transfers are run in their
own process groups.
- otherwise, rely on the shell to allocate a process group for git-annex
There is potentially a problem if some other program runs git-annex
directly (not using sh -c). The program and git-annex would then be in
the same process group. If that git-annex starts a transfer and it's
canceled, the program would also get killed. May or may not be a desired
result.
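For illustration, the same idea using System.Process's process group
support (a sketch, not the actual code):

  import System.Process

  -- The transfer process gets its own process group, so canceling it
  -- can signal rsync etc too, not just git-annex.
  startTransfer :: [String] -> IO ProcessHandle
  startTransfer args = do
      (_, _, _, pid) <- createProcess (proc "git-annex" args)
          { create_group = True }
      return pid

  cancelTransfer :: ProcessHandle -> IO ()
  cancelTransfer = interruptProcessGroupOf -- signals the whole group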
Also, the new updateTransferInfo probably closes a race where it was
possible for the thread id to not be recorded in the transfer info, if
the transfer info file from the transfer process is read first.
This doesn't quite work, because canceling a transfer sends a signal
to git-annex, but not to rsync (etc).
Looked at making git-annex run in its own process group, which could then
be killed, and would kill child processes. But, rsync checks if it's
process group is the foreground process group and doesn't show progress if
not, and when git has run git-annex, if git-annex makes a new process
group, that is not the case. Also, if git has run git-annex, ctrl-c
wouldn't be propagated to it if it made a new process group.
So this seems like a blind alley, but recording it here just in case.
Now an alert tracks files that have recently been added. As a large file
is added, it will have its own alert, that then combines with the tracker
when done.
Also used for combining sanity checker alerts, as it could possibly want to
display a lot.
This allows me to not build-depend on blaze-markup, which was causing
me some trouble when trying to build with cabal on debian. Seems debian
ships Text.Blaze.Renderer.String in two packages.
Now the javascript does an ajax call at the start to request the url
to use to poll, and the notification id is generated then, once we know
javascript is working.
Depending on how the webapp was started up and whether the user clicked on
any links in it, window.close() may be disallowed by browser security
policy. Also if that fails, display a modal dialog that nicely blackens out
the webapp.
TODO: avoid Escape closing it. Bootstrap's docs are unclear about how to do
that.
Putting the transfer on the currentTransfers atomically introduced a bug:
It checks to see if the transfer is in progress, and cancels it.
Fixed by moving that check inside the STM transaction.
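The shape of the fix (types simplified):

  import Control.Concurrent.STM
  import qualified Data.Map as M

  type Transfer = String -- simplified stand-ins
  type TransferInfo = ()

  -- The in-progress check and the removal happen in one transaction,
  -- so a transfer can't be added between the check and the cancel.
  removeTransfer :: TVar (M.Map Transfer TransferInfo) -> Transfer
      -> IO (Maybe TransferInfo)
  removeTransfer currentTransfers t = atomically $ do
      m <- readTVar currentTransfers
      case M.lookup t m of
          Nothing -> return Nothing
          Just i -> do
              writeTVar currentTransfers (M.delete t m)
              return (Just i)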
This may be customised differently than the main page later on, but
for now the important thing is that this constantly refreshed page does not
allocate a new NotificationHandle each time it's loaded.
WebApp now shows changes with no delay. Comparing a running git-annex get
and the webapp side-by-side, they both show each new transfer at the same
time.
The fun part was making it move things from TransferQueue to currentTransfers
entirely atomically. Which will avoid inconsistent display if the WebApp
renders the current status at just the wrong time. STM to the rescue!
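The atomic move, sketched (the real queue and map types differ):

  import Control.Concurrent.STM

  -- Pop the next queued transfer and mark it current in a single
  -- transaction, so no reader can ever see it in both places, or in
  -- neither.
  startNextTransfer :: TQueue t -> TVar [t] -> IO t
  startNextTransfer queue current = atomically $ do
      t <- readTQueue queue -- retries while the queue is empty
      modifyTVar' current (t:)
      return t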
I've convinced myself that nothing in DaemonStatus can deadlock,
as it always keeps the TMVar full. That was the only reason it was in the
Annex monad.
This avoids forking another process, avoids polling, fixes a race,
and avoids a rare forkProcess thread hang that I saw one time
when starting the webapp.
Had to switch to toWaiAppPlain to avoid a seeming bug in toWaiApp;
chromium only received a partial copy of jquery. Always the same length
each time, which makes me think it's a bug in the compression, although
a bug in the autohead middleware is also a possibility.
Anyway, there's little need for compression for a local webapp. Not wasting
time compressing things is probably a net gain.
Similarly, I've not worried about minifying this yet. Although that would
avoid bloating the git-annex binary quite so much.
Very happy to have a reusable autoUpdate widget that can make any Yesod
widget automatically refresh!
Also added support for non-javascript browsers, falling back to meta
refresh.
Also, the home page is now rendered with the webapp status on it, before
any refreshing is done.
The webapp is now a constantly updating clock! I accomplished this amazing
feat using "long polling", with some jquery and a little custom java
script.
There are more modern techniques, but this one works everywhere.
Broke hamlet out into standalone files.
I don't like the favicon display; it should be served from /favicon.ico,
but I could only get the static site to serve /static/favicon.ico, so
I had to use a <link rel=icon> to pull it in. I looked at
Yesod.Default.Handlers.getFaviconR, but it doesn't seem to embed
the favicon into the binary?
Best dbus events I could find were setupDone from org.kde.Solid.Device.
There may be some spurious events, but that's ok, the code will only
check to see if new mounts are available.
It does not try to auto-start this service if it's not running.
This should fix OSX/BSD issues with not noticing transfer information
files with kqueue. Now that threads are used, the thread can manage the
transfer slot allocation and deallocation by itself; much cleaner.
Check first if a transfer needs to be done, using the location log only
(for speed), and avoid occupying a slot if not. Always write a transfer
info file, and keep it open throughout the transfer process.
Now transfers to remotes seem reliable.
There's still a bug; if the child updates its transfer info file,
then the data from it will supersede the TransferInfo, losing the
info that we should wait on this child.
Added knownRemotes to DaemonStatus. This list is not entirely trivial to
calculate, and having it here should make it easier to add/remove remotes
on the fly later on. It did require plumbing the daemonstatus through to
some more threads.
The reason the DirWatcher had to wait for program termination was because
it used withINotify, so when it finished, its watcher threads were killed.
But since I have two DirWatcher threads now, that was not good, and could
perhaps explain the MVar problem I saw yesterday. In any case, fixed this
part of the code by making the DirWatcher return a handle that can be used
to stop it, and now the main Assistant thread is the only one calling
waitForTermination.
Avoid MVar deadlock issue, which I don't understand.
Have not taken the time to debug it fully, because it turns out I don't
need to resolve merge conflicts when a new branch ref is written... I
think.
Ensure the git-annex branch is merged when doing a manual pull.
Otherwise it can get out of sync, since git-annex normally only merges it
once per run.
SampleMVar won't work; between getting the current value and changing
it, another thread could make a change, which would get lost.
TMVar works well; this update situation is handled by atomic transactions.
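The TMVar version, sketched:

  import Control.Concurrent.STM

  -- The read-modify-write is one atomic transaction, so a concurrent
  -- change cannot be lost in between.
  modifyStatus :: TMVar a -> (a -> a) -> IO ()
  modifyStatus v f = atomically $ do
      s <- takeTMVar v
      putTMVar v (f s)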