This fixes all instances of " \t" in the code base. Most common case
seems to be after a "where" line; probably vim copied the two space layout
of that line.
Done as a background task while listening to episode 2 of the Type Theory
podcast.
Note that this does not yet use SecureMem. It would probably make sense for
the Password part of a CredPair to use SecureMem, and making that change
would be better than passing in a String and having it converted to SecureMem in
this code.
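For illustration, a minimal sketch of that suggested change, assuming the
securemem package's Data.SecureMem API (the CredPair type here is
simplified, not the one in this code base):

    -- Sketch only: API and types assumed for illustration.
    import Data.SecureMem (SecureMem, secureMemFromByteString)
    import qualified Data.ByteString.Char8 as B8

    type Login = String
    type Password = SecureMem   -- rather than String

    type CredPair = (Login, Password)

    -- Callers would construct the SecureMem themselves, straight from
    -- the bytes they read, instead of handing this code a String:
    readPassword :: IO Password
    readPassword = fmap (secureMemFromByteString . B8.pack) getLine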
a ssh remote, and pulls.
XMPP is no longer needed in this configuration!
Requires the remote server to have git-annex-shell with notifychanges support.
(untested)
This commit was sponsored by Georg Wechslberger.
Removed instance, got it all to build using fromRef. (With a few things
that really need to show something using a ref for debugging stubbed out.)
Then added back Read instance, and made Logs.View use it for serialization.
This changes the view log format.
Make sanity checker run git annex unused daily, and queue up transfers
of unused files to any remotes that will have them. The transfer retrying
code works for us here, so eg when a backup disk remote is plugged in,
any transfers to it are done. Once the unused files reach a remote,
they'll be removed locally as unwanted.
If the setup does not cause unused files to go to a remote, they'll pile
up, and the sanity checker detects this using some heuristics that are
pretty good -- 1000 unused files, or 10% of disk used by unused files,
or more disk wasted by unused files than is left free. Once it detects
this, it pops up an alert in the webapp, with a button to take action.
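For illustration, the detection amounts to a check along these lines
(names and exact thresholds are illustrative, not the sanity checker's
actual code):

    -- Illustrative sketch only.
    data UnusedStats = UnusedStats
        { unusedCount :: Integer  -- number of unused files
        , unusedSize  :: Integer  -- bytes occupied by unused files
        , diskUsed    :: Integer  -- bytes in use on the disk
        , diskFree    :: Integer  -- bytes still free on the disk
        }

    unusedPilingUp :: UnusedStats -> Bool
    unusedPilingUp s =
        unusedCount s >= 1000               -- 1000 unused files
        || unusedSize s * 10 >= diskUsed s  -- unused files >= 10% of disk use
        || unusedSize s > diskFree s        -- more wasted than is left free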
TODO: Webapp UI to configure this, and also the ability to launch an
immediate cleanup of all unused files.
This commit was sponsored by Simon Michael.
So that remotes that use a persistent network connection are restarted.
A remote might keep open a long duration network connection, and could
fail to deal well with losing the connection. This is particularly a
concern now that we have external special remotes. An external
special remote that is implemented naively might open the connection only
when PREPARE is sent, and if it loses connection, throw errors on each
request that is made.
(Note that the ssh connection caching should not have this problem; if the
long-duration ssh process loses connection, the named pipe is disconnected
and the next ssh attempt will reconnect. Also, XMPP already deals with
disconnection robustly in its own way.)
There's no way for git-annex to know if a lost network connection actually
affects a given remote, which might have a transfer in process. It does not
make sense to force kill the transferkeys process every time the NetWatcher
detects a change. (Especially because the NetWatcher sometimes polls, and
so may report a change only once per hour.)
In any case, the NetWatcher only detects connection to a network, not
disconnection. So if a transfer is in progress over the network, and the
network goes down, that will need to time out on its own.
An alternate approach that was considered is to use a separate transferkeys
process for each remote, and detect when a request fails, and assume that
means that process is in a failing state and restart it. The problem with
that approach is that if a resource is not available and a remote fails
every time, it degrades to starting a new transferkeys process for every
file transfer, which is too expensive.
Instead, this commit only handles the network reconnection case, and restarts
transferkeys only once the network has reconnected and another transfer needs
to be made. So, a transferkeys process will be reused for 1 hour, or until the
next network connection.
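Roughly, the policy is the following (a sketch with invented names and
types, not the actual transferkeys code):

    -- Sketch only.
    import Control.Concurrent.STM

    data Pool t = Pool
        { reconnected :: TVar Bool       -- set by the NetWatcher on reconnect
        , current     :: TVar (Maybe t)  -- running transferkeys process, if any
        }

    -- Called when another transfer needs to be made: reuse the existing
    -- process unless the network has reconnected since it was started,
    -- in which case stop it and start a fresh one. Nothing is killed
    -- just because the NetWatcher fired.
    getProcess :: Pool t -> IO t -> (t -> IO ()) -> IO t
    getProcess pool start stop = do
        (stale, old) <- atomically $ do
            r <- readTVar (reconnected pool)
            writeTVar (reconnected pool) False
            o <- readTVar (current pool)
            return (r, o)
        case old of
            Just t | not stale -> return t
            _ -> do
                maybe (return ()) stop old
                t <- start
                atomically $ writeTVar (current pool) (Just t)
                return t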
----
The NotificationBroadcaster was rewritten to use TMVars rather than MSampleVars,
to allow checking, without blocking, whether a notification has been received.
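The gain is that non-blocking check; roughly (a sketch of the idea, not
the actual NotificationBroadcaster API):

    -- Sketch only; the real broadcaster keeps a handle per listener.
    import Control.Concurrent.STM
    import Data.Maybe (isJust)

    newtype Notifier = Notifier (TMVar ())

    -- Blocks until a notification has been sent (and consumes it).
    waitNotification :: Notifier -> IO ()
    waitNotification (Notifier v) = atomically $ takeTMVar v

    -- Returns immediately; True if a notification was pending (and
    -- consumes it). An MSampleVar has no such non-blocking check.
    checkNotification :: Notifier -> IO Bool
    checkNotification (Notifier v) = atomically $ fmap isJust (tryTakeTMVar v)

    -- Never blocks, even if a notification is already pending.
    sendNotification :: Notifier -> IO ()
    sendNotification (Notifier v) = atomically $ do
        _ <- tryPutTMVar v ()
        return ()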
----
This commit was sponsored by Tobias Brunner.
When an automatic upgrade completes, or when the user clicks on the upgrade
button in one webapp, but also has it open in another browser window/tab,
we have a problem: The current web server is going to stop running in
minutes, but there is no way to send the web browser a redirect to the
new url.
To solve this, used long polling, so the webapp is always listening for
urls it should redirect to. This allows globally redirecting every open
webapp. Works great! Tested with 2 web browsers with 2 tabs each.
May be useful for other purposes later too, dunno.
The overhead is 2 http requests per page load in the webapp. Due to yesod's
speed, this does not seem to noticeably delay it. Only 1 of the requests
could possibly block the page load, the other is async.
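The server-side primitive the long poll waits on is roughly this (a
plain-Haskell sketch, not the actual Yesod handler code):

    -- Sketch only.
    import Control.Concurrent.STM

    newtype GlobalRedirect = GlobalRedirect (TMVar String)

    newGlobalRedirect :: IO GlobalRedirect
    newGlobalRedirect = fmap GlobalRedirect newEmptyTMVarIO

    -- The long-polling request blocks here until a redirect is wanted,
    -- then returns the url for the browser's javascript to load.
    waitRedirect :: GlobalRedirect -> IO String
    waitRedirect (GlobalRedirect v) = atomically $ readTMVar v

    -- Called once the new web server is up, to redirect every open webapp.
    setRedirect :: GlobalRedirect -> String -> IO ()
    setRedirect (GlobalRedirect v) url = atomically $ do
        _ <- tryTakeTMVar v
        putTMVar v url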
Made alerts be able to have multiple buttons, so the alerts about upgrading
can have a button that enables automatic upgrades.
Implemented automatic upgrading when the program file has changed.
Note that when an automatic upgrade happens, the webapp displays an alert
about it for a few minutes, and then closes. This still needs work.
The webapp will check twice a day, when the network is connected, to see if
it can download a distribution upgrade file. If a newer version is found,
display an upgrade alert.
This will need the autobuilders to set UPGRADE_LOCATION to the url
it can be downloaded from when building git-annex. Only builds with that
set need automatic upgrade alerts.
Currently, the upgrade page just requests the user manually download
and upgrade it. But, all the info is provided to do automated upgrades
in the future.
Note that all the urls used will need to be https.
This commit was sponsored by Dirk Kraft.
When starting up the assistant, it'll remind about the current
repository, if it doesn't have checks. And when a removable drive
is plugged in, it will remind if a repository on it lacks checks.
Since that might be annoying, the reminders can be turned off.
This commit was sponsored by Nedialko Andreev.
Added a RemoteChecker thread that waits for problems to be reported with
remotes, and checks if their git repository is in need of repair.
Currently, only failures to sync with the remote cause a problem to be
reported. This seems enough, but we'll see.
Plugging in a removable drive with a repository on it that is corrupted
does automatically repair the repository, as long as the corruption causes
git push or git pull to fail. Some types of corruption do not, eg
missing/corrupt objects for blobs that git push doesn't need to look at.
So, this is not really a replacement for scheduled git repository fscking.
But it does make the assistant more robust.
This commit is sponsored by Fernando Jimenez.
Extends the index.lock handling to other git lock files. I surveyed
all lock files used by git, and found more than I expected. All are
handled the same in git; it leaves them open while doing the operation,
possibly writing the new file content to the lock file, and then closes
them when done.
The gc.pid file is excluded because it won't affect the normal operation
of the assistant, and waiting for a gc to finish on startup wouldn't be
good.
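The staleness test itself can be quite simple; for example (threshold and
method are illustrative, the real check may differ):

    -- Sketch only.
    import Control.Monad (when)
    import Data.Time.Clock (getCurrentTime, diffUTCTime)
    import System.Directory (doesFileExist, getModificationTime, removeFile)

    -- Remove a git lock file that looks left over from a crashed process:
    -- it exists and has not been touched for a long time.
    removeStaleLock :: FilePath -> IO ()
    removeStaleLock lockfile = do
        exists <- doesFileExist lockfile
        when exists $ do
            mtime <- getModificationTime lockfile
            now <- getCurrentTime
            when (diffUTCTime now mtime > 600) $  -- untouched for 10 minutes
                removeFile lockfile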
All threads except the webapp thread wait on the new startup sanity checker
thread to complete, so they won't try to do things with git that fail
due to stale lock files. The webapp thread mostly avoids doing that kind of
thing itself. A few configurators might fail on lock files, but only if the
user is explicitly trying to run them. The webapp needs to start
immediately when the user has opened it, even if there are stale lock
files.
Arranging for the threads to wait on the startup sanity checker was a bit
of a bear. Have to get all the NotificationHandles set up before the
startup sanity checker runs, or they won't see its signal. Perhaps
the NotificationBroadcaster is not the best interface to have used for
this. Oh well, it works.
This commit was sponsored by Michael Jakl.
I hope this will be easier to reason about, and less buggy. It was
certainly easier to write!
An immediate benefit is that with a traversable queue of push requests to
select from, the threads can be a lot fairer about choosing which client to
service next.
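For instance, selection can skip over requests from clients that are
already being serviced (a sketch with made-up types, not the real queue
code):

    -- Illustrative sketch only.
    import Data.List (partition)

    data PushSide = SendPack | ReceivePack
        deriving (Eq, Show)

    data PushRequest client = PushRequest
        { requestClient :: client
        , requestSide   :: PushSide
        }

    -- Pick the first queued request whose client is not currently being
    -- serviced, returning it along with the remaining queue.
    selectNext :: Eq client => [client] -> [PushRequest client]
        -> Maybe (PushRequest client, [PushRequest client])
    selectNext busy q =
        case partition (\r -> requestClient r `notElem` busy) q of
            (r:rs, others) -> Just (r, rs ++ others)
            ([], _) -> Nothing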
This will avoid losing any messages received from 1 client when a push
involving another client is running.
Additionally, the handling of push initiation is improved; it no longer
allows running multiples of the same type of push to the same client.
Still stalls sometimes :(
Observed: With 2 xmpp clients, one would sometimes stop responding
to CanPush messages. Often it was in the middle of a receive-pack
of its own (or was waiting for a failed one to time out).
Now these are always immediately responded to, which is fine; the point
of CanPush is to find out if there's another client out there that's
interested in our push.
Also, in queueNetPushMessage, queue push initiation messages when
we're already running the side of the push they would initiate.
Before, these messages were sent into the netMessagesPush channel,
which was wrong. The xmpp send-pack and receive-pack code discarded
such messages.
This still doesn't make XMPP push 100% robust. In testing, I am seeing
it sometimes try to run two send-packs, or two receive-packs at once
to the same client (probably because the client sent two requests).
Also, I'm seeing rather a lot of cases where it stalls out until it
runs into the 120 second timeout and cancels a push.
And finally, there seems to be a bug in runPush. I have logs that
show it running its setup action, but never its cleanup action.
How is this possible given its use of E.bracket? Either some exception
is finding its way through, or the action somehow stalls forever.
When this happens, one of the 2 clients stops syncing.
(Except for the actual streaming of receive-pack through XMPP, which
can only run once we've gotten an appropriate uuid in a push initiation
message.)
Pushes are now only initiated when the initiation message comes from a
known uuid. This allows multiple distinct repositories to use the same xmpp
address.
Note: This probably breaks initial push after xmpp pairing, because at that
point we may not know about the paired uuid, and so reject the push from
it. It won't break in simple cases, because the annex-uuid of the remote
is checked. However, when there are multiple clients behind a single xmpp
address, only the uuid of the first is recorded in annex-uuid, and so any
pushes from the others will be rejected (unless the first remote pushes their
uuids to us beforehand).
If an add failed, we should lose the KeySource, since it, presumably,
differs due to a change that was made to the file.
(The locked down file is already deleted.)
Making this a TSet of lists of Changes, rather than a TSet of Changes
makes refilling it, in batch mode, much more efficient. Rather than needing
to add every Change it's collected one at a time, it can add them in one
fast batch operation.
It would be more efficient yet to use a Set, but that would need an Ord
instance for InodeCache.
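To illustrate the difference (TSet is sketched here as a bare TChan; the
real Utility.TSet interface differs):

    -- Sketch only.
    import Control.Concurrent.STM

    type TSet a = TChan a

    -- With a TSet of lists, everything collected in batch mode goes back
    -- in one cheap STM transaction...
    refill :: TSet [change] -> [change] -> IO ()
    refill q cs = atomically $ writeTChan q cs

    -- ...whereas a TSet of individual Changes needs one operation per Change.
    refillSlow :: TSet change -> [change] -> IO ()
    refillSlow q = mapM_ (atomically . writeTChan q)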