MountWatcher can't do this, because it uses the session dbus,
and won't have access to the new DBUS_SESSION_BUS_ADDRESS if a new session
is started.
Bumped dbus library version; the FD leak in it is fixed.
Currently relies on SRV being set, or the JID's hostname being the server
hostname and the port being default. Future work: Allow manual
configuration of user name, hostname, and port.
Now when the dbus connection is dropped, it'll fall back to polling.
I could make it try to reconnect, but there's a FD leak in the dbus
library, so not yet.
I had been using -ignore-package monads-tf to deal with this, but
the XMPP library uses monads-tf, so that also ignores it. Instead,
use PackageImports to force use of mtl in my own code.
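For illustration, this is roughly what that looks like (the module name
is just an example):

  {-# LANGUAGE PackageImports #-}

  -- Explicitly pull this module from mtl, even though monads-tf
  -- exposes a module with the same name.
  import "mtl" Control.Monad.State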
When rsyncProgress pipes rsync's stdout, this turns out to cause a ssh
process started by rsync to be left behind as a zombie. I don't know why,
but my recent zombie reaping cleanup was correct, it's just that this other
zombie, that's not directly started by git-annex, was no longer reaped
due to changes in the cleanup. Make rsyncProgress reap the zombie started
by rsync, as a workaround.
FWIW, the process tree looks like this. It seems like the rsync child
is for some reason starting but not waiting on this extra ssh process.
Ssh connection caching may be involved -- disabling it seemed to change
the shape of the tree, but did not eliminate the zombie.
9378 pts/14 S+ 0:00 | \_ rsync -p --progress --inplace -4 -e 'ssh' '-S' ...
9379 pts/14 S+ 0:00 | | \_ ssh ...
9380 pts/14 S+ 0:00 | | \_ rsync -p --progress --inplace -4 -e 'ssh' '-S' ...
9381 pts/14 Z+ 0:00 | \_ [ssh] <defunct>
Nearly everything that's reading from git is operating on a small
amount of output and has been switched to use that. Only pipeNullSplit
stuff continues using the lazy version that yields zombies.
This includes a full parser for the boolean expressions in the log,
that compiles them into Matchers. Those matchers are not used yet.
A complication is that matching against an expression should never
crash git-annex with an error. Instead, vicfg checks that the expressions
parse. If a bad expression (or an expression understood by some future
git-annex version) gets into the log, it'll be ignored.
Most of the code in Limit couldn't fail anyway, but I did have to make
limitCopies check its parameter first, and return an error if it's bad,
rather than erroring at runtime.
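Roughly the shape of that change (types simplified; the real matcher
type is different, and const True is a placeholder for the real check
of copy counts):

  type MatchFiles = FilePath -> Bool  -- simplified stand-in

  -- Validate the parameter at parse time, returning Left for bad
  -- input, rather than erroring later when the matcher runs.
  limitCopies :: String -> Either String MatchFiles
  limitCopies s = case reads s of
          [(n, "")] | n >= (1 :: Int) -> Right (const True)
          _ -> Left ("bad value for copies: " ++ s)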
This seems to fix a problem I've recently seen where ctrl-c during rsync
leads to `git annex get` moving on to the next thing rather than exiting.
Seems likely that started happening with the switch to System.Process
(d1da9cf221), as the old code took care
to install a default SIGINT handler.
Note that since the bug was only occurring sometimes, I am not 100% sure
I've squashed it, although I seem to have.
This is a workaround for bind failing with EINVAL sometimes on OSX.
I don't know why; EINVAL should mean the socket is already bound to an
address, but this is with a new socket.
Simplified it using existing functions.
I doubt setSticky needs to return the FileMode; if it does for some
reason, it can be changed to use modifyFileMode'.
Converted isSticky to a pure function for consistency with isSymlink.
Note that the sticky bit of a file can be tested thus:
isSticky . fileMode <$> getFileStatus file
This is handled differently for inotify, which can track modifications of
existing files, and kqueue, which cannot (TTBOMK). On the inotify side,
the TransferWatcher just waits for the file to be updated and reads the new
bytesComplete. On the kqueue side, the TransferPoller has to re-read the
file every update (currently 0.5 seconds, might need to increase that).
I did think about working around kqueue's limitations by somehow creating
a new file each time the size changed. But cleaning up all the files that
would result seemed difficult. And really, this is not a lot worse than
the TransferWatcher's behavior for downloads, which stats a file every 0.5
seconds. As long as the OS has decent file caching behavior..
Current implementation parses rsync's output a character at a time, which
is hardly efficient. It could be sped up a lot by using hGetBufSome,
but that would require going really lowlevel, down to raw C style buffers
(good example of that here: http://users.aber.ac.uk/afc/stricthaskell.html)
But rsync doesn't output very much, so currently it seems ok.
The webapp can only run on one of ipv4 and ipv6, not both. Some web browsers
may not support ipv6, so ipv4 is the safe choice.
The actual problem I ran into with it only listening to ipv6 was that
Utility.Url.exists was failing to connect to it. I doubt that haskell's
HTTP library is ipv4 only. More likely, it was only trying one address,
and tried ipv4 first.
Actually 3 forms in one: this handles the initial passphrase entry, the
confirmation, and also varies wording if the same user or a different user
is confirming.
Qualify both the Wai and Yesod imports, to avoid conflict with a different
liftIO from MonadIO (in some version of yesod, not the one I have here),
and because it's generally clearer in a module that has both Wai and
Yesod stuff.
This deals with interruptions in network connectivity, by listening
for a new network interface coming up (using dbus to see when
network-manager or wicd do it), and forcing a rescan of
Old 1.0.1 version is still supported as well. Cabal autodetects
which version is available, but in the Makefile, WITH_OLD_YESOD
has to be configured appropriately.
I have not squashed all the $newline warnings with the new Yesod.
They should go away eventually anyway as Yesod moves past that transition.
* git annex assistant --autostart will start separate daemons in each
  listed autostart repo
* running the webapp outside any git-annex repo will open it on the
  first listed autostart repo
This prevents multiple runs of the assistant in the foreground, and lets
--stop stop foregrounded runs too.
The webapp firstrun case also now writes a pid file, once it's made the git
repo to put it in.
This is a way to send a notification to a set of clients, any of which can be
blocked waiting for a new notification to arrive. A complication is that any
number of clients may be dead, and we don't want stale notifications for
those clients to pile up and leak memory.
It took me 3 tries to find the solution, which turns out to be simple: An array
of SampleVars, one per client. Using SampleVars means that clients only see the
most recent notification, but when the notification is just "the assistant's
state changed somehow; display a refreshed rendering of it", that's sufficient.
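A sketch of the idea (illustrative names, not the actual module's API):

  import Control.Concurrent.SampleVar

  type Broadcaster = [SampleVar ()]

  -- Wake every client. A SampleVar holds at most one notification,
  -- so a dead client's var never piles up stale values.
  sendNotification :: Broadcaster -> IO ()
  sendNotification = mapM_ (`writeSampleVar` ())

  -- Block until this client's next notification arrives.
  waitNotification :: SampleVar () -> IO ()
  waitNotification = readSampleVar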
This avoids forking another process, avoids polling, fixes a race,
and avoids a rare forkProcess thread hang that I saw one time
when starting the webapp.
The webapp is now a constantly updating clock! I accomplished this amazing
feat using "long polling", with some jquery and a little custom java
script.
There are more modern techniques, but this one works everywhere.
Broke hamlet out into standalone files.
I don't like the favicon display; it should be served from /favicon.ico,
but I could only get the static site to serve /static/favicon.ico, so
I had to use a <link rel=icon> to pull it in. I looked at
Yesod.Default.Handlers.getFaviconR, but it doesn't seem to embed
the favicon into the binary?
There's a minor performance overhead to doing this, but this way I don't
have to worry about a situation where statfs might block for a long time.
For example, when it's on a network filesystem.
While this seems to work fine when used in a simple program,
when I load it in ghci, it segfaults about half the time. Don't know why,
and seems ghci specific, but if I get reports of crashes, I'll need to look
into that.
Converted from using c2hs to using hsc2hs, just because other code
in git-annex uses hsc2hs.
Various cleanups.
This code is LGPLed, so I had to include that licence.
Make Utility.Process wrap the parts of System.Process that I use,
and add debug logging to them.
Also wrote some higher-level code that allows running an action
with handles to a processes stdin or stdout (or both), and checking
its exit status, all in a single function call.
As a bonus, the debug logging now indicates whether the process
is being run to read from it, feed it data, chat with it (writing and
reading), or just call it for its side effect.
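The read case looks roughly like this (simplified; the real functions
differ in detail, and withReadHandle is an illustrative name):

  import System.Process
  import System.IO
  import System.Exit
  import Control.Monad (unless)

  -- Run a command, hand its stdout to an action, and check the
  -- exit status, all in one call.
  withReadHandle :: FilePath -> [String] -> (Handle -> IO a) -> IO a
  withReadHandle cmd args action = do
          (_, Just hout, _, pid) <-
                  createProcess (proc cmd args) { std_out = CreatePipe }
          r <- action hout
          hClose hout
          code <- waitForProcess pid
          unless (code == ExitSuccess) $
                  error (cmd ++ " " ++ unwords args ++ " failed")
          return r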
Test suite now passes with -threaded!
I traced back all the hangs with -threaded to System.Cmd.Utils. It seems
it's just crappy/unsafe/outdated, and should not be used. System.Process
seems to be the cool new thing, so converted all the code to use it
instead.
In the process, --debug stopped printing commands it runs. I may try to
bring that back later.
Note that even SafeSystem was switched to use System.Process. Since that
was a modified version of code from System.Cmd.Utils, it needed to be
converted too. I also got rid of nearly all calls to forkProcess,
and all calls to executeFile, which I'm also doubtful about working
well with -threaded.
This is necessary to generate events when a file is deleted and immediately
replaced. Otherwise, the cache will have the old file, and so no event
would be generated.
Use of getFileStatus is suboptimal, it would be faster to use
the inode returned by readdir -- but getDirectoryContents does not expose
it, so I'd have to copy and modify a lot of low-level code.
that doesn't exist, or cannot be read
The problem is its use of unsafeInterleaveIO, which causes its IO code
to run when the thunk is forced, outside any exception trapping the caller
may do.
Not yet tested, and the places where git-annex-shell is run need to be
modified to pass the new field settings.
Note that rsyncServerSend was changed to fork, rather than directly exec
rsync, because it needs to keep the transfer lock held, and clean up the
transfer log when done.
The reason the DirWatcher had to wait for program termination was because
it used withINotify, so when it finished, its watcher threads were killed.
But since I have two DirWatcher threads now, that was not good, and could
perhaps explain the MVar problem I saw yesterday. In any case, fixed this
part of the code by making the DirWatcher return a handle that can be used
to stop it, and now the main Assistant thread is the only one calling
waitForTermination.
SampleMVar won't work; between getting the current value and changing
it, another thread could make a change, which would get lost.
TMVar works well; this update situation is handled by atomic transactions.
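The update is a single transaction (modifyStatus is an illustrative
name):

  import Control.Concurrent.STM

  -- Apply a function to the TMVar's contents atomically; no other
  -- thread's change can sneak in between the take and the put.
  modifyStatus :: TMVar a -> (a -> a) -> IO ()
  modifyStatus v f = atomically $ do
          s <- takeTMVar v
          putTMVar v (f s)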
Note that, since this always pushes branch synced/master to the remote, it
assumes that master has already gotten all the commits that are on the
remote merged in. Otherwise, fast-forward prevention may prevent the push.
That's probably ok, because the next stage is to automatically detect
incoming pushes and merge.
This *may* now return Add or Delete Changes as appropriate. All I know
for sure is that it compiles.
I had hoped to avoid maintaining my own state about the content of the
directory tree, and rely on git to check what was changed. But I can't;
I need to know about new and deleted subdirectories to add them to the
watch list, and git doesn't deal with (empty) directories.
So, wrote all the code to scan directories, remember their past contents,
compare with current contents, generate appropriate Change events, and
update bookkeeping info appropriately.
Could not reproduce the build failure I had seen related to this,
but the numbers were wrong with statfs64. Probably pulling from the wrong
place in the structure. statvfs seems to work..
The idea, not yet done, is to use this to detect when a file
has an old change time, and avoid expensive restaging of the file.
If git-annex watch keeps track of the last time it finished a full scan,
then any symlink that is older than that time must have been scanned
before, so need not be added. (Relying on moving, copying, etc of a file
all updating its change time.)
Anyway, this info is available for free since inotify already checks it,
so it might as well make it available.
Now really only done in the startup scan.
It turns out to be quite hard for event handlers to know when the startup
scan is complete. I tried to make addWatch pass that info, but found
threading the state very difficult. For now, a quick hack, using the fast
flag.
Note that it's actually possible for inotify events to come in while the
startup scan is still ongoing. Due to my hack, the expensive check will
be done for files added in such inotify events.
This requires a relatively expensive test at file add time to see if it's
in git already. But it can be optimised to only happen during the startup
scan.
When a new file is annexed, a deletion event occurs when it's moved away
to be replaced by a symlink. Most of the time, there is no problematic
race, because the same thread runs the add event as the deletion event.
So, once the symlink is in place, the deletion code won't run at all,
due to existing checks that a deleted file is really gone.
But there is a race at startup, as then the inotify thread is running
at the same time as the main thread, which does the initial tree walking
and annexing. It would be possible for the deletion inotify to run
in a perfect race with the addition, and remove the newly added symlink
from the git cache.
To solve this race, added event serialization via an MVar. We putMVar
before running each event, which blocks if an event is already running.
And when an event finishes (or crashes!), we takeMVar to free the lock.
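In sketch form (the lock starts out empty, from newEmptyMVar):

  import Control.Concurrent.MVar
  import Control.Exception (finally)

  -- putMVar blocks while another event is running; takeMVar frees
  -- the lock even if the handler crashes.
  serialized :: MVar () -> IO () -> IO ()
  serialized lock handler = do
          putMVar lock ()
          handler `finally` takeMVar lock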
Also, make rm -rf not spew warnings by passing --ignore-unmatch when
deleting directories.
This fixes the scenario where:
* directory foo is moved away (and still watched)
* a new directory foo is made
* file (or directory) foo/bar is created
* the old directory's file (or directory) "bar" is deleted
We don't want a deletion event for foo/bar in this case.
There are two reasons for this test. First, there could be a fifo or other
non-regular file that was closed.
Second, this test avoids ugliness when a subdirectory is moved out of the
top of the watch directory to elsewhere, and a file added to it.
Since the subdirectory has moved, the file won't be present under the
old location, and nothing will be done.
I cannot find a way to stop watching such directories, at least not without
a lot of pain. The inotify interface in Haskell doesn't make it easy to
stop watching a given subdirectory when it's moved out; it would require
keeping a map of all watch handles that is shared between threads.
This workaround avoids the problem in most cases; the only remaining case
being deletion of a file from a moved subdirectory.
Improved the inotify code, so it will also notice directory removal
and symlink creation.
In the watch code, optimised away a stat of a file that's being added,
which Command.Add.start would otherwise do. This is the reason symlink
creation is handled separately from file creation: during the initial
tree walk at startup, a stat was already done, and can be reused.
Baked into the code was an assumption that a repository's git directory
could be determined by adding ".git" to its work tree (or nothing for bare
repos). That fails when core.worktree, or GIT_DIR and GIT_WORK_TREE are
used to separate the two.
This was attacked at the type level, by storing the gitdir and worktree
separately, so Nothing for the worktree means a bare repo.
A complication arose because we don't learn whether a repository is bare
until its configuration is read. So another Location type handles
repositories that have not had their config read yet. I am not entirely
happy with this being a Location type, rather than representing them
entirely separate from the Git type. The new code is not worse than the
old, but better types could enforce more safety.
Added support for core.worktree. Overriding it with -c isn't supported
because it's not really clear what to do if a git repo's config is read, is
not bare, and is then overridden to bare. What is the right git directory
in this case? I will worry about this if/when someone has a use case for
overriding core.worktree with -c. (See Git.Config.updateLocation)
Also removed and renamed some functions like gitDir and workTree that
misused git's terminology.
One minor regression is known: git annex add in a bare repository does not
print a nice error message, but runs git ls-files in a way that fails
earlier with a less nice error message. This is because before --work-tree
was always passed to git commands, even in a bare repo, while now it's not.
Among other things, this makes unlocking a WORM backed file and then
re-adding it without making any changes not add a new object, as the
timestamp is preserved.
This option avoids gpg key distribution, at the expense of flexibility, and
with the requirement that all clones of the git repository be equally
trusted.
This is incomplete, it does not honor it yet for hash directories
and other annex bookkeeping files. Some of that is not needed for a bare
repo; some of it may be.
Not only an optimisation. fsck always tried to preventWrite to make sure
file modes are good, and in a shared repo, that will fail on directories
not owned by the current user. Although there may be other problems with
such a setup.
When preparing the debian stable backport, I am seeing a call to statvfs()
succeed, but also set errno to 2 (ENOENT). Not sure why this happens;
I am in a schroot when it does happen, or perhaps stable's libc is a little
broken and sets errno incorrectly. Anyway, it should be perfectly fine to
clear errno after the successful call, rather than before it.
Recursive inotify has beaten me before, with its bad design and races,
but not this time! (I think.) This is able to follow the strongest
filesystem traffic I can throw at it, and robustly notices every file
add and delete. Mostly that's down to Haskell having a quite nice threaded
inotify library (that does its own buffering). A key insight was realizing
that the inotify directory add race could be dealt with by scanning for
files inside newly added directories.
TODO: Add support for freebsd/osx kqueue; see
http://hackage.haskell.org/package/kqueue
Can a git-annex-monitor be far off?
Fix Key directory hash calculation code to behave as it did before version
3.20120227 when a key contains non-ascii.
The hash directories for a given Key are based on its md5sum.
Prior to ghc 7.4, Keys contained raw, undecoded bytes, so the md5sum was
taken of each byte in turn. With the ghc 7.4 filename encoding change,
keys contain decoded unicode characters (possibly with surrogates for
undecodable bytes). This changes the result of the md5sum, since the md5sum
used is pure haskell and supports unicode. And that won't do, as git-annex
will start looking in a different hash directory for the content of a key.
The surrogates are particularly bad, since that's essentially a ghc
implementation detail, so could change again at any time. Also, changing
the locale changes how the bytes are decoded, which can also change
the md5sum.
Symptoms would include things like:
* git annex fsck would complain that no copies existed of a file,
despite its symlink pointing to the content that was locally present
* git annex fix would change the symlink to use the wrong hash
directory.
Only WORM backend is likely to have been affected, since only it tends
to include much filename data (SHA1E could in theory also be affected).
I have not tried to support the hash directories used by git-annex versions
3.20120227 to 3.20120308, so things added with those versions with WORM
will require manual fixups. Sorry for the inconvenience!
This is a straight up pure-code stinker. The relative path calculation
looked for common subdirectories in the two paths, but failed to stop
after the paths diverged. When a later pair of subdirectories were the
same, the resulting relative path was wrong.
Added regression test for this.
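The fix amounts to computing the common prefix of the two paths'
components correctly; in miniature:

  -- Stop at the first differing component; a later coincidental
  -- match must not count as common.
  commonPrefix :: [FilePath] -> [FilePath] -> [FilePath]
  commonPrefix (a:as) (b:bs) | a == b = a : commonPrefix as bs
  commonPrefix _ _ = []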
The code explicitly switches from HEAD to GET for most redirects.
Possibly because someone misread a spec (which does require switching from
POST to GET for 303 redirects). Or possibly because the spec really is that
bad. Upstream bug: https://github.com/haskell/HTTP/issues/24
Since we absolutely don't want to download entire (large) files from
the web when checking that they exist with HEAD, I wrote my own redirect
follower, based closely on the one used by Network.Browser, but without
this misfeature.
Note that Network.Browser checks that the redirect url is a http url
and fails if not. I don't, because I want to not need to change this
code when it gets https support (related: I'm surprised to see it
doesn't support https yet..). The check does not seem security significant;
it doesn't support file:// urls for example. If a http url is redirected
to https, Network.Browser will actually make a http connection again.
This could loop, but only up to 5 times.
If there's no Content-Length, or the key has no size, this check is not
done, but it should happen most of the time, and protect against web
content that has changed.
Under ghc 7.4, this seems to be able to handle all filename encodings
again. Including filename encodings that do not match the LANG setting.
I think this will not work with earlier versions of ghc, it uses some ghc
internals.
Turns out that ghc 7.4 has a special filesystem encoding that it uses when
reading/writing filenames (as FilePaths). This encoding is documented
to allow "arbitrary undecodable bytes to be round-tripped through it".
So, to get FilePaths from eg, git ls-files, set the Handle that is reading
from git to use this encoding. Then things basically just work.
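Setting it up is a one-liner per Handle; a sketch, assuming GHC 7.4's
GHC.IO.Encoding (this is the ghc internal mentioned above, so it's
subject to change):

  import System.IO
  import qualified GHC.IO.Encoding as Encoding

  -- Make a Handle round-trip undecodable bytes in filenames,
  -- instead of crashing on them.
  fileEncoding :: Handle -> IO ()
  fileEncoding h = hSetEncoding h Encoding.fileSystemEncoding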
However, I have not found a way to make Text read using this encoding.
Text really does assume unicode. So I had to switch back to using String
when reading/writing data to git. Which is a pity, because it's some
percent slower, but at least it works.
Note that stdout and stderr also have to be set to this encoding, or
printing out filenames that contain undecodable bytes causes a crash.
IMHO this is a misfeature in ghc: the user can pass you a filename,
which you can readFile, etc, but by default, putStr of that filename
may cause a crash!
Git.CheckAttr gave me special trouble, because the filenames I got back
from git, after feeding them in, had further encoding breakage.
Rather than try to deal with that, I just zip up the input filenames
with the attributes. Which must be returned in the same order queried
for this to work.
Also of note is an apparent GHC bug I worked around in Git.CheckAttr. It
used to forkProcess and feed git from the child process. Unfortunately,
after this forkProcess, accessing the `files` variable from the parent
returns []. Not the value that was passed into the function. This screams
of a bad bug, that's clobbering a variable, but for now I just avoid
forkProcess there to work around it. That forkProcess was itself only added
because of a ghc bug, #624389. I've confirmed that the test case for that
bug doesn't reproduce it with ghc 7.4. So that's ok, except for the new ghc
bug I have not isolated and reported. Why does this simple bit of code
magnet the ghc bugs? :)
Also, the symlink touching code is currently broken, when used on utf-8
filenames in a non-utf-8 locale, or probably on any filename containing
undecodable bytes, and I temporarily commented it out.
I had not realized what a memory leak the lazy state monad could be,
although I have not seen much evidence of actual leaking in git-annex.
However, if running git-annex on a great many files, this could matter.
The additional Utility.State.changeState adds even more strictness,
avoiding a problem I saw in github-backup where repeatedly modifying
state built up a huge pile of thunks.
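The strict modify just forces the new state as it's stored; a minimal
sketch:

  import Control.Monad.State.Strict

  -- Like modify, but forces the new state immediately, so repeated
  -- updates can't build up a pile of thunks.
  changeState :: MonadState s m => (s -> s) -> m ()
  changeState f = do
          s <- get
          put $! f s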
This drops the >>! and >>? with the nice low fixity. IfElse does have
undocumented >>=>>! and >>=>>? operators, but I deem that too fishy.
Anyway, using whenM and unlessM is easier; I sometimes mixed the operators
up.
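For reference, the combinators are tiny; roughly:

  import Control.Monad (when, unless)

  whenM :: Monad m => m Bool -> m () -> m ()
  whenM c a = c >>= flip when a

  unlessM :: Monad m => m Bool -> m () -> m ()
  unlessM c a = c >>= flip unless a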
git-annex FTBFS on s390, mips, powerpc, sparc. That StatFS code is failing
on all of them. At least on s390, the failure appears as:
Just (FileSystemStats {fsStatBlockSize = 4096, fsStatBlockCount = 0,
fsStatByteCount = 0, fsStatBytesFree = 0, fsStatBytesAvailable = 0,
fsStatBytesUsed = 0})
While I don't understand why this is happening, or how to fix it,
bandaid over it by checking for obviously bad values and returning Nothing.
That disables disk free space checking, but at least git-annex will work.
Upstream bug: http://code.google.com/p/xmobar/issues/detail?id=70
Fails with gpg 2. Instead, use a different environment variable.
The clean fix would instead be to add an annex.gpg-options configuration.
But, that would be rather a lot of work and it's unlikely it would be
useful for much else.
Only set --no-tty when GPG_AGENT_INFO is set and batch mode is used.
In the test suite, set GPG_AGENT_INFO to /dev/null to avoid the test suite
relying on /dev/tty.
Dotfiles, and files inside dotdirs are not added by "git annex add" unless
the dotfile or directory is explicitly listed. So "git annex add ." will
add all untracked files in the current directory except for those in
dotdirs.
One reason for this is that it will make git-annex more usable with vcsh,
where you don't want "vcsh big annex add" to check in all the dotfiles
that are already versioned in other repositories.
(If you're using vcsh for repos that contain non-dotfiles, this won't help,
and you'll need to .gitignore such things, but this will cover the common
case.)
A more general reason why this seems like a good idea is the same reason ls
ignores dotfiles, just the unix convention that they are cruft that is kept
out of the way most of the time.
All the other git-annex commands still do deal with any dotfiles that do
get into the annex. This seemed right because if I've gone to the trouble
to add a dotfile, I will want "git annex get ." to get it along with
everything else.
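The dotfile test is roughly this (illustrative, not the exact code):

  import System.FilePath (splitDirectories)
  import Data.List (isPrefixOf)

  -- True when some component of the path is a dotfile or dotdir
  -- (ignoring "." and ".." themselves).
  dotfile :: FilePath -> Bool
  dotfile = any dotted . splitDirectories
    where
      dotted c = "." `isPrefixOf` c && c /= "." && c /= ".."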
I was happily able to repurpose some code from Git.Filename to handle this.
I remember writing that code... a whole afternoon at a coffee shop, after
which I felt I'd struggled with Haskell and git, and sorta lost, in needing
to write this nasty piece of code. But was also pleased at the use of a
pair of functions and quickcheck that allowed me to get it 100% right.
So, turns out I not only got it right, but the code wasn't as special-purpose
as I'd feared. Yay!
This is built for speed; a format string is parsed once, generating a
Format, that can be applied repeatedly to different sets of variables
to generate output.
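In miniature (the real Format is richer than this sketch):

  import qualified Data.Map as M
  import Data.Maybe (fromMaybe)

  data Frag = Literal String | Var String
  type Format = [Frag]

  -- Parse once to a Format; applying it to a set of variables is
  -- then just concatenation.
  format :: Format -> M.Map String String -> String
  format f vars = concatMap expand f
    where
      expand (Literal s) = s
      expand (Var v) = fromMaybe "" (M.lookup v vars)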
The only fully supported thing is to have the main repository on one disk,
and .git/annex on another. Only commands that move data in/out of the annex
will need to copy it across devices.
There is only partial support for putting arbitrary subdirectories of
.git/annex on different devices. For one thing, this can require more
copies to be done. For example, when .git/annex/tmp is on one device, and
.git/annex/journal on another, every journal write involves a call to
mv(1). Also, there are a few places that make hard links between various
subdirectories of .git/annex with createLink, that are not handled.
In the common case without cross-device, the new moveFile is actually
faster than renameFile, avoiding an unnecessary stat to check that a file
(not a directory) is being moved. Of course if a cross-device move is
needed, it is as slow as mv(1) of the data.
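The strategy in sketch form (using the raw rename syscall, which skips
renameFile's extra stat; simplified, not the exact code):

  import System.Posix.Files (rename)
  import System.Process (rawSystem)
  import System.Exit (ExitCode(..))
  import Control.Exception (try, SomeException)
  import Control.Monad (unless)

  -- Fast rename when src and dest are on the same device; fall
  -- back to mv(1) for a cross-device move.
  moveFile :: FilePath -> FilePath -> IO ()
  moveFile src dest = do
          r <- try (rename src dest) :: IO (Either SomeException ())
          case r of
                  Right () -> return ()
                  Left _ -> do
                          code <- rawSystem "mv" ["-f", "--", src, dest]
                          unless (code == ExitSuccess) $ error "mv failed"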
This is actually tricky; 45bbf210a1 added
the escaping because it's needed for rsync that does go over ssh.
So I had to detect whether the remote's rsync url will use ssh or not,
and vary the escaping.
* git-annex now asks git-annex-shell to verify that it's operating in
the expected repository.
* Note that this git-annex will not interoperate with remotes using
older versions of git-annex-shell.
The reason for this check is to avoid git-annex getting confused about
what remote repository actually contains a value. It's a prerequisite for
supporting git insteadOf aliases.
find: Rather than only showing files whose contents are present, when used
with --exclude --copies or --in, displays all files that match the
specified conditions.
Note that this is a behavior change for find --exclude! Old behavior
can be gotten with find --in . --exclude=...
This includes a generic JSONStream library built on top of Text.JSON
(somewhat hackishly).
It would be possible to stream out a single json document describing
all actions, but it's probably better for consumers if they can expect
one json document per line, so I did it that way instead.
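With Text.JSON that's about this simple (a sketch; emitJSONLine is an
illustrative name):

  import Text.JSON

  -- One JSON object per line, so consumers can parse the output
  -- incrementally.
  emitJSONLine :: [(String, JSValue)] -> IO ()
  emitJSONLine = putStrLn . encode . toJSObject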
Output from external programs used for transferring files is not
currently hidden when outputting json, which probably makes it not very
useful there. This may be dealt with if there is demand for json
output for --get or --move to be parsable.
The version, status, and find subcommands have hand-crafted output and
don't do json. The whereis subcommand needs to be modified to produce
useful json.