This avoids forking another process, avoids polling, fixes a race,
and avoids a rare forkProcess thread hang that I saw one time
when starting the webapp.
Make Utility.Process wrap the parts of System.Process that I use,
and add debug logging to them.
Also wrote some higher-level code that allows running an action
with handles to a process's stdin or stdout (or both), and checking
its exit status, all in a single function call.
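
A minimal sketch of that higher-level interface, built on System.Process
(the names here are illustrative, not the real Utility.Process API):

    import System.Process
    import System.IO
    import System.Exit

    -- Run a command, hand the action a handle to the command's stdin,
    -- then close the handle and check the exit status, in one call.
    withFeedHandle :: FilePath -> [String] -> (Handle -> IO a) -> IO (a, Bool)
    withFeedHandle cmd args action = do
        (Just hin, _, _, pid) <- createProcess (proc cmd args)
            { std_in = CreatePipe }
        r <- action hin
        hClose hin
        code <- waitForProcess pid
        return (r, code == ExitSuccess)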
As a bonus, the debug logging now indicates whether the process
is being run to read from it, feed it data, chat with it (writing and
reading), or just call it for its side effect.
Test suite now passes with -threaded!
I traced back all the hangs with -threaded to System.Cmd.Utils. It seems
it's just crappy/unsafe/outdated, and should not be used. System.Process
seems to be the cool new thing, so converted all the code to use it
instead.
In the process, --debug stopped printing commands it runs. I may try to
bring that back later.
Note that even SafeSystem was switched to use System.Process. Since that
was a modified version of code from System.Cmd.Utils, it needed to be
converted too. I also got rid of nearly all calls to forkProcess,
and all calls to executeFile, which I also doubt works well
with -threaded.
This *almost* works.
Along the way, I noticed that the --uuid parameter was being accidentally
passed after the --, so that has never been actually used by
git-annex-shell to verify it's running in the expected repository. Oops. Fixed.
Not yet tested, and the places where git-annex-shell is run need to be
modified to pass the new field settings.
Note that rsyncServerSend was changed to fork, rather than directly exec
rsync, because it needs to keep the transfer lock held, and clean up the
transfer log when done.
In order to record a semi-useful filename associated with the key,
this required plumbing the filename all the way through to the remotes'
storeKey and retrieveKeyFile.
Note that there is potential for deadlock here, narrowly avoided.
Suppose the repos are A and B. A sends file foo to B, and at the same
time, B gets file foo from A. So, A locks its upload transfer info file,
and then locks B's download transfer info file. At the same time,
B is taking the two locks in the opposite order. This is only not a
deadlock because the lock code does not wait, and aborts. So one of A or
B's transfers will be aborted and the other transfer will continue.
Whew!
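
For illustration, a sketch of a non-blocking lock attempt using POSIX
fcntl locking; the function name and details are my assumptions about
the shape of the transfer lock code, not the code itself:

    import System.IO (SeekMode(AbsoluteSeek))
    import System.Posix.IO
    import System.Posix.Types (Fd)
    import Control.Exception (try, IOException)

    -- setLock is the non-blocking F_SETLK, so a contended lock makes
    -- the transfer abort immediately rather than wait and deadlock.
    tryLockTransfer :: FilePath -> IO (Maybe Fd)
    tryLockTransfer f = do
        fd <- openFd f ReadWrite (Just 0o644) defaultFileFlags
        r <- try $ setLock fd (WriteLock, AbsoluteSeek, 0, 0)
        case r :: Either IOException () of
            Left _ -> closeFd fd >> return Nothing -- held; abort
            Right () -> return (Just fd)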
Note this is per-remote, so trying to get the same file from multiple
remotes can still let duplicate downloads run. (And uploading the same file
to multiple remotes is not duplication at all, of course.)
get, move, and copy are the only git-annex subcommands that transfer
files, but there's still git-annex-shell recvkey and sendkey to deal with too.
I considered modifying retrieveKeyFile or getViaTmp, but they are called
by other code that does not involve expensive file transfers (migrate)
or that does file transfers that should not be checked by this (fsck --from).
While this word may be less familiar to some users, it avoids the
connotation that version 2 is better than version 1, which is wrong
when the two variants were conflicting.
Note that, since this always pushes branch synced/master to the remote, it
assumes that master has already gotten all the commits that are on the
remote merged in. Otherwise, fast-forward prevention may prevent the push.
That's probably ok, because the next stage is to automatically detect
incoming pushes and merge.
Kqueue needs to remember which files failed to be added due to being open,
and retry them. This commit gets the data in place for such a retry thread.
Broke KeySource out into its own file, and added Eq and Ord instances
so it can be stored in a Set.
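
Roughly this shape (the field names are assumptions):

    -- The file being added, and the location of its content, which
    -- may differ while the add is still in progress.
    data KeySource = KeySource
        { keyFilename :: FilePath
        , contentLocation :: FilePath
        }
        deriving (Eq, Ord, Show)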
Was decoding the git-cat-file of the symlink target as utf8, but that won't
do; unix filenames are from the 70's and need this shiny disco
fileSystemEncoding.
Now it starts really, really fast! Down from 15 minutes or so on my big
tree to around 1 minute.
The trick is to remember the last time the daemon was running. Links with a
ctime from before that point don't need to be restaged on startup (as long
as they are correct), since the old daemon would have handled them already.
We also assume that if the daemon has never run before, any links that
already exist are good. The pre-commit hook fixes links, so this should be
a safe assumption.
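
The check itself is cheap. A sketch, with assumed names:

    import System.Posix.Files
    import System.Posix.Types (EpochTime)

    -- A symlink whose ctime predates the last time the daemon ran
    -- must already have been handled, and can be skipped.
    needsRestaging :: EpochTime -> FilePath -> IO Bool
    needsRestaging lastrunning f = do
        s <- getSymbolicLinkStatus f
        return (statusChangeTime s >= lastrunning)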
Adds another MVar holding a DaemonStatus data structure. Also
allowed getting rid of the Annex.Fast hack. This data structure will
probably grow a lot of details about the daemon's status, that will
later be used by the webapp's UI.
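
Something along these lines, where the fields are my assumptions and
will surely grow:

    import Control.Concurrent.MVar
    import System.Posix.Types (EpochTime)

    data DaemonStatus = DaemonStatus
        { scanComplete :: Bool           -- has the startup scan finished?
        , lastRunning :: Maybe EpochTime -- when the daemon last ran
        }

    type DaemonStatusHandle = MVar DaemonStatus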
The code to actually track when the daemon was last running is not written
yet. It's 3 am.
The idea, not yet done, is to use this to detect when a file
has an old change time, and avoid expensive restaging of the file.
If git-annex watch keeps track of the last time it finished a full scan,
then any symlink that is older than that time must have been scanned
before, so need not be added. (This relies on moving, copying, etc. of a file
all updating its change time.)
Anyway, this info is available for free since inotify already checks it,
so it might as well make it available.
Now really only done in the startup scan.
It turns out to be quite hard for event handlers to know when the startup
scan is complete. I tried to make addWatch pass that info, but found
threading the state very difficult. For now, a quick hack, using the fast
flag.
Note that it's actually possible for inotify events to come in while the
startup scan is still ongoing. Due to my hack, the expensive check will
be done for files added in such inotify events.
This requires a relatively expensive test at file add time to see if it's
in git already. But it can be optimised to only happen during the startup
scan.
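
A sketch of such a test (illustrative; the real code presumably goes
through git-annex's own git plumbing wrappers):

    import System.Process

    -- Ask git whether the path is already in the index.
    inGit :: FilePath -> IO Bool
    inGit f = do
        out <- readProcess "git" ["ls-files", "--", f] ""
        return (not (null out))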
I thought this might be a lock conflict that explains the deadlock when
built with -threaded, but it seems not.. it still locks! It even locks
without the committer thread.
Indeed, it locks when running "git annex add"! -threaded is exposing some
other problem.
Still, this seems conceptually cleaner and did not add any inefficiencies.
Also added some high-level documentation about the threads used.
The commit thread now has access to a channel containing the times of
all uncommitted changes. This lets it be smart about detecting busy times
when a batch job is running (such as rm -rf, or untarring something, etc),
and avoid committing until it's done. While at the same time, instantly
committing one-off changes that the user is going to expect to see
immediately.
I had to use STM to implement the channel, because of
http://hackage.haskell.org/trac/ghc/ticket/4154
While this adds a dependency, I always wanted to use STM, so this actually
makes me happy. ;)
Also happy that shouldCommit is a pure function, so other commit smartness
strategies can easily be played with. Although the current one seems pretty
good.
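
A sketch of the shape of such a function; the heuristic shown here is
illustrative, not necessarily the one used:

    import Data.Time.Clock

    shouldCommit :: UTCTime -> [UTCTime] -> Bool
    shouldCommit now changetimes = case changetimes of
        [] -> False
        [_] -> True -- a one-off change; commit immediately
        ts -> all (\t -> now `diffUTCTime` t > 1) ts -- batch has settled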
There is one bug: for some reason it does double commits, every time.
While I was in there, I noticed and fixed a bug in the queue size
calculations. It was never encountered only because Queue.add was
only ever run with 1 file in the list.
This ensures that all special remotes show up in git annex status.
Before, a special remote that was not manually described, and was not
a current git remote, did not show up there, although initremote did list
it.
There's a race adding a new file to the annex: The file is moved to the
annex and replaced with a symlink, and then we git add the symlink. If
someone comes along in the meantime and replaces the symlink with
something else, such as a new large file, we add that instead. Which could
be bad..
This race is fixed by avoiding using git add, instead the symlink is
directly staged into the index.
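
A sketch of the technique (illustrative; the real code uses git-annex's
internal git interface rather than shelling out like this):

    import System.Process

    -- Hash the known symlink target ourselves and stage it directly,
    -- so whatever is now sitting at the path in the work tree is
    -- never consulted. 120000 is the symlink mode in the index.
    stageSymlink :: FilePath -> String -> IO ()
    stageSymlink file target = do
        sha <- readProcess "git" ["hash-object", "-w", "--stdin"] target
        callProcess "git"
            ["update-index", "--cacheinfo", "120000", init sha, file]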
It would be nice to make `git annex add` use this same technique.
I have not done so yet because it currently runs git update-index once per
file, which would slow down `git annex add`. A future enhancement would be
to extend the Git.Queue to include the ability to run update-index with
a list of Streamers.
Anything that tries to open the file for write, or delete the file,
or replace it with something else, will not affect the add.
Only if a process has the file open for write before add starts
can it still change it while (or after) it's added to the annex.
(fsck will catch this later of course)
Uses an MVar again, as there seems no other way to thread the state through
inotify events.
This is a rather unsatisfactory result. I had wanted to run them in
the same monad so that the git queue could be used to coalesce git commands
and speed things up. But, that led to fragility: If several files are
added, and one is removed before queue flush, git add will fail to add
any of them. So, the queue is still explicitly flushed after each add for
now.
TODO: Investigate using git add --ignore-errors. This would need to be done
in Command.Add. And, git add still exits nonzero with it, so would need
to avoid crashing on queue flush.
When a new file is annexed, a deletion event occurs when it's moved away
to be replaced by a symlink. Most of the time, there is no problematic
race, because the same thread runs the add event as the deletion event.
So, once the symlink is in place, the deletion code won't run at all,
due to existing checks that a deleted file is really gone.
But there is a race at startup, as then the inotify thread is running
at the same time as the main thread, which does the initial tree walking
and annexing. It would be possible for the deletion inotify to run
in a perfect race with the addition, and remove the newly added symlink
from the git cache.
To solve this race, added event serialization via an MVar. We putMVar
before running each event, which blocks if an event is already running.
And when an event finishes (or crashes!), we takeMVar to free the lock.
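
In sketch form:

    import Control.Concurrent.MVar
    import Control.Exception (bracket_)

    -- The MVar starts empty; putMVar blocks while another event holds
    -- the lock, and bracket_ ensures takeMVar frees it even on crash.
    runSerialized :: MVar () -> IO a -> IO a
    runSerialized lock = bracket_ (putMVar lock ()) (takeMVar lock)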
Also, make rm -rf not spew warnings by passing --ignore-unmatch when
deleting directories.
And just like that, annexed files can be moved and copied around within
the tree, and are automatically fixed to point to the content, and staged
in git. Huzzah!
Delete still remains TODO, with its troublesome race during add..
Improved the inotify code, so it will also notice directory removal
and symlink creation.
In the watch code, optimised away a stat of a file that's being added,
that's done by Command.Add.start. This is the reason symlink creation is
handled separately from file creation, since during initial tree walk
at startup, a stat was already done, and can be reused.
Resetting an unlocked file to the branch head failed if it had just been
added, not committed, and unlocked, since the branch didn't have it.
The code was concerned about dropping any changes that might be staged in the
index, but I cannot see why.
Baked into the code was an assumption that a repository's git directory
could be determined by adding ".git" to its work tree (or nothing for bare
repos). That fails when core.worktree, or GIT_DIR and GIT_WORK_TREE are
used to separate the two.
This was attacked at the type level, by storing the gitdir and worktree
separately, so Nothing for the worktree means a bare repo.
A complication arose because we don't learn whether a repository is bare
until its configuration is read. So another Location type handles
repositories that have not had their config read yet. I am not entirely
happy with this being a Location type, rather than representing them
entirely separate from the Git type. The new code is not worse than the
old, but better types could enforce more safety.
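
Roughly (the constructor and field names here are assumptions):

    -- worktree is Nothing for a bare repo; LocalUnknown is a repo
    -- whose config has not been read yet, so its layout is unknown.
    data Location
        = Local { gitdir :: FilePath, worktree :: Maybe FilePath }
        | LocalUnknown FilePath
        | Url String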
Added support for core.worktree. Overriding it with -c isn't supported
because it's not really clear what to do if a git repo's config is read, is
not bare, and is then overridden to bare. What is the right git directory
in this case? I will worry about this if/when someone has a use case for
overriding core.worktree with -c. (See Git.Config.updateLocation)
Also removed and renamed some functions like gitDir and workTree that
misused git's terminology.
One minor regression is known: git annex add in a bare repository does not
print a nice error message, but runs git ls-files in a way that fails
earlier with a less nice error message. This is because before --work-tree
was always passed to git commands, even in a bare repo, while now it's not.
annex.ssh-options, annex.rsync-options, annex.bup-split-options.
And adjust types to avoid the bugs that broke several config settings
recently. Now "annex." prefixing is enforced at the type level.
This is incomplete, it does not honor it yet for hash directories
and other annex bookkeeping files. Some of that is not needed for a bare
repo; some of it may be.
So far this only handles auto-annexing new files that are created inside
the repository while it's running. To make this really useful,
it needs to at least:
- notice deleted files and stage the deletion
(tricky; there's a race with add..)
- notice renamed files, auto-fix the symlink, and stage the new file location
- periodically auto-commit staged changes
- honor .gitignore, not adding files it excludes
Also nice to have would be:
- Somehow sync remotes, possibly using a push sync like dvcs-autosync
does, so they are immediately updated.
- Somehow get content that is unavailable. This is problematic with inotify,
since we only get an event once the user has tried (and failed) to read
from the file. Perhaps instead, automatically copy content that is added
out to remotes, with the goal of all repos eventually getting a copy,
if df allows.
- Drop files that have not been used lately, or meet some other criteria
(as long as there's a copy elsewhere).
- Perhaps automatically dropunused files that have been deleted,
although I cannot see a way to do that, since by the time the inotify
deletion event arrives, the file is deleted, and we cannot see what
its symlink pointed to! Alternatively, perhaps automatically
do an expensive unused/dropunused cleanup process.
Some of this probably needs the currently stateless threads to maintain
a common state.
getConfig got a remote-specific config, and this confusing name caused it
to be used a couple of places that only were interested in global configs.
Rename to getRemoteConfig and make getConfig only get global configs.
There are no behavior changes here, but remote.<name>.annex-web-options
never actually worked (and per-remote web options seemed too unlikely to be
useful for me to bother making it work), so fix the documentation for it.
Add tuning, docs, etc.
Not sure if status is the right place to report size.. perhaps unused
should report the size and also warn if it sees more keys than the bloom
filter allows?
Can't trust the key size to be accurate for tmp and bad keys, so check
actual file size. In the wild I saw the old code be wrong by a factor
of about 100!
If all tmp/bad keys are empty, they're not shown in status at all.
Showing 0 bytes and suggesting to clean it up seemed weird..
Stale and bad files are rare, so it's more efficient to use inAnnex to see
if they can be deleted, rather than keeping the list of all present keys
around for them.
Before, it leaked space due to caching lists of keys. Now all necessary
data about keys is calculated as they stream in.
The "nearly constant" is due to getKeysPresent, which builds up a lot
of [] thunks as it traverses .git/annex/objects/. Will deal with it later.
Much of the memory bloat turned out to be due to getKeysReferenced
containing a mapM, which is strict and buffered the whole list
rather than streaming it.
The other half of the bloat was due to building a temporary Set
in order to call S.difference. While that is more cpu efficient,
I switched to successive S.delete, since with it, I can run a whole
git annex unused in less than 8 mb of memory.
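
The change, in sketch form:

    import Data.List (foldl')
    import qualified Data.Set as S

    -- Rather than building a second Set of referenced keys just to
    -- call S.difference, delete each referenced key from the Set of
    -- present keys as it streams by.
    excludeReferenced :: Ord k => S.Set k -> [k] -> S.Set k
    excludeReferenced = foldl' (flip S.delete)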
The whole Set of keys with content available is still stored in memory,
so running unused in a repo with a whole lot of file content will still
use more memory. In a repo containing 6000 files, it needed 40 mb.
Note that the status command still uses the bloatful getKeysReferenced.