First, this ensures that git annex addurl, when run repeatedly with the
same url, doesn't create duplicate files, which it did before when it
fell back to the longer filename.
Secondly, the file part of an url is frequently not very descriptive on its
own.
The uri scheme, auth, and port are intentionally left out, as clutter.
This includes a generic JSONStream library built on top of Text.JSON
(somewhat hackishly).
It would be possible to stream out a single json document describing
all actions, but it's probably better for consumers if they can expect
one json document per line, so I did it that way instead.
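For example, a consumer can parse the stream a line at a time. A minimal
sketch using Text.JSON (parseLines is my name for it, not part of
git-annex):

    import Text.JSON

    -- each line of output is a complete json document, so it can be
    -- parsed on its own
    parseLines :: String -> [Result JSValue]
    parseLines = map decode . lines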
Output from external programs used for transferring files is not
currently hidden when outputting json, which probably makes it not very
useful there. This may be dealt with if there is demand for json
output for --get or --move to be parsable.
The version, status, and find subcommands have hand-crafted output and
don't do json. The whereis subcommand needs to be modified to produce
useful json.
Using a single strictness annotation, in just the right place.
Tried several others, none of which helped and some of which potentially
hurt. This is only the second time I've really had to deal with this in
a year of using haskell, which is, I suppose, not that bad.
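For illustration only (this is not the actual git-annex code), the kind
of fix involved: a single bang on an accumulator keeps thunks from
piling up.

    {-# LANGUAGE BangPatterns #-}

    -- without the bang on acc, this builds a long chain of (+) thunks
    sumLengths :: [String] -> Int
    sumLengths = go 0
      where
        go !acc [] = acc
        go !acc (x:xs) = go (acc + length x) xs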
Statting files returned by dirContents to see if they exist and are regular
files seems pretty useless. This code was originally part of fsck, and
perhaps the idea then was to avoid things returned by dirContents that were
not files. But it's certainly not needed in the current use cases for
getKeysPresent.
when a git repository is first being created. Clones will automatically
notice that git-annex is in use and automatically perform a basic
initialization. It's still recommended to run "git annex init" in any
clones, to describe them.
The tricky part about this is that to generate a key, the file must be
present already. Worked around by adding (back) an URL key type, which
is used for addurl --fast.
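A sketch of the idea (the Key type here is simplified and hypothetical):

    -- a key in the "URL" backend is derived from the url alone, so it
    -- can be constructed before any content is downloaded
    data Key = Key { keyBackend :: String, keyName :: String }

    urlKey :: String -> Key
    urlKey url = Key { keyBackend = "URL", keyName = url }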
This was more complex than would be expected. unannex has to use git commit -a
since it's removing files from git; git commit filelist won't do.
Allow commands to be added to the Git queue that have no associated files,
and run such commands once.
This allows eg, `git-annex -c annex.rsync-options=-6 get file`
The overridden git configs are not passed on to git plumbing commands
that are run. Perhaps someone will find a need to do that, but I don't yet
and it would require storing more state to know what config settings
have been overridden and need to be passed on.
Could result in bad location log data for keys that contain [&:%] in their
names. (A workaround for this problem is to run git annex fsck.)
`git annex unused --from remote` could also run into the broken code.
Giulio Eulisse reported that on OSX, bad free space numbers were being
shown. It thought he had negative free space.
While the documentation is not clear, especially across OS's, it seems
likely that statfs uses unsigned long. It doesn't make sense for any
numbers to be negative.
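To illustrate what goes wrong (a sketch only; the 64 bit width is an
assumption for the example):

    -- a "negative" count read through a signed C long is really an
    -- unsigned value that wrapped around; reinterpreting recovers it
    asUnsigned :: Integer -> Integer
    asUnsigned n
            | n < 0 = n + 2 ^ (64 :: Integer)
            | otherwise = n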
This is substantially slower than using make, does not build or install
documentation, does not run the test suite, and is not particularly
recommended, but could be useful to some.
Rebenchmarked v2 vs v3, and v3 is now actually faster. Yes, storing data
in git, using git as a filesystem is actually faster than just using the
filesystem. If you do it just right. :)
All commands that often have to read a lot of information from
the git-annex branch should now be nearly as fast as before
the branch was introduced.
Before, fsck was taking approximately 3 hours; now it runs in 8 minutes.
The code is very nasty. It should be rewritten to read the header line
from git cat-file, and then read the specified number of bytes of content.
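A sketch of that rewrite, reading from a `git cat-file --batch` pipe
(error handling elided):

    import System.IO
    import qualified Data.ByteString as B

    -- parse the "<sha> <type> <size>" header line, read exactly size
    -- bytes of content, then the newline that follows the content
    readObject :: Handle -> IO (Maybe B.ByteString)
    readObject h = do
            header <- hGetLine h
            case words header of
                    [_sha, _type, size] -> do
                            content <- B.hGet h (read size)
                            _ <- hGetLine h
                            return (Just content)
                    _ -> return Nothing -- eg, "<object> missing"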
Since the logs have just been moved into the git-annex branch, don't need
to worry about backwards compatibility with old versions of git-annex that
would fail to parse location logs with extra fields tacked on.
This is a new git subcommand, that does a generic union merge operation
between two refs, storing the result in a branch. It operates efficiently
without touching the working tree. It does need to write out a temporary
index file, and may need to write out some other temp files as well.
This could be useful for anything that stores data in a branch,
and needs to merge changes into that branch without actually checking the
branch out. Since conflict handling can't be done without a working copy,
the merge type is always a union merge, which is fine for data stored in
log format (as git-annex does), or in non-conflicting files
(as pristine-tar does).
This probably belongs in git proper, but it will live in git-annex for now.
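The merge semantics themselves are simple. Roughly (illustrative, not
the actual implementation):

    import Data.List (nub)

    -- a union merge of line-based files keeps every line from both
    -- sides, collapsing duplicates
    unionMergeFile :: String -> String -> String
    unionMergeFile a b = unlines $ nub $ lines a ++ lines b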
---
Plan is to move .git-annex/ to a git-annex branch, and use git-union-merge
to handle merging changes when pulling from remotes.
Some preliminary benchmarking using real .git-annex/ data indicates
that it's quite fast, except for the "git add" call, which is as slow
as "git add" tends to be with a big index.
cp is still used when copying files from repos on the same filesystem, since
--reflink=auto can make it significantly faster on filesystems such as
btrfs.
Directory special remotes still use cp, not rsync. It's not clear what
tmp file should be used when rsyncing to such a remote.
get not honoring --from has surprised me a few times, so least surprise
suggests it should just behave like copy --from. This leaves the difference
between get and copy being that copy always requires the remote to copy
from, while get will decide whether to get a file from a key/value store or
a remote.
Avoid git reset here too, so I no longer need to care that it's much more
expensive than seems wise (but I asked the git list about that anyway).
It's not necessary to reset the staged file content from the index, as
the `git add` of the symlink will replace it anyway.
`git commit` of unlocked files is still slow, since git still has to shove
their entire content into the index, only to have it be thrown away. So it's
still better to use `git annex add`.
This takes advantage of the debug logging done by missingh, and I added
my own debug messages for executeFile calls. There are still some other
low-level ways git-annex runs stuff that are not shown by debugging,
but this gets most of it easily.
In particular, munge key filenames to comply with the IA's filename limits,
disable encryption, support their nonstandard way of creating buckets, and
allow x-amz-* headers to be specified in initremote to set item metadata.
Still TODO: initremote does not handle multiword metadata headers right.
Releasing before I have quite finished the code. Got a little caught
up in Anathem references. Time for a walk and then a tiny bit more coding
and possibly testing.
When it's stalled, there are 3 processes:
    git annex
    git ls-files
    git check-attr
git-annex stalls trying to write to git check-attr, which stalls trying to
write to stdout (read by git-annex).
git ls-files does not seem to be involved directly; I've seen the stall when
it was still streaming out the file list, and after it had exited and
zombified.
The read and write are supposed to be handled by two different threads,
which pipeBoth forks off, thus avoiding deadlock. But it does deadlock.
(Certain signals unblock the deadlock for a while, then it stalls again.)
So, this is another case of WTF is the ghc IO manager doing today?
I avoid the issue by converting the writer to a separate process.
Possibly this was caused by some change in ghc 7 -- I'm offline and cannot
verify now, but I'm sure I used to be able to run git annex drop w/o it
hanging! And the code does not seem to have changed, except for commit
c1dc407941, which I tried reverting without
success. In fact, I reverted all the way back to 0.20110316 and still
saw the stall.
Update: Minimal test case:
    import System.Cmd.Utils

    main = do
            as <- checkAttr "blah" $ map show [1..100000]
            sequence $ map (putStrLn . show) as

    checkAttr attr files = do
            (_, s) <- pipeBoth "git" params $ unlines files
            return $ lines s
            where
                    params = ["check-attr", attr, "--stdin"]
Bug filed on ghc in debian, #624389
Fully tested and working, including resuming and encryption. (Though not
resuming when sending *with* encryption; gpg doesn't produce identical
output each time.)
Uses same layout as the directory special remote and the .git/annex/objects/
directory.
This was a real PITA to fix, since location logs can be staged both in
the current repo and in a local remote's repo, in which case the cwd
will not be in that repo. And git add needs different params in the two
cases, when absolute paths are not used.
In passing, git annex fsck now stages location log fixes.
The test suite will not be run if it cannot be compiled.
It may be possible later to split off the quickcheck using tests into
a separate program and keep most of the tests using just hunit.
The remaining leaks are in hS3. The leak with encryption was worked around
by the use of the temp file. (And was probably originally caused by
gpgCipherHandle sparking a thread which kept a reference to the start
of the byte string.)
* Update Debian build dependencies for ghc 7.
* Debian package is now built with S3 support. Thanks Joachim Breitner for
making this possible, also thanks Greg Heartsfield for working to improve
the hS3 library for git-annex.
Also hid a conflicting new symbol from Control.Monad.State
This was a most surprising leak. It occurred in the process that is forked
off to feed data to gpg. That process was passed a lazy ByteString of
input, and ghc seemed to not GC the ByteString as it was lazily read
and consumed, so memory slowly leaked as the file was read and passed
through gpg to bup.
To fix it, I simply changed the feeder to take an IO action that returns
the lazy bytestring, and fed the result directly to hPut.
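Roughly (names illustrative):

    import qualified Data.ByteString.Lazy as L
    import System.IO (Handle)

    -- before: feeder :: L.ByteString -> Handle -> IO ()
    -- after: the forked process runs the action itself, so nothing
    -- else retains a reference to the head of the bytestring
    feeder :: IO L.ByteString -> Handle -> IO ()
    feeder getcontent h = getcontent >>= L.hPut h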
AFAICS, this should change nothing WRT buffering. But somehow it makes
ghc's GC do the right thing. Probably I triggered some weakness in ghc's
GC (version 6.12.1).
(Note that S3 still has this leak, and others too. Fixing it will involve
another dance with the type system.)
Update: One theory I have is that this has something to do with
the forking of the feeder process. Perhaps, when the ByteString
is produced before the fork, ghc decides it needs to hold a pointer
to the start of it, for some reason -- maybe it doesn't realize that
it is only used in the forked process.
Stalls were caused by code that did approximately:

    content' <- liftIO $ withEncryptedContent cipher content return
    store content'
The return evaluated without actually reading content from S3,
and so the cleanup code began waiting on gpg to exit before
gpg could send all its data.
Fixing it involved moving the `store` type action into the IO monad:
    liftIO $ withEncryptedContent cipher content store
Which was a bit of a pain to do, thank you type system, but
avoids the problem as now the whole content is consumed, and
stored, before cleanup.
Since the queue is flushed in between subcommand actions being run,
there should be no issues with actions that expect to queue up some stuff
and have it run after they do other stuff. So I didn't have to audit for
such assumptions.
to avoid some issues with git on OSX with the mixed-case directories. No
migration is needed; the old mixed case hash directories are still read;
new information is written to the new directories.
So, it would be nicer to just use Cabal and take advantage
of its conditional compilation support. But, Cabal seems to
lack good support for a package with an internal library that is used by
multiple executables. It wants to build everything twice or more.
That's too slow for me.
Anyway, fairly soon, I expect to upgrade hS3 to a requirement, and I
can just revert this.
For example, this could happen if using SHA1 and a file with content
"foo" were added to that backend. Then a file with "content" foo were
migrated from the WORM backend.
Assume that, if a backend assigned the same key, the already annexed
content must be the same. So, the "old" content can be reused.
The space leak was somehow caused by this line:
absfiles <- mapM absPath files
I confess, I don't quite understand why this caused bad buffering,
but apparently the whole pipeline from git-ls-files backed up at that
point.
Happily, rewriting the code to only get the cwd once and use a pure
function to calculate absfiles clears it up, and should be a little more
efficient in syscalls too.
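Roughly, the rewrite (absPathFrom is an illustrative pure helper, not
necessarily the real one):

    import System.Directory (getCurrentDirectory)

    -- one syscall for the cwd, then a pure calculation per file
    absPathFrom :: FilePath -> FilePath -> FilePath
    absPathFrom cwd file
            | take 1 file == "/" = file
            | otherwise = cwd ++ "/" ++ file

    absolutize :: [FilePath] -> IO [FilePath]
    absolutize files = do
            cwd <- getCurrentDirectory
            return $ map (absPathFrom cwd) files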
* Add --fast flag, that can enable less expensive, but also less thorough
versions of some commands.
* fsck: In fast mode, avoid checking checksums.
* unused: In fast mode, just show all existing temp files as unused,
and avoid expensive scan for other unused content.
Free space checking is now done, for transfers of data for keys that have file size metadata.
(Notably, not for SHA* keys generated with git-annex 0.24 or earlier.)
The code is believed to work on Linux, FreeBSD, and OSX; check compile-time
messages to see if it is not enabled for your OS.
When adding files to the annex, the symlinks pointing at the annexed
content are made to have the same mtime as the original file. While git
does not preserve that information, this allows a tool like metastore to be
used with annexed files.
Haskell's IO layer crashes on characters > 255 when in a non-unicode (latin1)
locale. Until Haskell gets better behavior, put in an admittedly ugly
workaround for that: git-annex forces utf8 output mode no matter what
locale is selected. So if you use a non-utf8 locale, your filenames with
characters > 127 will not be displayed as you'd expect. But at least it
won't crash.
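The workaround amounts to something like this sketch:

    import System.IO

    -- force utf8 output handles, no matter what the locale says
    forceUtf8Output :: IO ()
    forceUtf8Output = mapM_ (`hSetEncoding` utf8) [stdout, stderr]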
* Look for dir.git directories the same as git does.
* Support remote urls specified as relative paths.
* Support non-ssh remote paths that contain tilde expansions.
When I wrote the test suite, I had not taken into account that the code
was written to run git and leave zombies, for performance/laziness
reasons. So rather than the typical 1 zombie process that git-annex
develops, the test suite developed dozens. This caused problems on
systems with low process limits.
Added a reap function to GitRepo, that waits for any zombie child processes.
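A sketch of such a reap (details are assumptions; getAnyProcessStatus
throws when there are no child processes at all):

    import System.Posix.Process (getAnyProcessStatus)
    import qualified Control.Exception as E

    -- non-blockingly collect any child that has exited; stop when no
    -- exited children remain, or when waiting throws (no children)
    reap :: IO ()
    reap = go `E.catch` ignore
      where
            ignore :: E.SomeException -> IO ()
            ignore _ = return ()
            go = do
                    r <- getAnyProcessStatus False True
                    maybe (return ()) (const go) r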
Internally, the filenames are stored as un-decoded unicode.
I tried decoding them, but then haskell tries to access the wrong files.
Hmm.
So, I've unhappily chosen option "B", which is to decode filenames before
they are displayed.
Still todo:
- add repos from uuid.log that were not directly found
- group repos into their respective hosts
- display inaccessible repos and broken remote connections in red
- anonymize the url display somewhat, so the maps can be shared
- use uuid info to tell when two apparently different repos are actually
the same repo accessed in different ways
* Improved temp file handling. Transfers of content can now be resumed
from temp files later; the resume does not have to be the immediate
next git-annex run.
* unused: Include partially transferred content in the list.
Moved away from a map of flags to storing config directly in the AnnexState
structure. Got rid of most accessor functions in Annex.
This allowed supporting multiple --exclude flags.
* bugfix: Running `move --to` with a remote whose UUID was not yet known
could result in git-annex not recording on the local side where the
file was moved to. This could not result in data loss, or even a
significant problem, since the remote *did* record that it had the file.
* Also, add a general guard to detect attempts to record information
about repositories with missing UUIDs.