With --fast, unavailable local remotes are filtered out of the fast set.
This way, if there are local remotes, --fast always acts only on them,
and if none are mounted, it acts on nothing. This consistency is better
than --fast acting on different remotes depending on what's mounted.
Rsync is only run once, with include/exclude rules used to specify
exactly what to delete. This is faster, avoids ugly error messages
from rsync, and doesn't fail if the content was already deleted somehow.
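A sketch of how such an argument list can be built (illustrative; the exact
rules git-annex generates may differ): rsyncing an empty source directory
with --delete, while include rules whitelist just the paths to remove.

    -- illustrative flag construction; paths are relative to the rsync root
    deleteParams :: [FilePath] -> [String]
    deleteParams paths = concat
            [ map ("--include=" ++) paths   -- only these may be deleted
            , ["--exclude=*"]               -- everything else is protected
            , ["--quiet", "--delete", "--recursive"]
            ]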
A crash on parsing was fixed a while ago. This adds support for correctly
parsing multiline git config values in all cases, using git config --null.
Since git-annex-shell configlist uses normal git config output, I left in
support for that too; the two forms of config output can be easily
identified by the parser. Since configlist only prints the annex.uuid
config, there's no risk of multiline values there, so no need to change it.
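A minimal sketch of how the two forms can be told apart (this is not
git-annex's actual parser): git config --null ends each pair with a NUL and
separates key from value with a newline, so a NUL anywhere in the input
identifies that form.

    import qualified Data.Map as M

    parseConfig :: String -> M.Map String String
    parseConfig s
            | '\0' `elem` s = M.fromList $ map nullform $
                    filter (not . null) $ splitNul s
            | otherwise = M.fromList $ map lineform $ lines s
      where
            -- --null form: "key\nvalue\0", so values may contain newlines
            nullform kv = let (k, v) = break (== '\n') kv in (k, drop 1 v)
            -- plain form: "key=value" lines
            lineform l = let (k, v) = break (== '=') l in (k, drop 1 v)
            splitNul "" = []
            splitNul x = let (chunk, rest) = break (== '\0') x
                         in chunk : splitNul (drop 1 rest)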
Needed due to this scenario: a bare repo, origin, is made; foo is cloned from
it; foo is initialized; a file is added to foo's annex; git annex move --to
origin is run.
Since the git-annex branch has not yet been pushed to origin, it doesn't
auto-initialize. When the content is sent to it, it's stored, but
the remote has NoUUID, and so nothing is logged in the location log.
Then the content is removed from the local repo, and git-annex has lost
track of it.
git annex fsck in origin will find the lost content, but let's not let this
happen. Content should only be sent to initialized remotes.
This cannot happen for non-local remotes, since git-annex-shell always
checks that the repo is initialized.
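For local remotes, a minimal sketch of such a guard (names are illustrative,
not git-annex's actual API) is just a uuid check, since a repo only gets a
uuid once initialized:

    data UUID = UUID String | NoUUID
            deriving (Eq, Show)

    ensureInitialized :: String -> UUID -> Either String ()
    ensureInitialized remotename NoUUID = Left $
            remotename ++ " is not initialized; run git-annex init there first"
    ensureInitialized _ (UUID _) = Right ()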
Directory special remotes will now always store keys in the lowercase name,
which avoids the complication of catching failures to create the mixed case
name.
Git remotes using http will now try the lowercase name first.
Supporting multiple directory hash types will allow converting to a
different one, without a flag day.
gitAnnexLocation now checks which of the possible locations have a file.
This means more statting of files. Several places currently use
gitAnnexLocation and immediately check if the returned file exists;
those need to be optimised.
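A simplified sketch of the lookup (real git-annex derives the hash
directories from the key itself): stat each candidate location and return
the first that exists, defaulting to the preferred one for new content.

    import System.Directory (doesFileExist)
    import System.FilePath ((</>))

    annexLocation :: FilePath -> [FilePath] -> FilePath -> IO FilePath
    annexLocation _ [] _ = error "no hash directory styles defined"
    annexLocation objectdir hashdirs keyfile = go candidates
      where
            candidates = [ objectdir </> d </> keyfile | d <- hashdirs ]
            go [] = return (head candidates) -- none exist: use the preferred one
            go (f:fs) = do
                    present <- doesFileExist f
                    if present then return f else go fs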
This is actually tricky; 45bbf210a1 added
the escaping because it's needed for rsync that does go over ssh.
So I had to detect whether the remote's rsync url will use ssh or not,
and vary the escaping accordingly.
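A rough sketch of such a test (rsync's url syntax is the assumption here):
host::module and rsync:// urls speak the rsync protocol directly, host:path
goes over ssh, and anything else is a local path.

    import Data.List (isPrefixOf, isInfixOf)

    rsyncUrlIsSsh :: String -> Bool
    rsyncUrlIsSsh u
            | "rsync://" `isPrefixOf` u = False -- rsync protocol
            | "::" `isInfixOf` u = False        -- host::module, also rsync protocol
            | ":" `isInfixOf` u = True          -- host:path runs over ssh
            | otherwise = False                 -- plain local path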
git-annex-shell inannex now always returns 0, 1, or 100 (the last when
it's unclear if content is currently in the annex, because it is being
moved or dropped right now).
(Actual locking code still not yet written.)
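As a sketch, the convention maps onto exit codes like this (inAnnexExit is
an illustrative name, not the real code):

    import System.Exit

    inAnnexExit :: Maybe Bool -> IO a
    inAnnexExit (Just True) = exitSuccess               -- 0: content present
    inAnnexExit (Just False) = exitWith (ExitFailure 1) -- 1: content absent
    inAnnexExit Nothing = exitWith (ExitFailure 100)    -- 100: state unclear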
Many functions took the repo as their first parameter. Changing it
consistently to be the last parameter allows doing some useful things with
currying that reduce boilerplate.
In particular, g <- gitRepo is almost never needed now; instead,
use inRepo to run an IO action in the repo, and fromRepo to get
a value from the repo.
This also provides more opportunities to use monadic and applicative
combinators.
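A toy sketch of the shape these helpers take (the real Annex monad carries
much more state than just the repo):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Monad.IO.Class (MonadIO, liftIO)
    import Control.Monad.Trans.State (StateT, get)

    data Repo = Repo { repoPath :: FilePath }

    -- a toy Annex monad over the repo
    newtype Annex a = Annex (StateT Repo IO a)
            deriving (Functor, Applicative, Monad, MonadIO)

    gitRepo :: Annex Repo
    gitRepo = Annex get

    -- run an IO action that takes the repo as its last parameter
    inRepo :: (Repo -> IO a) -> Annex a
    inRepo a = liftIO . a =<< gitRepo

    -- extract a pure value from the repo
    fromRepo :: (Repo -> a) -> Annex a
    fromRepo f = f <$> gitRepo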
Avoid ever using read to parse a non-haskell formatted input string.
show :: Key is arguably still show abuse, but displaying Keys as filenames
is just too useful to give up.
Before, the config was read each time onLocal was called, which was entirely
redundant, since it's already read for same-host remotes on startup.
Also a minor bug fix: When rsyncing to a same-host remote, use the
rsync-options from the repository that the user ran git-annex in, not those of
the receiving repository.
Specifically, disabled trying to update the git-annex branch on the remote,
since that data is never used by operations that act on such remotes.
Also, when copying content to such a remote, skip committing the presence
information changes to its git-annex branch. Leaving it in the journal there
is ok: Any command run on the remote that needs the info will flush the
journal.
This may partially solve this bug:
http://git-annex.branchable.com/bugs/fails_to_handle_lot_of_files/
Although I still see unreaped git processes piling up when doing a copy --to.
Only one place needs to filter the list of remotes for ignored remotes:
keyPossibilities. Make the full list available to everything else.
This allows getting rid of the special case handling for --from and --to
that made ignored remotes not be ignored when used with those options.
The key name was not being sufficiently escaped, although it didn't break
anything due to luck. Switch to properly escaped key names for the log
filename, with a fallback to the buggy old name.
The only remaining vestige of backends is the different types of keys. These
are still called "backends", mostly to avoid needing to change the user
interface and configuration. But everything to do with storing keys in
different backends is gone; instead, different types of remotes are used.
In the refactoring, lots of code was moved out of odd corners like
Backend.File to closer to where it's used, like Command.Drop and
Command.Fsck. Quite a lot of dead code was removed. Several data structures
became simpler, which may result in better runtime efficiency. There should
be no user-visible changes.
Otherwise, the location log changes are only staged in its index,
and this can confuse matters if pulling or cloning from the remote.
The test suite was failing because this wasn't done.
cp is still used when copying files from repos on the same filesystem, since
--reflink=auto can make it significantly faster on filesystems such as
btrfs.
Directory special remotes still use cp, not rsync. It's not clear what
tmp file should be used when rsyncing to such a remote.
This takes advantage of the debug logging done by MissingH, and I added
my own debug messages for executeFile calls. There are still some other
low-level ways git-annex runs stuff that are not shown by debugging,
but this gets most of it easily.
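Roughly, this amounts to cranking hslogger (which MissingH logs through) up
to DEBUG and wrapping the exec; a sketch, with debugExec as an illustrative
name:

    import System.Log.Logger
    import System.Posix.Process (executeFile)

    enableDebugOutput :: IO ()
    enableDebugOutput = updateGlobalLogger rootLoggerName (setLevel DEBUG)

    -- log the command line before handing off to executeFile
    debugExec :: FilePath -> Bool -> [String] -> Maybe [(String, String)] -> IO a
    debugExec cmd searchpath params env = do
            debugM "git-annex" $ unwords (cmd : params)
            executeFile cmd searchpath params env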
These are defined in ifelse, but it's not currently available and I don't
want to pull in a library for 6 lines of code anyhow.
Also, ifelse sets the fixity to 1, which does not allow >>? error $ ...
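For reference, a sketch of roughly what those 6 lines look like (close to
the ifelse definitions, but with the fixity dropped to 0 so it matches $):

    import Control.Monad (when, unless)

    whenM :: Monad m => m Bool -> m () -> m ()
    whenM c a = c >>= flip when a

    unlessM :: Monad m => m Bool -> m () -> m ()
    unlessM c a = c >>= flip unless a

    (>>?) :: Monad m => m Bool -> m () -> m ()
    (>>?) = whenM

    (>>!) :: Monad m => m Bool -> m () -> m ()
    (>>!) = unlessM

    -- fixity 0 (like $) lets checkFoo >>? error $ "msg" parse as intended
    infixr 0 >>?
    infixr 0 >>!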
In particular, munge key filenames to comply with the IA's filename limits,
disable encryption, support their nonstandard way of creating buckets, and
allow x-amz-* headers to be specified in initremote to set item metadata.
Still TODO: initremote does not handle multiword metadata headers right.
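The munging mentioned above can be sketched like so (the allowed character
set here is an assumption about the IA's limits, and the escape scheme is
illustrative):

    import Data.Char (isAsciiLower, isAsciiUpper, isDigit, isSpace, ord)

    iaMunge :: String -> String
    iaMunge = concatMap munge
      where
            munge c
                    | isAsciiUpper c || isAsciiLower c || isDigit c = [c]
                    | c `elem` "_-.\"" = [c]   -- a few safe punctuation characters
                    | isSpace c = []           -- drop whitespace entirely
                    | otherwise = "&" ++ show (ord c) ++ ";"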
Fully tested and working, including resuming and encryption. (Though not
resuming when sending *with* encryption; gpg doesn't produce identical
output each time.)
Uses the same layout as the directory special remote and the
.git/annex/objects/ directory.
This was a most surprising leak. It occurred in the process that is forked
off to feed data to gpg. That process was passed a lazy ByteString of
input, and ghc seemed to not GC the ByteString as it was lazily read
and consumed, so memory slowly leaked as the file was read and passed
through gpg to bup.
To fix it, I simply changed the feeder to take an IO action that returns
the lazy bytestring, and fed the result directly to hPut.
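Illustrated as a before/after sketch of the feeder (simplified from the real
code):

    import System.IO (Handle)
    import qualified Data.ByteString.Lazy as L

    -- before: the forked feeder held the ByteString itself, pinning it
    --   feeder :: L.ByteString -> Handle -> IO ()
    --   feeder content h = L.hPut h content

    -- after: the feeder takes an action producing the ByteString, and the
    -- result flows straight into hPut, so it can be GCed as it streams
    feeder :: IO L.ByteString -> Handle -> IO ()
    feeder getcontent h = L.hPut h =<< getcontent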
AFAICS, this should change nothing WRT buffering. But somehow it makes
ghc's GC do the right thing. Probably I triggered some weakness in ghc's
GC (version 6.12.1).
(Note that S3 still has this leak, and others too. Fixing it will involve
another dance with the type system.)
Update: One theory I have is that this has something to do with
the forking of the feeder process. Perhaps, when the ByteString
is produced before the fork, ghc decides it needs to hold a pointer
to the start of it, for some reason -- maybe it doesn't realize that
it is only used in the forked process.
Stalls were caused by code that did approximately:

    content' <- liftIO $ withEncryptedContent cipher content return
    store content'
The return evaluated without actually reading content from S3,
and so the cleanup code began waiting on gpg to exit before
gpg could send all its data.
Fixing it involved moving the `store` type action into the IO monad:

    liftIO $ withEncryptedContent cipher content store

That was a bit of a pain to do (thank you, type system), but it
avoids the problem: now the whole content is consumed, and
stored, before cleanup.
Untested, I will need to dust off my S3 keys, and plug the modem back in
that was unplugged last night due to very low battery bank power. But it
compiles, so it's probably perfect. :)
Forking a new process rather than relying on a thread to feed gpg.
The feeder thread was stalling, probably when the main thread got
to the point of wait()ing on gpg to exit.
Since the queue is flushed in between subcommand actions being run,
there should be no issues with actions that expect to queue up some stuff
and have it run after they do other stuff. So I didn't have to audit for
such assumptions.
This avoids some issues with git on OSX with the mixed-case directories. No
migration is needed; the old mixed-case hash directories are still read;
new information is written to the new directories.
Two machines might have access to the same directory remote on different
paths, so don't include the path in its persistent config, instead use
the git config to record it.
So, it would be nicer to just use Cabal and take advantage
of its conditional compilation support. But Cabal seems to
lack good support for a package with an internal library that is used by
multiple executables; it wants to build everything twice or more.
That's too slow for me.
Anyway, fairly soon, I expect to upgrade hS3 to a requirement, and I
can just revert this.
The goal is to support multiple types of remotes, some of which
are not git repositories. To that end, added a Remote class, and moved
git-remote-specific code into Remote.GitRemote.
Remotes.hs is still present as some code has not been converted to use the
new Remote class yet.
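One possible shape for such an abstraction, sketched as a record of
operations (field names are illustrative, not necessarily what was
committed):

    type Key = String -- stand-in for the real Key type

    data Remote = Remote
            { name :: String
            , uuid :: String
            , cost :: Int
            -- the operations every remote, git or not, must support
            , storeKey :: Key -> IO Bool
            , retrieveKeyFile :: Key -> FilePath -> IO Bool
            , removeKey :: Key -> IO Bool
            , hasKey :: Key -> IO (Either String Bool)
            }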