Since encryption=shared, the encryption key is stored in the git repo, so
there is no point at all in encrypting the creds, also stored in the git
repo with that key. So `initremote` doesn't. The creds are simply stored
base-64 encoded.
However, it then always tried to decrypt the creds when encryption was used.
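A minimal sketch of the intended store/load (illustrative, not git-annex's
actual code), using the base64-bytestring package:

    import qualified Data.ByteString.Char8 as B
    import qualified Data.ByteString.Base64 as B64

    -- With encryption=shared, encrypting creds with a key that is itself
    -- in the git repo adds nothing, so they are stored base-64 encoded.
    storeCreds :: B.ByteString -> B.ByteString
    storeCreds = B64.encode

    -- Loading must not attempt decryption; just undo the base-64 encoding.
    loadCreds :: B.ByteString -> Either String B.ByteString
    loadCreds = B64.decode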
replaceFileOr was broken and always ran the rollback action.
Luckily, for replaceFile, the rollback action was safe to run, since it
just nuked a temp file that had already been moved into place.
However, when `git annex direct` used replaceFileOr, its rollback printed a
scary message:
/home/joey/tmp/rrrr/.git/annex/misctmp/tmp32268: rename: does not exist (No such file or directory)
There was actually no bad result though.
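For illustration, a hedged sketch of the intended semantics (simplified
from the real replaceFileOr): the rollback action must only run when
moving the new file into place fails.

    import Control.Exception (try, IOException)
    import System.Directory (renameFile)

    replaceFileOr :: FilePath -> FilePath -> (FilePath -> IO ()) -> IO ()
    replaceFileOr tmp dest rollback = do
        r <- try (renameFile tmp dest) :: IO (Either IOException ())
        case r of
            Right () -> return ()     -- tmp moved into place; nothing to undo
            Left _   -> rollback tmp  -- only on failure should this run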
This speeds up the webdav special remote somewhat, since it now often
groups actions together in a single http connection when eg, storing a
file.
Legacy chunks are still supported, but have not been sped up.
This depends on an as-yet unreleased version of DAV.
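The general pattern is connection reuse across requests; a hedged
illustration with http-client (the DAV library's internals differ):

    import Network.HTTP.Client

    main :: IO ()
    main = do
        mgr <- newManager defaultManagerSettings
        -- Requests made through the same Manager can reuse one http
        -- connection, eg a MKCOL followed by a PUT when storing a file.
        -- (The example.com URLs are placeholders.)
        req1 <- parseRequest "http://example.com/col/"
        req2 <- parseRequest "http://example.com/col/file"
        _ <- httpLbs req1 mgr
        _ <- httpLbs req2 mgr
        return ()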
This commit was sponsored by Thomas Hochstein.
Reusing the http connection when operating on chunks is not done yet;
I had to submit some patches to DAV to support that. However, this is no
slower than old-style chunking was.
Note that it's a fileRetriever and a fileStorer, despite DAV using
bytestrings that would allow streaming. As a result, upload/download of
encrypted files is made a bit more expensive, since it spools them to temp
files. This was needed to get the progress meters to work.
There are probably ways to avoid that.. But it turns out that the current
DAV interface buffers the whole file content in memory, and I have
sent in a patch to DAV to improve its interfaces. Using the new interfaces,
it's certainly going to need to be a fileStorer, in order to read the file
size from the file (getting the size of a bytestring would destroy
laziness). It should be possible to use the new interface to make it be a
byteRetriever, so I'll change that when I get to it.
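To make the laziness point concrete, a hedged comparison (not the actual
interface):

    import qualified Data.ByteString.Lazy as L
    import System.Posix.Files (getFileStatus, fileSize)

    -- A fileStorer can stat the file for the size needed up front:
    sizeFromFile :: FilePath -> IO Integer
    sizeFromFile f = fromIntegral . fileSize <$> getFileStatus f

    -- Taking the length of a lazy ByteString forces the whole stream
    -- to be read before the transfer can begin:
    sizeFromBytes :: L.ByteString -> Integer
    sizeFromBytes = fromIntegral . L.length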
This commit was sponsored by Andreas Olsson.
When files are stored using rsync, they have their write bit removed;
so does the directory they're put in. The local repo code did not turn
these bits back on, so removal failed.
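A sketch of the fix (helper names are illustrative): restore the write
bits before deleting.

    import System.Directory (removeFile)
    import System.FilePath (takeDirectory)
    import System.Posix.Files
        (getFileStatus, fileMode, setFileMode, ownerWriteMode, unionFileModes)

    allowWrite :: FilePath -> IO ()
    allowWrite p = do
        st <- getFileStatus p
        setFileMode p (fileMode st `unionFileModes` ownerWriteMode)

    removeStored :: FilePath -> IO ()
    removeStored f = do
        allowWrite (takeDirectory f)  -- directory write bit first
        allowWrite f
        removeFile f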
bup already splits files and does rolling deltas, so there is no reason to
use chunking here.
The new API made it easier to add progress support for storeKey, so that's
done. Unfortunately, bup-split still outputs its own progress with -q,
so it's a little ugly, but not too bad.
Made dropping remove the branch for an object, for two reasons:
1. The new API calls removeKey to roll back a storeKey when the content
changed unexpectedly.
2. So that testremote will be happy.
Also, fixed a bug that caused a crash when removing the branch for an
object in rollback.
This only performs some basic tests so far; no testing of chunking or
resuming. Also, the existing encryption type of the remote is used; it
would be good later to derive an encrypted and a non-encrypted version of
the remote and test them both.
This commit was sponsored by Joseph Liu.
For example, I had a copy to a remote that was failing for an unknown
reason. This let me see the exception was createDirectory: permission
denied; the underlying problem being a permissions issue.
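A minimal sketch of the technique (tryExplain is a hypothetical helper):

    import Control.Exception (try, SomeException)

    -- Surface the underlying exception instead of a bare "failed".
    tryExplain :: IO a -> IO (Either String a)
    tryExplain a = do
        r <- try a
        return $ case r of
            Left e  -> Left (show (e :: SomeException))
            Right v -> Right v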
Leverage the new chunked remotes to automatically resume downloads.
Sort of like rsync, although of course not as efficient since this
needs to start at a chunk boundary.
But, unlike rsync, this method will work for S3, WebDAV, external
special remotes, etc, etc. Only directory special remotes so far,
but many more soon!
This implementation will also properly handle starting a download
from one remote, interrupting, and resuming from another one, and so on.
(Resuming interrupted chunked uploads is similarly doable, although
slightly more expensive.)
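A hedged sketch of where a resumed transfer restarts: the size of the
partially transferred file determines the next chunk boundary.

    import System.Posix.Files (getFileStatus, fileSize)

    -- Chunks 0 .. n-1 are already complete and can be skipped.
    nextChunk :: Integer -> FilePath -> IO Integer
    nextChunk chunkSize partialFile = do
        sz <- fromIntegral . fileSize <$> getFileStatus partialFile
        return (sz `div` chunkSize)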
This commit was sponsored by Thomas Djärv.
The repair code assumed that if fsck found no broken objects, after
removing bad objects and possibly pulling replacements from remote, all was
well.. but this is not really true. Removing bad objects could leave some
branches broken. fsck doesn't report any missing objects in this case,
and its messages about broken branches are ignored by the fsck output
parser.
To deal with this, added a separate scan of all refs to find broken ones
and remove them when --forced. This will also let anyone who ran into this
bug run repair again to fix up the incomplete repair done before.
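The idea behind the scan, as a hedged sketch (not the actual repair
code): a ref whose history cannot be fully walked is broken.

    import System.Process (readProcessWithExitCode)
    import System.Exit (ExitCode(..))

    -- git rev-list fails if any object reachable from the ref is missing.
    refBroken :: String -> IO Bool
    refBroken ref = do
        (code, _, _) <- readProcessWithExitCode
            "git" ["rev-list", "--objects", ref, "--"] ""
        return (code /= ExitSuccess)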
This commit was sponsored by Aaron Whitehouse.
Based on the example from the tip, but modified to cd into the repo before
running git-annex, since konqueror does not. Also, at least on my system,
the directory is ~/.kde, not ~/.kde4. (konqueror 4.12.4)
This commit was sponsored by Jürgen Peters.
This is a security/usability tradeoff. To avoid exposing the gpg key ids
of those who can decrypt the repository, users can unset
gcrypt-publish-participants.
The gcrypt-publish-participants option is available in my fork of
git-remote-gcrypt.
This commit was sponsored by Christopher Kernahan.
Catch an exception when ensureInitialized is run in a non-initted
repository. In this case, just read the git config, so that the Git.Repo
object is not LocalUnknown, which is what is used to represent remotes
on eg, drives that are not connected.
The assistant already got this right, and like with the assistant, this
causes an implicit git-annex init of the local remote on the second sync,
once the git-annex branch has been pushed to it.
See this comment for more analysis:
http://git-annex.branchable.com/todo/Recovering_from_a_bad_sync/#comment-64e469a2c1969829ee149cbb41b1c138
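The shape of the fix, as a hedged sketch (control flow only; names are
illustrative):

    import Control.Exception (catch, SomeException)

    -- If the init check throws in a non-initted repo, fall back to just
    -- reading the git config, so the repo is not left LocalUnknown.
    initOrReadConfig :: IO () -> IO () -> IO ()
    initOrReadConfig ensureInitialized readConfig =
        ensureInitialized `catch` handler
      where
        handler :: SomeException -> IO ()
        handler _ = readConfig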
This commit was sponsored by jscit.
I think this is a git behavior change, but have not checked to be sure.
Conflict cruft used to look like $foo~HEAD, but now just $foo is left
behind.
With test case.
Running `git annex direct` would cause loss of data: the object was moved
to a temp file, which git-annex then tried to replace the work tree file
with, and on failure, the temp file got deleted. Now it is instead moved
back into the annex object location.
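A sketch of the fixed flow (simplified; names are illustrative):

    import Control.Exception (try, IOException)
    import System.Directory (renameFile)

    replaceWorkTreeFile :: FilePath -> FilePath -> FilePath -> IO ()
    replaceWorkTreeFile objectLoc tmp worktreeFile = do
        renameFile objectLoc tmp
        r <- try (renameFile tmp worktreeFile) :: IO (Either IOException ())
        case r of
            Right () -> return ()
            -- On failure, restore the object instead of deleting the temp:
            Left _   -> renameFile tmp objectLoc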
Minor because normally only 1 FD is leaked per git-annex run. However,
the test suite leaks a few hundred FDs, and this broke it on the Debian
autobuilders, which seem to have a tighter than usual ulimit.
The leak came from the lazy getDirectoryContents' that was
introduced in e6330988dd in order to scale to
millions of journal files -- if the lazy list was never fully consumed, the
directory handle did not get closed.
Instead, pull in openDirectory/readDirectory/closeDirectory code that I
already developed and submitted in a patch to the haskell directory library
earlier. Using this in journalDirty avoids the place that the lazy list
caused a problem. And using it in stageJournal eliminates the need for
getDirectoryContents'.
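For illustration, a sketch of the approach using the unix package's
analogous directory-stream API: the handle is always closed, and reading
can stop early without leaking it, which is what journalDirty needs.

    import Control.Exception (bracket)
    import System.Posix.Directory
        (openDirStream, readDirStream, closeDirStream)

    -- True if the directory has any entry besides "." and "..".
    dirNonEmpty :: FilePath -> IO Bool
    dirNonEmpty dir = bracket (openDirStream dir) closeDirStream go
      where
        go s = do
            e <- readDirStream s
            case e of
                ""   -> return False  -- end of stream
                "."  -> go s
                ".." -> go s
                _    -> return True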
The getJournalFiles* functions are switched back to using the regular
strict getDirectoryContents. I'm not sure if those always consume the whole
list, so this avoids any leak. And the things that call those are things
like git annex unused, which also look at every file committed to the
git-annex branch, so would need more work to scale to insane numbers of
files anyway.
When one side is an annexed symlink, and the other side is a non-annexed symlink.
In this case, git-merge does not replace the annexed symlink in the work
tree with the non-annexed symlink, which is different from its handling of
conflicts between annexed symlinks and regular files or directories.
So, while git-annex generated the correct merge commit, the work tree
didn't get updated to reflect it.
See comments on bug for additional analysis.
Did not add this to the test suite yet; just unloaded a truckload of firewood
and am feeling lazy.
This commit was sponsored by Adam Spiers.