In the case where the pointer file is in place, but not the content
of the object, lock's performNew was called with filemodified=True,
which caused it to try to repopulate the object from an unmodified
associated file, of which there were none. So, the content of the object
got thrown away incorrectly. This was the cause (although not the root
cause) of data loss in https://github.com/datalad/datalad/issues/1020
The same problem could also occur when the work tree file is modified,
but the object is not, and lock is called with --force. Added a test case
for this, since it exercises the same code path and is easier to set up
than the problem above.
Note that this only occurred when the keys database did not have an inode
cache recorded for the annex object. Normally, the annex object would be in
there, but there are of course circumstances where the inode cache is out
of sync with reality, since it's only a cache.
Fixed by checking if the object is unmodified; if so we don't need to
try to repopulate it. This does add an additional checksum to the unlock
path, but it's already checksumming the worktree file in another case,
so it doesn't slow it down overall.
Further investigation found a similar problem occurred when smudge --clean
is called on a file and the inode cache is not populated. cleanOldKeys
deleted the unmodified old object file in this case. This was also
fixed by checking if the object is unmodified.
In general, use of getInodeCaches and sameInodeCache is potentially
dangerous if the inode cache has not gotten populated for some reason.
Better to use isUnmodified. I briefly audited other places that check the
inode cache, and did not see any immediate problems, but it would be easy
to miss this kind of problem.
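
As a sketch of the difference, with much-simplified, hypothetical bodies
(only the names sameInodeCache, genInodeCache, and isUnmodified come from
git-annex; this uses the unix package for the stat call):

    import System.Posix.Files (getFileStatus, fileID, fileSize)

    type Key = String

    data InodeCache = InodeCache Integer Integer
        deriving (Eq)

    -- Stat the file and record (inode, size); a real cache has more fields.
    genInodeCache :: FilePath -> IO InodeCache
    genInodeCache f = do
        s <- getFileStatus f
        return (InodeCache (fromIntegral (fileID s)) (fromIntegral (fileSize s)))

    -- Dangerous on its own: when no caches were recorded in the database,
    -- this can only answer False, even for an unmodified file.
    sameInodeCache :: FilePath -> [InodeCache] -> IO Bool
    sameInodeCache f caches = do
        c <- genInodeCache f
        return (c `elem` caches)

    -- Safer: when the cache comparison fails, fall back to verifying the
    -- content against the key, since an empty cache proves nothing.
    isUnmodified :: Key -> FilePath -> [InodeCache] -> IO Bool
    isUnmodified key f caches = do
        cached <- sameInodeCache f caches
        if cached
            then return True
            else verifyKeyContent key f

    -- Stub standing in for checksumming the file and comparing to the key.
    verifyKeyContent :: Key -> FilePath -> IO Bool
    verifyKeyContent _ _ = return False
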
When git-annex is used with a git version older than 2.2.0, disable support for
adjusted branches, since GIT_COMMON_DIR is needed to update them and was first
added in that version of git.
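
A sketch of such a version gate, using Data.Version from base; where
git-annex actually probes and stores git's version is not shown here:

    import Data.Version (Version, makeVersion)

    -- GIT_COMMON_DIR first appeared in git 2.2.0, and adjusted branches
    -- cannot be updated without it.
    adjustedBranchesSupported :: Version -> Bool
    adjustedBranchesSupported gitVersion = gitVersion >= makeVersion [2, 2, 0]
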
The crash turned out to be caused by the sqlite database being deleted out
from under sqlite before it was done with it. Since multiple git_annex
calls are done in the same process while running the test suite, the
DbHandle could linger until GCed, so the test repo, and thus the sqlite
database, could be deleted before the workerThread was done.
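
A sketch of the shape of a fix, with hypothetical names apart from
DbHandle: close the handle deterministically, before the repo directory
(and the sqlite file inside it) can be deleted, instead of leaving it
to the GC:

    import Control.Exception (bracket)

    -- Stand-in for the real DbHandle, which owns a workerThread.
    data DbHandle = DbHandle

    openDb :: FilePath -> IO DbHandle
    openDb _ = return DbHandle

    -- Blocks until the workerThread is done with the database.
    closeDb :: DbHandle -> IO ()
    closeDb _ = return ()

    -- Deterministic cleanup: the handle cannot outlive this scope, so the
    -- repo (and its sqlite database) can be safely deleted afterwards.
    withDb :: FilePath -> (DbHandle -> IO a) -> IO a
    withDb db = bracket (openDb db) closeDb
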
So, we need to look both at the file on disk, to see if it's an annex link,
and at the file in the index too. lookupFile doesn't look in the index if
the file is not present on disk.
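
A sketch of the combined lookup, with stubbed, hypothetical helpers
(keyFromLinkOnDisk would parse an annex symlink target; catKeyFromIndex
would ask git for the staged pointer):

    type Key = String

    -- Stub: parse an annex link from the symlink/pointer on disk.
    keyFromLinkOnDisk :: FilePath -> IO (Maybe Key)
    keyFromLinkOnDisk _ = return Nothing

    -- Stub: ask git for the staged content of the file in the index.
    catKeyFromIndex :: FilePath -> IO (Maybe Key)
    catKeyFromIndex _ = return Nothing

    -- Check the file on disk first; when it's not there (or not an annex
    -- link), fall back to the version recorded in the index.
    lookupKey :: FilePath -> IO (Maybe Key)
    lookupKey f = do
        onDisk <- keyFromLinkOnDisk f
        case onDisk of
            Just k  -> return (Just k)
            Nothing -> catKeyFromIndex f
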
Have to change the content of the unlocked file before committing, since
otherwise git commit will fail in v6 mode when the file was already
unlocked, because no changes have been made.
In v5, that was not possible, but it is in v6, and so the test was failing.
Investigating, it turned out that locking was copying the pointer file
content to the annex object despite the content not being present. So,
add a check to prevent that.
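
A sketch of such a check: pointer files are tiny and typically begin with
the annex objects path prefix, so content that looks like a pointer should
not be ingested as an object. The 64 KiB size cutoff here is illustrative,
not git-annex's exact limit:

    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as B8

    -- Content that is small and starts with the annex objects path is
    -- probably a pointer file, not real object content.
    isPointerContent :: B.ByteString -> Bool
    isPointerContent b =
        B.length b < 65536 && B8.pack "/annex/objects/" `B.isPrefixOf` b
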
WorkTree.lookupFile was finding a key for a file that's deleted from the
work tree, which is different from the v5 behavior (though perhaps the same
as the direct mode behavior). Fix by checking that the work tree file exists
before catting its key.
Hopefully this won't slow things down much; the catKey is probably much more expensive.
I can't see any way to optimise this, except perhaps to make Command.Unused
check if work tree files exist before/after calling lookupFile. But it
seems better to make lookupFile really only find keys for work tree files;
that's what it's intended to do.
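
A sketch of the fixed lookup; catKey is stubbed here and stands for the
expensive git cat-file query:

    import System.Directory (doesFileExist)

    type Key = String

    -- Stub for the expensive git cat-file based key lookup.
    catKey :: FilePath -> IO (Maybe Key)
    catKey _ = return Nothing

    -- Only find keys for files actually present in the work tree; the
    -- cheap existence check gates the expensive catKey.
    lookupFile :: FilePath -> IO (Maybe Key)
    lookupFile f = do
        exists <- doesFileExist f
        if exists
            then catKey f
            else return Nothing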