lock: Fix edge cases where data loss could occur in v6 mode.

In the case where the pointer file is in place, and not the content
of the object, lock's performNew was called with filemodified=True,
which caused it to try to repopulate the object from an unmodified
associated file, of which there were none. So, the content of the object
got thrown away incorrectly. This was the cause (although not the root
cause) of data loss in https://github.com/datalad/datalad/issues/1020

The same problem could also occur when the work tree file is modified,
but the object is not, and lock is called with --force. Added a test case
for this, since it's exercising the same code path and is easier to set up
than the problem above.

Note that this only occurred when the keys database did not have an inode
cache recorded for the annex object. Normally, the annex object would be in
there, but there are of course circumstances where the inode cache is out
of sync with reality, since it's only a cache.

Fixed by checking if the object is unmodified; if so we don't need to
try to repopulate it. This does add an additional checksum to the unlock
path, but it's already checksumming the worktree file in another case,
so it doesn't slow it down overall.
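
As an illustration only (this is not git-annex's actual implementation), an
isUnmodified-style check has two layers: a cheap comparison against recorded
inode caches, and a content checksum as a fallback when the cache is empty or
stale. The standalone sketch below models that idea; the InodeCache type, the
SHA-256 choice, and the signatures are simplified assumptions, and it uses the
unix and cryptonite packages.

import qualified Data.ByteString.Lazy as L
import Crypto.Hash (Digest, SHA256, hashlazy)
import Data.Time.Clock.POSIX (POSIXTime)
import System.Posix.Files (getFileStatus, fileID, fileSize, modificationTimeHiRes)
import System.Posix.Types (FileID, FileOffset)

-- What the keys database remembers about a file's stat info
-- (simplified stand-in for git-annex's InodeCache).
data InodeCache = InodeCache FileID FileOffset POSIXTime
	deriving (Eq, Show)

genInodeCache :: FilePath -> IO InodeCache
genInodeCache f = do
	s <- getFileStatus f
	return (InodeCache (fileID s) (fileSize s) (modificationTimeHiRes s))

-- Cheap check: does the file match any recorded cache entry?
-- With an unpopulated cache this is always False, even for untouched content.
sameInodeCache :: FilePath -> [InodeCache] -> IO Bool
sameInodeCache f recorded = (`elem` recorded) <$> genInodeCache f

-- Expensive fallback: verify the content itself.
verifyContent :: Digest SHA256 -> FilePath -> IO Bool
verifyContent expected f = (== expected) . hashlazy <$> L.readFile f

-- isUnmodified-style check: only pays for the checksum when the
-- cheap check cannot answer.
isUnmodified :: Digest SHA256 -> [InodeCache] -> FilePath -> IO Bool
isUnmodified expected recorded f = do
	cheap <- sameInodeCache f recorded
	if cheap
		then return True
		else verifyContent expected f

The expensive branch only runs when the inode cache cannot answer, which is
exactly the situation described above.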

Further investigation found a similar problem occurred when smudge --clean
is called on a file and the inode cache is not populated. cleanOldKeys
deleted the unmodified old object file in this case. This was also
fixed by checking if the object is unmodified.
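
Again as an illustration only, not the actual cleanOldKeys code: the shape of
the fix is to consult an isUnmodified-style check before removing an old
object, rather than trusting the inode cache alone. The helper below and its
check parameter are hypothetical.

import System.Directory (removeFile)

cleanOldObject
	:: (FilePath -> IO Bool)  -- isUnmodified-style check, passed in (hypothetical)
	-> FilePath               -- path of the old annex object
	-> IO ()
cleanOldObject isUnmodified obj = do
	ok <- isUnmodified obj
	if ok
		then putStrLn (obj ++ ": unmodified, keeping object")
		else removeFile obj  -- only a modified/stale object is removed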

In general, use of getInodeCaches and sameInodeCache is potentially
dangerous if the inode cache has not gotten populated for some reason.
Better to use isUnmodified. I briefly audited other places that check the
inode cache, and did not see any immediate problems, but it would be easy
to miss this kind of problem.
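
A last standalone illustration of that danger, with a made-up file name: when
nothing has been cached for a key, a sameInodeCache-style comparison finds no
match even for untouched content, so code trusting it alone would wrongly
treat the object as modified.

import Data.Time.Clock.POSIX (POSIXTime)
import System.Posix.Files (getFileStatus, fileID, fileSize, modificationTimeHiRes)
import System.Posix.Types (FileID, FileOffset)

main :: IO ()
main = do
	writeFile "obj" "annexed content"  -- stand-in for an annex object
	s <- getFileStatus "obj"
	let current = (fileID s, fileSize s, modificationTimeHiRes s)
	let recorded = [] :: [(FileID, FileOffset, POSIXTime)]  -- keys database has no entry
	-- No cache entry matches, so a caller relying on this alone would
	-- discard or try to repopulate a perfectly good object.
	print (current `elem` recorded)  -- prints False
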
commit ee309d6941
parent 7baa96224f
Author: Joey Hess
Date:   2016-10-17 12:56:26 -04:00
4 changed files with 47 additions and 25 deletions

@@ -45,27 +45,27 @@ startNew file key = ifM (isJust <$> isAnnexLink file)
 	)
   where
 	go (Just key')
-		| key' == key = cont True
+		| key' == key = cont
 		| otherwise = errorModified
 	go Nothing =
 		ifM (isUnmodified key file)
-			( cont False
+			( cont
 			, ifM (Annex.getState Annex.force)
-				( cont True
+				( cont
 				, errorModified
 				)
 			)
-	cont = next . performNew file key
+	cont = next $ performNew file key
 
-performNew :: FilePath -> Key -> Bool -> CommandPerform
-performNew file key filemodified = do
+performNew :: FilePath -> Key -> CommandPerform
+performNew file key = do
 	lockdown =<< calcRepo (gitAnnexLocation key)
 	addLink file key
 		=<< withTSDelta (liftIO . genInodeCache file)
 	next $ cleanupNew file key
   where
 	lockdown obj = do
-		ifM (catchBoolIO $ sameInodeCache obj =<< Database.Keys.getInodeCaches key)
+		ifM (isUnmodified key obj)
 			( breakhardlink obj
 			, repopulate obj
 			)
@@ -83,20 +83,18 @@ performNew file key filemodified = do
 			Database.Keys.storeInodeCaches key [obj]
 	-- Try to repopulate obj from an unmodified associated file.
-	repopulate obj
-		| filemodified = modifyContent obj $ do
-			g <- Annex.gitRepo
-			fs <- map (`fromTopFilePath` g)
-				<$> Database.Keys.getAssociatedFiles key
-			mfile <- firstM (isUnmodified key) fs
-			liftIO $ nukeFile obj
-			case mfile of
-				Just unmodified ->
-					unlessM (checkedCopyFile key unmodified obj Nothing)
-						lostcontent
-				Nothing -> lostcontent
-		| otherwise = modifyContent obj $
-			liftIO $ renameFile file obj
+	repopulate obj = modifyContent obj $ do
+		g <- Annex.gitRepo
+		fs <- map (`fromTopFilePath` g)
+			<$> Database.Keys.getAssociatedFiles key
+		mfile <- firstM (isUnmodified key) fs
+		liftIO $ nukeFile obj
+		case mfile of
+			Just unmodified ->
+				unlessM (checkedCopyFile key unmodified obj Nothing)
+					lostcontent
+			Nothing -> lostcontent
 
 	lostcontent = logStatus key InfoMissing
 
 cleanupNew :: FilePath -> Key -> CommandCleanup