have clean filter check if the filename was already in use by an old key

The annex object for it may have been modified via a hard link, and
that should be cleaned up when the new version is added. If another
associated file still has the old key's content, that file is linked into
the annex object. Otherwise, the location log is updated to reflect that
the content has been lost.
Joey Hess 2015-12-15 13:06:52 -04:00
parent 0a7a2dae4e
commit 71e2050f8f
2 changed files with 25 additions and 4 deletions
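
As context for the diff below, here is a minimal, self-contained sketch of the recovery decision the new cleanOldKey code makes: given the inode caches recorded for the old key, the current cache of its annex object, and the other files associated with that key, decide whether nothing needs doing, whether the object can be restored from another file, or whether the content must be marked lost. The InodeCache and Recovery types here are simplified stand-ins for illustration only, not git-annex's actual API.

-- A simplified, self-contained sketch (not git-annex's real types or API).
import Data.List (find)

-- Stand-in for an inode cache entry; git-annex's real type records more detail.
newtype InodeCache = InodeCache Int
    deriving (Eq, Show)

data Recovery
    = NothingToDo          -- the annex object still matches a recorded inode cache
    | RelinkFrom FilePath  -- another associated file still holds the old content
    | MarkContentLost      -- no copy left; the location log must record the loss
    deriving (Eq, Show)

-- Given the caches recorded for the old key, the current cache of its annex
-- object, and the other associated files with their caches, pick the recovery.
recoverOldKey :: [InodeCache] -> InodeCache -> [(FilePath, InodeCache)] -> Recovery
recoverOldKey recorded objcache otherfiles
    | objcache `elem` recorded = NothingToDo
    | otherwise = case find ((`elem` recorded) . snd) otherfiles of
        Just (f, _) -> RelinkFrom f
        Nothing     -> MarkContentLost

main :: IO ()
main = do
    let recorded = [InodeCache 1]
    -- The object was modified through a hard link, but another associated
    -- file is intact, so its content can be linked back into the object.
    print (recoverOldKey recorded (InodeCache 2) [("other.dat", InodeCache 1)])
    -- No other intact copy exists, so the content has to be marked lost.
    print (recoverOldKey recorded (InodeCache 2) [])

In the diff, these three outcomes correspond to the sameInodeCache check, the linkAnnex call on another associated file, and logStatus key InfoMissing, respectively.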


@@ -13,6 +13,7 @@ import Annex.Content
import Annex.Link
import Annex.MetaData
import Annex.FileMatcher
import Annex.InodeSentinal
import Types.KeySource
import Backend
import Logs.Location
@@ -51,7 +52,7 @@ smudge file = do
-- A previous unlocked checkout of the file may have
-- led to the annex object getting modified;
-- don't provide such modified content as it
-- will be confusing. inAnnex will detect
-- will be confusing. inAnnex will detect such
-- modifications.
ifM (inAnnex k)
( do
@@ -74,12 +75,35 @@ clean file = do
else ifM (shouldAnnex file)
( do
k <- ingest file
oldkeys <- filter (/= k)
<$> Database.Keys.getAssociatedKey file
mapM_ (cleanOldKey file) oldkeys
Database.Keys.addAssociatedFile k file
liftIO $ emitPointer k
, liftIO $ B.hPut stdout b
)
stop

-- If the file being cleaned was hard linked to the old key's annex object,
-- modifying the file may have caused the object to have the wrong content.
-- Clean up from that, restoring the object from another associated file
-- when possible, or else updating the location log to record the loss.
cleanOldKey :: FilePath -> Key -> Annex ()
cleanOldKey modifiedfile key = do
obj <- calcRepo (gitAnnexLocation key)
caches <- Database.Keys.getInodeCaches key
unlessM (sameInodeCache obj caches) $ do
unlinkAnnex key
fs <- filter (/= modifiedfile)
<$> Database.Keys.getAssociatedFiles key
fs' <- filterM (`sameInodeCache` caches) fs
case fs' of
-- If linkAnnex fails, the file with the content
-- is still present, so no need for any recovery.
(f:_) -> void $ linkAnnex key f
_ -> lostcontent
where
lostcontent = logStatus key InfoMissing

shouldAnnex :: FilePath -> Annex Bool
shouldAnnex file = do
matcher <- largeFilesMatcher


@@ -325,9 +325,6 @@ files to be unlocked, while the indirect upgrades don't touch the files.
because the timestamp has changed. Getting a smudged file can also
cause this. Avoid this by preserving timestamp of smudged files
when manipulating.
* Clean filter should check if the filename was already in use by an old
key. The annex object for it may have been modified due to hard link, and
that should be cleaned up when the new version is added.
* Reconcile staged changes into the associated files database, whenever
the database is queried.
* See if the cases where the Keys database is not used can be