Preserve metadata when staging a new version of an annexed file.
Performance impact: When adding a large tree of new files, this needs to do
some git cat-file queries to check if any of the files already existed and
might need a metadata copy. I tried a benchmark in a copy of my sound
repository (so there was already a significant git tree to check against).

Adding 10000 small files, with a cold cache:
before: 1m48.539s
after: 1m52.791s

So the impact is 0.0004 seconds per file added, which seems acceptable, so I
did not add any configuration to enable/disable this.

This commit was sponsored by Lisa Feilen.
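The per-file cost comes from asking git whether the path being added already
pointed at an annexed file in HEAD. A minimal standalone sketch of that check,
written against plain git cat-file rather than git-annex's internal cat-file
interface (the helper name previousKey and the symlink parsing are
illustrative assumptions, not code from this commit):

    import Data.List (isInfixOf)
    import System.Exit (ExitCode(..))
    import System.FilePath (takeFileName)
    import System.Process (readProcessWithExitCode)

    -- Look up the annex key the file pointed to in HEAD, if any.
    -- Returns Nothing for files that are new or are not annex symlinks.
    previousKey :: FilePath -> IO (Maybe String)
    previousKey file = do
        (code, out, _err) <- readProcessWithExitCode "git"
            ["cat-file", "blob", "HEAD:" ++ file] ""
        return $ case code of
            ExitSuccess
                | ".git/annex/objects/" `isInfixOf` out ->
                    -- the key is the basename of the symlink target
                    Just (takeFileName out)
            _ -> Nothing

A lookup of this shape is what the 10000-file benchmark above pays for once
per file; when it finds a key, the old key's metadata can be copied to the
newly ingested key.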
parent e7252cf192
commit 8d5158fa31
5 changed files with 41 additions and 14 deletions
@@ -161,14 +161,14 @@ ingest (Just source) = do
     goindirect (Just (key, _)) mcache ms = do
         catchAnnex (moveAnnex key $ contentLocation source)
            (undo (keyFilename source) key)
-        maybe noop (genMetaData key) ms
+        maybe noop (genMetaData key (keyFilename source)) ms
        liftIO $ nukeFile $ keyFilename source
        return $ (Just key, mcache)
     goindirect _ _ _ = failure "failed to generate a key"

     godirect (Just (key, _)) (Just cache) ms = do
        addInodeCache key cache
-        maybe noop (genMetaData key) ms
+        maybe noop (genMetaData key (keyFilename source)) ms
        finishIngestDirect key source
        return $ (Just key, Just cache)
     godirect _ _ _ = failure "failed to generate a key"
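The only change in this hunk is that genMetaData now also receives
keyFilename source, the path the file is being added at. A plausible reading
of why, as a sketch with made-up stand-ins (SimpleKey, lookupKeyInHead,
copyMetaDataBetween, storeStatMetaData are illustrative placeholders, not
git-annex's real API): with the path in hand, genMetaData can first look up
the key the path pointed to in HEAD and copy that key's metadata over to the
new key, before recording any metadata derived from the file's stat (the ms
argument appears to carry the file's FileStatus).

    import System.Posix.Files (FileStatus)

    type SimpleKey = String

    -- New shape: the file path lets us find and reuse the old key's metadata.
    genMetaDataSketch :: SimpleKey -> FilePath -> FileStatus -> IO ()
    genMetaDataSketch key file status = do
        moldkey <- lookupKeyInHead file                 -- the extra git cat-file query
        case moldkey of
            Just oldkey -> copyMetaDataBetween oldkey key   -- preserve old metadata
            Nothing -> return ()                            -- brand new file
        storeStatMetaData key status                        -- e.g. date fields from mtime

    -- Placeholders standing in for git-annex internals:
    lookupKeyInHead :: FilePath -> IO (Maybe SimpleKey)
    lookupKeyInHead _ = return Nothing

    copyMetaDataBetween :: SimpleKey -> SimpleKey -> IO ()
    copyMetaDataBetween _ _ = return ()

    storeStatMetaData :: SimpleKey -> FileStatus -> IO ()
    storeStatMetaData _ _ = return ()

This is why both + lines in the hunk pass (keyFilename source): without the
path, there is nothing to run the cat-file lookup against.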