git-annex/Command/Add.hs

{- git-annex command
-
- Copyright 2010, 2013 Joey Hess <joey@kitenet.net>
-
- Licensed under the GNU GPL version 3 or higher.
-}
{-# LANGUAGE CPP #-}
module Command.Add where
import System.PosixCompat.Files
import Common.Annex
import Annex.Exception
import Command
import Types.KeySource
import Backend
import Logs.Location
import Annex.Content
import Annex.Content.Direct
import Annex.Perms
import Annex.Link
import qualified Annex
import qualified Annex.Queue
#ifndef __ANDROID__
import Utility.Touch
#endif
import Utility.FileMode
import Config
import Utility.InodeCache
import Annex.FileMatcher
import Annex.ReplaceFile
import Utility.Tmp

def :: [Command]
def = [notBareRepo $ command "add" paramPaths seek SectionCommon
	"add files to annex"]

{- Add acts on both files not checked into git yet, and unlocked files.
 -
 - In direct mode, it acts on any files that have changed. -}
seek :: [CommandSeek]
seek =
	[ go withFilesNotInGit
	, whenNotDirect $ go withFilesUnlocked
	, whenDirect $ go withFilesMaybeModified
	]
  where
	go a = withValue largeFilesMatcher $ \matcher ->
		a $ \file -> ifM (checkFileMatcher matcher file <||> Annex.getState Annex.force)
			( start file
			, stop
			)

{- The add subcommand annexes a file, generating a key for it using a
 - backend, and then moving it into the annex directory and setting up
 - the symlink pointing to its content. -}
start :: FilePath -> CommandStart
start file = ifAnnexed file addpresent add
  where
	add = do
		ms <- liftIO $ catchMaybeIO $ getSymbolicLinkStatus file
		case ms of
			Nothing -> stop
			Just s
				| isSymbolicLink s || not (isRegularFile s) -> stop
				| otherwise -> do
					showStart "add" file
					next $ perform file
	addpresent (key, _) = ifM isDirect
		( ifM (goodContent key file) ( stop , add )
		, fixup key
		)
	fixup key = do
		-- fixup from an interrupted add; the symlink
		-- is present but not yet added to git
		showStart "add" file
		liftIO $ removeFile file
		next $ next $ cleanup file key Nothing =<< inAnnex key

{- The file that's being added is locked down before a key is generated,
 - to prevent it from being modified in between. This lock down is not
 - perfect at best (and pretty weak at worst). For example, it does not
 - guard against files that are already opened for write by another process.
 - So a KeySource is returned. Its inodeCache can be used to detect any
 - changes that might be made to the file after it was locked down.
 -
 - In indirect mode, the write bit is removed from the file as part of lock
 - down to guard against further writes, and because objects in the annex
 - have their write bit disabled anyway. This is not done in direct mode,
 - because files there need to remain writable at all times.
 -
 - When possible, the file is hard linked to a temp directory. This guards
 - against some changes, like deletion or overwrite of the file, and
 - allows lsof checks to be done more efficiently when adding a lot of files.
 -
 - Lockdown can fail if a file gets deleted, and Nothing will be returned.
 -}
lockDown :: FilePath -> Annex (Maybe KeySource)
lockDown file = ifM crippledFileSystem
	( liftIO $ catchMaybeIO nohardlink
	, do
		tmp <- fromRepo gitAnnexTmpDir
		createAnnexDirectory tmp
		unlessM isDirect $
			void $ liftIO $ tryIO $ preventWrite file
		liftIO $ catchMaybeIO $ do
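			-- openTempFile is used only to allocate a unique
			-- name in the temp directory; the file itself is
			-- removed, and a hard link is made at that name.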
			(tmpfile, h) <- openTempFile tmp $
				relatedTemplate $ takeFileName file
			hClose h
			nukeFile tmpfile
			withhardlink tmpfile `catchIO` const nohardlink
	)
  where
	nohardlink = do
		cache <- genInodeCache file
		return KeySource
			{ keyFilename = file
			, contentLocation = file
			, inodeCache = cache
			}
	withhardlink tmpfile = do
		createLink file tmpfile
		cache <- genInodeCache tmpfile
		return KeySource
			{ keyFilename = file
			, contentLocation = tmpfile
			, inodeCache = cache
			}

{- Ingests a locked down file into the annex.
 -
 - In direct mode, leaves the file alone, and just updates bookkeeping
 - information.
 -}
ingest :: Maybe KeySource -> Annex (Maybe Key, Maybe InodeCache)
ingest Nothing = return (Nothing, Nothing)
ingest (Just source) = do
	backend <- chooseBackend $ keyFilename source
	k <- genKey source backend
	cache <- liftIO $ genInodeCache $ contentLocation source
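	-- If the inode cache taken now differs from the one recorded at
	-- lockDown time, the file was modified while it was being added.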
	case (cache, inodeCache source) of
		(_, Nothing) -> go k cache
		(Just newc, Just c) | compareStrong c newc -> go k cache
		_ -> failure "changed while it was being added"
  where
	go k cache = ifM isDirect ( godirect k cache , goindirect k cache )
	goindirect (Just (key, _)) mcache = do
		catchAnnex (moveAnnex key $ contentLocation source)
			(undo (keyFilename source) key)
		liftIO $ nukeFile $ keyFilename source
		return $ (Just key, mcache)
	goindirect Nothing _ = failure "failed to generate a key"
	godirect (Just (key, _)) (Just cache) = do
		addInodeCache key cache
		finishIngestDirect key source
		return $ (Just key, Just cache)
	godirect _ _ = failure "failed to generate a key"
	failure msg = do
		warning $ keyFilename source ++ " " ++ msg
		when (contentLocation source /= keyFilename source) $
			liftIO $ nukeFile $ contentLocation source
		return (Nothing, Nothing)
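
{- Finishes a direct mode ingest: records the file as associated with its
 - key, removes the temporary hard link made by lockDown, and copies the
 - content to any other associated files that are missing it. -}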
finishIngestDirect :: Key -> KeySource -> Annex ()
finishIngestDirect key source = do
	void $ addAssociatedFile key $ keyFilename source
	when (contentLocation source /= keyFilename source) $
		liftIO $ nukeFile $ contentLocation source
	{- Copy to any other locations using the same key. -}
	otherfs <- filter (/= keyFilename source) <$> associatedFiles key
	forM_ otherfs $
		addContentWhenNotPresent key (keyFilename source)
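
{- Locks down and ingests the file; on success, hands off to cleanup
 - to stage it in git and update the location log. -}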
perform :: FilePath -> CommandPerform
perform file = lockDown file >>= ingest >>= go
  where
	go (Just key, cache) = next $ cleanup file key cache True
	go (Nothing, _) = stop

{- On error, put the file back so it doesn't seem to have vanished.
 - This can be called before or after the symlink is in place. -}
undo :: FilePath -> Key -> IOException -> Annex a
undo file key e = do
	whenM (inAnnex key) $ do
		liftIO $ nukeFile file
		catchAnnex (fromAnnex key file) tryharder
		logStatus key InfoMissing
	throwAnnex e
  where
	-- fromAnnex could fail if the file ownership is weird
	tryharder :: IOException -> Annex ()
	tryharder _ = do
		src <- calcRepo $ gitAnnexLocation key
		liftIO $ moveFile src file

{- Creates the symlink to the annexed content, returns the link target. -}
link :: FilePath -> Key -> Maybe InodeCache -> Annex String
link file key mcache = flip catchAnnex (undo file key) $ do
	l <- inRepo $ gitAnnexLink file key
	replaceFile file $ makeAnnexLink l
#ifndef __ANDROID__
	-- touch symlink to have same time as the original file,
	-- as provided in the InodeCache
	case mcache of
		Just c -> liftIO $ touch file (TimeSpec $ inodeCacheToMtime c) False
		Nothing -> noop
#endif
	return l

{- Creates the symlink to the annexed content, and stages it in git.
 -
 - As long as the filesystem supports symlinks, we use
 - git add, rather than directly staging the symlink to git.
 - Using git add is best because it allows the queuing to work
 - and is faster (staging the symlink runs hash-object commands each time).
 - Also, using git add allows it to skip gitignored files, unless forced
 - to include them.
 -}
addLink :: FilePath -> Key -> Maybe InodeCache -> Annex ()
addLink file key mcache = ifM (coreSymlinks <$> Annex.getGitConfig)
	( do
		_ <- link file key mcache
		params <- ifM (Annex.getState Annex.force)
			( return [Param "-f"]
			, return []
			)
		Annex.Queue.addCommand "add" (params++[Param "--"]) [file]
	, do
		l <- link file key mcache
		addAnnexLink l file
	)
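
{- Logs the key's presence when its content was ingested, and stages
 - the annex link in git. In direct mode, when the content is present,
 - the symlink is staged directly rather than via git add. -}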
cleanup :: FilePath -> Key -> Maybe InodeCache -> Bool -> CommandCleanup
cleanup file key mcache hascontent = do
	when hascontent $
		logStatus key InfoPresent
	ifM (isDirect <&&> pure hascontent)
		( do
			l <- inRepo $ gitAnnexLink file key
			stageSymlink file =<< hashSymlink l
		, addLink file key mcache
		)
	return True