git-annex/Annex/ReplaceFile.hs


{- git-annex file replacing
 -
 - Copyright 2013 Joey Hess <joey@kitenet.net>
 -
 - Licensed under the GNU GPL version 3 or higher.
 -}

module Annex.ReplaceFile where

import Common.Annex
import Annex.Perms

{- Replaces a possibly already existing file with a new version,
 - atomically, by running an action.
 -
 - The action is passed a temp file, which it can write to, and once
 - done the temp file is moved into place.
 -
 - The action can throw an IO exception, in which case the temp file
 - will be deleted, and the existing file will be preserved.
 -
 - Throws an IO exception if it is unable to replace the file.
 -}
replaceFile :: FilePath -> (FilePath -> Annex ()) -> Annex ()
replaceFile file action = replaceFileOr file action (liftIO . nukeFile)
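
{- Usage sketch (illustrative only, not part of this module): atomically
 - writing new content to a file from within the Annex monad. The
 - function name, file, and content below are hypothetical.
 -
 - > writeNote :: FilePath -> String -> Annex ()
 - > writeNote file content = replaceFile file $ \tmpfile ->
 - >     liftIO $ writeFile tmpfile content
 -
 - If writeFile throws, the temp file is deleted and any existing
 - version of the file is left untouched.
 -}
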
{- If unable to replace the file with the temp file, runs the
 - rollback action, which is responsible for cleaning up the temp file. -}
replaceFileOr :: FilePath -> (FilePath -> Annex ()) -> (FilePath -> Annex ()) -> Annex ()
replaceFileOr file action rollback = do
    tmpdir <- fromRepo gitAnnexTmpMiscDir
    void $ createAnnexDirectory tmpdir
    tmpfile <- liftIO $ setup tmpdir
    go tmpfile `catchNonAsync` (const $ rollback tmpfile)
  where
    setup tmpdir = do
        (tmpfile, h) <- openTempFileWithDefaultPermissions tmpdir "tmp"
        hClose h
        return tmpfile
    go tmpfile = do
        action tmpfile
        liftIO $ replaceFileFrom tmpfile file
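
{- Moves the src file into place as dest. If the move fails, for
 - example because the directory containing dest does not exist yet,
 - creates that directory and retries. -}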
replaceFileFrom :: FilePath -> FilePath -> IO ()
replaceFileFrom src dest = go `catchIO` fallback
  where
    go = moveFile src dest
    fallback _ = do
        createDirectoryIfMissing True $ takeDirectory dest
        go
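
{- Sketch of calling replaceFileOr directly (illustrative, with a
 - hypothetical function name): keep the temp file around for
 - inspection instead of deleting it when the replacement fails.
 -
 - > replaceOrKeepTemp :: FilePath -> (FilePath -> Annex ()) -> Annex ()
 - > replaceOrKeepTemp file action = replaceFileOr file action $ \tmpfile ->
 - >     liftIO $ putStrLn $ "could not replace " ++ file ++ "; temp file left at " ++ tmpfile
 -}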