Fix data loss bug in directory special remote

When moving a file to the remote failed, and partially transferred content
was left behind in the directory, re-running the same move would think it
succeeded and delete the local copy. (The presence check only tests whether
a file exists at the destination location, not whether its content is
complete, so the leftover partial file looked like a finished transfer.)

I reproduced data loss when moving files to a partition that was almost
full. Interrupting a transfer could have similar results.

Easily fixed by using a temp file which is then moved atomically into place
once the transfer completes.
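
In outline, the pattern is: write to a ".tmp" file next to the destination,
and only rename it into place on success. Here's a minimal standalone sketch
of that pattern (storeAtomically and transfer are illustrative names, not
git-annex's actual API):

	import Control.Monad (when)
	import System.Directory (createDirectoryIfMissing, renameFile)
	import System.FilePath (takeDirectory)

	-- Run a transfer action against a ".tmp" file next to the
	-- destination, and only rename the result into place if the
	-- action reports success. Since the temp file is in the same
	-- directory (hence the same filesystem) as dest, renameFile
	-- is an atomic rename(2).
	storeAtomically :: FilePath -> (FilePath -> IO Bool) -> IO Bool
	storeAtomically dest transfer = do
		let tmpdest = dest ++ ".tmp"
		createDirectoryIfMissing True (takeDirectory dest)
		ok <- transfer tmpdest
		-- A failed or interrupted transfer leaves only the
		-- ".tmp" file behind, so a later presence check on dest
		-- cannot mistake partial content for a completed transfer.
		when ok $ renameFile tmpdest dest
		return ok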

I've audited other calls to copyFileExternal, and other special remote
file transfer code; everything else seems to use temp files correctly
(rsync, git), or otherwise use atomic transfers (bup, S3).
Joey Hess 2012-01-16 16:28:07 -04:00
parent 0c4f12e8a2
commit f161b5eb59
2 changed files with 12 additions and 1 deletion

Remote/Directory.hs

@@ -98,11 +98,13 @@ storeEncrypted d (cipher, enck) k = do
 storeHelper :: FilePath -> Key -> (FilePath -> IO Bool) -> IO Bool
 storeHelper d key a = do
 	let dest = Prelude.head $ locations d key
+	let tmpdest = dest ++ ".tmp"
 	let dir = parentDir dest
 	createDirectoryIfMissing True dir
 	allowWrite dir
-	ok <- a dest
+	ok <- a tmpdest
 	when ok $ do
+		renameFile tmpdest dest
 		preventWrite dest
 		preventWrite dir
 	return ok
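
Since tmpdest lives in the same directory as dest, the renameFile call is a
same-filesystem rename(2), which is atomic: other readers see either no file
or the complete file at dest, never partial content. An interrupted transfer
leaves behind only the ".tmp" file, which is not at any of the key's
locations and so cannot satisfy a presence check.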

debian/changelog

@@ -1,3 +1,12 @@
+git-annex (3.20120116) UNRELEASED; urgency=low
+
+  * Fix data loss bug in directory special remote, when moving a file
+    to the remote failed, and partially transferred content was left
+    behind in the directory, re-running the same move would think it
+    succeeded and delete the local copy.
+
+ -- Joey Hess <joeyh@debian.org>  Mon, 16 Jan 2012 16:21:51 -0400
+
 git-annex (3.20120115) unstable; urgency=low
 
   * Add a sanity check for bad StatFS results. On architectures