Fix data loss bug in directory special remote
When moving a file to the remote failed, and partially transferred content was left behind in the directory, re-running the same move would think it succeeded and delete the local copy.

I reproduced data loss when moving files to a partition that was almost full. Interrupting a transfer could have similar results.

Easily fixed by using a temp file which is then moved atomically into place once the transfer completes.

I've audited other calls to copyFileExternal, and other special remote file transfer code; everything else seems to use temp files correctly (rsync, git), or otherwise use atomic transfers (bup, S3).
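The diff below shows the change in context. As a standalone illustration, here is a minimal sketch of the same temp-file pattern; storeAtomic and writer are hypothetical names, not from the git-annex source, and the writer action stands in for storeHelper's parameter a:

	import Control.Monad (when)
	import System.Directory (renameFile)

	-- Sketch of the fix: write to a temp file beside the destination,
	-- and only rename it into place once the transfer reports success.
	storeAtomic :: FilePath -> (FilePath -> IO Bool) -> IO Bool
	storeAtomic dest writer = do
		let tmpdest = dest ++ ".tmp"
		-- A partial or interrupted transfer can only ever land in
		-- tmpdest, never dest, so dest existing means a complete store.
		ok <- writer tmpdest
		when ok $ renameFile tmpdest dest
		return ok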
parent 0c4f12e8a2
commit f161b5eb59
2 changed files with 12 additions and 1 deletion
@@ -98,11 +98,13 @@ storeEncrypted d (cipher, enck) k = do
 storeHelper :: FilePath -> Key -> (FilePath -> IO Bool) -> IO Bool
 storeHelper d key a = do
 	let dest = Prelude.head $ locations d key
+	let tmpdest = dest ++ ".tmp"
 	let dir = parentDir dest
 	createDirectoryIfMissing True dir
 	allowWrite dir
-	ok <- a dest
+	ok <- a tmpdest
 	when ok $ do
+		renameFile tmpdest dest
 		preventWrite dest
 		preventWrite dir
 	return ok
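A note on why this works: because tmpdest is dest ++ ".tmp", the temp file lives in the same directory, and therefore on the same filesystem, as the final location, so on POSIX systems renameFile can map to an atomic rename(2). A failed or interrupted transfer now leaves behind only the .tmp file, which a re-run of the move will not mistake for successfully stored content.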