fix file corruption when proxying an upload to a special remote

The file corruption consists of each chunk of the file being duplicated.
Since chunks are typically a fixed size, it would certainly be possible
to get from a corrupted file back to the original file. But this is still
bad data loss.
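As an illustration of why recovery is possible (a hypothetical sketch, not git-annex code): with a fixed chunk size, the corrupted file is just each chunk written back to back twice, so keeping one copy of each duplicated chunk reconstructs the original. Only a trailing short chunk needs special handling, since its duplicate pair is shorter than one full chunk.

```python
def corrupt(data: bytes, chunk: int) -> bytes:
    """Model the bug: every chunk of the file ends up duplicated."""
    return b"".join(data[i:i + chunk] * 2 for i in range(0, len(data), chunk))

def recover(data: bytes, chunk: int) -> bytes:
    """Undo the duplication by keeping one copy of each chunk pair."""
    out = []
    i = 0
    while i < len(data):
        piece = data[i:i + chunk]
        if len(piece) < chunk:
            # A trailing short chunk appears twice back to back,
            # so the remaining bytes are two copies of it.
            piece = piece[:len(piece) // 2]
        out.append(piece)
        i += 2 * len(piece)  # skip over the duplicate copy
    return b"".join(out)
```

For example, `recover(corrupt(b"abcd", 2), 2)` gives back `b"abcd"`.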

Reversion was in commit fcc052bed8.
Luckily that did not make the most recent release.
Author: Joey Hess <id@joeyh.name>
Date:   2024-08-06 14:38:45 -04:00
Parent: 34c10d082d
Commit: 3cc03b4c96
GPG key ID: DB12DB0FF05F8F38
2 changed files with 2 additions and 2 deletions


@@ -210,7 +210,6 @@ proxySpecialRemote protoversion r ihdl ohdl owaitv oclosedv = go
 	storetofile _ _ n [] = pure n
 	storetofile iv h n (b:bs) = do
 		writeVerifyChunk iv h b
-		B.hPut h b
 		storetofile iv h (n - fromIntegral (B.length b)) bs
 	proxyget offset af k = withproxytmpfile k $ \tmpfile -> do
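The deleted line is the whole bug: writeVerifyChunk already writes the chunk to the handle as well as feeding it to the incremental verifier, so the extra hPut wrote every chunk a second time. A hypothetical model of this (in Python, not the actual Haskell code; names mirror the diff but are illustrative) also shows why the hash verification itself still passed, since the verifier saw each chunk exactly once:

```python
import hashlib
import io

def write_verify_chunk(verifier, handle, chunk: bytes) -> None:
    # Feeds the chunk to the incremental verifier AND writes it out.
    verifier.update(chunk)
    handle.write(chunk)

def store(handle, chunks, buggy: bool) -> str:
    verifier = hashlib.sha256()
    for b in chunks:
        write_verify_chunk(verifier, handle, b)
        if buggy:
            handle.write(b)  # the removed line: writes each chunk twice
    return verifier.hexdigest()

chunks = [b"hello ", b"world"]
good, bad = io.BytesIO(), io.BytesIO()
good_hash = store(good, chunks, buggy=False)
bad_hash = store(bad, chunks, buggy=True)
# Both runs compute the same (correct) hash, but the buggy run's
# file contains every chunk duplicated.
```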


@@ -1,8 +1,9 @@
 git-annex (10.20240732) UNRELEASED; urgency=medium
-  * Avoid loading cluster log at startup.
   * Remove debug output (to stderr) accidentially included in
     last version.
   * When proxying an upload to a special remote, verify the hash.
+  * Avoid loading cluster log at startup.
 -- Joey Hess <id@joeyh.name>  Wed, 31 Jul 2024 15:52:03 -0400