Fix hang when receiving a large file into a proxied special remote

Only indicate that we're done with the bytestring once it all gets written.
Otherwise, the end of it may get garbage collected before we can process
it, leading to a hang.
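
As a rough illustration of the ordering the fix enforces, here is a
minimal sketch with made-up names (not the actual git-annex code):
write out the entire lazy bytestring first, and only signal completion
afterwards.

    import qualified Data.ByteString.Lazy as L
    import Control.Concurrent.STM
    import System.IO (Handle)

    -- Consume (write out) the whole lazy bytestring before signaling
    -- the other side that we are done with it. Signaling first is what
    -- allowed the end of the bytestring to be garbage collected before
    -- being written, causing the hang.
    writeThenSignal :: Handle -> L.ByteString -> TMVar () -> IO ()
    writeThenSignal h b donev = do
        L.hPut h b                      -- forces and writes every chunk
        atomically $ putTMVar donev ()  -- only now indicate completion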

This seems to have been introduced in commit
cdc4bd7443, which, oddly, was trying to fix a
very similar problem, but one specific to a cluster node. In that commit,
things got out of order: it signaled that it was done with the bytestring
before it had written all of it to the file.

My test case for this bug was a directory special remote
with a file being sent to it via a proxy accessed over ssh or http.
The file was 10 MB, and the transfer hung with the last few KB of it
not being received.

I've also tested this fix when proxying over http to a directory
special remote that is a cluster node, which is the case
cdc4bd7443 was dealing with.
Joey Hess 2024-10-30 12:29:37 -04:00
parent fda151a4e2
commit ccbc5189b5

@@ -249,11 +249,11 @@ proxySpecialRemote protoversion r ihdl ohdl owaitv oclosedv mexportdb = go
 	receivetofile iv h n = liftIO receivebytestring >>= \case
 		Just b -> do
+			n' <- storetofile iv h n (L.toChunks b)
 			liftIO $ atomically $
 				putTMVar owaitv ()
 					`orElse`
 				readTMVar oclosedv
-			n' <- storetofile iv h n (L.toChunks b)
 			-- Normally all the data is sent in a single
 			-- lazy bytestring. However, when the special
 			-- remote is a node in a cluster, a PUT is

@@ -23,6 +23,7 @@ git-annex (10.20241031) UNRELEASED; urgency=medium
     and then use the P2P protocol to inform the proxy that the content has
     been stored there, which will result in the same git-annex branch state
     updates as sending DATA via the proxy.
+  * Fix hang when receiving a large file into a proxied special remote.
 
  -- Joey Hess <id@joeyh.name>  Thu, 17 Oct 2024 11:02:17 -0400