Fix a crash (STM deadlock) when -J is used with multiple files that point to the same key

See the comment for a trace of the deadlock.

Added a new StartStage. New worker threads begin in the StartStage.
Once a thread is ready to do work, it moves away from the StartStage,
and no thread will ever transition back to it.

A thread that blocks waiting on another thread that is processing
the same key will block while in the StartStage. That other thread
will never switch back to the StartStage, and so the deadlock is avoided.
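
As a rough illustration of that invariant, here is a minimal standalone sketch in Haskell. It is not git-annex's WorkerPool code; Pool, takeSlot, releaseSlot and leaveStartStage are made-up names, used only to show why a one-way exit from a dedicated start stage keeps a newly started, blocked thread from starving threads that need to re-enter a real working stage.

    import Control.Concurrent.STM
    import Data.List (delete)

    data Stage = StartStage | PerformStage | CleanupStage
        deriving (Eq, Show)

    -- One slot per stage; a thread "occupies" a stage while it holds that slot.
    newtype Pool = Pool (TVar [Stage])

    newPool :: STM Pool
    newPool = Pool <$> newTVar [StartStage, PerformStage, CleanupStage]

    -- Claim an idle slot for the given stage, retrying until one is free.
    takeSlot :: Pool -> Stage -> STM ()
    takeSlot (Pool tv) s = do
        idle <- readTVar tv
        if s `elem` idle
            then writeTVar tv (delete s idle)
            else retry

    -- Return a slot to the pool.
    releaseSlot :: Pool -> Stage -> STM ()
    releaseSlot (Pool tv) s = modifyTVar' tv (s :)

    -- The only transition out of StartStage: give up the start slot and claim
    -- the first real working stage in a single transaction. Nothing ever
    -- performs the reverse swap, so a thread parked in StartStage can never
    -- be the reason another thread fails to re-enter a working stage.
    leaveStartStage :: Pool -> Stage -> STM ()
    leaveStartStage pool initial = do
        releaseSlot pool StartStage
        takeSlot pool initial

Modeled this way, the situation from the trace in the comment below is harmless: a thread parked in StartStage holds no working-stage slot, so another thread swapping back to its earlier stage can always find that slot.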
Joey Hess, 2019-11-14 11:31:43 -04:00
commit 667d38a8f1, parent 20d9a9b662, GPG key ID: DB12DB0FF05F8F38
6 changed files with 114 additions and 24 deletions


@@ -90,10 +90,20 @@ enteringStage newstage a = Annex.getState Annex.workers >>= \case
 	Nothing -> a
 	Just tv -> do
 		mytid <- liftIO myThreadId
-		let set = changeStageTo mytid tv newstage
-		let restore = maybe noop (void . changeStageTo mytid tv)
+		let set = changeStageTo mytid tv (const newstage)
+		let restore = maybe noop (void . changeStageTo mytid tv . const)
 		bracket set restore (const a)
 
+{- Transition the current thread to the initial stage.
+ - This is done once the thread is ready to begin work.
+ -}
+enteringInitialStage :: Annex ()
+enteringInitialStage = Annex.getState Annex.workers >>= \case
+	Nothing -> noop
+	Just tv -> do
+		mytid <- liftIO myThreadId
+		void $ changeStageTo mytid tv initialStage
+
 {- This needs to leave the WorkerPool with the same number of
  - idle and active threads, and with the same number of threads for each
  - WorkerStage. So, all it can do is swap the WorkerStage of our thread's
@@ -110,14 +120,15 @@ enteringStage newstage a = Annex.getState Annex.workers >>= \case
  - in the pool than spareVals. That does not prevent other threads that call
  - this from using them though, so it's fine.
  -}
-changeStageTo :: ThreadId -> TMVar (WorkerPool AnnexState) -> WorkerStage -> Annex (Maybe WorkerStage)
-changeStageTo mytid tv newstage = liftIO $
+changeStageTo :: ThreadId -> TMVar (WorkerPool AnnexState) -> (UsedStages -> WorkerStage) -> Annex (Maybe WorkerStage)
+changeStageTo mytid tv getnewstage = liftIO $
 	replaceidle >>= maybe
 		(return Nothing)
 		(either waitidle (return . Just))
   where
 	replaceidle = atomically $ do
 		pool <- takeTMVar tv
+		let newstage = getnewstage (usedStages pool)
 		let notchanging = do
 			putTMVar tv pool
 			return Nothing
@@ -128,7 +139,7 @@ changeStageTo mytid tv newstage = liftIO $
 			Nothing -> do
 				putTMVar tv $
 					addWorkerPool (IdleWorker oldstage) pool'
-				return $ Just $ Left (myaid, oldstage)
+				return $ Just $ Left (myaid, newstage, oldstage)
 			Just pool'' -> do
 				-- optimisation
 				putTMVar tv $
@@ -139,27 +150,26 @@ changeStageTo mytid tv newstage = liftIO $
 				_ -> notchanging
 			else notchanging
 
-	waitidle (myaid, oldstage) = atomically $ do
+	waitidle (myaid, newstage, oldstage) = atomically $ do
 		pool <- waitIdleWorkerSlot newstage =<< takeTMVar tv
 		putTMVar tv $ addWorkerPool (ActiveWorker myaid newstage) pool
 		return (Just oldstage)
 
--- | Waits until there's an idle worker in the worker pool
--- for its initial stage, removes it from the pool, and returns its state.
+-- | Waits until there's an idle StartStage worker in the worker pool,
+-- removes it from the pool, and returns its state.
 --
 -- If the worker pool is not already allocated, returns Nothing.
-waitInitialWorkerSlot :: TMVar (WorkerPool Annex.AnnexState) -> STM (Maybe (Annex.AnnexState, WorkerStage))
-waitInitialWorkerSlot tv = do
+waitStartWorkerSlot :: TMVar (WorkerPool Annex.AnnexState) -> STM (Maybe (Annex.AnnexState, WorkerStage))
+waitStartWorkerSlot tv = do
 	pool <- takeTMVar tv
-	let stage = initialStage (usedStages pool)
-	st <- go stage pool
-	return $ Just (st, stage)
+	st <- go pool
+	return $ Just (st, StartStage)
   where
-	go wantstage pool = case spareVals pool of
+	go pool = case spareVals pool of
 		[] -> retry
 		(v:vs) -> do
 			let pool' = pool { spareVals = vs }
-			putTMVar tv =<< waitIdleWorkerSlot wantstage pool'
+			putTMVar tv =<< waitIdleWorkerSlot StartStage pool'
 			return v
 
 waitIdleWorkerSlot :: WorkerStage -> WorkerPool Annex.AnnexState -> STM (WorkerPool Annex.AnnexState)


@@ -3,6 +3,8 @@ git-annex (7.20191107) UNRELEASED; urgency=medium
   * Added annex.allowsign option.
   * Make --json-error-messages capture more errors,
     particularly url download errors.
+  * Fix a crash (STM deadlock) when -J is used with multiple files
+    that point to the same key.
 
  -- Joey Hess <id@joeyh.name>  Mon, 11 Nov 2019 15:59:47 -0400


@@ -63,7 +63,7 @@ commandAction start = Annex.getState Annex.concurrency >>= \case
 	runconcurrent = Annex.getState Annex.workers >>= \case
 		Nothing -> runnonconcurrent
 		Just tv ->
-			liftIO (atomically (waitInitialWorkerSlot tv)) >>=
+			liftIO (atomically (waitStartWorkerSlot tv)) >>=
 				maybe runnonconcurrent (runconcurrent' tv)
 	runconcurrent' tv (workerst, workerstage) = do
 		aid <- liftIO $ async $ snd <$> Annex.run workerst
@@ -99,12 +99,13 @@ commandAction start = Annex.getState Annex.concurrency >>= \case
 			case mkActionItem startmsg' of
 				OnlyActionOn k' _ | k' /= k ->
 					concurrentjob' workerst startmsg' perform'
-				_ -> mkjob workerst startmsg' perform'
+				_ -> beginjob workerst startmsg' perform'
 		Nothing -> noop
-		_ -> mkjob workerst startmsg perform
+		_ -> beginjob workerst startmsg perform
 
-	mkjob workerst startmsg perform =
-		inOwnConsoleRegion (Annex.output workerst) $
+	beginjob workerst startmsg perform =
+		inOwnConsoleRegion (Annex.output workerst) $ do
+			enteringInitialStage
 			void $ accountCommandAction startmsg $
 				performconcurrent startmsg perform


@@ -40,7 +40,12 @@ instance Show (Worker t) where
 	show (ActiveWorker _ s) = "ActiveWorker " ++ show s
 
 data WorkerStage
-	= PerformStage
+	= StartStage
+	-- ^ All threads start in this stage, and then transition away from
+	-- it to the initialStage when they begin doing work. This should
+	-- never be included in UsedStages, because transition from some
+	-- other stage back to this one could result in a deadlock.
+	| PerformStage
 	-- ^ Running a CommandPerform action.
 	| CleanupStage
 	-- ^ Running a CommandCleanup action.
@@ -102,12 +107,13 @@ workerAsync (ActiveWorker aid _) = Just aid
 allocateWorkerPool :: t -> Int -> UsedStages -> WorkerPool t
 allocateWorkerPool t n u = WorkerPool
 	{ usedStages = u
-	, workerList = take totalthreads $ map IdleWorker stages
+	, workerList = map IdleWorker $
+		take totalthreads $ concat $ repeat stages
 	, spareVals = replicate totalthreads t
 	}
   where
-	stages = concat $ repeat $ S.toList $ stageSet u
-	totalthreads = n * S.size (stageSet u)
+	stages = StartStage : S.toList (stageSet u)
+	totalthreads = n * length stages
 
 addWorkerPool :: Worker t -> WorkerPool t -> WorkerPool t
 addWorkerPool w pool = pool { workerList = w : workerList pool }


@@ -69,3 +69,5 @@ I felt like it is an old issue but failed to find a trace of it upon a quick loo
 
 [[!meta author=yoh]]
 [[!tag projects/datalad]]
+
+> [[fixed|done]] --[[Joey]]


@@ -0,0 +1,69 @@ (new file)
[[!comment format=mdwn
 username="joey"
 subject="""comment 6"""
 date="2019-11-14T15:20:13Z"
 content="""
Added tracing of changes to the WorkerPool.

	joey@darkstar:/tmp/dst>git annex get -J1 1 2 --json
	("initial pool",WorkerPool UsedStages {initialStage = TransferStage, stageSet = fromList [TransferStage,VerifyStage]} [IdleWorker TransferStage,IdleWorker VerifyStage] 2)
	("starting worker",WorkerPool UsedStages {initialStage = TransferStage, stageSet = fromList [TransferStage,VerifyStage]} [ActiveWorker TransferStage,IdleWorker VerifyStage] 1)

Transfer starts for file 1.

	(("change stage from",TransferStage,"to",VerifyStage),WorkerPool UsedStages {initialStage = TransferStage, stageSet = fromList [TransferStage,VerifyStage]} [IdleWorker TransferStage,ActiveWorker VerifyStage] 1)

Transfer complete, verifying starts.

	("starting worker",WorkerPool UsedStages {initialStage = TransferStage, stageSet = fromList [TransferStage,VerifyStage]} [ActiveWorker TransferStage,ActiveWorker VerifyStage] 0)

This second thread is being started to process file 2.
It starts in TransferStage, but it will be blocked from doing anything
by ensureOnlyActionOn.

	("finishCommandActions starts with",WorkerPool UsedStages {initialStage = TransferStage, stageSet = fromList [TransferStage,VerifyStage]} [ActiveWorker TransferStage,ActiveWorker VerifyStage] 0)
	("finishCommandActions observes",WorkerPool UsedStages {initialStage = TransferStage, stageSet = fromList [TransferStage,VerifyStage]} [ActiveWorker TransferStage,ActiveWorker VerifyStage] 0)

All files have threads to process them started, so finishCommandActions starts up.
It will retry, since the threads are still running.

	(("change stage from",VerifyStage,"to",TransferStage),WorkerPool UsedStages {initialStage = TransferStage, stageSet = fromList [TransferStage,VerifyStage]} [IdleWorker VerifyStage,ActiveWorker TransferStage] 0)

The first thread is done with verification, and
the stage is being restored to transfer.

The 0 means that there are 0 spareVals. Normally, the number of spareVals
should be the same as the number of IdleWorkers, so it should be 1.
It's 0 because the thread is in the process of changing between stages.

The thread should at this point be waiting for an idle TransferStage
slot to become available. The second thread still has that active.
It seems that wait never completes, because a trace I had after that wait
never got printed.

	("finishCommandActions observes",WorkerPool UsedStages {initialStage = TransferStage, stageSet = fromList [TransferStage,VerifyStage]} [IdleWorker VerifyStage,ActiveWorker TransferStage] 0)

It retries again, because of the active worker and also because spareVals
is not the same as IdleWorkers.

	git-annex: thread blocked indefinitely in an STM transaction

Deadlock.

Looks like that second thread that got into transfer stage
never leaves it, and then the first thread, which wants to
restore back to transfer stage, is left waiting forever for it. And so is
finishCommandActions.

Aha! The second thread is in fact still in ensureOnlyActionOn.
So it's waiting on the first thread to finish. But the first thread can't
transition back to TransferStage because the second thread has stolen it.
Now it makes sense.

So... one way to fix this would be to add a new stage, which is used for
threads that are just starting. Then the second thread would be in
StartStage, and the first thread would not be prevented from transitioning
back to TransferStage. Would need to make sure that, once a thread leaves
StartStage, it does not ever transition back to it.
"""]]