assistant: Add 1/200th second delay between checking each file in the full transfer scan, to avoid using too much CPU.

The slowdown is not going to be large in typical small-ish repos.
And it does not seem to matter if the assistant reacts a little bit slower
in situations involving the expensive scan, since:

a) Those situations typically involve getting back in sync after something
   has changed on a remote, often after a disconnect of some duration.
   So taking a few seconds more is not noticeable.
b) If the scan finds things that it needs to do, it will start
   blocking anyway after 10 transfers are queued (due to use of
   queueTransferWhenSmall). So, only the speed of finding the first 10
   transfers will be impacted by this change.
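The backpressure described in (b) can be sketched with a bounded queue. This is a hypothetical illustration, not git-annex's actual `queueTransferWhenSmall` implementation: a `TBQueue` with capacity 10 makes the producing (scanning) thread block on `writeTBQueue` once 10 items are queued, until a consumer drains it.

```haskell
import Control.Concurrent.STM

-- Hypothetical sketch of the blocking behavior described above:
-- a bounded queue of capacity 10. Once full, any further
-- writeTBQueue call blocks the scanning thread.
main :: IO ()
main = do
	q <- newTBQueueIO 10 :: IO (TBQueue Int)
	-- queue 10 "transfers"; all succeed without blocking
	mapM_ (atomically . writeTBQueue q) [1..10]
	-- an 11th write would now block until a consumer reads
	full <- atomically (isFullTBQueue q)
	print full
```

Because the scanner blocks as soon as the queue is full, only the time to find the first 10 transfers is affected by the added per-file delay.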

This commit was sponsored by Jochen Bartl on Patreon.
Joey Hess 2017-03-06 13:32:47 -04:00
parent 113b48ba19
commit af2a6d578e
GPG key ID: C910D9222512E3C7
3 changed files with 44 additions and 0 deletions


@@ -25,6 +25,7 @@ import qualified Types.Remote as Remote
import Utility.ThreadScheduler
import Utility.NotificationBroadcaster
import Utility.Batch
import Utility.ThreadScheduler
import qualified Git.LsFiles as LsFiles
import Annex.WorkTree
import Annex.Content
@@ -32,6 +33,7 @@ import Annex.Wanted
import CmdLine.Action
import qualified Data.Set as S
import Control.Concurrent
{- This thread waits until a remote needs to be scanned, to find transfers
- that need to be made, to keep data in sync.
@@ -145,6 +147,10 @@ expensiveScan urlrenderer rs = batch <~> do
(findtransfers f unwanted)
=<< liftAnnex (lookupFile f)
mapM_ (enqueue f) ts
{- Delay for a short time to avoid using too much CPU. -}
liftIO $ threadDelay $ fromIntegral $ oneSecond `div` 200
scan unwanted' fs
enqueue f (r, t) =
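The throttle itself is a single `threadDelay` per scanned file. A minimal standalone sketch, assuming `oneSecond` is one second expressed in microseconds (a stand-in for the value imported from `Utility.ThreadScheduler`; `threadDelay` takes microseconds):

```haskell
import Control.Concurrent (threadDelay)

-- Local stand-in for Utility.ThreadScheduler's oneSecond:
-- one second, in the microseconds that threadDelay expects.
oneSecond :: Int
oneSecond = 1000000

main :: IO ()
main = do
	-- each file scanned costs an extra 1/200th of a second
	let pause = oneSecond `div` 200
	threadDelay pause
	print pause
```

So each file adds a 5000-microsecond (5 ms) pause, which caps the scan's CPU use while keeping the per-file overhead small relative to the work of looking up and queueing a transfer.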