{- git-annex actions
 -
 - Copyright 2010-2022 Joey Hess <id@joeyh.name>
 -
 - Licensed under the GNU AGPL version 3 or higher.
 -}

{-# LANGUAGE CPP #-}

module Annex.Action (
	action,
	verifiedAction,
	startup,
	quiesce,
	stopCoProcesses,
) where

import qualified Data.Map as M

import Annex.Common
import qualified Annex
import Annex.Content
import Annex.CatFile
import Annex.CheckAttr
import Annex.HashObject
import Annex.CheckIgnore
import Annex.TransferrerPool
import qualified Database.Keys

import Control.Concurrent.STM
#ifndef mingw32_HOST_OS
import System.Posix.Signals
#endif

{- Runs an action that may throw exceptions, catching and displaying them. -}
action :: Annex () -> Annex Bool
action a = tryNonAsync a >>= \case
	Right () -> return True
	Left e -> do
		warning (show e)
		return False

verifiedAction :: Annex Verification -> Annex (Bool, Verification)
verifiedAction a = tryNonAsync a >>= \case
	Right v -> return (True, v)
	Left e -> do
		warning (show e)
		return (False, UnVerified)

{- Actions to perform each time git-annex is run. -}
startup :: Annex ()
startup = do
#ifndef mingw32_HOST_OS
|
2021-04-02 19:26:21 +00:00
|
|
|
av <- Annex.getRead Annex.signalactions
|
2020-12-11 19:28:58 +00:00
|
|
|
let propagate sig = liftIO $ installhandleronce sig av
|
|
|
|
propagate sigINT
|
|
|
|
propagate sigQUIT
|
|
|
|
propagate sigTERM
|
|
|
|
propagate sigTSTP
|
|
|
|
propagate sigCONT
|
|
|
|
propagate sigHUP
|
|
|
|
-- sigWINCH is not propagated; it should not be needed,
|
|
|
|
-- and the concurrent-output library installs its own signal
|
|
|
|
-- handler for it.
|
|
|
|
-- sigSTOP and sigKILL cannot be caught, so will not be propagated.
|
|
|
|
  where
	installhandleronce sig av = void $
		installHandler sig (CatchOnce (gotsignal sig av)) Nothing
	gotsignal sig av = do
		mapM_ (\a -> a (fromIntegral sig)) =<< atomically (readTVar av)
		raiseSignal sig
		installhandleronce sig av
#else
	return ()
#endif

{- Run all cleanup actions, save all state, and stop all long-running
 - child processes.
 -
 - This can be run repeatedly with other Annex actions run in between,
 - but usually it is run only once at the end.
 -
 - When passed True, avoids making any commits to the git-annex branch,
 - leaving changes in the journal for later commit.
 -}
quiesce :: Bool -> Annex ()
quiesce nocommit = do
	cas <- Annex.withState $ \st -> return
		( st { Annex.cleanupactions = mempty }
		, Annex.cleanupactions st
		)
	sequence_ (M.elems cas)
	saveState nocommit
	stopCoProcesses
	Database.Keys.closeDb

{- Stops all long-running child processes, including git query processes. -}
|
2020-04-17 18:36:45 +00:00
|
|
|
stopCoProcesses :: Annex ()
|
|
|
|
stopCoProcesses = do
	catFileStop
	checkAttrStop
	hashObjectStop
	checkIgnoreStop
	emptyTransferrerPool