{- git-annex assistant commit thread
 -
 - Copyright 2012 Joey Hess <id@joeyh.name>
 -
 - Licensed under the GNU GPL version 3 or higher.
 -}

{-# LANGUAGE CPP #-}

module Assistant.Threads.Committer where

import Assistant.Common
import Assistant.Changes
import Assistant.Types.Changes
import Assistant.Commits
import Assistant.Alert
import Assistant.DaemonStatus
import Assistant.TransferQueue
import Assistant.Drop
import Logs.Transfer
import Logs.Location
import qualified Annex.Queue
import qualified Git.LsFiles
import qualified Command.Add
import Utility.ThreadScheduler
import qualified Utility.Lsof as Lsof
import qualified Utility.DirWatcher as DirWatcher
import Types.KeySource
import Config
import Annex.Content
import Annex.Link
import Annex.CatFile
import qualified Annex
import Utility.InodeCache
import Annex.Content.Direct
import qualified Command.Sync
import qualified Git.Branch

import Data.Time.Clock
import Data.Tuple.Utils
import qualified Data.Set as S
import qualified Data.Map as M
import Data.Either
import Control.Concurrent

{- This thread makes git commits at appropriate times. -}
commitThread :: NamedThread
commitThread = namedThread "Committer" $ do
	havelsof <- liftIO $ inPath "lsof"
	delayadd <- liftAnnex $
		maybe delayaddDefault (return . Just . Seconds)
			=<< annexDelayAdd <$> Annex.getGitConfig
	waitChangeTime $ \(changes, time) -> do
		readychanges <- handleAdds havelsof delayadd changes
		if shouldCommit False time (length readychanges) readychanges
			then do
				debug
					[ "committing"
					, show (length readychanges)
					, "changes"
					]
				void $ alertWhile commitAlert $
					liftAnnex commitStaged
				recordCommit
				let numchanges = length readychanges
				mapM_ checkChangeContent readychanges
				return numchanges
			else do
				refill readychanges
				return 0
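
{- Puts changes back into the channel, to be committed later. -}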
refill :: [Change] -> Assistant ()
refill [] = noop
refill cs = do
	debug ["delaying commit of", show (length cs), "changes"]
	refillChanges cs

{- Wait for one or more changes to arrive to be committed, and then
 - run an action to commit them. If more changes arrive while this is
 - going on, they're handled intelligently, batching up changes into
 - large commits where possible, doing rename detection, and
 - committing immediately otherwise. -}
waitChangeTime :: (([Change], UTCTime) -> Assistant Int) -> Assistant ()
waitChangeTime a = waitchanges 0
  where
	waitchanges lastcommitsize = do
		-- Wait one second as a simple rate limiter.
		liftIO $ threadDelaySeconds (Seconds 1)
		-- Now, wait until at least one change is available for
		-- processing.
		cs <- getChanges
		handlechanges cs lastcommitsize
	handlechanges changes lastcommitsize = do
		let len = length changes
		-- See if now's a good time to commit.
		now <- liftIO getCurrentTime
		scanning <- not . scanComplete <$> getDaemonStatus
		case (lastcommitsize >= maxCommitSize, shouldCommit scanning now len changes, possiblyrename changes) of
			(True, True, _)
				| len > maxCommitSize ->
					a (changes, now) >>= waitchanges
				| otherwise -> aftermaxcommit changes
			(_, True, False) ->
				a (changes, now) >>= waitchanges
			(_, True, True) -> do
				morechanges <- getrelatedchanges changes
				a (changes ++ morechanges, now) >>= waitchanges
			_ -> do
				refill changes
				waitchanges lastcommitsize

	{- Did we perhaps only get one of the AddChange and RmChange pair
	 - that make up a file rename? Or some of the pairs that make up
	 - a directory rename?
	 -}
	possiblyrename = all renamepart

	renamepart (PendingAddChange _ _) = True
	renamepart c = isRmChange c

	{- Gets changes related to the passed changes, without blocking
	 - very long.
	 -
	 - If there are multiple RmChanges, this is probably a directory
	 - rename, in which case it may be necessary to wait longer to get
	 - all the Changes involved.
	 -}
	getrelatedchanges oldchanges
		| length (filter isRmChange oldchanges) > 1 =
			concat <$> getbatchchanges []
		| otherwise = do
			liftIO humanImperceptibleDelay
			getAnyChanges
	getbatchchanges cs = do
		liftIO $ threadDelay $ fromIntegral $ oneSecond `div` 10
		cs' <- getAnyChanges
		if null cs'
			then return cs
			else getbatchchanges (cs':cs)

	{- The last commit was maximum size, so it's very likely there
	 - are more changes and we'd like to ensure we make another commit
	 - of maximum size if possible.
	 -
	 - But, it can take a while for the Watcher to wake back up
	 - after a commit. It can get blocked by another thread
	 - that is using the Annex state, such as a git-annex branch
	 - commit. Especially after such a large commit, this can
	 - take several seconds. When this happens, it defeats the
	 - normal commit batching, which sees some old changes the
	 - Watcher found while the commit was being prepared, and sees
	 - no recent ones, and wants to commit immediately.
	 -
	 - All that we need to do, then, is wait for the Watcher to
	 - wake up, and queue up one more change.
	 -
	 - However, it's also possible that we're at the end of changes for
	 - now. So to avoid waiting a really long time before committing
	 - those changes we have, poll for up to 30 seconds, and then
	 - commit them.
	 -
	 - Also, try to run something in Annex, to ensure we block
	 - longer if the Annex state is indeed blocked.
	 -}
	aftermaxcommit oldchanges = loop (30 :: Int)
	  where
		loop 0 = continue oldchanges
		loop n = do
			liftAnnex noop -- ensure Annex state is free
			liftIO $ threadDelaySeconds (Seconds 1)
			changes <- getAnyChanges
			if null changes
				then loop (n - 1)
				else continue (oldchanges ++ changes)
		continue cs
			| null cs = waitchanges 0
			| otherwise = handlechanges cs 0
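
{- Is this Change the removal of a file? -}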
isRmChange :: Change -> Bool
isRmChange (Change { changeInfo = i }) | i == RmChange = True
isRmChange _ = False

{- An amount of time that is hopefully imperceptibly short for humans,
 - while long enough for a computer to get some work done.
 - Note that 0.001 is a little too short for rename change batching to
 - work. -}
humanImperceptibleInterval :: NominalDiffTime
humanImperceptibleInterval = 0.01

humanImperceptibleDelay :: IO ()
humanImperceptibleDelay = threadDelay $
	truncate $ humanImperceptibleInterval * fromIntegral oneSecond
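
{- Number of changes that is considered large enough to commit
 - right away, even while the startup scan is still running. -}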
maxCommitSize :: Int
maxCommitSize = 5000

{- Decide if now is a good time to make a commit.
 - Note that the list of changes has an undefined order.
 -
 - Current strategy: If there have been 10 changes within the past second,
 - a batch activity is taking place, so wait for later.
 -}
shouldCommit :: Bool -> UTCTime -> Int -> [Change] -> Bool
shouldCommit scanning now len changes
	| scanning = len >= maxCommitSize
	| len == 0 = False
	| len >= maxCommitSize = True
	| length recentchanges < 10 = True
	| otherwise = False -- batch activity
  where
	thissecond c = timeDelta c <= 1
	recentchanges = filter thissecond changes
	timeDelta c = now `diffUTCTime` changeTime c
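
{- Flushes the git queue and commits staged changes,
 - updating the sync branch when the commit succeeds. -}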
commitStaged :: Annex Bool
commitStaged = do
	{- This could fail if there's another commit being made by
	 - something else. -}
	v <- tryNonAsync Annex.Queue.flush
	case v of
		Left _ -> return False
		Right _ -> do
			ok <- Command.Sync.commitStaged Git.Branch.AutomaticCommit ""
			when ok $
				Command.Sync.updateSyncBranch =<< inRepo Git.Branch.current
			return ok

{- OSX needs a short delay after a file is added before locking it down,
 - when using a non-direct mode repository, as pasting a file seems to
 - try to set file permissions or otherwise access the file after closing
 - it. -}
delayaddDefault :: Annex (Maybe Seconds)
#ifdef darwin_HOST_OS
delayaddDefault = ifM isDirect
	( return Nothing
	, return $ Just $ Seconds 1
	)
#else
delayaddDefault = return Nothing
#endif

{- If there are PendingAddChanges, or InProcessAddChanges, the files
 - have not yet actually been added to the annex, and that has to be done
 - now, before committing.
 -
 - Deferring the adds to this point causes batches to be bundled together,
 - which allows faster checking with lsof that the files are not still open
 - for write by some other process, and faster checking with git-ls-files
 - that the files are not already checked into git.
 -
 - When a file is added, Inotify will notice the new symlink. So this waits
 - for additional Changes to arrive, so that the symlink has hopefully been
 - staged before returning, and will be committed immediately.
 -
 - OTOH, for kqueue, eventsCoalesce, so instead the symlink is directly
 - created and staged.
 -
 - Returns a list of all changes that are ready to be committed.
 - Any pending adds that are not ready yet are put back into the ChangeChan,
 - where they will be retried later.
 -}
handleAdds :: Bool -> Maybe Seconds -> [Change] -> Assistant [Change]
handleAdds havelsof delayadd cs = returnWhen (null incomplete) $ do
	let (pending, inprocess) = partition isPendingAddChange incomplete
	direct <- liftAnnex isDirect
	(pending', cleanup) <- if direct
		then return (pending, noop)
		else findnew pending
	(postponed, toadd) <- partitionEithers <$> safeToAdd havelsof delayadd pending' inprocess
	cleanup

	unless (null postponed) $
		refillChanges postponed

	returnWhen (null toadd) $ do
		added <- addaction toadd $
			catMaybes <$> if direct
				then adddirect toadd
				else forM toadd add
		if DirWatcher.eventsCoalesce || null added || direct
			then return $ added ++ otherchanges
			else do
				r <- handleAdds havelsof delayadd =<< getChanges
				return $ r ++ added ++ otherchanges
  where
	(incomplete, otherchanges) = partition (\c -> isPendingAddChange c || isInProcessAddChange c) cs
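
	{- Filters the pending changes down to files not already checked
	 - into git, using one call to git ls-files. -}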
	findnew [] = return ([], noop)
	findnew pending@(exemplar:_) = do
		(newfiles, cleanup) <- liftAnnex $
			inRepo (Git.LsFiles.notInRepo False $ map changeFile pending)
		-- note: timestamp info is lost here
		let ts = changeTime exemplar
		return (map (PendingAddChange ts) newfiles, void $ liftIO cleanup)

	returnWhen c a
		| c = return otherchanges
		| otherwise = a
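
	{- Ingests a locked-down file into the annex, returning the
	 - Change updated with its key, or Nothing if the add failed. -}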
	add :: Change -> Assistant (Maybe Change)
	add change@(InProcessAddChange { keySource = ks }) =
		catchDefaultIO Nothing <~> doadd
	  where
		doadd = sanitycheck ks $ do
			(mkey, mcache) <- liftAnnex $ do
				showStart "add" $ keyFilename ks
				Command.Add.ingest $ Just ks
			maybe (failedingest change) (done change mcache $ keyFilename ks) mkey
	add _ = return Nothing

	{- In direct mode, avoid overhead of re-ingesting a renamed
	 - file, by examining the other Changes to see if a removed
	 - file has the same InodeCache as the new file. If so,
	 - we can just update bookkeeping, and stage the file in git.
	 -}
	adddirect :: [Change] -> Assistant [Maybe Change]
	adddirect toadd = do
		ct <- liftAnnex compareInodeCachesWith
		m <- liftAnnex $ removedKeysMap ct cs
		delta <- liftAnnex getTSDelta
		if M.null m
			then forM toadd add
			else forM toadd $ \c -> do
				mcache <- liftIO $ genInodeCache (changeFile c) delta
				case mcache of
					Nothing -> add c
					Just cache ->
						case M.lookup (inodeCacheToKey ct cache) m of
							Nothing -> add c
							Just k -> fastadd c k
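
	{- The file is a rename of a file whose key is already known,
	 - so the expensive ingest can be skipped. -}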
	fastadd :: Change -> Key -> Assistant (Maybe Change)
	fastadd change key = do
		let source = keySource change
		liftAnnex $ Command.Add.finishIngestDirect key source
		done change Nothing (keyFilename source) key
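
	{- Maps from the InodeCaches of removed files to their keys,
	 - so renamed files can be detected. -}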
	removedKeysMap :: InodeComparisonType -> [Change] -> Annex (M.Map InodeCacheKey Key)
	removedKeysMap ct l = do
		mks <- forM (filter isRmChange l) $ \c ->
			catKeyFile $ changeFile c
		M.fromList . concat <$> mapM mkpairs (catMaybes mks)
	  where
		mkpairs k = map (\c -> (inodeCacheToKey ct c, k)) <$>
			recordedInodeCache k
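
	{- The add failed; put the change back to be retried later. -}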
	failedingest change = do
		refill [retryChange change]
		liftAnnex showEndFail
		return Nothing
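
	{- Updates the location log and stages the symlink for the
	 - newly added file. -}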
	done change mcache file key = liftAnnex $ do
		logStatus key InfoPresent
		link <- ifM isDirect
			( calcRepo $ gitAnnexLink file key
			, Command.Add.link file key mcache
			)
		whenM (pure DirWatcher.eventsCoalesce <||> isDirect) $
			stageSymlink file =<< hashSymlink link
		showEndOk
		return $ Just $ finishedChange change key

	{- Check that the keysource's keyFilename still exists,
	 - and is still a hard link to its contentLocation,
	 - before ingesting it. -}
	sanitycheck keysource a = do
		fs <- liftIO $ getSymbolicLinkStatus $ keyFilename keysource
		ks <- liftIO $ getSymbolicLinkStatus $ contentLocation keysource
		if deviceID ks == deviceID fs && fileID ks == fileID fs
			then a
			else do
				-- remove the hard link
				when (contentLocation keysource /= keyFilename keysource) $
					void $ liftIO $ tryIO $ removeFile $ contentLocation keysource
				return Nothing

	{- Shows an alert while performing an action to add a file or
	 - files. When only a few files are added, their names are shown
	 - in the alert. When it's a batch add, the number of files added
	 - is shown.
	 -
	 - Add errors tend to be transient and will be
	 - automatically dealt with, so the alert is always told
	 - the add succeeded.
	 -}
	addaction [] a = a
	addaction toadd a = alertWhile' (addFileAlert $ map changeFile toadd) $
		(,)
			<$> pure True
			<*> a

{- Files can Either be Right to be added now,
 - or are unsafe, and must be Left for later.
 -
 - Check by running lsof on the repository.
 -}
safeToAdd :: Bool -> Maybe Seconds -> [Change] -> [Change] -> Assistant [Either Change Change]
safeToAdd _ _ [] [] = return []
safeToAdd havelsof delayadd pending inprocess = do
	maybe noop (liftIO . threadDelaySeconds) delayadd
	liftAnnex $ do
		keysources <- forM pending $ Command.Add.lockDown . changeFile
		let inprocess' = inprocess ++ mapMaybe mkinprocess (zip pending keysources)
		openfiles <- if havelsof
			then S.fromList . map fst3 . filter openwrite <$>
				findopenfiles (map keySource inprocess')
			else pure S.empty
		let checked = map (check openfiles) inprocess'

		{- If new events are received when files are closed,
		 - there's no need to retry any changes that cannot
		 - be done now. -}
		if DirWatcher.closingTracked
			then do
				mapM_ canceladd $ lefts checked
				allRight $ rights checked
			else return checked
  where
	check openfiles change@(InProcessAddChange { keySource = ks })
		| S.member (contentLocation ks) openfiles = Left change
	check _ change = Right change
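
	{- Converts a pending change whose lockdown succeeded into an
	 - InProcessAddChange. -}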
	mkinprocess (c, Just ks) = Just InProcessAddChange
		{ changeTime = changeTime c
		, keySource = ks
		}
	mkinprocess (_, Nothing) = Nothing

	canceladd (InProcessAddChange { keySource = ks }) = do
		warning $ keyFilename ks
			++ " still has writers, not adding"
		-- remove the hard link
		when (contentLocation ks /= keyFilename ks) $
			void $ liftIO $ tryIO $ removeFile $ contentLocation ks
	canceladd _ = noop
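
	{- Checks whether an lsof entry indicates the file may be open
	 - for write. -}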
	openwrite (_file, mode, _pid)
		| mode == Lsof.OpenWriteOnly = True
		| mode == Lsof.OpenReadWrite = True
		| mode == Lsof.OpenUnknown = True
		| otherwise = False

	allRight = return . map Right

	{- Normally the KeySources are locked down inside the temp directory,
	 - so can just lsof that, which is quite efficient.
	 -
	 - In crippled filesystem mode, there is no lock down, so must run lsof
	 - on each individual file.
	 -}
	findopenfiles keysources = ifM crippledFileSystem
		( liftIO $ do
			let segments = segmentXargs $ map keyFilename keysources
			concat <$> forM segments (\fs -> Lsof.query $ "--" : fs)
		, do
			tmpdir <- fromRepo gitAnnexTmpMiscDir
			liftIO $ Lsof.queryDir tmpdir
		)

{- After a Change is committed, queue any necessary transfers or drops
 - of the content of the key.
 -
 - This is not done during the startup scan, because the expensive
 - transfer scan does the same thing then.
 -}
checkChangeContent :: Change -> Assistant ()
checkChangeContent change@(Change { changeInfo = i }) =
	case changeInfoKey i of
		Nothing -> noop
		Just k -> whenM (scanComplete <$> getDaemonStatus) $ do
			present <- liftAnnex $ inAnnex k
			void $ if present
				then queueTransfers "new file created" Next k (Just f) Upload
				else queueTransfers "new or renamed file wanted" Next k (Just f) Download
			handleDrops "file renamed" present k (Just f) Nothing
  where
	f = changeFile change
checkChangeContent _ = noop