{- git-annex location log
 -
 - git-annex keeps track of which repositories have the contents of annexed
 - files.
 -
 - Repositories record their UUID and the date when they --get or --drop
 - a value.
 -
 - Copyright 2010-2024 Joey Hess <id@joeyh.name>
 -
 - Licensed under the GNU AGPL version 3 or higher.
 -}

{-# LANGUAGE BangPatterns #-}

module Logs.Location (
	LogStatus(..),
	logStatus,
	logStatusAfter,
	logChange,
	loggedLocations,
	loggedPreviousLocations,
	loggedLocationsHistorical,
	loggedLocationsRef,
	isKnownKey,
	checkDead,
	setDead,
	Unchecked,
	finishCheck,
	loggedKeys,
	loggedKeysFor,
	loggedKeysFor',
	overLocationLogs,
	overLocationLogs',
	overLocationLogsJournal,
	parseLoggedLocations,
	parseLoggedLocationsWithoutClusters,
) where

import Annex.Common
import qualified Annex.Branch
import Annex.Branch (FileContents)
import Annex.RepoSize.LiveUpdate
import Logs
import Logs.Presence
import Types.Cluster
import Annex.UUID
import Annex.CatFile
import Annex.VectorClock
import Git.Types (RefDate, Ref, Sha)
import qualified Annex

import Data.Time.Clock
import qualified Data.ByteString.Lazy as L
import qualified Data.Map as M
import qualified Data.Set as S

{- Log a change in the presence of a key's value in the current repository. -}
logStatus :: LiveUpdate -> Key -> LogStatus -> Annex ()
logStatus lu key s = do
	u <- getUUID
	logChange lu key u s

{- Run an action that gets the content of a key, and update the log
 - when it succeeds. -}
logStatusAfter :: LiveUpdate -> Key -> Annex Bool -> Annex Bool
logStatusAfter lu key a = ifM a
	( do
		logStatus lu key InfoPresent
		return True
	, return False
	)

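{- A usage sketch (hypothetical caller, not part of this module): wrap
 - an action that fetches a key's content, so the location log only
 - records the key as present when the fetch succeeds.
 -
 -   fetchAndLog :: LiveUpdate -> Key -> Annex Bool
 -   fetchAndLog lu key = logStatusAfter lu key (fetchContent key)
 -
 - Here fetchContent is an assumed action of type Key -> Annex Bool
 - that returns True on success.
 -}
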
{- Log a change in the presence of a key's value in a repository.
 -
 - Cluster UUIDs are not logged. Instead, when a node of a cluster is
 - logged to contain a key, loading the log will include the cluster's
 - UUID.
 -}
logChange :: LiveUpdate -> Key -> UUID -> LogStatus -> Annex ()
logChange lu key u@(UUID _) s
	| isClusterUUID u = noop
	| otherwise = do
		config <- Annex.getGitConfig
		void $ maybeAddLog
			(Annex.Branch.RegardingUUID [u])
			(locationLogFile config key)
			s
			(LogInfo (fromUUID u))
			(updateRepoSize lu u key s)
logChange _ _ NoUUID _ = noop

{- Returns a list of repository UUIDs that, according to the log, have
 - the value of a key. -}
loggedLocations :: Key -> Annex [UUID]
loggedLocations = getLoggedLocations presentLogInfo

{- Returns a list of repository UUIDs that the location log indicates
 - used to have the value of a key, but no longer do.
 -}
loggedPreviousLocations :: Key -> Annex [UUID]
loggedPreviousLocations = getLoggedLocations notPresentLogInfo

{- Gets the location log on a particular date. -}
loggedLocationsHistorical :: RefDate -> Key -> Annex [UUID]
loggedLocationsHistorical = getLoggedLocations . historicalLogInfo

{- Gets the locations contained in a git ref. -}
loggedLocationsRef :: Ref -> Annex [UUID]
loggedLocationsRef ref = map (toUUID . fromLogInfo) . getLog <$> catObject ref

{- Parses the content of a log file and gets the locations in it.
 -
 - Adds the UUIDs of any clusters whose nodes are in the list.
 -}
parseLoggedLocations :: Clusters -> L.ByteString -> [UUID]
parseLoggedLocations clusters =
	addClusterUUIDs clusters . parseLoggedLocationsWithoutClusters

parseLoggedLocationsWithoutClusters :: L.ByteString -> [UUID]
parseLoggedLocationsWithoutClusters l =
	map (toUUID . fromLogInfo . info)
		(filterPresent (parseLog l))

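{- For reference, a location log file contains lines of roughly this
 - form (illustrative values; Logs.Presence defines the actual parser
 - and format):
 -
 -   1317929100.012345s 1 e605dca6-446a-11e0-8b2a-002170d25c55
 -   1317929100.012345s 0 26339d22-446b-11e0-9101-002170d25c55
 -
 - with a vector clock, a presence status, and a repository UUID per
 - line. parseLoggedLocationsWithoutClusters keeps only the UUIDs
 - whose entries are currently present.
 -}
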
getLoggedLocations :: (RawFilePath -> Annex [LogInfo]) -> Key -> Annex [UUID]
getLoggedLocations getter key = do
	config <- Annex.getGitConfig
	locs <- map (toUUID . fromLogInfo) <$> getter (locationLogFile config key)
	clusters <- getClusters
	return $ addClusterUUIDs clusters locs

addClusterUUIDs :: Clusters -> [UUID] -> [UUID]
addClusterUUIDs clusters locs
	| M.null clustermap = locs
	-- ^ optimisation for common case of no clusters
	| otherwise = clusterlocs ++ locs
  where
	clustermap = clusterNodeUUIDs clusters
	clusterlocs = map fromClusterUUID $ S.toList $
		S.unions $ mapMaybe findclusters locs
	findclusters u = M.lookup (ClusterNodeUUID u) clustermap

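{- A worked example (hypothetical UUIDs): given a cluster "c1" that has
 - "n1" among its nodes, addClusterUUIDs clusters ["n1", "u2"] yields
 - ["c1", "n1", "u2"]. A location list that mentions no cluster nodes
 - is returned unchanged.
 -}
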
{- Is there a location log for the key? True even for keys with no
 - remaining locations. -}
isKnownKey :: Key -> Annex Bool
isKnownKey key = do
	config <- Annex.getGitConfig
	not . null <$> readLog (locationLogFile config key)

{- For a key to be dead, all locations that have location status for the key
 - must have InfoDead set. -}
checkDead :: Key -> Annex Bool
checkDead key = do
	config <- Annex.getGitConfig
	ls <- compactLog <$> readLog (locationLogFile config key)
	return $! all (\l -> status l == InfoDead) ls

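{- For example (hypothetical log state): a key whose compacted log has
 - one InfoDead entry and one InfoMissing entry is not dead; only when
 - every remaining entry is InfoDead does checkDead return True.
 -}
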
{- Updates the log to say that a key is dead.
 -
 - Changes all logged lines for the key, in any location, that are
 - currently InfoMissing, to be InfoDead.
 -
 - The vector clock in the log is updated minimally, so that any
 - other location log changes are guaranteed to overrule this.
 -}
setDead :: Key -> Annex ()
setDead key = do
	config <- Annex.getGitConfig
	let logfile = locationLogFile config key
	ls <- compactLog <$> readLog logfile
	mapM_ (go logfile) (filter (\l -> status l == InfoMissing) ls)
  where
	go logfile l = do
		let u = toUUID (fromLogInfo (info l))
		    c = case date l of
			VectorClock v -> CandidateVectorClock $
				v + realToFrac (picosecondsToDiffTime 1)
			Unknown -> CandidateVectorClock 0
		addLog' (Annex.Branch.RegardingUUID [u]) logfile InfoDead
			(info l) c
		updateRepoSize NoLiveUpdate u key InfoDead

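{- For example (hypothetical values): when the InfoMissing line for a
 - UUID carries vector clock 1700000000s, the InfoDead line is written
 - with 1700000000.000000000001s, one picosecond later, so any real
 - subsequent change to that UUID's status will overrule the death.
 -}
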
{- A deferred check; finishCheck runs it and yields the value only
 - when the check passes. -}
data Unchecked a = Unchecked (Annex (Maybe a))

finishCheck :: Unchecked a -> Annex (Maybe a)
finishCheck (Unchecked a) = a

{- Finds all keys that have location log information.
 - (There may be duplicate keys in the list.)
 -
 - Keys that have been marked as dead are not included.
 -}
loggedKeys :: Annex (Maybe ([Unchecked Key], IO Bool))
loggedKeys = loggedKeys' (not <$$> checkDead)

loggedKeys' :: (Key -> Annex Bool) -> Annex (Maybe ([Unchecked Key], IO Bool))
loggedKeys' check = do
	config <- Annex.getGitConfig
	Annex.Branch.files >>= \case
		Nothing -> return Nothing
		Just (bfs, cleanup) -> do
			let l = mapMaybe (defercheck <$$> locationLogFileKey config) bfs
			return (Just (l, cleanup))
  where
	defercheck k = Unchecked $ ifM (check k)
		( return (Just k)
		, return Nothing
		)

{- Finds all keys that have location log information indicating
 - they are present in the specified repository.
 -
 - This does not stream well; use loggedKeysFor' for lazy streaming.
 -}
loggedKeysFor :: UUID -> Annex (Maybe [Key])
loggedKeysFor u = loggedKeysFor' u >>= \case
	Nothing -> return Nothing
	Just (l, cleanup) -> do
		l' <- catMaybes <$> mapM finishCheck l
		liftIO $ void cleanup
		return (Just l')

loggedKeysFor' :: UUID -> Annex (Maybe ([Unchecked Key], IO Bool))
loggedKeysFor' u = loggedKeys' isthere
  where
	isthere k = do
		us <- loggedLocations k
		let !there = u `elem` us
		return there

{- This is much faster than loggedKeys. -}
overLocationLogs
	:: Bool -- ^ ignore the journal?
	-> Bool -- ^ skip adding cluster UUIDs?
	-> v
	-> (Key -> [UUID] -> v -> Annex v)
	-> Annex (Annex.Branch.UnmergedBranches (v, Sha))
overLocationLogs ignorejournal noclusters v =
	overLocationLogs' ignorejournal noclusters v (flip const)

overLocationLogs'
	:: Bool -- ^ ignore the journal?
	-> Bool -- ^ skip adding cluster UUIDs?
	-> v
	-> (Annex (FileContents Key Bool) -> Annex v -> Annex v)
	-> (Key -> [UUID] -> v -> Annex v)
	-> Annex (Annex.Branch.UnmergedBranches (v, Sha))
overLocationLogs' ignorejournal noclusters iv discarder keyaction = do
	mclusters <- if noclusters then pure Nothing else Just <$> getClusters
	overLocationLogsHelper
		(Annex.Branch.overBranchFileContents ignorejournal)
		(\locparser _ _ content -> pure (locparser (fst <$> content)))
		True
		iv
		discarder
		keyaction
		mclusters

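{- A usage sketch (hypothetical caller, not part of this module):
 - fold over all location logs to count how many keys each repository
 - contains, along the lines of what a combined-size calculation does.
 -
 -   countKeys :: Annex (Annex.Branch.UnmergedBranches (M.Map UUID Integer, Sha))
 -   countKeys = overLocationLogs False False M.empty $ \_key us m ->
 -       pure (foldr (\u -> M.insertWith (+) u 1) m us)
 -
 - The two Bools are ignorejournal and noclusters, matching the
 - parameters documented above.
 -}
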
type LocChanges =
	( S.Set UUID
	-- ^ locations that are in the journal, but not in the
	-- git-annex branch
	, S.Set UUID
	-- ^ locations that are in the git-annex branch,
	-- but have been removed in the journal
	)

{- Like overLocationLogs, but only operates on the changes in journalled
 - files compared with what was logged in the git-annex branch at the
 - specified commit sha. -}
overLocationLogsJournal
	:: v
	-> Sha
	-> (Key -> LocChanges -> v -> Annex v)
	-> Maybe Clusters
	-> Annex v
overLocationLogsJournal v branchsha keyaction mclusters =
	overLocationLogsHelper
		(Annex.Branch.overJournalFileContents handlestale)
		changedlocs
		False -- do not precache journalled content, which may be stale
		v (flip const) keyaction
		mclusters
  where
	handlestale _ journalcontent = return (journalcontent, Just True)

	changedlocs locparser _key logf (Just (journalcontent, isstale)) = do
		branchcontent <- Annex.Branch.getRef branchsha logf
		let branchlocs = S.fromList $ locparser $ Just branchcontent
		let journallocs = S.fromList $ locparser $ Just $ case isstale of
			Just True -> Annex.Branch.combineStaleJournalWithBranch
				branchcontent journalcontent
			_ -> journalcontent
		return
			( S.difference journallocs branchlocs
			, S.difference branchlocs journallocs
			)
	changedlocs _ _ _ Nothing = pure (S.empty, S.empty)

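{- For illustration (hypothetical state): if the branch at branchsha
 - logs a key as present in repositories {A, B} and the journal now
 - logs it as present in {B, C}, the keyaction receives ({C}, {A}):
 - C was added in the journal, A was removed.
 -}
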
overLocationLogsHelper
	:: ((RawFilePath -> Maybe Key) -> (Annex (FileContents Key b) -> Annex v) -> Annex a)
	-> ((Maybe L.ByteString -> [UUID]) -> Key -> RawFilePath -> Maybe (L.ByteString, Maybe b) -> Annex u)
	-> Bool -- ^ is it safe to precache file contents?
	-> v
	-> (Annex (FileContents Key b) -> Annex v -> Annex v)
	-> (Key -> u -> v -> Annex v)
	-> Maybe Clusters
	-> Annex a
overLocationLogsHelper runner locparserrunner canprecache iv discarder keyaction mclusters = do
	config <- Annex.getGitConfig

	let locparser = maybe
		parseLoggedLocationsWithoutClusters
		parseLoggedLocations
		mclusters
	let locparser' = maybe [] locparser
	let getk = locationLogFileKey config
	let go v reader = reader >>= \case
		Just (k, f, content) -> discarder reader $ do
			-- precache to make checkDead fast, and also to
			-- make any accesses done in keyaction fast.
			when canprecache $
				maybe noop (Annex.Branch.precache f . fst) content
			ifM (checkDead k)
				( go v reader
				, do
					!locs <- locparserrunner locparser' k f content
					!v' <- keyaction k locs v
					go v' reader
				)
		Nothing -> return v

	runner getk (go iv)

-- Cannot import Logs.Cluster due to a cycle.
-- Annex.clusters gets populated when starting up git-annex.
getClusters :: Annex Clusters
getClusters = maybe (pure noClusters) id =<< Annex.getState Annex.clusters