{- Standard git remotes.
-
- Copyright 2011-2024 Joey Hess <id@joeyh.name>
-
- Licensed under the GNU AGPL version 3 or higher.
-}
{-# LANGUAGE CPP #-}
{-# LANGUAGE OverloadedStrings #-}
module Remote.Git (
remote,
configRead,
onLocalRepo,
) where
import Annex.Common
import Annex.Ssh
import Types.Remote
import Types.GitConfig
import qualified Git
import qualified Git.Config
import qualified Git.Construct
import qualified Git.Command
import qualified Git.GCrypt
import qualified Git.Types as Git
import qualified Annex
import Annex.Transfer
import Annex.CopyFile
import Annex.Verify
import Annex.Content (verificationOfContentFailed)
import Annex.UUID
import qualified Annex.Content
import qualified Annex.BranchState
import qualified Annex.Branch
import qualified Annex.Url as Url
import qualified Annex.SpecialRemote.Config as SpecialRemote
import Utility.Tmp
import Config
import Config.Cost
import Annex.SpecialRemote.Config
import Config.DynamicConfig
import Annex.Init
import Types.CleanupActions
import qualified CmdLine.GitAnnexShell.Fields as Fields
import Logs.Location
import Logs.Proxy
import Logs.Cluster.Basic
import Utility.Metered
import Utility.Env
import Utility.Batch
import Remote.Helper.Git
import Remote.Helper.Messages
import Remote.Helper.ExportImport
import qualified Remote.Helper.Ssh as Ssh
import qualified Remote.GCrypt
import qualified Remote.GitLFS
import qualified Remote.P2P
import qualified Remote.Helper.P2P as P2PHelper
import qualified P2P.Protocol as P2P
import P2P.Address
import P2P.Http.Url
import P2P.Http.Client
import Annex.Path
import Creds
import Types.NumCopies
import Annex.SafeDropProof
import Types.ProposedAccepted
import Annex.Action
import Messages.Progress
import Control.Concurrent
import qualified Data.Map as M
import qualified Data.Set as S
import qualified Data.List.NonEmpty as NE
import qualified Data.ByteString as B
import qualified Utility.RawFilePath as R
import Network.URI
remote :: RemoteType
remote = RemoteType
{ typename = "git"
, enumerate = list
, generate = gen
, configParser = mkRemoteConfigParser
[ optionalStringParser locationField
(FieldDesc "url of git remote to remember with special remote")
, yesNoParser versioningField (Just False) HiddenField
]
, setup = gitSetup
, exportSupported = exportUnsupported
, importSupported = importUnsupported
, thirdPartyPopulated = False
}
locationField :: RemoteConfigField
locationField = Accepted "location"
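{- Lists the git remotes, reading their configs, applying any
- configured annexurl, and including any remotes made available by
- proxies. -}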
list :: Bool -> Annex [Git.Repo]
list autoinit = do
c <- fromRepo Git.config
rs <- mapM (tweakurl c) =<< Annex.getGitRemotes
rs' <- mapM (configRead autoinit) (filter (not . isGitRemoteAnnex) rs)
proxies <- doQuietAction getProxies
if proxies == mempty
then return rs'
else do
proxied <- listProxied proxies rs'
return (proxied++rs')
where
tweakurl c r = do
let n = fromJust $ Git.remoteName r
case getAnnexUrl r c of
Just url | not (isP2PHttpProtocolUrl url) ->
inRepo $ \g -> Git.Construct.remoteNamed n $
Git.Construct.fromRemoteLocation url
False g
_ -> return r
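{- The remote.<name>.annexurl git config can be set to override the
- url that git-annex uses to access the remote. -}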
getAnnexUrl :: Git.Repo -> M.Map Git.ConfigKey Git.ConfigValue -> Maybe String
getAnnexUrl r c = Git.fromConfigValue <$> M.lookup (annexUrlConfigKey r) c
annexUrlConfigKey :: Git.Repo -> Git.ConfigKey
annexUrlConfigKey r = remoteConfig r "annexurl"
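{- Remotes with an annex:: url are handled by git-remote-annex, and are
- not treated as regular git remotes here. -}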
isGitRemoteAnnex :: Git.Repo -> Bool
isGitRemoteAnnex r = "annex::" `isPrefixOf` Git.repoLocation r
{- Git remotes are normally set up using standard git commands, not
- git-annex initremote and enableremote.
-
- For initremote, probe the location to find the uuid,
- and set up a git remote.
-
- enableremote simply sets up a git remote using the stored location.
- No attempt is made to make the remote be accessible via ssh key setup,
- etc.
-}
gitSetup :: SetupStage -> Maybe UUID -> Maybe CredPair -> RemoteConfig -> RemoteGitConfig -> Annex (RemoteConfig, UUID)
gitSetup Init mu _ c _ = do
let location = maybe (giveup "Specify location=url") fromProposedAccepted $
M.lookup locationField c
r <- inRepo $ Git.Construct.fromRemoteLocation location False
r' <- tryGitConfigRead False r False
let u = getUncachedUUID r'
if u == NoUUID
then giveup "git repository does not have an annex uuid"
else if isNothing mu || mu == Just u
then enableRemote (Just u) c
else giveup "git repository does not have specified uuid"
gitSetup (Enable _) mu _ c _ = enableRemote mu c
gitSetup (AutoEnable _) mu _ c _ = enableRemote mu c
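{- Sets up a git remote using the stored name and location, unless a
- remote with that name already exists. -}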
enableRemote :: Maybe UUID -> RemoteConfig -> Annex (RemoteConfig, UUID)
enableRemote (Just u) c = do
rs <- Annex.getGitRemotes
unless (any (\r -> Git.remoteName r == Just cname) rs) $
inRepo $ Git.Command.run
[ Param "remote"
, Param "add"
, Param cname
, Param clocation
]
return (c, u)
where
cname = fromMaybe (giveup "no name") (SpecialRemote.lookupName c)
clocation = maybe (giveup "no location") fromProposedAccepted (M.lookup locationField c)
enableRemote Nothing _ = giveup "unable to enable git remote with no specified uuid"
{- It's assumed to be cheap to read the config of non-URL remotes, so this is
- done each time git-annex is run in a way that uses remotes, unless
- annex-checkuuid is false.
-
- The config of other URL remotes is only read when there is no
- cached UUID value.
-}
configRead :: Bool -> Git.Repo -> Annex Git.Repo
configRead autoinit r = do
gc <- Annex.getRemoteGitConfig r
hasuuid <- (/= NoUUID) <$> getRepoUUID r
annexignore <- liftIO $ getDynamicConfig (remoteAnnexIgnore gc)
case (repoCheap r, annexignore, hasuuid) of
(_, True, _) -> return r
(True, _, _)
| remoteAnnexCheckUUID gc -> tryGitConfigRead autoinit r hasuuid
| otherwise -> return r
(False, _, False) -> configSpecialGitRemotes r >>= \case
Nothing -> tryGitConfigRead autoinit r False
Just r' -> return r'
_ -> return r
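{- Generates a Remote for a git repo, chaining to the GitLFS, GCrypt,
- or P2P remote generators when the repo uses one of those. -}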
gen :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> RemoteStateHandle -> Annex (Maybe Remote)
gen r u rc gc rs
-- Remote.GitLFS may be used with a repo that is also encrypted
-- with gcrypt so is checked first.
| remoteAnnexGitLFS gc = Remote.GitLFS.gen r u rc gc rs
| Git.GCrypt.isEncrypted r = Remote.GCrypt.chainGen r u rc gc rs
| otherwise = case repoP2PAddress r of
Nothing -> do
st <- mkState r u gc
c <- parsedRemoteConfig remote rc
go st c <$> remoteCost gc c (defaultRepoCost r)
Just addr -> Remote.P2P.chainGen addr r u rc gc rs
where
go st c cst = Just new
where
new = Remote
{ uuid = u
, cost = cst
, name = Git.repoDescribe r
, storeKey = copyToRemote new st
, retrieveKeyFile = copyFromRemote new st
, retrieveKeyFileCheap = copyFromRemoteCheap st r
, retrievalSecurityPolicy = RetrievalAllKeysSecure
, removeKey = dropKey new st
, lockContent = Just (lockKey new st)
, checkPresent = inAnnex new st
, checkPresentCheap = repoCheap r
, exportActions = exportUnsupported
, importActions = importUnsupported
, whereisKey = Nothing
, remoteFsck = if Git.repoIsUrl r
then Nothing
else Just $ fsckOnRemote r
, repairRepo = if Git.repoIsUrl r
then Nothing
else Just $ repairRemote r
, config = c
, localpath = localpathCalc r
, getRepo = getRepoFromState st
, gitconfig = gc
, readonly = Git.repoIsHttp r && not (isP2PHttp' gc)
, appendonly = False
, untrustworthy = isJust (remoteAnnexProxiedBy gc)
&& (exportTree c || importTree c)
&& not (isVersioning c)
, availability = repoAvail r
, remotetype = remote
, mkUnavailable = unavailable r u rc gc rs
, getInfo = gitRepoInfo new
, claimUrl = Nothing
, checkUrl = Nothing
, remoteStateHandle = rs
}
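{- Local repos default to a cheap cost; url repos to an expensive one. -}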
defaultRepoCost :: Git.Repo -> Cost
defaultRepoCost r
| repoCheap r = cheapRemoteCost
| otherwise = expensiveRemoteCost
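{- Generates an unavailable version of the remote, by pointing it at a
- location that does not exist. -}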
unavailable :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> RemoteStateHandle -> Annex (Maybe Remote)
unavailable r u c gc = gen r' u c gc'
where
r' = case Git.location r of
Git.Local { Git.gitdir = d } ->
r { Git.location = Git.LocalUnknown d }
Git.Url url -> case uriAuthority url of
Just auth ->
let auth' = auth { uriRegName = "!dne!" }
in r { Git.location = Git.Url (url { uriAuthority = Just auth' })}
Nothing -> r { Git.location = Git.Unknown }
_ -> r -- already unavailable
gc' = gc
{ remoteAnnexP2PHttpUrl =
unavailableP2PHttpUrl <$> remoteAnnexP2PHttpUrl gc
}
{- Tries to read the config for a specified remote, updates state, and
- returns the updated repo. -}
tryGitConfigRead :: Bool -> Git.Repo -> Bool -> Annex Git.Repo
tryGitConfigRead autoinit r hasuuid
| haveconfig r = return r -- already read
| Git.repoIsSsh r = storeUpdatedRemote $ do
v <- Ssh.onRemote NoConsumeStdin r
( pipedconfig Git.Config.ConfigList autoinit (Git.repoDescribe r)
, return (Left "configlist failed")
)
"configlist" [] configlistfields
case v of
Right r'
| haveconfig r' -> return r'
| otherwise -> configlist_failed
Left _ -> configlist_failed
| Git.repoIsHttp r = storeUpdatedRemote geturlconfig
| Git.GCrypt.isEncrypted r = handlegcrypt =<< getConfigMaybe (remoteAnnexConfig r "uuid")
| Git.repoIsUrl r = do
set_ignore "uses a protocol not supported by git-annex" False
return r
| otherwise = storeUpdatedRemote $
readlocalannexconfig
`catchNonAsync` const failedreadlocalconfig
where
haveconfig = not . M.null . Git.config
pipedconfig st mustincludeuuuid configloc cmd params = do
v <- liftIO $ Git.Config.fromPipe r cmd params st
case v of
Right (r', val, _err) -> do
unless (isUUIDConfigured r' || val == mempty || not mustincludeuuuid) $ do
warning $ UnquotedString $ "Failed to get annex.uuid configuration of repository " ++ Git.repoDescribe r
warning $ UnquotedString $ "Instead, got: " ++ show val
warning "This is unexpected; please check the network transport!"
return $ Right r'
Left l -> do
warning $ UnquotedString $ "Unable to parse git config from " ++ configloc
return $ Left (show l)
geturlconfig = Url.withUrlOptionsPromptingCreds $ \uo -> do
let url = Git.repoLocation r ++ "/config"
v <- withTmpFile "git-annex.tmp" $ \tmpfile h -> do
liftIO $ hClose h
Url.download' nullMeterUpdate Nothing url tmpfile uo >>= \case
Right () -> pipedconfig Git.Config.ConfigNullList
False url "git"
[ Param "config"
, Param "--null"
, Param "--list"
, Param "--file"
, File tmpfile
]
Left err -> return (Left err)
case v of
Right r' -> do
-- Cache when http remote is not bare for
-- optimisation.
unless (fromMaybe False $ Git.Config.isBare r') $
setremote setRemoteBare False
-- When annex.url is set to a P2P http url,
-- store in remote.name.annexUrl
case Git.fromConfigValue <$> Git.Config.getMaybe (annexConfig "url") r' of
Just u | isP2PHttpProtocolUrl u ->
setremote (setConfig . annexUrlConfigKey) u
_ -> noop
return r'
Left err -> do
set_ignore "not usable by git-annex" False
warning $ UnquotedString $ url ++ " " ++ err
return r
{- Is this remote just not available, or does
- it not have git-annex-shell?
- Find out by trying to fetch from the remote. -}
configlist_failed = case Git.remoteName r of
Nothing -> return r
Just n -> do
whenM (inRepo $ Git.Command.runBool [Param "fetch", Param "--quiet", Param n]) $ do
set_ignore "does not have git-annex installed" True
return r
set_ignore msg longmessage = do
case Git.remoteName r of
Nothing -> noop
Just n -> do
warning $ UnquotedString $ "Remote " ++ n ++ " " ++ msg ++ "; setting annex-ignore"
when longmessage $
warning $ UnquotedString $ "This could be a problem with the git-annex installation on the remote. Please make sure that git-annex-shell is available in PATH when you ssh into the remote. Once you have fixed the git-annex installation, run: git annex enableremote " ++ n
setremote setRemoteIgnore True
setremote setter v = case Git.remoteName r of
Nothing -> noop
Just _ -> setter r v
handlegcrypt Nothing = return r
handlegcrypt (Just _cacheduuid) = do
-- Generate UUID from the gcrypt-id
g <- gitRepo
case Git.GCrypt.remoteRepoId g (Git.remoteName r) of
Nothing -> return r
Just v -> storeUpdatedRemote $ liftIO $ setUUID r $
genUUIDInNameSpace gCryptNameSpace (encodeBS v)
{- The local repo may not yet be initialized, so try to initialize
- it if allowed. However, if that fails, still return the read
- git config. -}
readlocalannexconfig = do
let check = do
Annex.BranchState.disableUpdate
catchNonAsync (autoInitialize noop (pure [])) $ \e ->
warning $ UnquotedString $ "Remote " ++ Git.repoDescribe r ++
": " ++ show e
Annex.getState Annex.repo
let r' = r { Git.repoPathSpecifiedExplicitly = True }
s <- newLocal r'
liftIO $ Annex.eval s $ check
`finally` quiesce True
failedreadlocalconfig = do
unless hasuuid $ case Git.remoteName r of
Nothing -> noop
Just n -> do
warning $ UnquotedString $ "Remote " ++ n ++ " cannot currently be accessed."
return r
configlistfields = if autoinit
then [(Fields.autoInit, "1")]
else []
{- Handles special remotes that can be enabled by the presence of
- regular git remotes.
-
- When a remote repo is found to be such a special remote, its
- UUID is cached in the git config, and the repo returned with
- the UUID set.
-}
configSpecialGitRemotes :: Git.Repo -> Annex (Maybe Git.Repo)
configSpecialGitRemotes r = Remote.GitLFS.configKnownUrl r >>= \case
Nothing -> return Nothing
Just r' -> Just <$> storeUpdatedRemote (return r')
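{- Stores the updated version of a remote in the list of remotes,
- replacing the old version. -}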
storeUpdatedRemote :: Annex Git.Repo -> Annex Git.Repo
storeUpdatedRemote = observe $ \r' -> do
l <- Annex.getGitRemotes
let rs = exchange l r'
Annex.changeState $ \s -> s { Annex.gitremotes = Just rs }
where
exchange [] _ = []
exchange (old:ls) new
| Git.remoteName old == Git.remoteName new =
new : exchange ls new
| otherwise =
old : exchange ls new
{- Checks if a given remote has the content for a key in its annex. -}
inAnnex :: Remote -> State -> Key -> Annex Bool
inAnnex rmt st key = do
repo <- getRepo rmt
inAnnex' repo rmt st key
inAnnex' :: Git.Repo -> Remote -> State -> Key -> Annex Bool
inAnnex' repo rmt st@(State connpool duc _ _ _) key
| isP2PHttp rmt = checkp2phttp
| Git.repoIsHttp repo = checkhttp
| Git.repoIsUrl repo = checkremote
| otherwise = checklocal
where
checkp2phttp = p2pHttpClient rmt giveup (clientCheckPresent key)
checkhttp = do
gc <- Annex.getGitConfig
Url.withUrlOptionsPromptingCreds $ \uo ->
anyM (\u -> Url.checkBoth u (fromKey keySize key) uo)
(keyUrls gc repo rmt key)
checkremote = P2PHelper.checkpresent (Ssh.runProto rmt connpool (cantCheck rmt)) key
checklocal = ifM duc
( guardUsable repo (cantCheck repo) $
maybe (cantCheck repo) return
=<< onLocalFast st (Annex.Content.inAnnexSafe key)
, cantCheck repo
)
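{- The urls that may be used to download a key's content from an http
- remote. -}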
keyUrls :: GitConfig -> Git.Repo -> Remote -> Key -> [String]
keyUrls gc repo r key = map tourl locs'
where
tourl l = Git.repoLocation repo ++ "/" ++ l
-- If the remote is known to not be bare, try the hash locations
-- used for non-bare repos first, as an optimisation.
locs
| remoteAnnexBare remoteconfig == Just False = annexLocationsNonBare gc key
| otherwise = annexLocationsBare gc key
#ifndef mingw32_HOST_OS
locs' = map fromRawFilePath locs
#else
locs' = map (replace "\\" "/" . fromRawFilePath) locs
#endif
remoteconfig = gitconfig r
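{- Removes a key's content from the remote. -}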
dropKey :: Remote -> State -> Maybe SafeDropProof -> Key -> Annex ()
dropKey r st proof key = do
repo <- getRepo r
dropKey' repo r st proof key
dropKey' :: Git.Repo -> Remote -> State -> Maybe SafeDropProof -> Key -> Annex ()
dropKey' repo r st@(State connpool duc _ _ _) proof key
| isP2PHttp r =
clientRemoveWithProof proof key unabletoremove r >>= \case
RemoveResultPlus True fanoutuuids ->
storefanout fanoutuuids
RemoveResultPlus False fanoutuuids -> do
storefanout fanoutuuids
unabletoremove
| not $ Git.repoIsUrl repo = ifM duc
( guardUsable repo (giveup "cannot access remote") removelocal
, giveup "remote does not have expected annex.uuid value"
)
| Git.repoIsHttp repo = giveup "dropping from this remote is not supported"
| otherwise = P2PHelper.remove (uuid r) p2prunner proof key
where
p2prunner = Ssh.runProto r connpool (return (Right False, Nothing))
unabletoremove = giveup "removing content from remote failed"
-- It could take a long time to eg, automount a drive containing
-- the repo, so check the proof for expiry again after locking the
-- content for removal.
removelocal = do
proofunexpired <- commitOnCleanup repo r st $ onLocalFast st $ do
ifM (Annex.Content.inAnnex key)
( do
let cleanup = do
logStatus NoLiveUpdate key InfoMissing
return True
Annex.Content.lockContentForRemoval key cleanup $ \lock ->
ifM (liftIO $ checkSafeDropProofEndTime proof)
( do
Annex.Content.removeAnnex lock
cleanup
, return False
)
, return True
)
unless proofunexpired
safeDropProofExpired
storefanout = P2PHelper.storeFanout NoLiveUpdate key InfoMissing (uuid r) . map fromB64UUID
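{- Locks the content of a key in the remote while the callback runs,
- preventing it from being dropped. -}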
lockKey :: Remote -> State -> Key -> (VerifiedCopy -> Annex r) -> Annex r
lockKey r st key callback = do
repo <- getRepo r
lockKey' repo r st key callback
lockKey' :: Git.Repo -> Remote -> State -> Key -> (VerifiedCopy -> Annex r) -> Annex r
lockKey' repo r st@(State connpool duc _ _ _) key callback
| isP2PHttp r = do
showLocking r
p2pHttpClient r giveup (clientLockContent key) >>= \case
LockResult True (Just lckid) ->
p2pHttpClient r failedlock $
clientKeepLocked lckid (uuid r)
failedlock callback
_ -> failedlock
| not $ Git.repoIsUrl repo = ifM duc
( guardUsable repo failedlock $ do
inorigrepo <- Annex.makeRunner
-- Lock content from perspective of remote,
-- and then run the callback in the original
-- annex monad, not the remote's.
onLocalFast st $
Annex.Content.lockContentShared key Nothing $
liftIO . inorigrepo . callback
, failedlock
)
| Git.repoIsSsh repo = do
showLocking r
let withconn = Ssh.withP2PShellConnection r connpool failedlock
P2PHelper.lock withconn Ssh.runProtoConn (uuid r) key callback
| otherwise = failedlock
where
failedlock = giveup "can't lock content"
{- Tries to copy a key's content from a remote's annex to a file. -}
copyFromRemote :: Remote -> State -> Key -> AssociatedFile -> FilePath -> MeterUpdate -> VerifyConfig -> Annex Verification
copyFromRemote r st key file dest meterupdate vc = do
repo <- getRepo r
copyFromRemote'' repo r st key file dest meterupdate vc
copyFromRemote'' :: Git.Repo -> Remote -> State -> Key -> AssociatedFile -> FilePath -> MeterUpdate -> VerifyConfig -> Annex Verification
copyFromRemote'' repo r st@(State connpool _ _ _ _) key af dest meterupdate vc
| isP2PHttp r = copyp2phttp
| Git.repoIsHttp repo = verifyKeyContentIncrementally vc key $ \iv -> do
gc <- Annex.getGitConfig
ok <- Url.withUrlOptionsPromptingCreds $
Annex.Content.downloadUrl False key meterupdate iv (keyUrls gc repo r key) dest
unless ok $
giveup "failed to download content"
| not $ Git.repoIsUrl repo = guardUsable repo (giveup "cannot access remote") $ do
u <- getUUID
hardlink <- wantHardLink
-- run copy from perspective of remote
onLocalFast st $ Annex.Content.prepSendAnnex' key Nothing >>= \case
Just (object, _sz, check) -> do
let checksuccess = check >>= \case
Just err -> giveup err
Nothing -> return True
copier <- mkFileCopier hardlink st
(ok, v) <- runTransfer (Transfer Download u (fromKey id key))
Nothing af Nothing stdRetry $ \p ->
metered (Just (combineMeterUpdate p meterupdate)) key bwlimit $ \_ p' ->
copier object dest key p' checksuccess vc
if ok
then return v
else giveup "failed to retrieve content from remote"
Nothing -> giveup "content is not present in remote"
| Git.repoIsSsh repo =
P2PHelper.retrieve
(gitconfig r)
(Ssh.runProto r connpool (return (False, UnVerified)))
key af dest meterupdate vc
| otherwise = giveup "copying from this remote is not supported"
where
bwlimit = remoteAnnexBwLimitDownload (gitconfig r)
<|> remoteAnnexBwLimit (gitconfig r)
copyp2phttp = verifyKeyContentIncrementally vc key $ \iv -> do
startsz <- liftIO $ tryWhenExists $
getFileSize (toRawFilePath dest)
bracketIO (openBinaryFile dest ReadWriteMode) (hClose) $ \h -> do
metered (Just meterupdate) key bwlimit $ \_ p -> do
p' <- case startsz of
Just startsz' -> liftIO $ do
resumeVerifyFromOffset startsz' iv p h
_ -> return p
let consumer = meteredWrite' p'
(writeVerifyChunk iv h)
p2pHttpClient r giveup (clientGet key af consumer startsz) >>= \case
Valid -> return ()
Invalid -> giveup "Transfer failed"
copyFromRemoteCheap :: State -> Git.Repo -> Maybe (Key -> AssociatedFile -> FilePath -> Annex ())
#ifndef mingw32_HOST_OS
copyFromRemoteCheap st repo
| not $ Git.repoIsUrl repo = Just $ \key _af file -> guardUsable repo (giveup "cannot access remote") $ do
gc <- getGitConfigFromState st
loc <- liftIO $ gitAnnexLocation key repo gc
liftIO $ ifM (R.doesPathExist loc)
( do
absloc <- absPath loc
R.createSymbolicLink absloc (toRawFilePath file)
, giveup "remote does not contain key"
)
| otherwise = Nothing
#else
copyFromRemoteCheap _ _ = Nothing
#endif
{- Tries to copy a key's content to a remote's annex. -}
copyToRemote :: Remote -> State -> Key -> AssociatedFile -> Maybe FilePath -> MeterUpdate -> Annex ()
copyToRemote r st key af o meterupdate = do
repo <- getRepo r
copyToRemote' repo r st key af o meterupdate
copyToRemote' :: Git.Repo -> Remote -> State -> Key -> AssociatedFile -> Maybe FilePath -> MeterUpdate -> Annex ()
copyToRemote' repo r st@(State connpool duc _ _ _) key af o meterupdate
| isP2PHttp r = prepsendwith copyp2phttp
| not $ Git.repoIsUrl repo = ifM duc
( guardUsable repo (giveup "cannot access remote") $ commitOnCleanup repo r st $
prepsendwith copylocal
, giveup "remote does not have expected annex.uuid value"
)
| Git.repoIsSsh repo =
P2PHelper.store (uuid r) (gitconfig r)
(Ssh.runProto r connpool (return Nothing))
key af o meterupdate
| otherwise = giveup "copying to this remote is not supported"
where
prepsendwith a = Annex.Content.prepSendAnnex' key o >>= \case
Nothing -> giveup "content not available"
Just v -> a v
bwlimit = remoteAnnexBwLimitUpload (gitconfig r)
<|> remoteAnnexBwLimit (gitconfig r)
failedsend = giveup "failed to send content to remote"
copylocal (object, sz, check) = do
-- The check action is going to be run in
-- the remote's Annex, but it needs access to the local
-- Annex monad's state.
checkio <- Annex.withCurrentState check
u <- getUUID
hardlink <- wantHardLink
-- run copy from perspective of remote
res <- onLocalFast st $ ifM (Annex.Content.inAnnex key)
( return True
, runTransfer (Transfer Download u (fromKey id key)) Nothing af Nothing stdRetry $ \p -> do
let verify = RemoteVerify r
copier <- mkFileCopier hardlink st
let rsp = RetrievalAllKeysSecure
let checksuccess = liftIO checkio >>= \case
Just err -> giveup err
Nothing -> return True
logStatusAfter NoLiveUpdate key $ Annex.Content.getViaTmp rsp verify key af (Just sz) $ \dest ->
metered (Just (combineMeterUpdate meterupdate p)) key bwlimit $ \_ p' ->
copier object (fromRawFilePath dest) key p' checksuccess verify
)
unless res $
failedsend
copyp2phttp (object, sz, check) =
let check' = check >>= \case
Just s -> do
warning (UnquotedString s)
return False
Nothing -> return True
in p2pHttpClient r (const $ pure $ PutOffsetResultPlus (Offset 0)) (clientPutOffset key) >>= \case
PutOffsetResultPlus (offset@(Offset (P2P.Offset n))) ->
metered (Just meterupdate) key bwlimit $ \_ p -> do
let p' = offsetMeterUpdate p (BytesProcessed n)
res <- p2pHttpClient r giveup $
clientPut p' key (Just offset) af object sz check'
case res of
PutResultPlus False fanoutuuids -> do
storefanout fanoutuuids
failedsend
PutResultPlus True fanoutuuids ->
storefanout fanoutuuids
PutOffsetResultAlreadyHavePlus fanoutuuids ->
storefanout fanoutuuids
storefanout = P2PHelper.storeFanout NoLiveUpdate key InfoPresent (uuid r) . map fromB64UUID
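{- Runs fsck on a local remote, by running git-annex fsck with the
- environment pointing at the remote's repository. -}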
fsckOnRemote :: Git.Repo -> [CommandParam] -> Annex (IO Bool)
fsckOnRemote r params
| Git.repoIsUrl r = return $ return False
| otherwise = return $ do
program <- programPath
r' <- Git.Config.read r
environ <- getEnvironment
let environ' = addEntries
[ ("GIT_WORK_TREE", fromRawFilePath $ Git.repoPath r')
, ("GIT_DIR", fromRawFilePath $ Git.localGitDir r')
] environ
batchCommandEnv program (Param "fsck" : params) (Just environ')
{- The passed repair action is run in the Annex monad of the remote. -}
repairRemote :: Git.Repo -> Annex Bool -> Annex (IO Bool)
repairRemote r a = return $ do
s <- Annex.new r
Annex.eval s $ do
Annex.BranchState.disableUpdate
ensureInitialized noop (pure [])
a `finally` quiesce True
data LocalRemoteAnnex = LocalRemoteAnnex Git.Repo (MVar [(Annex.AnnexState, Annex.AnnexRead)])
{- This can safely be called on a Repo that is not local, but of course
- onLocal will not work if used with the result. -}
mkLocalRemoteAnnex :: Git.Repo -> Annex (LocalRemoteAnnex)
mkLocalRemoteAnnex repo = LocalRemoteAnnex repo <$> liftIO (newMVar [])
{- Runs an action from the perspective of a local remote.
-
- The AnnexState is cached for speed and to avoid resource leaks.
- However, it is quiesced after each call to avoid git processes
- hanging around on removable media.
-
- The remote will be automatically initialized/upgraded first,
- when possible.
-}
onLocal :: State -> Annex a -> Annex a
onLocal (State _ _ _ _ lra) = onLocal' lra
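{- Like onLocal, but for use when only a Git.Repo is available, with no
- State. -}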
onLocalRepo :: Git.Repo -> Annex a -> Annex a
onLocalRepo repo a = do
lra <- mkLocalRemoteAnnex repo
onLocal' lra a
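{- Builds a fresh Annex state for the local remote's repository,
- propagating debugging and force settings from the current state. -}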
newLocal :: Git.Repo -> Annex (Annex.AnnexState, Annex.AnnexRead)
newLocal repo = do
(st, rd) <- liftIO $ Annex.new repo
debugenabled <- Annex.getRead Annex.debugenabled
debugselector <- Annex.getRead Annex.debugselector
force <- Annex.getRead Annex.force
return (st, rd
{ Annex.debugenabled = debugenabled
, Annex.debugselector = debugselector
, Annex.force = force
})
onLocal' :: LocalRemoteAnnex -> Annex a -> Annex a
onLocal' (LocalRemoteAnnex repo mv) a = liftIO (takeMVar mv) >>= \case
[] -> do
liftIO $ putMVar mv []
v <- newLocal repo
go (v, ensureInitialized noop (pure []) >> a)
(v:rest) -> do
liftIO $ putMVar mv rest
go (v, a)
where
go ((st, rd), a') = do
curro <- Annex.getState Annex.output
let act = Annex.run (st { Annex.output = curro }, rd) $
a' `finally` quiesce True
(ret, (st', _rd)) <- liftIO $ act `onException` cache (st, rd)
liftIO $ cache (st', rd)
return ret
cache v = do
l <- takeMVar mv
putMVar mv (v:l)
{- Faster variant of onLocal.
-
- The repository's git-annex branch is not updated, as an optimisation.
- Callers of onLocalFast therefore cannot query data from the branch and
- be sure of getting the most current value. They can still make changes
- to the branch, however.
-}
onLocalFast :: State -> Annex a -> Annex a
onLocalFast st a = onLocal st $ Annex.BranchState.disableUpdate >> a
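{- For example, committing the remote's git-annex branch (as commitOnCleanup
- below does) only writes to the branch, so it is a safe use of onLocalFast:
-
-   onLocalFast st $ Annex.Branch.commit =<< Annex.Branch.commitMessage
-}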
commitOnCleanup :: Git.Repo -> Remote -> State -> Annex a -> Annex a
commitOnCleanup repo r st a = go `after` a
where
go = Annex.addCleanupAction (RemoteCleanup $ uuid r) cleanup
cleanup
| not $ Git.repoIsUrl repo = onLocalFast st $
doQuietSideAction $
Annex.Branch.commit =<< Annex.Branch.commitMessage
| otherwise = noop
wantHardLink :: Annex Bool
wantHardLink = (annexHardLink <$> Annex.getGitConfig)
-- But not when annex.thin is set, since then unlocked files are
-- hard linked into the work tree and can be modified at any time.
<&&> (not <$> annexThin <$> Annex.getGitConfig)
type FileCopier = FilePath -> FilePath -> Key -> MeterUpdate -> Annex Bool -> VerifyConfig -> Annex (Bool, Verification)
-- If either the remote or local repository wants to use hard links,
-- the copier will do so (falling back to copying if a hard link cannot be
-- made).
--
-- When a hard link is created, returns Verified; the repo being linked
-- from is implicitly trusted, so no expensive verification needs to be
-- done. Also returns Verified if the key's content is verified while
-- copying it.
mkFileCopier :: Bool -> State -> Annex FileCopier
mkFileCopier remotewanthardlink (State _ _ copycowtried _ _) = do
localwanthardlink <- wantHardLink
let linker = \src dest -> R.createLink (toRawFilePath src) (toRawFilePath dest) >> return True
if remotewanthardlink || localwanthardlink
then return $ \src dest k p check verifyconfig ->
ifM (liftIO (catchBoolIO (linker src dest)))
( ifM check
( return (True, Verified)
, do
verificationOfContentFailed (toRawFilePath dest)
return (False, UnVerified)
)
, copier src dest k p check verifyconfig
)
else return copier
where
copier src dest k p check verifyconfig = do
iv <- startVerifyKeyContentIncrementally verifyconfig k
liftIO (fileCopier copycowtried src dest p iv) >>= \case
Copied -> ifM check
( finishVerifyKeyContentIncrementally iv
, do
verificationOfContentFailed (toRawFilePath dest)
return (False, UnVerified)
)
CopiedCoW -> unVerified check
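{- A sketch of how a FileCopier is invoked (src, dest, p, checkaction and
- verifyconfig are placeholders for what a call site supplies):
-
-   copier <- mkFileCopier remotewanthardlink st
-   (ok, verification) <- copier src dest key p checkaction verifyconfig
-
- The Annex Bool check action runs after the copy; when it returns False
- the copy is treated as failed and (False, UnVerified) is returned.
-}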
{- Normally the UUID of a local repository is checked at startup,
- but annex-checkuuid config can prevent that. To avoid getting
- confused, a deferred check is done just before the repository
- is used.
- This returns False when the repository UUID is not as expected. -}
type DeferredUUIDCheck = Annex Bool
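{- A sketch of consulting the deferred check before acting on the
- repository (illustrative only; real call sites vary):
-
-   unlessM duc $
-       giveup "remote repository is not the expected one"
-}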
data State = State Ssh.P2PShellConnectionPool DeferredUUIDCheck CopyCoWTried (Annex (Git.Repo, GitConfig)) LocalRemoteAnnex
getRepoFromState :: State -> Annex Git.Repo
getRepoFromState (State _ _ _ a _) = fst <$> a
#ifndef mingw32_HOST_OS
{- The config of the remote git repository, cached for speed. -}
getGitConfigFromState :: State -> Annex GitConfig
getGitConfigFromState (State _ _ _ a _) = snd <$> a
#endif
mkState :: Git.Repo -> UUID -> RemoteGitConfig -> Annex State
mkState r u gc = do
pool <- Ssh.mkP2PShellConnectionPool
copycowtried <- liftIO newCopyCoWTried
lra <- mkLocalRemoteAnnex r
(duc, getrepo) <- go
return $ State pool duc copycowtried getrepo lra
where
go
| remoteAnnexCheckUUID gc = return
(return True, return (r, extractGitConfig FromGitConfig r))
| otherwise = do
rv <- liftIO newEmptyMVar
let getrepo = ifM (liftIO $ isEmptyMVar rv)
( do
r' <- tryGitConfigRead False r True
let t = (r', extractGitConfig FromGitConfig r')
void $ liftIO $ tryPutMVar rv t
return t
, liftIO $ readMVar rv
)
cv <- liftIO newEmptyMVar
let duc = ifM (liftIO $ isEmptyMVar cv)
( do
r' <- fst <$> getrepo
u' <- getRepoUUID r'
let ok = u' == u
void $ liftIO $ tryPutMVar cv ok
unless ok $
warning $ UnquotedString $ Git.repoDescribe r ++ " is not the expected repository. The remote's annex-checkuuid configuration prevented noticing the change until now."
return ok
, liftIO $ readMVar cv
)
return (duc, getrepo)
listProxied :: M.Map UUID (S.Set Proxy) -> [Git.Repo] -> Annex [Git.Repo]
listProxied proxies rs = concat <$> mapM go rs
where
go r = do
g <- Annex.gitRepo
u <- getRepoUUID r
gc <- Annex.getRemoteGitConfig r
let cu = fromMaybe u $ remoteAnnexConfigUUID gc
if not (canproxy gc r) || cu == NoUUID
then pure []
else case M.lookup cu proxies of
Nothing -> pure []
Just proxied -> catMaybes
<$> mapM (mkproxied g r gc proxied)
(S.toList proxied)
proxiedremotename r p = do
n <- Git.remoteName r
pure $ n ++ "-" ++ proxyRemoteName p
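-- For example, a proxy remote named "origin" that proxies a remote
-- named "node1" is exposed as a proxied remote named "origin-node1"
-- (names purely illustrative).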
mkproxied g r gc proxied p = case proxiedremotename r p of
Nothing -> pure Nothing
Just proxyname -> mkproxied' g r gc proxied p proxyname
-- The proxied remote is constructed by renaming the proxy remote,
-- changing its uuid, and setting the proxied remote's inherited
-- configs and uuid in Annex state.
mkproxied' g r gc proxied p proxyname
| any isconfig (M.keys (Git.config g)) = pure Nothing
| otherwise = do
clusters <- getClustersWith id
-- Not using addGitConfigOverride for inherited
-- configs, because child git processes do not
-- need them to be provided with -c.
Annex.adjustGitRepo (pure . annexconfigadjuster clusters)
return $ Just $ renamedr
where
renamedr =
let c = adduuid configkeyUUID $
Git.fullconfig r
in r
{ Git.remoteName = Just proxyname
, Git.config = M.map NE.head c
, Git.fullconfig = c
}
annexconfigadjuster clusters r' =
let c = adduuid (configRepoUUID renamedr) $
addurl $
addproxiedby $
adjustclusternode clusters $
inheritconfigs $ Git.fullconfig r'
in r'
{ Git.config = M.map NE.head c
, Git.fullconfig = c
}
adduuid ck = M.insert ck $
(Git.ConfigValue $ fromUUID $ proxyRemoteUUID p)
NE.:| []
addurl = M.insert (mkRemoteConfigKey renamedr (remoteGitConfigKey UrlField)) $
(Git.ConfigValue $ encodeBS $ Git.repoLocation r)
NE.:| []
addproxiedby = case remoteAnnexUUID gc of
Just u -> addremoteannexfield ProxiedByField
(Git.ConfigValue $ fromUUID u)
Nothing -> id
-- A node of a cluster that is being proxied along with
-- that cluster does not need to be synced with
-- by default, because syncing with the cluster will
-- effectively sync with all of its nodes.
--
-- Also, give it a slightly higher cost than the
-- cluster by default, to encourage using the cluster.
adjustclusternode clusters =
case M.lookup (ClusterNodeUUID (proxyRemoteUUID p)) (clusterNodeUUIDs clusters) of
Just cs
| any (\c -> S.member (fromClusterUUID c) proxieduuids) (S.toList cs) ->
addremoteannexfield SyncField
(Git.ConfigValue $ Git.Config.boolConfig' False)
. addremoteannexfield CostField
(Git.ConfigValue $ encodeBS $ show $ defaultRepoCost r + 0.1)
_ -> id
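-- The net effect for such a node is roughly as if the user had set
-- remote.<proxiedname>.annex-sync = false and a slightly higher
-- remote.<proxiedname>.annex-cost (config key names are an assumption
-- based on SyncField and CostField above).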
proxieduuids = S.map proxyRemoteUUID proxied
addremoteannexfield f = M.insert
(mkRemoteConfigKey renamedr (remoteGitConfigKey f))
. (\v -> v NE.:| [])
inheritconfigs c = foldl' inheritconfig c proxyInheritedFields
inheritconfig c k = case (M.lookup dest c, M.lookup src c) of
(Nothing, Just v) -> M.insert dest v c
_ -> c
where
src = mkRemoteConfigKey r k
dest = mkRemoteConfigKey renamedr k
-- When the git config has anything set for a remote,
-- avoid making a proxied remote with the same name.
-- It is possible to set git configs of proxies, but it
-- needs both the url and uuid config to be manually set.
isconfig (Git.ConfigKey configkey) =
proxyconfigprefix `B.isPrefixOf` configkey
where
Git.ConfigKey proxyconfigprefix = remoteConfig proxyname mempty
-- Git remotes that are gcrypt or git-lfs special remotes cannot
-- proxy. Local git remotes cannot proxy either because
-- git-annex-shell is not used to access a local git url.
-- Proxying is also not yet supported for remotes using P2P
-- addresses.
canproxy gc r
| isP2PHttp' gc = True
| remoteAnnexGitLFS gc = False
| Git.GCrypt.isEncrypted r = False
| Git.repoIsLocal r || Git.repoIsLocalUnknown r = False
| otherwise = isNothing (repoP2PAddress r)
isP2PHttp :: Remote -> Bool
isP2PHttp = isP2PHttp' . gitconfig
isP2PHttp' :: RemoteGitConfig -> Bool
isP2PHttp' = isJust . remoteAnnexP2PHttpUrl