{- git-annex remotes types
-
- Most things should not need this, using Types instead
-
- Copyright 2011-2024 Joey Hess <id@joeyh.name>
-
- Licensed under the GNU AGPL version 3 or higher.
-}

{-# LANGUAGE RankNTypes #-}

module Types.Remote
( module Types.RemoteConfig
, RemoteTypeA(..)
, RemoteA(..)
, RemoteStateHandle
, SetupStage(..)
, Availability(..)
, VerifyConfigA(..)
, Verification(..)
, unVerified
, RetrievalSecurityPolicy(..)
, isExportSupported
, isImportSupported
, ExportActions(..)
, ImportActions(..)
, ByteSize
, SafeDropProof
)
where

import Data.Ord
import qualified Git
import Types.Key
import Types.UUID
import Types.GitConfig
import Types.Availability
import Types.Creds
import Types.RemoteState
import Types.UrlContents
import Types.NumCopies
import Types.Export
import Types.Import
import Types.RemoteConfig
import Utility.Hash (IncrementalVerifier)
import Config.Cost
import Utility.Metered
import Git.Types (RemoteName)
import Utility.SafeCommand
import Utility.Url
import Utility.DataUnits

data SetupStage = Init | Enable RemoteConfig | AutoEnable RemoteConfig

{- There are different types of remotes. -}
data RemoteTypeA a = RemoteType
-- human visible type name
{ typename :: String
-- enumerates remotes of this type
-- The Bool is True if automatic initialization of remotes is desired
, enumerate :: Bool -> a [Git.Repo]
-- generates a remote of this type
, generate :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> RemoteStateHandle -> a (Maybe (RemoteA a))
-- parse configs of remotes of this type
, configParser :: RemoteConfig -> a RemoteConfigParser
-- initializes or enables a remote
, setup :: SetupStage -> Maybe UUID -> Maybe CredPair -> RemoteConfig -> RemoteGitConfig -> a (RemoteConfig, UUID)
-- check if a remote of this type is able to support export
, exportSupported :: ParsedRemoteConfig -> RemoteGitConfig -> a Bool
-- check if a remote of this type is able to support import
, importSupported :: ParsedRemoteConfig -> RemoteGitConfig -> a Bool
-- is a remote of this type not a usual key/value store,
-- or export/import of a tree of files, but instead a collection
-- of files, populated by something outside git-annex, some of
-- which may be annex objects?
, thirdPartyPopulated :: Bool
}
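
-- For illustration, a skeletal value of this type might look like the
-- following (a sketch of a hypothetical remote type with all actions
-- stubbed out; "stubType" and its stub bodies are not part of git-annex):
--
-- > stubType :: Monad a => RemoteTypeA a
-- > stubType = RemoteType
-- >     { typename = "stub"
-- >     , enumerate = \_autoinit -> return []
-- >     , generate = \_ _ _ _ _ -> return Nothing
-- >     , configParser = \_ -> error "stub has no config"
-- >     , setup = \_ _ _ _ _ -> error "stub cannot be set up"
-- >     , exportSupported = \_ _ -> return False
-- >     , importSupported = \_ _ -> return False
-- >     , thirdPartyPopulated = False
-- >     }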
instance Eq (RemoteTypeA a) where
x == y = typename x == typename y
{- An individual remote. -}
data RemoteA a = Remote
-- each Remote has a unique uuid
{ uuid :: UUID
-- each Remote has a human visible name
, name :: RemoteName
-- Remotes have a use cost; higher is more expensive
, cost :: Cost
-- Transfers a key's contents from disk to the remote.
-- The key should not appear to be present on the remote until
-- all of its contents have been transferred.
-- Throws exception on failure.
, storeKey :: Key -> AssociatedFile -> Maybe FilePath -> MeterUpdate -> a ()
-- Retrieves a key's contents to a file.
-- (The MeterUpdate does not need to be used if it writes
-- sequentially to the file.)
-- Throws exception on failure.
, retrieveKeyFile :: Key -> AssociatedFile -> FilePath -> MeterUpdate -> VerifyConfigA a -> a Verification
-- Retrieves a key's contents to a tmp file, if it can be done cheaply.
-- It's ok to create a symlink or hardlink.
-- Throws exception on failure.
, retrieveKeyFileCheap :: Maybe (Key -> AssociatedFile -> FilePath -> a ())
-- Security policy for retrieving keys from this remote.
, retrievalSecurityPolicy :: RetrievalSecurityPolicy
-- Removes a key's contents (succeeds even if the contents are not present)
-- Can throw exception if unable to access remote, or if remote
-- refuses to remove the content, or if the proof is expired.
--
-- The proof is verified not to have expired shortly
-- before calling this. But, if the remote's lockContent returns
-- LockedCopy, the proof's expiry should be checked on the remote,
-- so that a delay in communicating with the remote does not
-- cause the removal to happen after the proof expires.
, removeKey :: Maybe SafeDropProof -> Key -> a ()
-- Uses locking to prevent removal of a key's contents,
-- thus producing a VerifiedCopy, which is passed to the callback.
-- If unable to lock, does not run the callback, and throws an
-- exception.
-- This is optional; remotes do not have to support locking.
, lockContent :: forall r. Maybe (Key -> (VerifiedCopy -> a r) -> a r)
-- Checks if a key is present in the remote.
-- Throws an exception if the remote cannot be accessed.
, checkPresent :: Key -> a Bool
-- Some remotes can checkPresent without an expensive network
-- operation.
, checkPresentCheap :: Bool
-- Some remotes support export.
, exportActions :: ExportActions a
-- Some remotes support import.
, importActions :: ImportActions a
-- Some remotes can provide additional details for whereis.
, whereisKey :: Maybe (Key -> a [String])
-- Some remotes can run a fsck operation on the remote,
-- without transferring all the data to the local repo
-- The parameters are passed to the fsck command on the remote.
, remoteFsck :: Maybe ([CommandParam] -> a (IO Bool))
-- Runs an action to repair the remote's git repository.
, repairRepo :: Maybe (a Bool -> a (IO Bool))
-- a Remote has a persistent configuration store
, config :: ParsedRemoteConfig
-- Get the git repo for the Remote.
, getRepo :: a Git.Repo
-- a Remote's configuration from git
, gitconfig :: RemoteGitConfig
-- a Remote can be associated with a specific local filesystem path
, localpath :: Maybe FilePath
-- a Remote can be known to be readonly
, readonly :: Bool
-- a Remote can allow writes but not have a way to delete content
-- from it.
, appendonly :: Bool
-- Set if a remote cannot be trusted to continue to contain the
-- contents of files stored there. Notably, most export/import
-- remotes are untrustworthy because they are not key/value stores.
-- Since this prevents the user from adjusting a remote's trust
-- level, it's often better to not set it and instead let the user
-- decide.
, untrustworthy :: Bool
-- a Remote can be globally available. (Ie, "in the cloud".)
-- Some Remotes can mark themselves unavailable.
, availability :: a Availability
-- the type of the remote
, remotetype :: RemoteTypeA a
-- For testing, makes a version of this remote that is not
-- available for use. All its actions should fail.
, mkUnavailable :: a (Maybe (RemoteA a))
-- Information about the remote, for git annex info to display.
, getInfo :: a [(String, String)]
-- Some remotes can download from an url (or uri). This asks the
-- remote if it can handle a particular url. The actual download
-- will be done using retrieveKeyFile, and the remote can look up
-- the url to download for a key using Logs.Web.getUrls.
, claimUrl :: Maybe (URLString -> a Bool)
-- Checks that the url is accessible, and gets information about
-- its contents, without downloading the full content.
-- Throws an exception if the url is inaccessible.
, checkUrl :: Maybe (URLString -> a UrlContents)
, remoteStateHandle :: RemoteStateHandle
}
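
-- The lockContent and removeKey actions combine to support safe drops.
-- A sketch of the flow, assuming remotes r1 and r2, a key k, and a
-- hypothetical mkSafeDropProof that builds a proof from the
-- VerifiedCopy (real proof construction lives in the drop machinery):
--
-- > case lockContent r1 of
-- >     Just locker -> locker k $ \verifiedcopy -> do
-- >         proof <- mkSafeDropProof verifiedcopy
-- >         removeKey r2 (Just proof) k
-- >     Nothing -> giveup "r1 does not support locking"
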
instance RemoteNameable (RemoteA a) where
getRemoteName = name
instance Show (RemoteA a) where
show remote = "Remote { name =\"" ++ name remote ++ "\" }"

-- two remotes are the same if they have the same uuid
instance Eq (RemoteA a) where
x == y = uuid x == uuid y

-- Order by cost since that is the important order of remotes
-- when deciding which to use. But since remotes often have the same cost
-- and Ord must be total, do a secondary ordering by uuid.
instance Ord (RemoteA a) where
compare a b
| cost a == cost b = comparing uuid a b
| otherwise = comparing cost a b
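
-- A consequence of this Ord instance is that code wanting to try
-- remotes cheapest-first can simply sort them, eg:
--
-- > import qualified Data.List
-- >
-- > cheapestFirst :: [RemoteA a] -> [RemoteA a]
-- > cheapestFirst = Data.List.sort
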
instance ToUUID (RemoteA a) where
toUUID = uuid

data VerifyConfigA a
= AlwaysVerify
| NoVerify
| RemoteVerify (RemoteA a)
| DefaultVerify

data Verification
= UnVerified
-- ^ Content was not verified during transfer, but is probably
-- ok, so if verification is disabled, don't verify it
| Verified
-- ^ Content was verified during transfer, so don't verify it
-- again. The verification does not need to use a
-- cryptographically secure hash, but the hash does need to
-- have preimage resistance.
| IncompleteVerify IncrementalVerifier
-- ^ Content was partially verified during transfer, but
-- the verification is not complete.
| MustVerify
-- ^ Content likely to have been altered during transfer,
-- verify even if verification is normally disabled
| MustFinishIncompleteVerify IncrementalVerifier
-- ^ Content likely to have been altered during transfer,
-- finish verification even if verification is normally disabled.

unVerified :: Monad m => m a -> m (a, Verification)
unVerified a = do
ok <- a
return (ok, UnVerified)
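
-- For example, a transfer that cannot verify content as it goes could
-- be wrapped like this (a sketch; copyContent is a hypothetical
-- retrieval action):
--
-- > (ok, verification) <- unVerified (copyContent k dest)
-- > -- verification is UnVerified, so the caller decides whether
-- > -- the content still needs to be checked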
-- Security policy indicating what keys can be safely retrieved from a
-- remote.
data RetrievalSecurityPolicy
= RetrievalVerifiableKeysSecure
-- ^ Transfer of keys whose content can be verified
-- with a hash check is secure; transfer of unverifiable keys is
-- not secure and should not be allowed.
--
-- This is used eg, when HTTP to a remote could be redirected to a
-- local private web server or even a file:// url, causing private
-- data from it that is not the intended content of a key to make
-- its way into the git-annex repository.
--
-- It's also used when content is stored encrypted on a remote,
-- which could replace it with a different encrypted file, and
-- trick git-annex into decrypting it and leaking the decryption
-- into the git-annex repository.
--
-- It's not (currently) used when the remote could alter the
-- content stored on it, because git-annex does not provide
-- strong guarantees about the content of keys that cannot be
-- verified with a hash check.
-- (But annex.securehashesonly does provide such guarantees.)
| RetrievalAllKeysSecure
-- ^ Any key can be securely retrieved.
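
-- A consumer of this policy might gate retrievals like so (a sketch;
-- isVerifiable is a hypothetical predicate for keys whose content can
-- be checked against a hash):
--
-- > allowRetrieval :: RetrievalSecurityPolicy -> Key -> Bool
-- > allowRetrieval RetrievalAllKeysSecure _ = True
-- > allowRetrieval RetrievalVerifiableKeysSecure k = isVerifiable k
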
isExportSupported :: RemoteA a -> a Bool
isExportSupported r = exportSupported (remotetype r) (config r) (gitconfig r)

isImportSupported :: RemoteA a -> a Bool
isImportSupported r = importSupported (remotetype r) (config r) (gitconfig r)

data ExportActions a = ExportActions
-- Exports content to an ExportLocation.
-- The exported file should not appear to be present on the remote
-- until all of its contents have been transferred.
-- Throws exception on failure.
{ storeExport :: FilePath -> Key -> ExportLocation -> MeterUpdate -> a ()
-- Retrieves exported content to a file.
-- (The MeterUpdate does not need to be used if it writes
-- sequentially to the file.)
-- Throws exception on failure.
, retrieveExport :: Key -> ExportLocation -> FilePath -> MeterUpdate -> a Verification
-- Removes an exported file (succeeds if the contents are not present)
-- Can throw exception if unable to access remote, or if remote
-- refuses to remove the content.
, removeExport :: Key -> ExportLocation -> a ()
-- Removes an exported directory. Typically the directory will be
-- empty, but it could possibly contain files or other directories,
-- and it's ok to delete those (but not required to).
-- If the remote does not use directories, or automatically cleans
-- up empty directories, this can be Nothing.
--
-- Should not fail if the directory was already removed.
--
-- Throws exception if unable to contact the remote, or perhaps if
-- the remote refuses to let the directory be removed.
, removeExportDirectory :: Maybe (ExportDirectory -> a ())
-- Checks if anything is exported to the remote at the specified
-- ExportLocation. It may check the size or other characteristics
-- of the Key, but does not need to guarantee that the content on
-- the remote is the same as the Key's content.
-- Throws an exception if the remote cannot be accessed.
, checkPresentExport :: Key -> ExportLocation -> a Bool
-- Renames an already exported file.
--
-- If the remote does not support the requested rename,
-- it can return Nothing. It's ok if the remote deletes
-- the file in such a situation too; it will be re-exported to
-- recover.
--
-- Throws an exception if the remote cannot be accessed, or
-- the file doesn't exist or cannot be renamed.
, renameExport :: Maybe (Key -> ExportLocation -> ExportLocation -> a (Maybe ()))
}
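
-- For example, exporting a local file to a remote r might look like
-- this (a sketch; f, k, and loc are the local FilePath, its Key, and
-- the destination ExportLocation):
--
-- > storeExport (exportActions r) f k loc nullMeterUpdate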
data ImportActions a = ImportActions
-- Finds the current set of files that are stored in the remote,
-- along with their content identifiers and size.
--
-- May also find old versions of files that are still stored in the
-- remote.
--
-- Throws exception on failure to access the remote.
-- May return Nothing when the remote is unchanged since last time.
{ listImportableContents :: a (Maybe (ImportableContentsChunkable a (ContentIdentifier, ByteSize)))
-- Generates a Key (of any type) for the file stored on the
-- remote at the ImportLocation. Does not download the file
-- from the remote.
--
-- May update the progress meter if it needs to perform an
-- expensive operation, such as hashing a local file.
--
-- Ensures that the key corresponds to the ContentIdentifier,
-- bearing in mind that the file on the remote may have changed
-- since the ContentIdentifier was generated.
--
-- When it returns Nothing, the file at the ImportLocation
-- will not be included in the imported tree.
--
-- When the remote is thirdPartyPopulated, this should check if the
-- file stored on the remote is the content of an annex object,
-- and return its Key, or Nothing if it is not.
--
-- Throws exception on failure to access the remote.
, importKey :: Maybe (ImportLocation -> ContentIdentifier -> ByteSize -> MeterUpdate -> a (Maybe Key))
-- Retrieves a file from the remote. Ensures that the file
-- it retrieves has one of the requested ContentIdentifiers.
--
-- This has to be used rather than retrieveExport
-- when a special remote supports imports, since files on such a
-- special remote can be changed at any time.
--
-- Throws exception on failure.
, retrieveExportWithContentIdentifier
:: ExportLocation
-> [ContentIdentifier]
-- file to write content to
-> FilePath
-- Either the key, or when it's not yet known, a callback
-- that generates a key from the downloaded content.
-> Either Key (a Key)
-> MeterUpdate
-> a (Key, Verification)
-- Exports content to an ExportLocation, and returns the
-- ContentIdentifier corresponding to the content it stored.
--
-- This is used rather than storeExport when a special remote
-- supports imports, since files on such a special remote can be
-- changed at any time.
--
-- Since other things can modify the same file on the special
-- remote, this must take care to not overwrite such modifications,
-- and only overwrite a file that has one of the ContentIdentifiers
-- passed to it, unless listImportableContents can recover an
-- overwritten file.
--
-- Also, since there can be concurrent writers, the implementation
-- needs to make sure that the ContentIdentifier it returns
-- corresponds to what it wrote, not to what some other writer
-- wrote.
--
-- Throws exception on failure.
, storeExportWithContentIdentifier
:: FilePath
-> Key
-> ExportLocation
-- old content that it's safe to overwrite
-> [ContentIdentifier]
-> MeterUpdate
-> a ContentIdentifier
-- This is used rather than removeExport when a special remote
-- supports imports.
--
-- It should only remove a file from the remote when it has one
-- of the ContentIdentifiers passed to it, unless
-- listImportableContents can recover an overwritten file.
--
-- It needs to handle races similar to storeExportWithContentIdentifier.
--
-- Throws an exception when unable to remove.
, removeExportWithContentIdentifier
:: Key
-> ExportLocation
-> [ContentIdentifier]
-> a ()
-- Removes a directory from the export, but only when it's empty.
-- Used instead of removeExportDirectory when a special remote
-- supports imports.
--
-- If the directory is not empty, it should succeed without
-- removing anything.
--
-- Throws exception if unable to contact the remote, or perhaps if
-- the remote refuses to let the directory be removed.
, removeExportDirectoryWhenEmpty :: Maybe (ExportDirectory -> a ())
-- Checks if the specified ContentIdentifier is exported to the
-- remote at the specified ExportLocation.
-- Throws an exception if the remote cannot be accessed.
, checkPresentExportWithContentIdentifier
:: Key
-> ExportLocation
-> [ContentIdentifier]
-> a Bool
}
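
-- A much simplified sketch of how these actions can drive an import
-- from a remote r (chunked listings, tree generation, and error
-- handling omitted; assumes the remote returns an unchunked listing):
--
-- > let ia = importActions r
-- > Just (ImportableContentsComplete ic) <- listImportableContents ia
-- > forM_ (importableContents ic) $ \(loc, (cid, sz)) ->
-- >     case importKey ia of
-- >         Just genkey -> void $ genkey loc cid sz nullMeterUpdate
-- >         Nothing -> return ()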