{- helpers for special remotes
 -
 - Copyright 2011-2020 Joey Hess <id@joeyh.name>
 -
 - Licensed under the GNU AGPL version 3 or higher.
 -}

{-# LANGUAGE OverloadedStrings #-}

module Remote.Helper.Special (
	findSpecialRemotes,
	gitConfigSpecialRemote,
	mkRetrievalVerifiableKeysSecure,
	Storer,
	Retriever,
	Remover,
	CheckPresent,
	ContentSource,
	fileStorer,
	byteStorer,
	fileRetriever,
	byteRetriever,
	storeKeyDummy,
	retrieveKeyFileDummy,
	removeKeyDummy,
	checkPresentDummy,
	SpecialRemoteCfg(..),
	specialRemoteCfg,
	specialRemoteConfigParsers,
	specialRemoteType,
	specialRemote,
	specialRemote',
	lookupName,
	module X
) where
import Annex.Common
import qualified Annex
import Annex.SpecialRemote.Config
import Types.StoreRetrieve
import Types.Remote
import Crypto
import Annex.UUID
import Config
import Config.Cost
import Utility.Metered
import Remote.Helper.Chunked as X
import Remote.Helper.Encryptable as X
import Annex.Content
import Messages.Progress
import qualified Git
import qualified Git.Construct
import Git.Types
import qualified Utility.RawFilePath as R

import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L
import qualified Data.Map as M
2011-03-30 18:00:54 +00:00
|
|
|
{- Special remotes don't have a configured url, so Git.Repo does not
|
|
|
|
- automatically generate remotes for them. This looks for a different
|
|
|
|
- configuration key instead.
|
|
|
|
-}
|
|
|
|
findSpecialRemotes :: String -> Annex [Git.Repo]
|
|
|
|
findSpecialRemotes s = do
|
2011-12-14 19:30:14 +00:00
|
|
|
m <- fromRepo Git.config
|
|
|
|
liftIO $ mapM construct $ remotepairs m
|
2012-11-11 04:51:07 +00:00
|
|
|
where
|
|
|
|
remotepairs = M.toList . M.filterWithKey match
|
2019-11-27 20:54:11 +00:00
|
|
|
construct (k,_) = Git.Construct.remoteNamedFromKey k
|
|
|
|
(pure Git.Construct.fromUnknown)
|
2019-12-02 14:57:09 +00:00
|
|
|
match (ConfigKey k) _ =
|
|
|
|
"remote." `S.isPrefixOf` k
|
|
|
|
&& (".annex-" <> encodeBS' s) `S.isSuffixOf` k
|
2011-03-30 18:00:54 +00:00
|
|
|
|
|
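The `match` predicate above looks for git config keys of the form `remote.<name>.annex-<s>`. As a standalone sketch of that logic, using plain `Data.ByteString.Char8` values in place of git-annex's `ConfigKey` and `encodeBS'` (`matchesSpecialRemote` is a hypothetical name, not part of git-annex):

```haskell
import qualified Data.ByteString.Char8 as B

-- Sketch: does a raw git config key name look like
-- "remote.<name>.annex-<setting>" for the given setting?
-- Hypothetical stand-in for the ConfigKey-based match above.
matchesSpecialRemote :: String -> B.ByteString -> Bool
matchesSpecialRemote s k =
    B.pack "remote." `B.isPrefixOf` k
        && (B.pack ".annex-" <> B.pack s) `B.isSuffixOf` k
```

So `remote.foo.annex-directory` matches for the setting `"directory"`, while ordinary git remotes (which lack the `annex-` suffix key) do not.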
{- Sets up configuration for a special remote in .git/config. -}
gitConfigSpecialRemote :: UUID -> RemoteConfig -> [(String, String)] -> Annex ()
gitConfigSpecialRemote u c cfgs = do
	forM_ cfgs $ \(k, v) ->
		setConfig (remoteAnnexConfig c (encodeBS' k)) v
	storeUUIDIn (remoteAnnexConfig c "uuid") u
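Concretely, `gitConfigSpecialRemote` writes one `remote.<name>.annex-<k>` setting per config pair, plus the remote's `annex-uuid`. A hypothetical pure sketch of that key layout (plain `String`s stand in for `RemoteConfig` and `UUID`; this helper is illustration only, not git-annex API):

```haskell
-- Hypothetical sketch of the .git/config keys gitConfigSpecialRemote
-- sets for a remote named `name` with uuid `u`.
specialRemoteConfigKeys :: String -> String -> [(String, String)] -> [(String, String)]
specialRemoteConfigKeys name u cfgs =
    [ (annexkey k, v) | (k, v) <- cfgs ] ++ [ (annexkey "uuid", u) ]
  where
    annexkey k = "remote." ++ name ++ ".annex-" ++ k
```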
-- RetrievalVerifiableKeysSecure unless overridden by git config.
--
-- Only looks at the RemoteGitConfig; the GitConfig's setting is
-- checked at the same place the RetrievalSecurityPolicy is checked.
mkRetrievalVerifiableKeysSecure :: RemoteGitConfig -> RetrievalSecurityPolicy
mkRetrievalVerifiableKeysSecure gc
	| remoteAnnexAllowUnverifiedDownloads gc = RetrievalAllKeysSecure
	| otherwise = RetrievalVerifiableKeysSecure
-- A Storer that expects to be provided with a file containing
-- the content of the key to store.
fileStorer :: (Key -> FilePath -> MeterUpdate -> Annex ()) -> Storer
fileStorer a k (FileContent f) m = a k f m
fileStorer a k (ByteContent b) m = withTmp k $ \f -> do
	let f' = fromRawFilePath f
	liftIO $ L.writeFile f' b
	a k f' m
-- A Storer that expects to be provided with a L.ByteString of
-- the content to store.
byteStorer :: (Key -> L.ByteString -> MeterUpdate -> Annex ()) -> Storer
byteStorer a k c m = withBytes c $ \b -> a k b m
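To make the `Storer`/`Retriever` shapes concrete, here is a toy in-memory backend in plain `IO`; `String` keys and an `IORef`-held `Map` are stand-ins for git-annex's `Key`, the `Annex` monad, and a real remote (all assumptions, for illustration only):

```haskell
import Data.IORef
import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Lazy.Char8 as LC
import qualified Data.Map as M

type ToyRemote = IORef (M.Map String L.ByteString)

-- byteStorer-shaped: the backend is handed the whole content
-- as a lazy ByteString.
toyStore :: ToyRemote -> String -> L.ByteString -> IO ()
toyStore r k b = modifyIORef' r (M.insert k b)

-- byteRetriever-shaped: the content is passed to a callback,
-- which must consume it before the retriever returns.
toyRetrieve :: ToyRemote -> String -> (L.ByteString -> IO a) -> IO a
toyRetrieve r k callback = do
    m <- readIORef r
    maybe (ioError (userError ("no such key: " ++ k))) callback (M.lookup k m)
```

A real special remote plugs functions of these shapes into `byteStorer`/`byteRetriever`, which adapt them to the `ContentSource` type used elsewhere in this module.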
-- A Retriever that writes the content of a Key to a provided file.
-- It is responsible for updating the progress meter as it retrieves data.
fileRetriever :: (FilePath -> Key -> MeterUpdate -> Annex ()) -> Retriever
fileRetriever a k m callback = do
	f <- prepTmp k
	a (fromRawFilePath f) k m
	pruneTmpWorkDirBefore f (callback . FileContent . fromRawFilePath)
-- A Retriever that generates a lazy ByteString containing the Key's
-- content, and passes it to a callback action which will fully consume it
-- before returning.
byteRetriever :: (Key -> (L.ByteString -> Annex ()) -> Annex ()) -> Retriever
byteRetriever a k _m callback = a k (callback . ByteContent)
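The callback shape matters because a retriever may hold a resource open (a file handle, an HTTP connection) that must stay alive while the lazy ByteString is consumed. A minimal sketch in plain `IO` (hypothetical; not how any particular git-annex remote is implemented):

```haskell
import System.IO
import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Lazy.Char8 as LC

-- The handle stays open for the callback's whole run; returning the
-- lazy ByteString instead would risk delayed reads from a closed handle.
retrieveViaHandle :: FilePath -> (L.ByteString -> IO a) -> IO a
retrieveViaHandle f callback = withFile f ReadMode $ \h ->
    callback =<< L.hGetContents h
```

This is why the documentation above says the callback must fully consume the content before returning: once `retrieveViaHandle` (or a real retriever) returns, the underlying resource may be gone.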
{- The base Remote that is provided to specialRemote needs to have
 - storeKey, retrieveKeyFile, removeKey, and checkPresent methods,
 - but they are never actually used (since specialRemote replaces them).
 - Here are some dummy ones.
 -}
storeKeyDummy :: Key -> AssociatedFile -> MeterUpdate -> Annex ()
storeKeyDummy _ _ _ = error "missing storeKey implementation"
retrieveKeyFileDummy :: Key -> AssociatedFile -> FilePath -> MeterUpdate -> Annex Verification
retrieveKeyFileDummy _ _ _ _ = error "missing retrieveKeyFile implementation"
removeKeyDummy :: Key -> Annex ()
removeKeyDummy _ = error "missing removeKey implementation"
checkPresentDummy :: Key -> Annex Bool
checkPresentDummy _ = error "missing checkPresent implementation"
type RemoteModifier
	= ParsedRemoteConfig
	-> Storer
	-> Retriever
	-> Remover
	-> CheckPresent
	-> Remote
	-> Remote
data SpecialRemoteCfg = SpecialRemoteCfg
	{ chunkConfig :: ChunkConfig
	, displayProgress :: Bool
	}

specialRemoteCfg :: ParsedRemoteConfig -> SpecialRemoteCfg
specialRemoteCfg c = SpecialRemoteCfg (getChunkConfig c) True
-- Modifies a base RemoteType to support chunking and encryption configs.
specialRemoteType :: RemoteType -> RemoteType
specialRemoteType r = r
	{ configParser = \c -> addRemoteConfigParser specialRemoteConfigParsers
		<$> configParser r c
	}
specialRemoteConfigParsers :: [RemoteConfigFieldParser]
specialRemoteConfigParsers = chunkConfigParsers ++ encryptionConfigParsers
-- Modifies a base Remote to support both chunking and encryption,
-- which special remotes typically should support.
--
-- Handles progress displays when displayProgress is set.
specialRemote :: RemoteModifier
specialRemote c = specialRemote' (specialRemoteCfg c) c

specialRemote' :: SpecialRemoteCfg -> RemoteModifier
specialRemote' cfg c storer retriever remover checkpresent baser = encr
  where
	encr = baser
		{ storeKey = \k _f p -> cip >>= storeKeyGen k p
		, retrieveKeyFile = \k _f d p -> cip >>= retrieveKeyFileGen k d p
		, retrieveKeyFileCheap = case retrieveKeyFileCheap baser of
			Nothing -> Nothing
			Just a
				-- retrieval of encrypted keys is never cheap
				| isencrypted -> Nothing
				| otherwise -> Just $ \k f d -> a k f d
		-- When encryption is used, the remote could provide
		-- some other content encrypted by the user, and trick
		-- git-annex into decrypting it, leaking the decryption
		-- into the git-annex repository. Verifiable keys
		-- are the main protection against this attack.
		, retrievalSecurityPolicy = if isencrypted
			then mkRetrievalVerifiableKeysSecure (gitconfig baser)
			else retrievalSecurityPolicy baser
		, removeKey = \k -> cip >>= removeKeyGen k
		, checkPresent = \k -> cip >>= checkPresentGen k
		, cost = if isencrypted
			then cost baser + encryptedRemoteCostAdj
			else cost baser
		, getInfo = do
			l <- getInfo baser
			return $ l ++
				[ ("encryption", describeEncryption c)
				, ("chunking", describeChunkConfig (chunkConfig cfg))
				]
		, whereisKey = if noChunks (chunkConfig cfg) && not isencrypted
			then whereisKey baser
			else Nothing
		, exportActions = (exportActions baser)
			{ storeExport = \f k l p -> displayprogress p k (Just f) $
				storeExport (exportActions baser) f k l
			, retrieveExport = \k l f p -> displayprogress p k Nothing $
				retrieveExport (exportActions baser) k l f
			}
		}
	cip = cipherKey c (gitconfig baser)
	isencrypted = isEncrypted c
	-- chunk, then encrypt, then feed to the storer
	storeKeyGen k p enc = sendAnnex k rollback $ \src ->
		displayprogress p k (Just src) $ \p' ->
			storeChunks (uuid baser) chunkconfig enck k src p'
				(storechunk enc)
				checkpresent
	  where
		rollback = void $ removeKey encr k
		enck = maybe id snd enc

	storechunk Nothing k content p = storer k content p
	storechunk (Just (cipher, enck)) k content p = do
		cmd <- gpgCmd <$> Annex.getGitConfig
		withBytes content $ \b ->
			encrypt cmd encr cipher (feedBytes b) $
				readBytes $ \encb ->
					storer (enck k) (ByteContent encb) p
	-- call retriever to get chunks; decrypt them; stream to dest file
	retrieveKeyFileGen k dest p enc = do
		displayprogress p k Nothing $ \p' ->
			retrieveChunks retriever (uuid baser) chunkconfig
				enck k dest p' (sink dest enc encr)
		return UnVerified
	  where
		enck = maybe id snd enc
	removeKeyGen k enc =
		removeChunks remover (uuid baser) chunkconfig enck k
	  where
		enck = maybe id snd enc

	checkPresentGen k enc =
		checkPresentChunks checkpresent (uuid baser) chunkconfig enck k
	  where
		enck = maybe id snd enc
	chunkconfig = chunkConfig cfg

	displayprogress p k srcfile a
		| displayProgress cfg =
			metered (Just p) (KeySizer k (pure (fmap toRawFilePath srcfile))) (const a)
		| otherwise = a p
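The `storeKeyGen`/`storechunk` flow above boils down to: split the content into chunks, encrypt each chunk, and feed each encrypted blob to the storer. A pure sketch of that data flow, with `toyEncrypt` as a placeholder transform (emphatically not real encryption, where git-annex uses gpg) and assuming a chunk size greater than zero:

```haskell
import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Lazy.Char8 as LC

-- Split content into fixed-size chunks (chunk size must be > 0),
-- as storeChunks does before handing each chunk to storechunk.
chunksOf :: Int -> L.ByteString -> [L.ByteString]
chunksOf n b
    | L.null b = []
    | otherwise = let (h, t) = L.splitAt (fromIntegral n) b in h : chunksOf n t

-- Placeholder for per-chunk encryption; only illustrates where
-- encryption sits in the pipeline.
toyEncrypt :: L.ByteString -> L.ByteString
toyEncrypt = L.map (+ 1)

-- chunk, then encrypt: the blobs that would be fed to the Storer.
chunkThenEncrypt :: Int -> L.ByteString -> [L.ByteString]
chunkThenEncrypt n = map toyEncrypt . chunksOf n
```

Note the ordering: chunking happens before encryption, so a remote only ever sees encrypted blobs of at most the chunk size, and `enck` maps each chunk's key to its encrypted form.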
{- Sink callback for retrieveChunks. Stores the file content into the
 - provided Handle, decrypting it first if necessary.
 -
 - If the remote did not store the content using chunks, no Handle
 - will be provided, and it's up to us to open the destination file.
 -
 - Note that when neither chunking nor encryption is used, and the remote
 - provides FileContent, that file only needs to be renamed
 - into place. (And it may even already be in the right place..)
 -}
sink
	:: LensGpgEncParams c
	=> FilePath
	-> Maybe (Cipher, EncKey)
	-> c
	-> Maybe Handle
	-> Maybe MeterUpdate
	-> ContentSource
	-> Annex ()
sink dest enc c mh mp content = case (enc, mh, content) of
	(Nothing, Nothing, FileContent f)
		| f == dest -> noop
		| otherwise -> liftIO $ moveFile f dest
	(Just (cipher, _), _, ByteContent b) -> do
		cmd <- gpgCmd <$> Annex.getGitConfig
		decrypt cmd c cipher (feedBytes b) $
			readBytes write
	(Just (cipher, _), _, FileContent f) -> do
		cmd <- gpgCmd <$> Annex.getGitConfig
		withBytes content $ \b ->
			decrypt cmd c cipher (feedBytes b) $
				readBytes write
		liftIO $ removeWhenExistsWith R.removeLink (toRawFilePath f)
	(Nothing, _, FileContent f) -> do
		withBytes content write
		liftIO $ removeWhenExistsWith R.removeLink (toRawFilePath f)
	(Nothing, _, ByteContent b) -> write b
  where
	write b = case mh of
		Just h -> liftIO $ b `streamto` h
		Nothing -> liftIO $ bracket opendest hClose (b `streamto`)
	streamto b h = case mp of
		Just p -> meteredWrite p h b
		Nothing -> L.hPut h b
	opendest = openBinaryFile dest WriteMode
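A pure mirror of `sink`'s case analysis can help in reading it: the choice of action depends on whether the content is encrypted, whether a `Handle` was provided (the chunked case), and whether the content arrived as a file on disk. The type and names below are illustrative only, not part of this module:

```haskell
data SinkStep
    = RenameIntoPlace   -- no chunking, no encryption, content already a file
    | DecryptThenWrite  -- encrypted content is decrypted before writing
    | WriteBytes        -- plain bytes written (with progress) to the dest
    deriving (Eq, Show)

-- Hypothetical pure mirror of sink's case analysis, keyed on:
-- is it encrypted? was a Handle provided (chunked)? is it FileContent?
sinkStep :: Bool -> Bool -> Bool -> SinkStep
sinkStep encrypted chunked isfile
    | encrypted = DecryptThenWrite
    | isfile && not chunked = RenameIntoPlace
    | otherwise = WriteBytes
```

The rename short-circuit is the cheap path the comment above alludes to: with no chunking and no encryption, a `FileContent` source only needs to be moved (or may already be) into place.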
withBytes :: ContentSource -> (L.ByteString -> Annex a) -> Annex a
withBytes (ByteContent b) a = a b
withBytes (FileContent f) a = a =<< liftIO (L.readFile f)