When `git annex export treeish --to remote` is used to export to a remote,
and the remote allows files to somehow be edited on it, then there ought
to be a way to import the changes back from the remote into the git
repository. The command could be `git annex import --from remote`.

There also ought to be a way to make `git annex sync` automatically import.

See [[design/importing_trees_from_special_remotes]] for the current design
for this.

## implementation notes

* startExport uses checkPresentExport, but when there's a modified file,
  it's not unexported, so it is still present, so checkPresentExport
  succeeds, and so startExport does the wrong thing. Seems to indicate
  checkPresentExport needs to be replaced too.

* Is renameExport safe to use with an import?

  When a rename() is done, it could result in a modified file being
  renamed. But this would not result in data loss; the next import would
  see the modification and import it from the new name. The only
  potentially confusing thing is that there was essentially a conflict,
  and it got resolved in a way the user may not expect.

  But S3 implements rename as a copy followed by a delete. If there's a
  race, that means the modified content does get deleted.

  So, it seems it's not safe. Probably simplest to just make it not be
  provided for import remotes.

* Should the ContentIdentifier db be multiwriter? It would simplify
  the situation with the long-lived lock of it in adjustExportImport.

  OTOH, if a single process, such as the assistant, exported one tree
  and then another, multiwriter would mean no guarantee that reading
  from the contentidentifier database would see values that were earlier
  written by the same process.

* Need to support the annex-tracking-branch configuration, which the
  documentation says makes git-annex sync and the assistant do imports.

* Database.ContentIdentifier needs a way to update the database with
  information coming from the git-annex branch. This will allow multiple
  clones to import from the same remote, and share content identifier
  information among them.

  It will only need to be updated when listContents returns a
  ContentIdentifier that is not already known in the database.

* When on an adjusted unlocked branch, need to import the files unlocked.
  Also, the tracking branch code needs to know about such branches;
  currently it will generate the wrong tracking branch.

* What if the remote lists importable filenames that are absolute paths,
  or contain a "../" attack? Does git already guard against merging such
  trees?

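Whatever git does, it would be prudent to validate the listing before building a tree from it. A minimal sketch of such a guard (Python for illustration; `is_safe_import_path` is a hypothetical helper, not a git-annex function):

```python
from pathlib import PurePosixPath

def is_safe_import_path(name: str) -> bool:
    # Refuse names that could escape the tree being imported:
    # empty names, absolute paths, and any ".." component.
    if name == "" or name.startswith("/"):
        return False
    return ".." not in PurePosixPath(name).parts

# A listing entry like "../../.git/config" is rejected up front,
# rather than relying on the merge machinery to catch it.
```
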
* Need to support annex.largefiles when importing.

* If a tree containing a non-annexed file is exported,
  and then an import is done from the remote, the new tree will have that
  file annexed, and so merging it converts to annexed (there is no merge
  conflict). This problem seems hard to avoid, other than relying on
  annex.largefiles to tell git-annex if a file should be imported
  non-annexed.

  Although.. the importer could check, for each file, whether there's a
  corresponding file in the branch it's generating the import for, and
  whether that file is annexed. But this might be slow, and seems a lot
  of bother for an edge case?

## race conditions

(Some thoughts about races that the design should cover now, but kept here
for reference.)

A file could be modified on the remote while
it's being exported, and if the remote then uses the mtime of the modified
file in the content identifier, the modification would never be noticed by
imports.

To fix this race, we need an atomic move operation on the remote. Upload
the file to a temp file, then get its content identifier, and then move it
from the temp file to its final location. Alternatively, upload a file and
get the content identifier atomically, which eg S3 with versioning enabled
provides. It would make sense to have the storeExport operation always
return a content identifier, and document that it needs to get it
atomically, either by using a temp file or by something specific to the
remote.

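For a directory-style remote, the temp-file approach could be sketched like this (Python for illustration; `store_export_atomic` and the size+mtime content identifier are assumptions of the sketch, not git-annex's actual storeExport interface):

```python
import os
import shutil
import tempfile

def store_export_atomic(local_file: str, remote_dir: str, name: str) -> str:
    # Upload to a temp file in the same directory, so the final step
    # can be an atomic rename.
    fd, tmp = tempfile.mkstemp(dir=remote_dir)
    os.close(fd)
    try:
        shutil.copyfile(local_file, tmp)
        # Derive the content identifier from the temp file, *before* it
        # is visible under its final name, so a concurrent edit on the
        # remote cannot be confused with the version we just uploaded.
        st = os.stat(tmp)
        cid = f"{st.st_size}:{st.st_mtime_ns}"
        os.replace(tmp, os.path.join(remote_dir, name))  # atomic on POSIX
        return cid
    except BaseException:
        os.unlink(tmp)
        raise
```

The same shape works for any remote that offers "write hidden, then publish atomically"; only the identifier derivation is remote-specific.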
----

There's also a race where a file gets changed on the remote after an
import tree, and an export then overwrites it with something else.

One solution would be to only allow one of importtree or exporttree
to a given remote. This reduces the use cases a lot though, and perhaps
so far that the import tree feature is not worth building. The adb
special remote needs both. Also, such a limitation seems like one that
users might try to work around by initializing two remotes using the same
data and trying to use one for import and the other for export.

Really fixing this race needs locking or an atomic operation. Locking seems
unlikely to be a portable enough solution.

An atomic rename operation could at least narrow the race significantly, eg:

1. get content identifier of $file, check if it's what was expected, else
   abort (optional, but would catch most problems)
2. upload new version of $file to $tmp1
3. rename current $file to $tmp2
4. get content identifier of $tmp2, check if it's what was expected to
   be. If not, $file was modified after the last import tree, and that
   conflict has to be resolved. Otherwise, delete $tmp2
5. rename $tmp1 to $file

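The steps above can be sketched for a directory-style remote (Python for illustration; `export_with_rename_guard`, `content_identifier`, and the size+mtime identifier are made-up names for this sketch, not git-annex code):

```python
import os
import shutil
import tempfile

def content_identifier(path: str) -> str:
    st = os.stat(path)
    return f"{st.st_size}:{st.st_mtime_ns}"

def export_with_rename_guard(local_file, remote_dir, name, expected_cid):
    target = os.path.join(remote_dir, name)
    # 1. optional early check; catches most modifications cheaply
    if content_identifier(target) != expected_cid:
        raise RuntimeError("file changed on remote; import it first")
    # 2. upload the new version to tmp1
    fd, tmp1 = tempfile.mkstemp(dir=remote_dir)
    os.close(fd)
    shutil.copyfile(local_file, tmp1)
    # 3. move the current file out of the way
    tmp2 = target + ".old"
    os.replace(target, tmp2)
    # 4. re-check the moved-aside copy; a mismatch means a conflicting
    #    write happened after the last import tree and must be resolved
    if content_identifier(tmp2) != expected_cid:
        raise RuntimeError("conflict: modified file preserved as " + tmp2)
    os.unlink(tmp2)
    # 5. put the new version in place
    os.replace(tmp1, target)
```
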
That leaves a race if the file gets overwritten after it's moved out
of the way. If the rename refuses to overwrite existing files, that race
would be detected by it failing. renameat2(2) with `RENAME_NOREPLACE` can
do that, but probably many special remote interfaces don't provide a way
to do it.

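Where an interface offers hard links but no no-replace rename, the refusal can be emulated, since link(2) fails with EEXIST when the destination exists. A sketch (Python; `rename_noreplace` is a hypothetical helper, and this obviously doesn't help on remotes that expose no link operation either):

```python
import os

def rename_noreplace(src: str, dst: str) -> None:
    # link() refuses to overwrite, so a concurrent overwrite of dst is
    # detected as a failure instead of silently losing data.
    os.link(src, dst)   # raises FileExistsError if dst already exists
    os.unlink(src)
```
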
S3 lacks a rename operation; it can only copy and then delete. Which is not
good enough; it risks the file being replaced with new content before the
delete, and the new content being deleted.

Is this race really a significant problem? One way to look at it is as
analogous to a git merge overwriting a locally modified file.
Git can certainly use similar techniques to entirely detect and recover
from such races (but not the similar race described in the next section).
But, git does not actually do that! I modified git's
merge.c to sleep for 10 seconds after `refresh_index()`, and verified
that changes made to the work tree in that window were silently overwritten
by git merge. In git's case, the race window is normally quite narrow,
and this is very unlikely to happen (the similar race described in the next
section is more likely).

If git-annex could get the race window similarly small, it would perhaps be
ok. Eg:

1. upload new version of $file to $tmp
2. get content identifier of $file, check if it's what was expected, else
   abort
3. rename (or copy and delete) $tmp to $file

The race window between #2 and #3 could be quite narrow for some remotes.
But S3, lacking a rename, does a copy that can be very slow for large files.

S3, with versioning, could detect the race after the fact, by listing
the versions of the file, and checking if any of the versions is one
that git-annex did not know the file already had.
[Using this api](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETVersion.html),
with version-id-marker set to the previous version of the file,
should list only the previous and current versions; if there's an
intermediate version, then the race occurred, and it could roll the change
back, or otherwise recover the overwritten version. This could be done at
import time, to detect a previous race and recover from it: import a tree
with the file(s) that were overwritten due to the race, leading to a
tree import conflict that the user can resolve. This likely generalizes
to importing a sequence of trees, so each version written to S3 gets
imported.

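The detection side could be sketched as follows (Python; `detect_intermediate_versions` is a hypothetical helper that operates on the `Versions` entries of a ListObjectVersions response, as described in the API linked above; fetching them would use an S3 client such as boto3, assumed available):

```python
def detect_intermediate_versions(versions, known_version_ids):
    # "versions" is the Versions list for one key, newest first. Any
    # version id that git-annex did not already record indicates a write
    # that raced with an export, and needs to be recovered/imported.
    return [v["VersionId"] for v in versions
            if v["VersionId"] not in known_version_ids]

# With boto3 this might be driven as (untested sketch):
#   resp = boto3.client("s3").list_object_versions(Bucket=bucket, Prefix=key)
#   raced = detect_intermediate_versions(resp["Versions"], known_ids)
```
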
----

A remaining race is that, if the file is open for write at the same
time it's renamed, the write might happen after the content identifier
is checked, and then whatever is written to it will be lost.

But: Git worktree update has the same race condition. Verified with
this perl oneliner, run in a worktree and, a second later,
followed by a git pull. The lines that it appended to the
file got lost:

    perl -e 'open (OUT, ">>foo") || die "$!"; sleep(10); while (<>) { print OUT $_ }'

Since this is acceptable in git, I suppose we can accept it here too..

----

See also [[adb_special_remote]].