`git annex export` corresponding to import. This might be useful for, eg,
datalad. There are some requests to make, eg, an S3 bucket mirror the
filenames in the git annex repository with incremental updates. That seems
out of scope (and there are many tools to do stuff like that; search
"deploy files to S3 bucket"), but something simpler like
`git annex export` could be worth doing.

`git annex export --to remote files` would copy the files to the remote,
using the names in the working tree. For remotes like S3, it could add the
url of the exported file, so that another clone of the repo could use the
exported data.
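
For instance, for a public S3 bucket the recorded url could use S3's
virtual-hosted-style addressing. A minimal sketch (the function name is
hypothetical):

    -- Url another clone could use to download an exported file from a
    -- public S3 bucket, using virtual-hosted-style addressing.
    exportUrl :: String -> FilePath -> String
    exportUrl bucket file =
        "https://" ++ bucket ++ ".s3.amazonaws.com/" ++ file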

Would this be able to reuse the existing `storeKey` interface, or would
there need to be a new interface in supported remotes?
--[[Joey]]
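
Exports are addressed by the file's name on the remote rather than by key,
so `storeKey` alone may not fit well. As a rough sketch of what a
per-remote export interface could look like (all type and function names
below are hypothetical, not git-annex's actual API):

    -- Stand-in for git-annex's Key type.
    type Key = String

    -- The name a file has in the exported tree.
    newtype ExportLocation = ExportLocation FilePath

    data ExportActions m = ExportActions
      { storeExport :: FilePath -> Key -> ExportLocation -> m Bool
        -- ^ upload a local file's content to the given name on the remote
      , retrieveExport :: Key -> ExportLocation -> FilePath -> m Bool
        -- ^ download an exported file back into a local path
      , removeExport :: Key -> ExportLocation -> m Bool
        -- ^ delete an exported file from the remote
      , renameExport :: Key -> ExportLocation -> ExportLocation -> m Bool
        -- ^ rename on the remote without re-uploading, when supported
      }

An S3 remote could presumably implement something like renameExport with a
server-side copy followed by a delete, avoiding a re-upload.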

Work is in progress. Todo list:

* `git annex get --from export` works in the repo that exported to it,
  but in another repo, the export db won't be populated, so it won't work.
  Maybe just show a useful error message in this case?

  However, exporting from one repository and then trying to update the
  export from another repository also doesn't work right, because the
  export database is not populated. So, it seems that the export database
  needs to get populated based on the export log in these cases (see the
  sketch after this list).
* Support export to additional special remotes (webdav, etc).
* Support export in the assistant (when eg setting up an S3 special
  remote). Would need git-annex sync to export to the master tree?
  This is similar to the little-used preferreddir= preferred content
  setting and the "public" repository group.
* Test S3 export.
* Test export to IA via S3. In particular, does removing an exported file
  work?
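
A minimal sketch of repopulating the export database from the export log,
as mentioned in the first item above, assuming the log can be parsed into
key/name pairs (the types, log format, and function name here are
hypothetical):

    import qualified Data.Map.Strict as M

    -- Stand-ins for git-annex's real types.
    type Key = String
    type ExportLocation = FilePath

    -- One entry per exported file, as recovered from the export log.
    data ExportLogEntry = ExportLogEntry Key ExportLocation

    -- Rebuild the two lookup tables the export database provides, so a
    -- clone that did not perform the export can still work with it.
    buildExportDb
      :: [ExportLogEntry]
      -> (M.Map Key [ExportLocation], M.Map ExportLocation Key)
    buildExportDb entries =
      ( M.fromListWith (++) [ (k, [loc]) | ExportLogEntry k loc <- entries ]
      , M.fromList [ (loc, k) | ExportLogEntry k loc <- entries ]
      )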

Low priority:

* When there are two pairs of duplicate files, and the filenames are
  swapped around, the current rename handling renames both dups to a single
  temp file, and so the other file in the pair gets re-uploaded
  unnecessarily. This could be improved.

  Perhaps: Find pairs of renames that swap content between two files.
  Run each pair in turn. Then run the current rename code. Although this
  still probably misses cases where, eg, content cycles among 3 files, and
  the same content among 3 other files. Is there a general algorithm?
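
For the general question: treating the renames as a partial permutation of
names, it decomposes into disjoint cycles, and each cycle can be performed
with a single temporary name. A minimal sketch (the temp naming scheme and
function name are hypothetical):

    import qualified Data.Map.Strict as M

    -- (old name, new name) on the remote
    type Rename = (FilePath, FilePath)

    -- Order renames so every destination is free before it is written.
    -- When only cycles remain (eg, two filenames swapped, or content
    -- cycling among 3 files), break each cycle by first moving one
    -- member aside to a temporary name.
    planRenames :: [Rename] -> [Rename]
    planRenames rs = go (0 :: Int) (M.fromList (filter (uncurry (/=)) rs))
      where
        go n m
          | M.null m = []
          | ((s, d) : _) <- safe = (s, d) : go n (M.delete s m)
          | otherwise =
              -- only cycles remain; move one member aside, freeing its
              -- name so the rest of its cycle can proceed
              let (s, d) = M.findMin m
                  tmp = ".git-annex-tmp-" ++ show n
              in (s, tmp) : go (n + 1) (M.insert tmp d (M.delete s m))
          where
            -- a rename is safe when its destination is not itself
            -- waiting to be renamed away
            safe = [ r | r@(_, d) <- M.toList m, M.notMember d m ]

With two pairs of swapped filenames this emits one temp rename per pair,
so no content needs to be re-uploaded.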