Some [[special_remotes]] have support for breaking large files up into
chunks that are stored on the remote.

This can be useful to work around limitations on the size of files
on the remote.

Chunking also allows for resuming interrupted downloads and uploads.

Note that git-annex has to buffer chunks in memory before they are sent to
a remote. So, using a large chunk size will make it use more memory.

To enable chunking, pass a `chunk=XXmb` parameter to `git annex
initremote`.
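
For example, this sets up a directory special remote that stores files
in 10 megabyte chunks (the remote name and directory here are only
illustrative):

    git annex initremote mydirremote type=directory directory=/mnt/backup encryption=none chunk=10mb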

To disable chunking of a remote that was using chunking,
pass `chunk=0` to `git annex enableremote`. Any content already stored on
the remote using chunks will continue to be accessed via chunks; this
just prevents using chunks when storing new content.
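
For example, assuming a remote named `mydirremote`:

    git annex enableremote mydirremote chunk=0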

To change the chunk size, pass a `chunk=XXmb` parameter to
`git annex enableremote`. This only affects the chunk size used when
storing new content.
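
For example, again assuming a remote named `mydirremote`:

    git annex enableremote mydirremote chunk=1mb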

See also: [[design document|design/assistant/chunks]]