2014-07-27 03:39:51 +00:00
Some [[special_remotes]] have support for breaking large files up into
chunks that are stored on the remote.
This can be useful to work around limitations on the size of files
on the remote.
Chunking also allows for resuming interrupted downloads and uploads.
Note that git-annex has to buffer chunks in memory before they are sent to
a remote. So, using a large chunk size will make it use more memory.
To enable chunking, pass a `chunk=XXmb` parameter to `git annex
initremote`.
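For example, a directory special remote could be created with chunking enabled like this (the remote name `mydir` and the path are illustrative; the other parameters are the usual `initremote` options for a directory remote):

	git annex initremote mydir type=directory directory=/mnt/backup encryption=none chunk=10mb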
To disable chunking of a remote that was using chunking,
pass `chunk=0` to `git annex enableremote`. Any content already stored on
the remote using chunks will continue to be accessed via chunks; this
just prevents using chunks when storing new content.
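For instance, assuming a previously configured remote named `mydir`:

	git annex enableremote mydir chunk=0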
To change the chunk size, pass a `chunk=XXmb` parameter to
`git annex enableremote`. This only affects the chunk size used when
storing new content.
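For instance, to switch a hypothetical remote `mydir` to 50 MiB chunks for newly stored content:

	git annex enableremote mydir chunk=50mb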
See also: [[design document|design/assistant/chunks]]