Resuming interrupted uploads to encrypted special remotes is not currently
possible, because gpg does not produce consistent output: each encryption
uses a fresh random session key, so the same input yields different
ciphertext on every run. Special remotes
that could support resuming include rsync and glacier.
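
For illustration, a quick way to see that inconsistency (a sketch only;
exact gpg flags such as `--pinentry-mode loopback` vary between gpg
versions):

    import System.Process (readProcess)

    -- Encrypt the same input twice with the same passphrase and compare.
    -- gpg picks a fresh random session key each run, so the armored
    -- ciphertexts differ: there is no stable byte stream to resume.
    main :: IO ()
    main = do
      let encrypt = readProcess "gpg"
            [ "--batch", "--armor", "--symmetric"
            , "--pinentry-mode", "loopback"
            , "--passphrase", "example"
            , "-o", "-"
            ] "the same plaintext"
      a <- encrypt
      b <- encrypt
      print (a == b)  -- False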
Without consistent output, git-annex would need to locally cache the encrypted
file, and reuse that cache when resuming an upload. This would make
encrypted uploads more expensive in terms of both file I/O and disk space
used.
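
A rough sketch of that caching approach (the three helper actions are
hypothetical stand-ins, not git-annex's real API):

    import Control.Monad (unless)
    import System.Directory (doesFileExist)

    -- Encrypt once into a stable local cache file, then always upload
    -- from the cache, so a retry can resume at whatever offset the
    -- remote already reached. All helpers are hypothetical parameters.
    resumableUpload
      :: (FilePath -> FilePath -> IO ())  -- encrypt object into cache
      -> IO Integer                       -- bytes the remote already has
      -> (FilePath -> Integer -> IO ())   -- upload cache from offset
      -> FilePath -> FilePath -> IO ()
    resumableUpload encrypt bytesOnRemote uploadFrom object cache = do
      haveCache <- doesFileExist cache
      -- Pay the encryption and disk cost only once per upload attempt.
      unless haveCache $ encrypt object cache
      offset <- bytesOnRemote
      uploadFrom cache offset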
[It would be possible to write to the cache at the same time the special
remote is being fed data, and if the special remote upload fails, continue
writing the rest of the file. That would avoid half the overhead, since
the file would not need to be read from, just written to. (Although OS
caching may accomplish the same thing.)]
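
That tee idea might look something like this (again only a sketch;
`send` stands in for whatever the special remote's transfer code is):

    import Control.Exception (SomeException, try)
    import Control.Monad (unless)
    import qualified Data.ByteString as B
    import System.IO (Handle, IOMode (WriteMode), withFile)

    -- Stream ciphertext to the remote while also writing it to the
    -- cache. If sending fails partway, finish writing the remaining
    -- ciphertext to the cache, which from that point is write-only.
    teeUpload :: Handle -> FilePath -> (B.ByteString -> IO ()) -> IO Bool
    teeUpload ciphertext cachePath send =
      withFile cachePath WriteMode $ \cache ->
        let loop = do
              chunk <- B.hGetSome ciphertext 65536
              if B.null chunk
                then pure True
                else do
                  B.hPut cache chunk
                  r <- try (send chunk) :: IO (Either SomeException ())
                  case r of
                    Right () -> loop
                    Left _   -> drain cache >> pure False
            drain cache' = do
              chunk <- B.hGetSome ciphertext 65536
              unless (B.null chunk) $ B.hPut cache' chunk >> drain cache'
        in loop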
Also, `git annex unused` would need to show temp files for uploads,
the same as it currently shows temp files for downloads, and users would
sometimes need to manually `dropunused` old uploads that never completed.
The question, then, is whether resuming uploads is useful enough to add
this overhead and user-visible complexity.
--[[Joey]]
> The new-style chunking code chunks and then encrypts. This means that
> each individual chunk is a stand-alone file that can be decrypted later,
> and so resuming uploads to encrypted, chunked remotes works now.
>
> I think that's better than the ideas discussed above, so [[done]]
> --[[Joey]]
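
As an illustration of why chunk-then-encrypt makes resuming easy, a
sketch (hypothetical helpers; git-annex's actual chunking code differs
in detail):

    import Control.Monad (forM_, unless)
    import Data.Int (Int64)
    import qualified Data.ByteString.Lazy as L

    -- Each chunk is encrypted independently, so a retry only uploads
    -- chunks the remote does not already have; chunks stored before the
    -- interruption are not wasted. All three helpers are hypothetical.
    uploadChunked
      :: (L.ByteString -> IO L.ByteString)  -- encrypt one chunk
      -> (Int -> IO Bool)                   -- remote has chunk n already?
      -> (Int -> L.ByteString -> IO ())     -- store encrypted chunk n
      -> Int64 -> FilePath -> IO ()
    uploadChunked encrypt remoteHas store chunkSize path = do
      content <- L.readFile path
      forM_ (zip [(0 :: Int) ..] (split content)) $ \(n, c) -> do
        have <- remoteHas n
        unless have $ encrypt c >>= store n
      where
        split bs
          | L.null bs = []
          | otherwise = let (h, t) = L.splitAt chunkSize bs in h : split t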