re: backend variants that compute checksum of chunk checksums
commit ae04ab3b91 (parent d9fcc9c6cc)
1 changed file with 1 addition and 0 deletions
doc/todo/key_checksum_from_chunk_checksums.mdwn (new file)
@@ -0,0 +1 @@
Would it be hard to add a variation to the checksumming [[backends]] that changes how the checksum is computed: instead of computing it over the whole file, it would first be computed on file chunks of a given size, and the final checksum then computed over the concatenation of the chunk checksums? You'd add a new [[key field|internals/key_format]], say cNNNNN, specifying the chunk size (the last chunk might be shorter). Then (1) for large files, checksum computation could be parallelized (a config option could specify the default chunk size for newly added files); and (2) I often have large files on a remote for which I have an md5 for each chunk, but not for the full file; this would let me register the location of these files with git-annex without downloading them, while still using a checksum-based key.
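For illustration, here is a minimal sketch of the proposed scheme, assuming md5 as the digest (as in point (2) above); the function name `chunked_checksum` and the 1 MiB default chunk size are hypothetical, not existing git-annex behavior:

```python
import hashlib

def chunked_checksum(path, chunk_size=1024 * 1024, algo="md5"):
    """Hypothetical sketch: hash each fixed-size chunk, then hash the
    concatenation of the chunk digests. The last chunk may be shorter."""
    outer = hashlib.new(algo)
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            # Digest of this chunk; independent chunks could be
            # hashed in parallel, per point (1) above.
            outer.update(hashlib.new(algo, chunk).digest())
    return outer.hexdigest()
```

Under this proposal, the resulting digest would go in the key as usual, and the chunk size used (here 1048576) would be recorded in the proposed cNNNNN field so the checksum can be reproduced and verified later.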