From ae04ab3b91402d5f4a9a92e4c77f151ba72f3b11 Mon Sep 17 00:00:00 2001
From: Ilya_Shlyakhter
Date: Wed, 24 Apr 2019 17:40:13 +0000
Subject: [PATCH] re: backend variants that compute checksum of chunk
 checksums

---
 doc/todo/key_checksum_from_chunk_checksums.mdwn | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 doc/todo/key_checksum_from_chunk_checksums.mdwn

diff --git a/doc/todo/key_checksum_from_chunk_checksums.mdwn b/doc/todo/key_checksum_from_chunk_checksums.mdwn
new file mode 100644
index 0000000000..7d30881255
--- /dev/null
+++ b/doc/todo/key_checksum_from_chunk_checksums.mdwn
@@ -0,0 +1 @@
+Would it be hard to add a variant of the checksumming [[backends]] that changes how the checksum is computed: instead of computing it over the whole file, it would first be computed over file chunks of a given size, and the final checksum would then be computed over the concatenation of the chunk checksums? A new [[key field|internals/key_format]], say cNNNNN, would specify the chunking size (the last chunk might be shorter). Then (1) for large files, checksum computation could be parallelized (a config option could specify the default chunk size for newly added files); and (2) I often have large files on a remote for which I have an md5 of each chunk, but not of the full file; this would let me register the location of those files with git-annex without downloading them, while still using a checksum-based key.
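
A minimal sketch of the proposed checksum-of-chunk-checksums computation, assuming MD5 for both the per-chunk and the final hash, hex digests as the concatenated form, and a 1 MiB default chunk size (all of these are illustrative assumptions, not existing git-annex behavior):

    import hashlib

    def chunked_checksum(path, chunk_size=1024 * 1024, algo="md5"):
        """Hash each fixed-size chunk of the file, then hash the
        concatenation of the (hex) chunk digests. The last chunk may
        be shorter than chunk_size."""
        chunk_digests = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                chunk_digests.append(hashlib.new(algo, chunk).hexdigest())
        # The final key checksum depends only on the chunk checksums, so it
        # can be reproduced from per-chunk checksums without the file itself.
        return hashlib.new(algo, "".join(chunk_digests).encode("ascii")).hexdigest()

The per-chunk digests could be computed in parallel, and a key for a file already stored remotely could be formed from known chunk checksums alone; the cNNNNN field named above would record the chunk size so the result stays reproducible.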