incremental checksum for local remotes
This benchmarks only slightly faster than the old git-annex. Eg, for a 1 GB file, 14.56s vs 15.57s. (On a ram disk; there would certainly be more of an effect if the file was written to disk and didn't stay in cache.) Commenting out the updateIncremental calls makes the same run take 6.31s. It may be that overhead in the implementation, other than the actual checksumming, is slowing it down. Eg, MVar access. (I also tried using 10x larger chunks, which did not change the speed.)
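The per-chunk update pattern being benchmarked above can be sketched as follows. This is a hedged illustration in Python, not git-annex's actual Haskell code; the function name, the 64 KiB chunk size, and the choice of SHA-256 are assumptions for the sketch:

```python
import hashlib

def incremental_checksum(path, chunk_size=64 * 1024):
    """Hash a file in fixed-size chunks, updating the digest state as
    each chunk is read (roughly analogous to the updateIncremental
    calls mentioned in the commit message).  Hypothetical helper, not
    git-annex code."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            # One incremental update per chunk; the per-call overhead of
            # this step is what the benchmark above isolates.
            h.update(chunk)
    return h.hexdigest()
```

The result is identical to hashing the whole file at once; the difference is only where the per-call overhead lands, which is why varying the chunk size changes the number of update calls without changing the output.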
This commit is contained in:
parent 48f63c2798
commit f44d4704c6
4 changed files with 42 additions and 19 deletions
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 10"""
+ date="2021-02-10T19:48:58Z"
+ content="""
+Incremental hashing implemented for local git remotes.
+
+Next step should be a special remote, such as directory,
+that uses byteRetriever. Chunking and encryption will complicate them..
+"""]]
@@ -20,6 +20,6 @@ checksum.
 
 Urk: Using rsync currently protects against
 [[bugs/URL_key_potential_data_loss]], so the replacement would also need to
-deal with that. Probably by refusing to resume a partial transfer of an
-affected key. (Or it could just fall back to rsync for such keys.)
+deal with that. Eg, by comparing the temp file content with the start of
+the object when resuming.
 """]]