correction
parent 962f1f2363
commit c08d5612ee
1 changed file with 13 additions and 4 deletions
@@ -3,11 +3,20 @@
 subject="""comment 4"""
 date="2020-01-30T16:44:52Z"
 content="""
-When git-annex downloads chunks, it uses a single file and seeks forward to
-the next chunk boundary when resuming, for example.
+When git-annex downloads chunks, it downloads one chunk at a time
+(no parallelised downloads of chunks of the same key) to either a temp
+file or a memory buffer, decrypts if necessary, and then appends the
+chunk to the destination file.

-chrysn's analysis seems right.
+I agree with chrysn's analysis on all points.
+
+Since chunks are often stored entirely in RAM, the chunk size is typically
+a small fraction of RAM. It seems unlikely to me that the kernel would
+often decide to unnecessarily flush a small write to a temp file out to disk
+and drop it from the cache when the very next operation after writing the
+file is reading it back in.

 Also, this smells of premature optimisation, and tying it to features that
-have not even been agreed on, let alone implemented or profiled, is weird.
+have not even been agreed on, let alone implemented, makes it kind of super
+low priority?
 """]]
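For readers unfamiliar with the behaviour the corrected comment describes, here is a minimal sketch in Haskell (git-annex itself is written in Haskell, but this is not its actual code): chunks of one key are fetched strictly one at a time, decrypted if necessary, and appended to the destination file. The helpers fetchChunk and decrypt are hypothetical, standing in for the remote-specific transfer and crypto layers.

    -- A sketch only, not git-annex's actual implementation.
    import qualified Data.ByteString as B
    import System.IO (IOMode (AppendMode), withFile)

    -- fetchChunk and decrypt are hypothetical helpers standing in for
    -- the remote transfer and crypto layers.
    downloadKey
      :: (Int -> IO B.ByteString)          -- fetchChunk: one chunk from the remote
      -> (B.ByteString -> IO B.ByteString) -- decrypt: identity for plain remotes
      -> Int                               -- total number of chunks
      -> FilePath                          -- destination file
      -> IO ()
    downloadKey fetchChunk decrypt numChunks dest =
      withFile dest AppendMode $ \h ->
        -- strictly sequential: chunk n+1 is not requested until chunk n
        -- has been decrypted and appended
        mapM_ (\n -> fetchChunk n >>= decrypt >>= B.hPut h) [0 .. numChunks - 1]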
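The wording that was corrected away mentioned seeking forward to the next chunk boundary when resuming. As a purely illustrative sketch (again, not git-annex's code), resume logic with a fixed chunk size can truncate the partially written file back to the last complete chunk boundary and report which chunk to fetch next:

    -- A sketch only: resume bookkeeping for a fixed chunk size.
    import System.IO (IOMode (ReadWriteMode), hFileSize, hSetFileSize, withFile)

    -- Truncate any partial trailing chunk so the next append starts on a
    -- chunk boundary, and return the index of the next chunk to fetch.
    resumeIndex :: Integer -> FilePath -> IO Integer
    resumeIndex chunkSize dest =
      withFile dest ReadWriteMode $ \h -> do
        have <- hFileSize h
        let complete = have `div` chunkSize   -- whole chunks already on disk
        hSetFileSize h (complete * chunkSize) -- drop the partial chunk, if any
        return complete

For example, with a 1 MiB chunk size and 3.5 MiB already on disk, this truncates the file to 3 MiB and returns 3, so the transfer resumes at the fourth chunk.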