From 007d89f3e94dda7bc3f2e0a5c6ee92bec07ec3b6 Mon Sep 17 00:00:00 2001
From: Atemu
Date: Tue, 9 Aug 2022 06:14:05 +0000
Subject: [PATCH] Added a comment

---
 ...nt_2_0f5ff10d84450a6df35cb974cdb4739e._comment | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
 create mode 100644 doc/bugs/Copying_many_files_to_bup_remotes_is_very_slow/comment_2_0f5ff10d84450a6df35cb974cdb4739e._comment

diff --git a/doc/bugs/Copying_many_files_to_bup_remotes_is_very_slow/comment_2_0f5ff10d84450a6df35cb974cdb4739e._comment b/doc/bugs/Copying_many_files_to_bup_remotes_is_very_slow/comment_2_0f5ff10d84450a6df35cb974cdb4739e._comment
new file mode 100644
index 0000000000..c82b5371e9
--- /dev/null
+++ b/doc/bugs/Copying_many_files_to_bup_remotes_is_very_slow/comment_2_0f5ff10d84450a6df35cb974cdb4739e._comment
@@ -0,0 +1,15 @@
+[[!comment format=mdwn
+ username="Atemu"
+ avatar="http://cdn.libravatar.org/avatar/d1f0f4275931c552403f4c6707bead7a"
+ subject="comment 2"
+ date="2022-08-09T06:14:05Z"
+ content="""
+Agreed.
+
+I see two potential ways to improve performance:
+
+* Batching:
+  Bup can split multiple files at once. If it's given 10, 20, or 100 files at a time, the per-split overhead matters less. Batching is something git-annex might need to learn sooner or later anyway, because file transfer generally doesn't scale well currently (bup's slowness just exacerbates the problem).
+* Bup index+save:
+  Use the same pattern as Borg: back up the whole git-annex repo at once and selectively restore in order to `get`. Not sure this would be a great idea, but it should improve performance in my use case (copy everything).
+"""]]
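
For reference, a minimal sketch of what the index+save pattern in the second bullet might look like at the bup CLI level. All paths and the branch name are hypothetical, and this is not how git-annex's bup special remote currently talks to bup; it only illustrates the Borg-style workflow the comment describes:

```sh
# Hypothetical setup: a local bup repository (path is an example only).
export BUP_DIR=/mnt/backup/bup
bup init

# One index+save pass over the whole git-annex repo, instead of one
# "bup split" invocation per annexed file:
bup index /home/user/annex
bup save -n annex-backup /home/user/annex

# Selective restore of a single file, roughly what "git annex get"
# would need ("latest" mirrors the saved absolute path):
bup restore -C /tmp/out annex-backup/latest/home/user/annex/somefile
```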