Merge branch 'master' of ssh://git-annex.branchable.com

Joey Hess 2023-12-12 10:49:46 -04:00
commit 9f17383c00
No known key found for this signature in database
GPG key ID: DB12DB0FF05F8F38
4 changed files with 48 additions and 0 deletions


@@ -0,0 +1,18 @@
[[!comment format=mdwn
username="imlew"
avatar="http://cdn.libravatar.org/avatar/23858c3eed3c3ea9e21522f4c999f1ed"
subject="comment 12"
date="2023-12-12T11:56:05Z"
content="""
Hi Joey,
if it was slow because the inodes changed, it should only be slow the first time `git status` etc. are run.
What I experienced was that it got slower the more files were in the repo, while the drive was continuously connected.
Thanks for your suggestion to use a directory special remote, but it's not clear to me how that would be an improvement over a bare repo.
The only drawback to using a bare repo is the lack of a working tree, and special remotes don't seem to have one either.
"""]]


@@ -0,0 +1,8 @@
[[!comment format=mdwn
username="nobodyinperson"
avatar="http://cdn.libravatar.org/avatar/736a41cd4988ede057bae805d000f4f5"
subject="comment 13"
date="2023-12-12T12:44:38Z"
content="""
A directory special remote is just a bunch of files. A bare repo has the git history and all the metadata for that bunch of files. Git itself is not fun on slow and bad filesystems, and git-annex having to comb through many git objects to extract metadata for the actual annexed files is most likely the bottleneck here.

Best is to not run any git-annex commands directly on the bad filesystem, but elsewhere, operating the bad-filesystem repo as a remote. That way git-annex gathers its information on a fast filesystem and fast hardware, and only the copying of real files touches the bad filesystem. At least that's my experience.
"""]]


@@ -0,0 +1,11 @@
[[!comment format=mdwn
username="imlew"
avatar="http://cdn.libravatar.org/avatar/23858c3eed3c3ea9e21522f4c999f1ed"
subject="comment 14"
date="2023-12-12T13:36:20Z"
content="""
The speeds reported by `get` and `copy` were similar to what `rsync` reports when I just copy files to and from the disks.
It was really just the work tree (and I guess the clean/smudge filters) being slow.
And bare repos have the advantage that they carry the metadata and possibly the file history.
"""]]


@@ -0,0 +1,11 @@
[[!comment format=mdwn
username="imlew"
avatar="http://cdn.libravatar.org/avatar/23858c3eed3c3ea9e21522f4c999f1ed"
subject="comment 16"
date="2023-12-12T13:32:17Z"
content="""
I thought I had the same problem as lh (`git add` not respecting the largefiles config), but when I tried to make a minimal example I noticed that `git add` does add files to the annex; it just doesn't print the progress message that `git annex add` usually prints.
Is there any way to get it to do that?
It would help newbs like me know that largefiles is indeed working, and for files that are actually large it can be helpful to see the progress.
"""]]