I doubt that warning has ever been right, but I'm sure it is not right
now.
For there to be a risk of git gc deleting objects that are in the annex
index, journal files would have to be staged into it and then deleted,
with the index never committed to the git-annex branch. And AFAICS,
there is no code path where that actually happens. I considered adding
one recently, but didn't.
The way it actually works is: as long as the user has annex.alwayscommit=false
set, the data lives in the journal, where it's safe from git gc. Then,
when git-annex is run without that config, the journal is staged into
the index, which is immediately committed to the branch. There's no
window where git gc could delete the objects, because git gc only
deletes objects after some time (2 weeks by default).
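To make that flow concrete, here is a minimal runnable sketch of the
policy. All the names in it (Config, stageJournal, commitGitAnnexBranch,
flushJournal) are hypothetical illustrations of the idea, not
git-annex's actual internals.

    import Control.Monad (when)

    data Config = Config { alwaysCommit :: Bool }

    -- Hypothetical stand-in: stage journal files into the index.
    stageJournal :: IO ()
    stageJournal = putStrLn "staging journal into index"

    -- Hypothetical stand-in: commit the index to the git-annex branch.
    commitGitAnnexBranch :: IO ()
    commitGitAnnexBranch = putStrLn "committing index to git-annex branch"

    -- With annex.alwayscommit=false, changes stay in the journal, which
    -- git gc never touches. Otherwise the journal is staged and the
    -- index committed immediately, so objects never sit in the index
    -- uncommitted.
    flushJournal :: Config -> IO ()
    flushJournal cfg = when (alwaysCommit cfg) $ do
        stageJournal
        commitGitAnnexBranch

    main :: IO ()
    main = do
        flushJournal (Config { alwaysCommit = False })  -- journal left alone
        flushJournal (Config { alwaysCommit = True  })  -- staged + committed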
Now, if git-annex gets suspended at just the wrong time, or interrupted,
then yes, it's possible. But that can happen whether or not that config
was ever set. And many uses of git-annex also recover from that
situation by committing to the git-annex branch.
The journal read optimisation in aeca7c220 later got fixed in eedd73b84
to stage and commit any files that were left in the journal by a
previous git-annex run. That's necessary for the optimisation to work
correctly. But it also meant that alwayscommit=false started committing
the previous git-annex process's journalled changes, which defeated the
purpose of the config setting entirely.
So, disable the optimisation when alwayscommit=false, leaving the
files in the journal and not committing them. See my comments on the bug
report for why this seemed the best approach.
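Here is a rough sketch of the gating idea. Again, every name in it
(readAnnexFile, readFromBranch, readFromJournal, journalKnownEmpty) is
hypothetical, not the real code.

    data Config = Config { alwaysCommit :: Bool }

    -- Hypothetical stand-ins for reading a file's content from the
    -- git-annex branch and from the journal.
    readFromBranch :: FilePath -> IO String
    readFromBranch _ = return "content from branch"

    readFromJournal :: FilePath -> IO (Maybe String)
    readFromJournal _ = return Nothing

    -- The cached "journal is empty" state can only be trusted when
    -- leftover journal files get staged and committed at startup. With
    -- alwayscommit=false they are left in place, so every read has to
    -- check the journal first.
    readAnnexFile :: Config -> Bool -> FilePath -> IO String
    readAnnexFile cfg journalKnownEmpty f
        | alwaysCommit cfg && journalKnownEmpty = readFromBranch f
        | otherwise = maybe (readFromBranch f) return =<< readFromJournal f

    main :: IO ()
    main = putStrLn =<<
        readAnnexFile (Config { alwaysCommit = False }) True "some-file"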
Also fixes a problem when annex.merge-annex-branches=false and there
are changes in the journal. That config indirectly prevents committing
the journal. (Which seems a bit odd given its name, but it always has.)
So, when there were changes in the journal, perhaps left there due to
alwayscommit=false being set before, the optimisation would prevent
git-annex from reading the journal files, and it would operate with
out-of-date information.
The only price paid is one additional MVar read per write to the journal.
Presumably writing a journal file takes several orders of magnitude
longer than an MVar read.
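For a sense of scale: the extra cost is just a readMVar from
Control.Concurrent.MVar (that part is the real API) before each journal
write, while the write itself involves syscalls and disk I/O. The
writeJournalFile function below is a hypothetical stand-in, not the real
write path.

    import Control.Concurrent.MVar

    -- Hypothetical journal write; the real one writes a file under
    -- the journal directory in .git/annex/.
    writeJournalFile :: MVar Bool -> FilePath -> String -> IO ()
    writeJournalFile journalState f content = do
        _ <- readMVar journalState  -- the one extra MVar read per write
        writeFile f content         -- syscalls + disk I/O dominate

    main :: IO ()
    main = do
        st <- newMVar True
        writeJournalFile st "journal-file" "change"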
--batch does not get the speedup, because then git-annex needs to notice
when another process has made a change. Also made the assistant and
other daemon modes bypass the optimisation, which would not help them
anyway.