Commit graph

2074 commits

Joey Hess
a69871491f
avoid build warning on windows
Since append was only exported by Annex.Common on unix, excluding it
from import caused a build warning on windows.
2024-03-26 13:16:33 -04:00
Joey Hess
f04d9574d6
fix transfer lock file for Download to not include uuid
While redundant concurrent transfers were already prevented in most
cases, it failed to prevent the case where two different repositories were
sending the same content to the same repository. By removing the uuid
from the transfer lock file for Download transfers, one repository
sending content will block the other one from also sending the same
content.

In order to interoperate with old git-annex, the old lock file is still
locked, as well as locking the new one. That added a lot of extra code
and work, and the plan is to eventually stop locking the old lock file,
at some point in time when an old git-annex process is unlikely to be
running at the same time.

Note that in the case of 2 repositories both doing eg
`git-annex copy foo --to origin`
the output is not that great:

copy b (to origin...)
  transfer already in progress, or unable to take transfer lock
git-annex: transfer already in progress, or unable to take transfer lock
97%   966.81 MiB      534 GiB/s 0sp2pstdio: 1 failed

  Lost connection (fd:14: hPutBuf: resource vanished (Broken pipe))

  Transfer failed

Perhaps that output could be cleaned up? Anyway, it's a lot better than letting
the redundant transfer happen and then failing with an obscure error about
a temp file, which is what it did before. And it seems users don't often
try to do this, since nobody ever reported this bug to me before.
(The "97%" there is actually how far along the *other* transfer is.)

Sponsored-by: Joshua Antonishen on Patreon
2024-03-25 14:47:46 -04:00
Joey Hess
62129f0b24
fix windows transfer lock check
If the lock file was not able to be exclusively locked, don't indicate
locking failed. I'm pretty sure this was a typo. It goes all the way
back to 891c85cd88 where locking was first
introduced on windows, and there's no indication of why it would make
sense to return True here.

Sponsored-by: Leon Schuermann on Patreon
2024-03-25 14:11:25 -04:00
Joey Hess
9c988ee607
handle multiple VURL checksums in one pass
git-annex fsck and some other commands that verify the content of a key
were using the non-incremental verification interface. But for VURL
urls, that interface is inefficient because when there are multiple
equivalent keys, it has to separately read and checksum for each key in
turn until one matches. It's more efficient for those to use the
incremental interface, since the file can be read a single time.

There's no real downside to using the incremental interface when available.

Note that more speedup could be had for VURL, if it was able to
calculate the checksum a single time and then compare it with the
equivalent keys' checksums, when the equivalent keys all use the same
type of checksum.
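
For the same-checksum-type case, a sketch of the idea using cryptonite
(types simplified; the real interface in git-annex differs):

import qualified Data.ByteString.Lazy as L
import Crypto.Hash (Context, SHA256, hashInit, hashUpdate, hashFinalize)

-- Read the file once, hashing incrementally, then compare the single
-- digest against each equivalent key's expected checksum.
verifyAnySha256 :: [String] -> FilePath -> IO Bool
verifyAnySha256 expected f = do
	chunks <- L.toChunks <$> L.readFile f
	let ctx = foldl hashUpdate (hashInit :: Context SHA256) chunks
	return (show (hashFinalize ctx) `elem` expected)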

Sponsored-by: k0ld on Patreon
2024-03-01 14:41:10 -04:00
Joey Hess
cc17ac423b
implement isCryptographicallySecureKey for VURL
Considerable difficulty to work around an import cycle. Had to move the
list of backends (except for VURL) to Backend.Variety so VURL could use
it.

Sponsored-by: Kevin Mueller on Patreon
2024-02-29 17:26:35 -04:00
Joey Hess
e7b7ea78af
lift isCryptographicallySecure to Annex
Needed for VURL backend.

Sponsored-by: Nicholas Golder-Manning on Patreon
2024-02-29 16:14:13 -04:00
Joey Hess
0f7143d226
support VURL backend
Not yet implemented is recording hashes on download from web and
verifying hashes.

addurl --verifiable option added with -V short option because I
expect a lot of people will want to use this.

It seems likely that --verifiable will become the default eventually,
and possibly rather soon. While old git-annex versions don't support
VURL, that doesn't prevent using them with keys that use VURL. Of
course, they won't verify the content on transfer, and fsck will warn
that it doesn't know about VURL. So there's not much problem with
starting to use VURL even when interoperating with old versions.

Sponsored-by: Joshua Antonishen on Patreon
2024-02-29 13:48:51 -04:00
Joey Hess
70cb41028e
Pass --no-warnings to yt-dlp
Noticed a warning with -J2 causing git-annex progress output to get slightly
messed up.

Error output would also probably do that, so perhaps it should capture
stderr and only display it when yt-dlp exited nonzero?

This option might also make sense for youtube-dl, I don't have an
installation handy anymore to check.
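
A sketch of that idea (the wrapper is hypothetical; --no-warnings is
the option this commit adds):

import System.Exit (ExitCode(..))
import System.IO (hPutStr, stderr)
import System.Process (readProcessWithExitCode)

-- Capture stderr, and only relay it when yt-dlp exits nonzero.
runYtDlp :: [String] -> IO Bool
runYtDlp args = do
	(code, _out, err) <- readProcessWithExitCode
		"yt-dlp" ("--no-warnings" : args) ""
	case code of
		ExitSuccess -> return True
		ExitFailure _ -> do
			hPutStr stderr err
			return False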
2024-02-19 18:35:57 -04:00
Joey Hess
68e99513f0
added annex.commitmessage-command config
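
For example (the message-generating command here is made up), the
output of the configured command supplies the commit message used when
committing to the git-annex branch:

	git config annex.commitmessage-command 'echo "update from $(hostname)"'
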
Sponsored-by: the NIH-funded NICEMAN (ReproNim TR&D3) project
2024-02-12 14:35:22 -04:00
Joey Hess
90db97d9a2
importfeed: Added --scrape option
Which uses yt-dlp to screen scrape the equivalent of an RSS feed.

Note that youtubedlscraped is a speed optimisation. Since yt-dlp found
the urls, we know it can download them. That avoids calling
youtubeDlSupported on each url, which makes --fast a lot faster.

Almost all the same metadata fields and file formatting fields are
populated, when yt-dlp is able to get the data. Note that yt-dlp has some
additional useful metadata that could be exposed. But, much of it is
specific to particular websites, and it would be hard to document on the
git-annex importfeed man page.
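
Example usage (the channel url is illustrative):

	git-annex importfeed --scrape https://www.youtube.com/@somechannel/videos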

Sponsored-by: unqueued on Patreon
2024-01-30 15:37:29 -04:00
Joey Hess
2114253eaf
update comment
The segfault seems to be fixed with git 2.43, I'm not sure what the
affected range was.
2024-01-20 11:25:22 -04:00
Joey Hess
20567e605a
add directional stalldetection and bwlimit configs
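
For example, assuming the directional names follow the pattern of the
existing annex.stalldetection and annex.bwlimit configs (values are
illustrative):

	git config annex.bwlimit-upload 1MiB
	git config annex.stalldetection-download 10MiB/1m
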
Sponsored-by: Dartmouth College's DANDI project
2024-01-19 15:27:53 -04:00
Joey Hess
8da85fd3a3
RawFilePath conversion
Sponsored-by: Dartmouth College's DANDI project
2024-01-19 14:26:21 -04:00
Joey Hess
703a70cafa
avoid watchFileSize running backward
This is groundwork for using watchFileSize for downloads from external
special remotes.

In Annex.Content.downloadUrl, this potentially avoids jitter in the
progress meter. When downloading with conduit, the meter gets updated based
on both the size of the file, and on the data flowing through conduit.
If that has not yet been flushed to the file, it seems possible for the
meter to run backwards when the meter is updated with the file size.
It's probably only a few kb of jitter, so may not be visible.
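
A minimal sketch of guarding against backwards meter updates (the
helper is hypothetical; the real logic lives in watchFileSize):

import Control.Monad (when)
import Data.IORef

-- Only propagate a new size when it exceeds the high-water mark.
updateMeter :: IORef Integer -> (Integer -> IO ()) -> Integer -> IO ()
updateMeter highwater feedback new = do
	old <- readIORef highwater
	when (new > old) $ do
		writeIORef highwater new
		feedback new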

Sponsored-by: Dartmouth College's DANDI project
2024-01-19 14:11:27 -04:00
Joey Hess
df35f70801
tweak stall detection scaling
Refactored to allow offline experimentation, and ended up changing the
allowedvariation (aka fudge factor) to 3. 10 seems too high, and 1.5 too low.

Scale earlier, so even if the first chunk takes less than the configured
time period, allowance is made that later chunks might transfer slower.
Decided to use the same allowedvariation to decide when to start
scaling.

Smoothed the scaling out.

Some examples:

ghci> upscale (BwRate 10 (Duration 60)) 25
BwRate 13 (Duration {durationSeconds = 75})
-- A small scaling upwards after 1/3rd the time. Not noticeable.
ghci> upscale (BwRate 10 (Duration 60)) 60
BwRate 30 (Duration {durationSeconds = 180})
-- At the configured time, 3x scaling.
ghci> upscale (BwRate 10 (Duration 60)) 120
BwRate 60 (Duration {durationSeconds = 360})
-- A typical upscaling, here a 1 minute duration became 6 minutes
-- due to the first chunk taking 2 minutes to transfer.
ghci> upscale (BwRate 10 (Duration 60)) 600
BwRate 300 (Duration {durationSeconds = 1800})
-- Here the first chunk took 10 minutes to transfer, so it will
-- take 30 minutes to detect a stall.
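
A sketch that reproduces the examples above (types simplified; the
real definitions live in git-annex):

data Duration = Duration { durationSeconds :: Integer } deriving Show
data BwRate = BwRate Integer Duration deriving Show

allowedvariation :: Double
allowedvariation = 3

upscale :: BwRate -> Integer -> BwRate
upscale br@(BwRate b (Duration d)) timetaken
	| scale > 1 = BwRate
		(ceiling (fromIntegral b * scale))
		(Duration (ceiling (fromIntegral d * scale)))
	| otherwise = br
  where
	-- Scaling kicks in once the transfer has taken longer than
	-- 1/allowedvariation of the configured duration.
	scale = fromIntegral timetaken * allowedvariation / fromIntegral d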

Sponsored-by: Dartmouth College's DANDI project
2024-01-19 12:58:41 -04:00
Joey Hess
c2634e7df2
automatically adjust stall detection period
Improve annex.stalldetection to handle remotes that update progress less
frequently than the configured time period.

In particular, this makes remotes that don't report progress but are
chunked work when transferring a single chunk takes longer than the
specified time period.

Any remotes that just have very low update granularity would also be
handled by this.

The change to Remote.Helper.Chunked avoids an extra progress update when
resuming an interrupted upload. In that case, the code saw first Nothing
and then Just the already transferred number of bytes, which defeated this
new heuristic. This change will mean that, when resuming an interrupted
upload to a chunked remote that does not do its own progress reporting, the
progress display does not start out displaying the amount sent so far,
until after the first chunk is sent. This behavior change does not seem
like a major problem.

About the scalefudgefactor, it seems reasonable to expect subsequent chunks
to take no more than 1.5 times as long as the first chunk to transfer.
Could set it to 1, but then any chunk taking a little longer would be
treated as a stall. 2 also seems a likely value. Even 10 might be fine?

Sponsored-by: Dartmouth College's DANDI project
2024-01-18 17:12:10 -04:00
Joey Hess
f6cf2dec4c
disk free checking for unsized keys
Improve disk free space checking when transferring unsized keys to
local git remotes. Since the size of the object file is known, can
check that instead.

Getting unsized keys from local git remotes does not check the actual
object size. It would be harder to handle that direction because the size
check is run locally, before anything involving the remote is done. So it
doesn't know the size of the file on the remote.

Also, transfers of unsized keys to other remotes, including ssh remotes and
p2p remotes, do not do disk free checking for unsized keys. That would need
a change in protocol.

(It does seem like it would be possible to implement the same thing for
directory special remotes though.)

In some sense, it might be better to not ever do disk free checking for
unsized keys, than to do it only sometimes. A user might notice this
direction working and consider it a bug that the other direction does not.
On the other hand, disk reserve checking is not implemented for most
special remotes at all, and yet it is implemented for a few, which is also
inconsistent, but best effort. And so doing this best effort seems to make
some sense. Fundamentally, if the user wants the size to always be checked,
they should not use unsized keys.

Sponsored-by: Brock Spratlen on Patreon
2024-01-16 14:29:10 -04:00
Joey Hess
11b9069dc2
bump copyright year
after my first commit of 2024
2024-01-02 14:10:52 -04:00
Joey Hess
a5b9c2ca69
import: Sped up import from special remote when the imported tree is unchanged
I saw a nearly 2 minute speed up from this, in a repo with 56000 files,
some of which are preferred content of the special remote and others not.
In such a case, addBackExportExcluded has to do a lot of work, which is
unnecessary when the tree is unchanged.

When using sync --content, preferred content checking of that many files
takes about 1 minute. So this speeds up sync --content by 3x.
When using git-annex import, the speed up is much larger.

Sponsored-by: Nicholas Golder-Manning on Patreon
2024-01-02 13:57:31 -04:00
Joey Hess
9a67ed0f10
importtree: support preferred content expressions needing keys
When importing from a special remote, support preferred content expressions
that use terms that match on keys (eg "present", "copies=1"). Such terms
are ignored when importing, since the key is not known yet.

When "standard" or "groupwanted" is used, the terms in those
expressions also get pruned accordingly.

This does allow setting preferred content to "not (copies=1)" to make a
special remote into a "source" type of repository. Importing from it will
import all files. Then exporting to it will drop all files from it.

In the case of setting preferred content to "present", it's pruned on
import, so everything gets imported from it. Then on export, it's applied,
and everything in it is left on it, and no new content is exported to it.

Since the old behavior on these preferred content expressions was for
importtree to error out, there's no backwards compatibility to worry about.
Except that sync/pull/etc will now import where before it errored out.
2023-12-18 16:27:59 -04:00
Joey Hess
eb59da9dd2
Lower precision of timestamps in git-annex branch
This can reduce the size of the branch by up to 8%. My test was
running git-annex add 1000 times, adding one file each time.
Lots of different high-resolution timestamps were recorded before;
after eliminating those and packing, the git repo was 8% smaller.

Due to the use of vector clocks, high resolution timestamps are
not necessary to make clear which information is most recent when
eg, a value is changed repeatedly in the same second. In such a
case, the vector clock will be advanced to the next second after
the last modification. For example, running

	git-annex numcopies 1; git-annex numcopies 2

the first will record the current second, while the next records
the second after that, even if it runs in the same second.

As for conflicting information written to two different clones of the
repository, this will make git-annex sometimes pick information that
was written earlier in a second over information written later in the
same second. Usually git-annex does not write conflicting information,
but there are some cases where it could. Eg, storing an object on a remote
can update the remote state log with some state. If two repos both store the
same object, and end up storing different remote state for some reason,
this can result in one that ran a tiny bit later winning. Such a situation
seems unlikely to be user visible. And a small amount of clock skew could
already result in such things.

The only case I can think of where this might be a user visible change
is if a configuration command like git-annex numcopies is being run
in 2 clones of a repository on the same machine at very
close to the same time. Then the user will know which they ran last,
and git-annex won't.

If that did become a problem, this could be dialed back to eg log
milliseconds with still some space saving.
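
A sketch of the vector clock advance described above (the type is
simplified to whole seconds):

import Data.Time.Clock.POSIX (POSIXTime)

newtype VectorClock = VectorClock Integer

-- Stamp a change with the current second, unless the value being
-- replaced already has that second or later; then advance past it.
advance :: POSIXTime -> VectorClock -> VectorClock
advance now (VectorClock prev) =
	VectorClock (max (truncate now) (prev + 1))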
2023-12-11 15:04:06 -04:00
Joey Hess
86dbe9a825
migrate: support adding size back to URL keys
migrate: Support adding size to URL keys that were added with --relaxed, by
running eg: git-annex migrate --backend=URL foo

Since url keys cannot be generated, that used to fail. Make it notice that
the backend is not changed, and just get the size of the content.

Sponsored-by: Brock Spratlen on Patreon
2023-12-08 16:22:14 -04:00
Joey Hess
b65379a107
fix missing space in warning message 2023-12-08 12:36:33 -04:00
Joey Hess
f1ce15036f
started migrate --update
This is most of the way there, but not quite working.

The layout of migrate.tree/ needs to be changed to follow this approach.
git log will list all the files in tree order, so the new layout needs
to alternate old and new keys. Can that be done? git may not document
tree order, or may not preserve it here.

Alternatively, change to using git log --format=raw and extract
the tree header from that, then use
git diff --raw $tree:migrate.tree/old $tree:migrate.tree/new
That will be a little more expensive, but only when there are lots of
migrations.

Sponsored-by: Joshua Antonishen on Patreon
2023-12-07 15:50:52 -04:00
Joey Hess
0bd8b17b59
log migration trees to git-annex branch
This will allow distributed migration: Start a migration in one clone of
a repo, and then update other clones.

commitMigration is a bit of a bear.. There is some inversion of control
that needs some TMVars. Also streamLogFile's finalizer does not handle
recording the trees, so an interrupt at just the wrong time can cause
migration.log to be emptied but the git-annex branch not updated.

Sponsored-by: Graham Spencer on Patreon
2023-12-06 15:40:03 -04:00
Joey Hess
fd0b510573
improve message about 1 copy
"Could only verify the existence of 0 out of 1 necessary copy"
does not sound right, but neither does it with "copies".

Kept the "1" rather than "only" or such since numcopies is mentioned.

Sponsored-by: Brock Spratlen on Patreon
2023-12-04 11:12:54 -04:00
Joey Hess
1654572bc1
fix --from overriding annex-ignore
Make git-annex get/copy/move --from foo override configuration of
remote.foo.annex-ignore, as documented.

This already worked for remotes supporting hasKeyCheap. For others though,
git-annex copy --from foo would silently not do anything, while
git-annex copy --to foo would use the annex-ignored remote.

Also improved the annex-ignore docs, to reflect that `git-annex get`
without --from will skip using annex-ignored remotes, for example.

Sponsored-by: Dartmouth College's DANDI project
2023-11-30 15:12:07 -04:00
Joey Hess
38b9ebc5fd
newtype MapLog
Noticed that the Semigroup instance of Map is not suitable to use
for MapLog. For example, it behaved like this:

ghci>  parseTrustLog "foo 1 timestamp=10\nfoo 2 timestamp=11" <> parseTrustLog "foo X timestamp=12"
fromList [(UUID "foo",LogEntry {changed = VectorClock 11s, value = SemiTrusted})]

That was wrong: it lost the newer DeadTrusted value.

Luckily, nothing used that Semigroup when operating on a MapLog. And this
provides a safe instance.
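
A sketch of a safe instance (types simplified): on a key collision,
keep whichever LogEntry has the newer vector clock.

import qualified Data.Map.Strict as M

data LogEntry v = LogEntry { changed :: Integer, value :: v }

newtype MapLog f v = MapLog (M.Map f (LogEntry v))

instance Ord f => Semigroup (MapLog f v) where
	MapLog a <> MapLog b = MapLog (M.unionWith newer a b)
	  where
		newer x y
			| changed x >= changed y = x
			| otherwise = y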

Sponsored-by: Graham Spencer on Patreon
2023-11-13 14:37:22 -04:00
Joey Hess
be6b56df4c
remove unused import 2023-11-01 13:14:39 -04:00
Joey Hess
eb42935e58
Windows: Fix CRLF handling in some log files
In particular, the mergedrefs file was written with CR added to each line,
but read without CRLF handling. This resulted in each update of the file
adding CR to each line in it, growing the number of lines, while also
preventing the optimisation from working, so it remerged unnecessarily.

writeFile and readFile do NewlineMode translation on Windows. But the
ByteString conversion prevented that from happening any longer.

I've audited for other cases of this, and found three more
(.git/annex/index.lck, .git/annex/ignoredrefs, and .git/annex/import/). All
of those also only prevent optimisations from working. Some other files are
currently both read and written with ByteString, but old git-annex may have
written them with NewlineMode translation. Other files are at risk for
breakage later if the reader gets converted to ByteString.

This is a minimal fix, but should be enough, as long as I remember to use
fileLines when splitting a ByteString into lines. This leaves files written
using ByteString without CR added, but that's ok because old git-annex has
no difficulty reading such files.
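
The idea behind fileLines, sketched (the real function is in
git-annex; this just shows the trailing-CR handling):

import qualified Data.ByteString as S
import qualified Data.ByteString.Char8 as S8

fileLines' :: S.ByteString -> [S.ByteString]
fileLines' = map stripCR . S8.lines
  where
	-- Drop one trailing CR, so CRLF files read the same as LF files.
	stripCR b = case S8.unsnoc b of
		Just (b', '\r') -> b'
		_ -> b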

When the mergedrefs file has gotten lines that end with "\r\r\r\n", this
will eventually clean it up. Each update will remove a single trailing CR.

Note that S8.lines is still used in eg Command.Unused, where it is parsing
git show-ref, and similar in Git/*. git commands don't include CR in their
output so that's ok.

Sponsored-by: Joshua Antonishen on Patreon
2023-10-30 14:23:23 -04:00
Joey Hess
d9fd205cbb
push RawFilePath down into Annex.ReplaceFile
Minor optimisation, but a win in every case, except for a couple where
it's a wash.

Note that replaceFile still takes a FilePath, because it needs to
operate on Chars to truncate unicode filenames properly.
2023-10-26 13:36:49 -04:00
Joey Hess
c873586e14
eliminate s2w8 and w82s
Note that the use of s2w8 in genUUIDInNameSpace made it truncate unicode
characters. Luckily, genUUIDInNameSpace is only ever used on ASCII
strings as far as I can determine. In particular, git-remote-gcrypt's
gcrypt-id is an ASCII string.
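
For reference, the truncation came from the conversion to Word8;
roughly, the old definition was:

import Data.Char (ord)
import Data.Word (Word8)

-- fromIntegral wraps codepoints above 255, so a character like '…'
-- (codepoint 0x2026) was silently mangled.
s2w8 :: String -> [Word8]
s2w8 = map (fromIntegral . ord)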
2023-10-26 13:12:57 -04:00
Joey Hess
3742263c99
simplify base64 to only use ByteString
Note the use of fromString and toString from Data.ByteString.UTF8 dated
back to commit 9b93278e8a. Back then it
was using the dataenc package for base64, which operated on Word8 and
String. But with the switch to sandi, it uses ByteString, and indeed
fromB64' and toB64' were already using ByteString without that
complication. So I think there is no risk of such an encoding related
breakage.

I also tested the case that 9b93278e8a
fixed:

	git-annex metadata -s foo='a …' x
	git-annex metadata x
	metadata x
	  foo=a …

In Remote.Helper.Encryptable, it was avoiding using Utility.Base64
because of that UTF8 conversion. Since that's no longer done, it can
just use it now.
2023-10-26 13:10:05 -04:00
Joey Hess
0da1d40cd4
Improve memory use of --all when using annex.private
This does not improve Annex.Branch.files at all, since it still uses ++ to
combine the lists, so forcing all but the last one.

But when there are a lot of files in the private journal, it does avoid
--all (or a bare repo) from buffering the filenames in memory.

See commit 653b719472 for prior discussion of
this buffering.

Sponsored-by: Graham Spencer on Patreon
2023-10-24 13:20:55 -04:00
Joey Hess
8bde6101e3
sqlite database for importfeed
importfeed: Use caching database to avoid needing to list urls on every
run, and avoid using too much memory.

Benchmarking in my podcasts repo, importfeed got 1.42 seconds faster,
and memory use dropped from 203000k to 59408k.

Database.ImportFeed is Database.ContentIdentifier with the serial number
filed off. There is a bit of code duplication I would like to avoid,
particularly recordAnnexBranchTree, and getAnnexBranchTree. But these use
the persistent sqlite tables, so despite the code being the same, they
cannot be factored out.

Since this database includes the contentidentifier metadata, it will be
slightly redundant if a sqlite database is ever added for metadata. I
did consider making such a generic database and using it for this. But,
that would then need importfeed to update both the url database and the
metadata database, which is twice as much work diffing the git-annex
branch trees. Or it would entangle updating two databases in a complex way.
So instead it seems better to optimise the database that
importfeed needs, and if the metadata database is used by another command,
use a little more disk space and do a little bit of redundant work to
update it.

Sponsored-by: unqueued on Patreon
2023-10-23 16:46:22 -04:00
Joey Hess
c268dc5878
only stage regular files from the journal
git-annex only writes regular files there, but other things may drop junk
like empty .DAV directories around the tree. And trying to hash such things
can have weird and hard to understand effects. So it seems best to do a
small amount of work in statting the journal file to make sure it's a
regular file.
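
A sketch of the stat check (using the unix-compat API; the real code
differs):

import System.PosixCompat.Files (getSymbolicLinkStatus, isRegularFile)

-- Stat without following symlinks, and only stage regular files.
isStageableJournalFile :: FilePath -> IO Bool
isStageableJournalFile f = isRegularFile <$> getSymbolicLinkStatus f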

Sponsored-by: Jack Hill on Patreon
2023-10-10 13:22:02 -04:00
Joey Hess
724ceeb1a9
avoid unnecessary use of curl when conduit will do
Avoid using curl when annex.security.allowed-ip-addresses is set but
neither annex.web-options nor annex.security.allowed-url-schemes is set to
a value that needs curl.

Bug introduced in 840bd50390

Sponsored-By: Brock Spratlen on Patreon
2023-08-22 10:25:53 -04:00
Joey Hess
10b5f79e2d
fix empty tree import when directory does not exist
Fix behavior when importing a tree from a directory remote when the
directory does not exist. An empty tree was imported, rather than the
import failing. Merging that tree would delete every file in the
branch, if those files had been exported to the directory before.

The problem was that dirContentsRecursive returned [] when the directory
did not exist. Better for it to throw an exception. But in commit
74f0d67aa3 back in 2012, I made it never
throw exceptions, because exceptions thrown inside unsafeInterleaveIO become
untrappable when the list is being traversed.

So, changed it to list the contents of the directory before entering
unsafeInterleaveIO. So exceptions are thrown for the directory. But still
not if it's unable to list the contents of a subdirectory. That's less of a
problem, because the subdirectory does exist (or if not, it got removed
after being listed, and it's ok to not include it in the list). A
subdirectory that has permissions that don't allow listing it will have its
contents omitted from the list still.
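
A simplified sketch of the approach: list the top directory eagerly
(so a missing directory throws), keep only the recursion into
subdirectories lazy, and swallow errors there:

import System.Directory (listDirectory, doesDirectoryExist)
import System.FilePath ((</>))
import System.IO.Unsafe (unsafeInterleaveIO)
import Control.Exception (catch, IOException)

dirContentsRecursive :: FilePath -> IO [FilePath]
dirContentsRecursive dir = do
	-- eager: throws when dir does not exist
	entries <- map (dir </>) <$> listDirectory dir
	concat <$> mapM godeeper entries
  where
	godeeper e = do
		isdir <- doesDirectoryExist e
		if isdir
			then unsafeInterleaveIO
				(dirContentsRecursive e `catch` nolist)
			else return [e]
	nolist :: IOException -> IO [FilePath]
	nolist _ = return []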

(Might be better to have it return a type that includes indications of
errors listing contents of subdirectories?)

The rest of the changes are making callers of dirContentsRecursive
use emptyWhenDoesNotExist when they relied on the behavior of it not
throwing an exception when the directory does not exist. Note that
it's possible some callers of dirContentsRecursive that used to ignore
permissions problems listing a directory will now start throwing exceptions
on them.

The fix to the directory special remote consisted of not making its
call in listImportableContentsM use emptyWhenDoesNotExist. So it will
throw an exception as desired.

Sponsored-by: Joshua Antonishen on Patreon
2023-08-15 12:57:41 -04:00
Joey Hess
be028f10e5
split out Utility.Url.Parse
This is mostly for git-repair which can't include all of Utility.Url
without adding many dependencies that are not really necessary.
2023-08-14 12:28:10 -04:00
Joey Hess
adda6c1088
Add git-annex remote refs that are not newer to the merged refs list
Significant startup speed increase by avoiding repeatedly checking if some
remote git-annex branch refs need to be merged when they are not newer.

One way this could happen is when there are 2 remotes that are themselves
connected. The git-annex branch on the first remote gets updated. Then the
second remote pulls from the first, and merges in its git-annex branch.
Then the local repo pulls from the second remote, and merges its git-annex
branch. At this point, a pull from the first remote will get a git-annex
branch that is not newer, but is not on the merged refs list.

In my big repo, git-annex startup time dropped from 4 seconds to 0.1 seconds.
There were 5 to 10 such remote refs out of 18 remotes.

Sponsored-by: Graham Spencer on Patreon
2023-08-09 13:31:36 -04:00
Joey Hess
3a52b4c4c3
fix hang when built with unix-2.8
git-annex test hung when running git-annex add in an adjusted unlocked
branch. I couldn't seem to reproduce the hang outside the test suite.

Seems that the code added in 26a9ea12d1
was buggy, and as that commit was made without testing it, building with
unix-2.8 exposed the bug.

I don't fully understand the bug, which involves fdToHandle
and then closing the fd, vs closing the handle. May somehow involve
laziness or forcing around the S.hGet? Using hClose solved it
in any case.

(Also eliminated checkcontentfollowssymlinks to fix a build warning
when it's not used.)
2023-08-01 20:22:28 -04:00
Joey Hess
eb8e30a2f1
fix build with unix-2.8.0
got the arguments the wrong way around when I wrote this

also squelch a build warning
2023-08-01 18:27:12 -04:00
Joey Hess
fa92383993
onlyingroup
* Support "onlyingroup=" in preferred content expressions.
* Support --onlyingroup= matching option.
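
Example usage (the group names are illustrative):

	git-annex find --onlyingroup=backup
	git-annex wanted . 'onlyingroup=archive'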

Sponsored-by: Jack Hill on Patreon
2023-07-31 14:43:58 -04:00
Joey Hess
473d66132d
display explanations in --debug too
When --explain is not enabled. This can be useful debugging information
as well.

Sponsored-by: Dartmouth College's DANDI project
2023-07-31 13:06:40 -04:00
Joey Hess
846384fc3a
--explain for numcopies checks
And closed the todo as completed.

Sponsored-by: Dartmouth College's DANDI project
2023-07-31 12:53:17 -04:00
Joey Hess
518a51a8a0
--explain for preferred/required content matching
And annex.largefiles and annex.addunlocked.

Also git-annex matchexpression --explain explains why its input
expression matches or fails to match.

When there is no limit, avoid explaining why the lack of limit
matches. This is also done when no preferred content expression is set,
although in a few cases it defaults to a non-empty matcher, which will
be explained.

Sponsored-by: Dartmouth College's DANDI project
2023-07-26 14:50:04 -04:00
Joey Hess
f25eeedeac
initial implementation of --explain
Currently it only displays explanations of options like --in and --copies.

In the future, it should explain preferred content expression evaluation
and other decisions.
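
For example (an illustrative invocation, using one of the matching
options mentioned above):

	git-annex get --copies=1 --explain .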

The explanations of a few things could be better. In particular,
"standard" will just appear as-is (or as "!standard" if it doesn't
match), rather than explaining why the standard preferred content expression
for the group matches or not.

Currently as implemented, it goes to stdout, and so commands like
git-annex find that have custom output will not display --explain
information. Perhaps that should change, dunno.

Sponsored-by: Dartmouth College's DANDI project
2023-07-25 16:52:57 -04:00
Joey Hess
cf40e2d4b6
Revert "use existing debug machinery for explain"
This reverts commit 409572c9e4.
2023-07-25 15:53:50 -04:00
Joey Hess
409572c9e4
use existing debug machinery for explain
explain is a kind of debug message, but not formatted in the same way.
So it makes sense to reuse the debug machinery for it, since that is
already quite optimised.

Sponsored-by: Dartmouth College's DANDI project
2023-07-25 15:47:58 -04:00
Joey Hess
e82823d448
nub list of files
yt-dlp when resumed was observed having written the same filename twice
into the file list. Perhaps once by the first download and once by the
resumed one?
2023-07-09 14:18:25 -04:00
Joey Hess
240bae38f6
sync: When in an adjusted branch, merge changes from the original branch
This causes changes to the original branch to get merged with a single
sync. Before, it took 2 syncs; the first happened to update the synced/
branch, and the second merged changes from the synced/ branch into the
adjusted branch.

Using mergeToAdjustedBranch when tomerge == origbranch is probably
overkill, but it does work fine.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-07-06 12:42:24 -04:00
Joey Hess
adb09117f1
propigateAdjustedCommits: avoid overwriting diverged original branch
Bug fix: Re-running git-annex adjust or sync when in an adjusted branch
would overwrite the original branch, losing any commits that had been made
to it since the adjusted branch was created.

When git-annex adjust is run in this situation, it will display a warning
about the diverged branches.

When git-annex sync is run in this situation, mergeToAdjustedBranch
will merge the changes from the original branch to the adjusted branch.
So it does not need to display the divergence warning.

Note that for some reason, I'm needing to run sync twice for that to
happen. The first run does not do the merge and the second does. I'm unsure
why and so am not fully done with this bug.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-07-05 17:09:49 -04:00
Joey Hess
a05bc6a314
Fix breakage when git is configured with safe.bareRepository = explicit
Running git config --list inside .git then fails, so better to only
do that when --git-dir was specified explicitly. Otherwise, when the
repository is not bare, run the command inside the working tree.

Also make init detect when the uuid it just set cannot be read and fail
with an error, in case git changes something that breaks this later.

I still don't actually understand why git-annex add/assist -J2 was
affected but -J1 was not. But I did show that it was skipping writing to
the location log, because the uuid was NoUUID.

Sponsored-by: Graham Spencer on Patreon
2023-07-05 14:43:14 -04:00
Joey Hess
928b2a4839
create journal directory in withJournalHandle
Fixes a crash by git-annex repair when .git/annex/journal/ does not exist.

Normally the journal directory is created before withJournalHandle gets
run, but git-annex repair can be run in a situation where it does not
exist.
2023-06-21 15:23:59 -04:00
Joey Hess
72715845a1
display destination file before youtube-dl download
Rather than after it, which can leave one wondering what file it's
downloading.

youtubeDl should not ever return Right Nothing in normal operation,
because it's already asked youtube-dl if it supports the url. So it
would have to succeed at that, then not download any file, but also
exit successfully, in order for the new error message to display.

Also display the name of yt-dlp when using it.
2023-06-20 14:55:25 -04:00
Joey Hess
a861d56428
httpalso: Support being used with special remotes that use chunking.
Sponsored-by: k0ld on Patreon
2023-06-20 13:35:28 -04:00
Joey Hess
a36a81dea3
Improve resuming interrupted download when using yt-dlp
Sometimes resuming an interrupted download fails to resume, and instead
downloads more files with different names. That resulted in the workdir having
multiple files at the end, which causes git-annex to give up because it
does not know what was downloaded.

To fix this, use a yt-dlp feature, which appends to a file the name of each
file after it's finished downloading it. So the presence of other cruft in
the workdir will not confuse git-annex.
2023-06-19 14:39:08 -04:00
Joey Hess
64738ea157
config: Added the --show-origin and --for-file options
* config: Added the --show-origin and --for-file options.
* config: Support annex.numcopies and annex.mincopies.

There is a little bit of redundancy here with other code elsewhere that
combines the various configs and selects which to use. But really only
for the special case of annex.numcopies, which is a git config that does
not override the annex branch setting and for annex.mincopies, which does
not have a git config but does have gitattributes settings as well as the
annex branch setting.

That seems small enough, and unlikely enough to grow into a mess that it was
worth supporting annex.numcopies and annex.mincopies in git-annex config
--show-origin. Because these settings are a prime thing that someone might
get confused about and want to know where they were configured.
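
For example (the file name is illustrative):

	git-annex config --show-origin annex.numcopies
	git-annex config --show-origin --for-file=foo annex.largefiles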

And, it followed that git-annex config might as well support those two
for --set and --get as well. While this is redundant with the specialized
commands, it's only a little code and it makes it more consistent.

Note that --set does not have as nice output as numcopies/mincopies
commands in some special cases like setting to 0 or a negative number.
It does avoid setting to a bad value thanks to the smart
constructors (eg configuredNumCopies).

As for other git-annex branch configurations that are not set by git-annex
config, things like trust and wanted that are specific to a repository
don't map to a git config name, so don't really fit into git-annex config.
And they are only configured in the git-annex branch with no local override
(at least so far), so --show-origin would not be useful for them.

Sponsored-by: Dartmouth College's DANDI project
2023-06-12 16:24:31 -04:00
Joey Hess
ae98fb1b31
move unspecifiedAttr check to checkAttr
It just so happens that everywhere that checks attrs other than
annex.largefiles parses the value further, and failed to parse
unspecifiedAttr in a way that behaved the same as if nothing was set. So
this is not a bug fix or behavior change. What it does is prevent
future uses of checkAttr from needing to remember to handle checking for
this edge case.

Sponsored-by: Dartmouth College's DANDI project
2023-06-12 14:37:42 -04:00
Joey Hess
532b227086
update exportdb tree in getImportableContents
This avoids bottlenecking on git check-ignore in a particular situation.
Also, there may have been a correctness issue with the exportdb not having
been updated.
When the exportdb is already up-to-date, this is not expensive. And the
exportdb is updated elsewhere, so usually it is up-to-date.

Sponsored-by: Joshua Antonishen on Patreon
2023-06-08 18:36:24 -04:00
Joey Hess
6821ba8dab
sync: use log to track adjusted branch needs updating
Speeds up sync in an adjusted branch by avoiding re-adjusting the branch
unnecessarily, particularly when it is adjusted with --hide-missing or
--unlock-present.

When there are a lot of files, that was the majority of the time of a
--no-content sync.

Uses a log file, which is updated when content presence changes. This
adds a little bit of overhead to every file get/drop when on such an
adjusted branch. The overhead is minimal for get of any size of file,
but might be noticeable for drop in some cases. It seems like a reasonable
trade-off. It would be possible to update the log file only at the end, but
then it would not happen if the command is interrupted.

When not in an adjusted branch, there should be no additional overhead.
(getCurrentBranch is an MVar read, and it avoids the MVar read of
getGitConfig.)

Note that this does not deal with situations such as:
git checkout master, git-annex get, git checkout adjusted branch,
git-annex sync. The sync won't know that the adjusted branch needs to be
updated. Dealing with that would add overhead to operation in non-adjusted
branches, which I don't like. Also, there are other situations like having
two adjusted branches that both need to be updated like this, and switching
between them and sync not updating.

This does mean a behavior change to sync, since it did previously deal
with those situations. But, the documentation did not say that it did.
The man pages only talk about sync updating the adjusted branch after
it transfers content.

I did consider making sync keep track of content it transferred (and
dropped) and only update the adjusted branch then, not to catch up to other
changes made previously. That would perform better. But it seemed rather
hard to implement, and also it would have problems with races with a
concurrent get/drop, which this implementation avoids.

And it seemed pretty likely someone had gotten used to get/drop followed by
sync updating the branch. It seems much less likely someone is switching
branches, doing get/drop, and then switching back and expecting sync to update
the branch.

Re-running git-annex adjust still does a full re-adjusting of the branch,
for anyone who needs that.

Sponsored-by: Leon Schuermann on Patreon
2023-06-08 14:35:41 -04:00
Joey Hess
637f19bebb
fix adjusted branch update breakage
Introduced recently in commit 64fc34b3da.

adjustBranch changes the sha that is recorded for the current branch
(eg the adjusted branch). So, have to get the original sha before
calling it.

Sponsored-by: Jack Hill on Patreon
2023-06-08 13:33:58 -04:00
Joey Hess
64fc34b3da
narrow window where HEAD is detached
Updating an adjusted branch can take a while when there are a lot of
files. HEAD was detached at the start, so if eg git-annex sync was
interrupted at the wrong point, there was a possibly wide window where
it would leave the repo with HEAD detached.

There's still a window, just much narrower. I don't know if it's
possible to close the window entirely. While git can clearly update
the currently checked out branch in eg git merge, it doesn't seem to
provide another way to do it.

Sponsored-by: Graham Spencer on Patreon
2023-06-07 11:10:54 -04:00
Joey Hess
fe1b2dfb4b
speed up very first tree import by 25%
Reading from the cidsdb is responsible for about 25% of the runtime of
an import. Since the cidmap is used to store the same information in
ram, the cidsdb is not written to during an import any longer. And so,
if it started off empty (and updateFromLog wasn't needed), those reads
can just be skipped.

This is kind of a cheesy optimisation, since after any import from any
special remote, the database will no longer be empty, so it's a single
use optimisation. But it's probably not uncommon to start by importing a
lot of files, and it can save a lot of time then.

Sponsored-by: Brock Spratlen on Patreon
2023-06-02 13:30:30 -04:00
Joey Hess
40017089f2
use importChanges optimisation
Large speed up to importing trees from special remotes that contain a lot
of files, by only processing changed files.

Benchmarks:

Importing from a special remote that has 10000 files, that have all been
imported before, and 1 new file sped up from 26.06 to 2.59 seconds.

An import with no change and 10000 unchanged files sped up from 24.3 to
1.99 seconds.

Going up to 20000 files, an import with no changes sped up from
125.95 to 3.84 seconds.

Sponsored-by: k0ld on Patreon
2023-06-01 13:47:00 -04:00
Joey Hess
c6acf574c7
implement importChanges optimisaton (not used yet)
For simplicity, I've not tried to make it handle History yet, so when
there is a history, a full import will still be done. Probably the right
way to handle history is to first diff from the current tree to the last
imported tree. Then, diff from the current tree to each of the
historical trees, and recurse through the history diffing from child tree
to parent tree.

I don't think that will need a record of the previously imported
historical trees, and so Logs.Import doesn't store them. Although I did
leave room for future expansion in that log just in case.

Next step will be to change importTree to importChanges and modify
recordImportTree et al to handle it, by using adjustTree.

Sponsored-by: Brett Eisenberg on Patreon
2023-05-31 16:01:34 -04:00
Joey Hess
7298123520
build git trees using ContentIdentifier to speed up import
This gets the trees built, but it does not use them. Next step will be
to remember the tree for next time an import is done, and diff between
old and new trees to find the files that have changed.

Added --missing to the mktree parameters. That only disables a check, so
it's ok to do everywhere mktree is used. It probably also speeds up
mktree to disable the check.

Note that git fsck does not complain about the resulting tree objects
that point to shas that are not in the repository. Even with --strict.

A quick benchmark, importing 10000 files: building the trees slowed it down
from 2:04.06 to 2:04.28. So it will more than pay for itself once the trees
are used to speed up later imports.

Sponsored-by: Luke Shumaker on Patreon
2023-05-31 12:46:54 -04:00
Joey Hess
f6aa097a39
avoid import writing to cidsdb initially
Speed up importing trees from special remotes somewhat by avoiding
redundant writes to sqlite database.

Before, import would write to both the git-annex branch and also to the
sqlite database. But then the next time it was run, needsUpdateFromLog
would see the branch had changed, so run updateFromLog, which would make
the same writes to the sqlite database a second time.

Now import writes only to the git-annex branch. The next time it's run,
needsUpdateFromLog sees that the branch has changed and so calls
updateFromLog, which updates the sqlite database.

Why defer the write to the sqlite database like this? It seems that it
could write to the database as it goes, and at the end call
recordAnnexBranchTree to indicate that the information in the git-annex
branch has all been written to the cidsdb. That would avoid the second
import doing extra work.

But, there could be other processes running at the same time, and one of
them may update the git-annex branch, eg merging a remote git-annex branch
into it. Any cids logs on that merged git-annex branch would not be
reflected in the cidsdb yet. If the import then called
recordAnnexBranchTree, the cidsdb would never get updated with that merged
information.

I don't think there's a good way to prevent, or to detect that situation.
So, it can't call recordAnnexBranchTree at the end. So it might as well
wait until the next run and do updateFromLog then. It could instead do
updateFromLog at the end, but it's going to check needsUpdateFromLog
at the beginning anyway.

Note that the database writes were queued, so there is already a cidmap
that is used to remember changes that the current process has made.
So, omitting database writes can't change the behavior of the current
process.

Also note that thirdpartypopulatedimport uses recordcidkeyindb, which
reflects what it already did. That code path does not use the cidmap,
but does not need to query it either. It might be possible to make that
code path also only update the git-annex branch and not the db, but I
haven't checked.

Sponsored-by: Noam Kremen on Patreon
2023-05-30 17:05:28 -04:00
Joey Hess
f2db6da938
default to yt-dlp and fix progress parsing bugs
I noticed git-annex was using a lot of CPU when downloading from youtube,
and was not displaying progress. Turns out that yt-dlp (and I think also
youtube-dl) sometimes only knows an estimated size, not the actual size,
and displays the progress output slightly differently for that. That broke
the parser. And, the parser was feeding chunks that failed to parse back
as a remainder, which caused it to try to re-parse the entire output each
time, so it got slower and slower.

Using --progress-template like this should avoid parsing problems as well
as future proof against output changes. But it will work with only yt-dlp.

So, this seemed like the right time to deprecate youtube-dl, and default
to yt-dlp when available.

git-annex will still use youtube-dl if that's all that's available.
However, since the progress parser for youtube-dl was buggy, and I don't
want to maintain two different progress parsers (especially since
youtube-dl is no longer in debian unstable having been replaced by
yt-dlp), made git-annex no longer try to parse youtube-dl's progress.

Also, updated docs for yt-dlp being default. It did not seem worth
renaming annex.youtube-dl-options and annex.youtube-dl-command.

Note that yt-dlp does not seem to document the fields available in the
progress template. I found them by reading the source and looking at
the templates it uses internally. Also note that the use of "i" (rather
than "s") in progressTemplate makes it display floats rounded to integers;
particularly the estimated total size can be a float. That also does not
seem to be documented but I assume is a python thing?
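
For illustration, a template along these lines (the progress field
names are yt-dlp's; the exact template git-annex uses may differ):

	yt-dlp --progress-template '%(progress.downloaded_bytes)i %(progress.total_bytes_estimate)i' URL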

Sponsored-by: Joshua Antonishen on Patreon
2023-05-27 13:04:53 -04:00
Joey Hess
aff37fc208
avoid annexFileMode special case
This makes annexFileMode be just an application of setAnnexPerm',
which avoids having 2 functions that do different versions of the same
thing.

Fixes some buggy behavior for some combinations of core.sharedRepository
and umask.

Sponsored-by: Jack Hill on Patreon
2023-04-27 15:58:37 -04:00
Joey Hess
67f8268b3f
Support core.sharedRepository=0xxx at long last
Sponsored-by: Brett Eisenberg on Patreon
2023-04-26 17:03:29 -04:00
Joey Hess
0aa98aa09b
fix perms for core.sharedRepository
These two missed setting it.

It rarely matters that the journal gets the right perm. But, when using
annex.alwayscommit=false, someone else may come along later and want
to append to the journal file.

It probably never matters what the sentinal perms are, but for
completeness..

Sponsored-by: Luke Shumaker on Patreon
2023-04-26 16:29:11 -04:00
Joey Hess
7af75a59be
Warn about unsupported core.sharedRepository=0xxx when set
This spams the user with a lot of messages, but it seems like busywork to
avoid that and only warn once, since this warning will go away when it gets
implemented.

Also fix parsing of the octal value.

Sponsored-by: Kevin Mueller on Patreon
2023-04-26 13:25:29 -04:00
Joey Hess
9155ed1072
configremote
New command, currently limited to changing autoenable= setting of a special remote.

It will probably never be used for more than that given the limitations on
it.

Sponsored-by: Brock Spratlen on Patreon
2023-04-18 15:30:49 -04:00
Joey Hess
fe5e586b72
rename Git.Filename to Git.Quote 2023-04-12 17:22:03 -04:00
Joey Hess
a576fc3b12
fix mojibake reversion in display of utf8
When displaying a ByteString like "💕", safeOutput operates on
individual bytes like "\240\159\146\149" and isControl '\146' = True,
so it got truncated to just "\240".

So, only treat the low control characters, and DEL, as control
characters.
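
The classification change, sketched at the byte level:

import Data.Word (Word8)

-- Treat only the C0 controls and DEL as control characters; bytes
-- 0x80 and up (including those of multi-byte utf8 sequences) pass
-- through untouched.
isControlByte :: Word8 -> Bool
isControlByte w = w < 0x20 || w == 0x7f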

Also split Utility.Terminal out of Utility.SafeOutput. The former needs
win32, but Utility.SafeOutput is used by Utility.Exception, which is
used by Setup.

Sponsored-by: Nicholas Golder-Manning on Patreon
2023-04-12 13:53:30 -04:00
Joey Hess
c50aa21d5f
init: Avoid autoenabling special remotes that have control characters in their names
I'm on the fence about this. Notice that pulling from a git remote can
pull branches that have escape sequences in their names. Git will
display those as-is. Arguably git should try harder to avoid that.

But, names of remotes are usually up to the local user, and autoenable
changes that, and so it makes sense that git chooses to display control
characters in names of remotes, and so autoenable needs to guard against
it.

Sponsored-by: Graham Spencer on Patreon
2023-04-12 12:37:12 -04:00
Joey Hess
de68e3dd4f
allow tab in controlCharacterInFilePath
Seems unlikely to have a tab in a path, but it's not a control character
that needs to be prevented either.

Left \n \r \v and \a as other non-threatening control characters
that are still obnoxious to have in a filepath because of how it causes
issues with display and/or with shell scripting.
2023-04-12 12:31:16 -04:00
Joey Hess
8b6c7bdbcc
filter out control characters in all other Messages
This does, as a side effect, make long notes in json output not
be indented. The indentation is only needed to offset them
underneath the display of the file they apply to, so that's ok.

Sponsored-by: Brock Spratlen on Patreon
2023-04-11 12:58:01 -04:00
Joey Hess
3290a09a70
filter out control characters in warning messages
Converted warning and similar to use StringContainingQuotedPath. Most
warnings are static strings, some do refer to filepaths that need to be
quoted, and others don't need quoting.

Note that, since quote filters out control characters of even
UnquotedString, this makes all warnings safe, even when an attacker
sneaks in a control character in some other way.

When json is being output, no quoting is done, since json gets its own
quoting.

This does, as a side effect, make warning messages in json output not
be indented. The indentation is only needed to offset warning messages
underneath the display of the file they apply to, so that's ok.

Sponsored-by: Brett Eisenberg on Patreon
2023-04-10 15:55:44 -04:00
Joey Hess
cd544e548b
filter out control characters in error messages
giveup changed to filter out control characters. (It is too low level to
make it use StringContainingQuotedPath.)

error still does not, but it should only be used for internal errors,
where the message is not attacker-controlled.

Changed a lot of existing error to giveup when it is not strictly an
internal error.

Of course, other exceptions can still be thrown, either by code in
git-annex, or a library, that include some attacker-controlled value.
This does not guard against those.

Sponsored-by: Noam Kremen on Patreon
2023-04-10 13:50:51 -04:00
Joey Hess
063c00e4f7
git style filename quoting for giveup
When the filenames are part of the git repository or other files that
might have attacker-controlled names, quote them in error messages.

This is fairly complete, although I didn't do the one in
Utility.DirWatcher.INotify.hs because that doesn't have access to
Git.Filename or Annex.

But it's also quite possible I missed some. And also while scanning for
these, I found giveup used with other things that could be attacker
controlled to contain control characters (eg Keys). So, I'm thinking
it would also be good for giveup to just filter out control characters.
This commit is then not the only line of defence, but just good
formatting when git-annex displays a filename in an error message.

Sponsored-by: Kevin Mueller on Patreon
2023-04-10 12:56:45 -04:00
Joey Hess
da83652c76
addurl --preserve-filename: reject control characters
As well as escape sequences, control characters seem unlikely to be desired when
doing addurl, and likely to trip someone up. So disallow them as well.

I did consider going the other way and allowing filenames with control characters
and escape sequences, since git-annex is in the process of escaping display
of all filenames. Might still be a better idea?

Also display the illegal filename git quoted when it rejects it.

Sponsored-by: Nicholas Golder-Manning on Patreon
2023-04-10 12:18:25 -04:00
Joey Hess
2ba1559a8e
git style quoting for ActionItemOther
Added StringContainingQuotedPath, which is used for ActionItemOther.

In the process, checked every ActionItemOther for those containing
filenames, and made them use quoting.

Sponsored-by: Graham Spencer on Patreon
2023-04-08 16:30:01 -04:00
Joey Hess
ac0345aa42
improve comments 2023-04-04 15:23:39 -04:00
Joey Hess
e3f5bd4ca6
Revert "override rather than setting user.name and user.email"
This reverts commit 66eb63dd82.

git-annex init is the only thing that uses ensureCommit. So overriding
there will make later commits to the git-annex branch or by git-annex sync
fail.

It's ugly that git-annex init sets user.name and user.email, but it only
does it on systems that are badly configured.
2023-04-04 15:15:02 -04:00
Joey Hess
e91bf784cd
Support user.useConfigOnly git config
When it's set and git cannot determine user.name or user.email, this will
result in git-annex init failing when committing to create the git-annex
branch. Other git-annex commands that commit can also fail.

Sponsored-by: Jack Hill on Patreon
2023-04-04 15:12:52 -04:00
Joey Hess
66eb63dd82
override rather than setting user.name and user.email
Avoid setting user.name and user.email in the git config when git is unable
to detect them.

git-annex has good reason to want to ensure git commit succeeds when eg
committing to the git-annex branch. But it's not playing nice to set these
values where other commands can see them.

Sponsored-by: Brett Eisenberg on Patreon
2023-04-04 14:56:44 -04:00
Joey Hess
3eb51ee929
readFileStrict to avoid laziness bug
Fix laziness bug introduced in last release that breaks use of
--unlock-present and --hide-missing adjusted branches.

Since there is a writeFile of the same file immediately after readFile, it
may still have the file open for read (or may have happened to read it
already and closed it).

I was not able to reproduce the problem in brief testing, but this seems
obvious.

Sponsored-by: Luke Shumaker on Patreon
2023-04-04 14:25:01 -04:00
Joey Hess
22091d4765
fix comment 2023-03-28 13:40:17 -04:00
Joey Hess
a5709dcc22
Copy with a reflink when exporting a tree to a directory special remote
Remote.Directory makes a temp file, then calls this, and since the temp
file exists, it prevented probing if CoW works.

Note that deleting the empty file does mean there's a small window for a
race. If another process is also exporting to the remote, that could let it
make the same temp file. However, the temp filename actually has the
process's pid in it, which avoids that being a problem.

This may have been a reversion caused by commits around
63d508e885, but I haven't gone back and
tested to be sure. The directory special remote had supposedly supported
CoW for this going back to about half a year before that.

Sponsored-by: Graham Spencer on Patreon
2023-03-28 13:09:14 -04:00
Joey Hess
24ae4b291c
addurl, importfeed: Fix failure when annex.securehashesonly is set
The temporary URL key used for the download, before the real key is
generated, was blocked by annex.securehashesonly.

Fixed by passing the Backend that will be used for the final key into
runTransfer. When a Backend is provided, have preCheckSecureHashes
check that, rather than the key being transferred.

Sponsored-by: unqueued on Patreon
2023-03-27 15:10:46 -04:00
Joey Hess
cb6cb61ca1
avoid build warning on windows 2023-03-27 12:20:35 -04:00
Joey Hess
291ad8f6b2
avoid build warning on windows 2023-03-27 12:19:26 -04:00
Joey Hess
2b5fa091e2
annex.maxextensionlength for view
view: Support annex.maxextensionlength when generating filenames for the
view branch.

Note that refining an existing view will reuse the extension length that was
configured when initially constructing the view. This is necessarily the case
because it reuses the filenames.

Also view files used to have all extensions at the end, no matter how
many there were. Since annex.maxextensionlength's documentation includes
that it's limited to 2 extensions, I made it consistent with that.

Sponsored-by: k0ld on Patreon
2023-03-24 14:01:38 -04:00
Joey Hess
038a2600f4
Avoid leaving repo with a detached head when there is a failure checking out an updated adjusted branch
I don't know of scenarios where that can happen (besides the bug
fixed by the parent commit), but there probably are some.

Sponsored-by: Boyd Stephen Smith Jr. on Patreon
2023-03-23 16:36:43 -04:00
Joey Hess
cb4d9f7b1f
run restagePointerFiles in adjustedBranchRefreshFull
Avoid failure to update adjusted branch --unlock-present after git-annex
drop when annex.adjustedbranchrefresh=1

At higher values, it did flush the queue, which ran restagePointerFiles.
But at 1, adjustedBranchRefreshFull gets added to the queue, and while
restagePointerFiles is also in the queue, it runs after that.

Sponsored-by: Brock Spratlen on Patreon
2023-03-23 16:25:45 -04:00
Joey Hess
e822df2a09
fix build warnings on windows 2023-03-21 18:41:23 -04:00
Yaroslav Halchenko
84b0a3707a
Apply codespell -w throughout 2023-03-17 15:14:58 -04:00
Yaroslav Halchenko
0ae5ff797f
Typo: sansative -> sensitive 2023-03-17 15:14:50 -04:00