While redundant concurrent transfers were already prevented in most
cases, the locking failed to prevent the case where two different repositories
were sending the same content to the same repository. By removing the uuid
from the transfer lock file for Download transfers, one repository
sending content will block the other one from also sending the same
content.
In order to interoperate with old git-annex, the old lock file is still
locked in addition to the new one. That added a lot of extra code
and work, and the plan is to eventually stop locking the old lock file,
at some point in time when an old git-annex process is unlikely to be
running at the same time.
Note that in the case of 2 repositories both doing eg
`git-annex copy foo --to origin`
the output is not that great:
    copy b (to origin...)
    transfer already in progress, or unable to take transfer lock
    git-annex: transfer already in progress, or unable to take transfer lock
    97% 966.81 MiB 534 GiB/s 0sp2pstdio: 1 failed
    Lost connection (fd:14: hPutBuf: resource vanished (Broken pipe))
    Transfer failed
Perhaps that output could be cleaned up? Anyway, it's a lot better than letting
the redundant transfer happen and then failing with an obscure error about
a temp file, which is what it did before. And it seems users don't often
try to do this, since nobody ever reported this bug to me before.
(The "97%" there is actually how far along the *other* transfer is.)
Sponsored-by: Joshua Antonishen on Patreon
Currently this can only be used to change a url from being used by a
special remote to being used by the web remote.
This could have been a --move-from option to registerurl. But, that would
have complicated its option and --batch processing, and also would have
complicated unregisterurl, which is implemented on top of
Command.Registerurl. So, a separate command was actually less complicated
to implement.
The generic description of the command is because I want to make this
command a catch-all for other url updating kind of things, if there are
ever any more. Also because it was hard to come up with a good name for the
specific action. I considered `git-annex moveurl`, but that seems to
indicate that data is perhaps actually being moved, and it seems to sit at the
same level as addurl and rmurl, while this command is at the plumbing
level of registerurl and unregisterurl.
Sponsored-by: Dartmouth College's DANDI project
Not yet implemented is recording hashes on download from web and
verifying hashes.
The addurl --verifiable option was added with the -V short option because I
expect a lot of people will want to use it.
It seems likely that --verifiable will become the default eventually,
and possibly rather soon. While old git-annex versions don't support
VURL, that doesn't prevent using them with keys that use VURL. Of
course, they won't verify the content on transfer, and fsck will warn
that it doesn't know about VURL. So there's not much problem with
starting to use VURL even when interoperating with old versions.
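As a usage sketch (the url is just a placeholder):
    # add a url using a VURL key rather than a plain URL key
    git-annex addurl --verifiable https://example.com/file.tar.gz
    # or, with the short option
    git-annex addurl -V https://example.com/file.tar.gz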
Sponsored-by: Joshua Antonishen on Patreon
Noticed a warning with -J2 causing git-annex progress output to get slightly
messed up.
Error output would also probably do that, so perhaps it should capture
stderr and only display it when yt-dlp exited nonzero?
This option might also make sense for youtube-dl, I don't have an
installation handy anymore to check.
Except when a commit is made in a view, which changes metadata.
Make the assistant commit the git-annex branch after git commit of working
tree changes.
This allows using the annex.commitmessage-command in the assistant to
generate a commit message for the git-annex branch that relies on state
gathered during the commit of the working tree. Eg, it might reuse the
commit message.
Note that, when not using the assistant, a git-annex add still commits
the git-annex branch, so such an annex.commitmessage-command setup would
not work then. But if someone is using the assistant and wants
programmatic control over commit messages, this is useful. Someone not
using the assistant can get the same result by using annex.alwayscommit=false
during the git-annex add, and git-annex merge after they git commit.
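A sketch of that manual alternative, per the description above:
    # avoid committing the git-annex branch during the add
    git -c annex.alwayscommit=false annex add .
    git commit -m 'add files'
    # now commit the accumulated git-annex branch changes
    git annex merge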
pre-commit was never really intended to commit the git-annex branch
(except after recording changed metadata), but the assistant did sort of
rely on it. The assistant does later commit the git-annex branch before pushing to
remotes, but I didn't want to risk building up lots of uncommitted changes
to it if that didn't happen frequently.
Sponsored-by: the NIH-funded NICEMAN (ReproNim TR&D3) project
Was doing a Git.Branch.commit for historical reasons to do with direct
mode, which no longer apply.
Note that the preCommitAnnexHook is no longer called in commitStaged
because git-annex installs a pre-commit hook that runs the pre-commit-annex
hook. And git commit will run the pre-commit hook.
Sponsored-by: the NIH-funded NICEMAN (ReproNim TR&D3) project
--raw-except=web allows using yt-dlp but not any other special remotes.
Currently this option can only be used once; trying to use it repeatedly
will make option parsing fail. Perhaps it ought to support being used more
than once, but needing that seemed like an unlikely use case.
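For illustration (assuming the option is accepted by addurl, and the url is
a placeholder):
    # like --raw, but still allow the web special remote (yt-dlp) to be used
    git-annex addurl --raw-except=web https://example.com/video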
Note that getParsed is called repeatedly when the option is used with
several urls. While implementing DeferredParseClass would avoid that
inefficiency, it didn't seem worth the added boilerplate since
getParsed only calls byNameWithUUID which does minimal work.
Sponsored-by: Dartmouth College's DANDI project
importfeed --force: Don't treat it as a failure when an already downloaded
file exists. (Fixes a behavior change introduced in 10.20230626.)
04ee6c4c6b caused the reversion. Inside a CommandPerform, stop causes it
to fail. Before that commit, it was inside a CommandStart, where stop
causes it to skip.
Which uses yt-dlp to screen scrape the equivalent of an RSS feed.
Note that youtubedlscraped is a speed optimisation. Since yt-dlp found
the urls, we know it can download them. That avoids calling
youtubeDlSupported on each url, which makes --fast a lot faster.
Almost all the same metadata fields and file formatting fields are
populated, when yt-dlp is able to get the data. Note that yt-dlp has some
additional useful metadata that could be exposed. But, much of it is
specific to particular websites, and it would be hard to document on the
git-annex importfeed man page.
Sponsored-by: unqueued on Patreon
The getSocket comment that mentioned using ":port"
in the hostname seems to have been incorrect or out of date.
After all, the bug report came when the user first tried doing that,
and it didn't work.
Sponsored-by: the NIH-funded NICEMAN (ReproNim TR&D3) project
external: Monitor file size when getting content from external special
remotes and use that to update the progress meter, in case the external
special remote program does not report progress.
This relies on 703a70cafa to prevent ever
running the meter backwards.
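The general idea is simply to watch the size of the file being written and
feed that to the progress meter; a rough shell illustration of the technique
(not git-annex's actual code, and the filename is a placeholder):
    # poll the size of the file being received, once per second
    while sleep 1; do
        wc -c < incoming-object.tmp
    done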
Sponsored-by: Dartmouth College's DANDI project
Improve annex.stalldetection to handle remotes that update progress less
frequently than the configured time period.
In particular, this makes remotes that don't report progress but are
chunked work when transferring a single chunk takes longer than the
specified time period.
Any remotes that just have a very low update granularity would also be
handled by this.
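For example, a setting along these lines (an illustrative value; see the
annex.stalldetection documentation for the exact format) specifies both a
minimum amount of data and the time period:
    # consider the transfer stalled if less than 10MB moves in any 1 minute period
    git config annex.stalldetection '10MB/1m'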
The change to Remote.Helper.Chunked avoids an extra progress update when
resuming an interrupted upload. In that case, the code saw first Nothing
and then Just the already transferred number of bytes, which defeated this
new heuristic. This change will mean that, when resuming an interrupted
upload to a chunked remote that does not do its own progress reporting, the
progress display does not start out displaying the amount sent so far,
until after the first chunk is sent. This behavior change does not seem
like a major problem.
About the scalefudgefactor, it seems reasonable to expect subsequent chunks
to take no more than 1.5 times as long as the first chunk to transfer.
For example, if the first chunk takes 60 seconds, a later chunk is only
treated as stalled after 90 seconds. It could be set to 1, but then any
chunk taking a little longer than the first would be treated as a stall.
2 also seems a likely value. Even 10 might be fine?
Sponsored-by: Dartmouth College's DANDI project
Improve disk free space checking when transferring unsized keys to
local git remotes. Since the size of the object file is known, that can
be checked instead.
Getting unsized keys from local git remotes does not check the actual
object size. It would be harder to handle that direction because the size
check is run locally, before anything involving the remote is done. So it
doesn't know the size of the file on the remote.
Also, transferring unsized keys to other remotes, including ssh remotes and
p2p remotes, does not do disk free checking for them. This would need a
change in protocol.
(It does seem like it would be possible to implement the same thing for
directory special remotes though.)
In some sense, it might be better to not ever do disk free checking for
unsized keys, than to do it only sometimes. A user might notice this
direction working and consider it a bug that the other direction does not.
On the other hand, disk reserve checking is not implemented for most
special remotes at all, and yet it is implemented for a few, which is also
inconsistent, but best effort. And so doing this best effort seems to make
some sense. Fundamentally, if the user wants the size to always be checked,
they should not use unsized keys.
Sponsored-by: Brock Spratlen on Patreon
This works well, and it interoperates with gpg in my testing (although some
SOP commands might choose to use a profile that does not interoperate, so
caveat emptor).
Note that for creating the Cipher, gpg --gen-random is still used. SOP
does not have an equivalent, and as long as the user has gpg around,
which seems likely, it doesn't matter that gpg is used here, since it's not
being used for encryption. That seemed better than implementing a second way
to get high quality entropy, at least for now.
The need for the sop command to run in an empty directory has each call
to encrypt and decrypt creating a new temporary directory. That is some
unnecessary overhead, though probably swamped by the overhead of running
the sop command. This could be improved in the future by passing an
already empty directory to them, or a sufficiently empty directory
(.git/annex/tmp would probably suffice).
Sponsored-by: Brett Eisenberg on Patreon
Test a specified Stateless OpenPGP command with eg:
    git-annex test --test-git-config annex.shared-sop-command=sqop
Also documented that config and another one, but so far only the test suite
uses the configs; using them for actual symmetric encryption is not yet
implemented.
Sponsored-by: Joshua Antonishen on Patreon
This aims to future-proof gpg key generation. OpenPGP is in flux, with an
ongoing conflict over standards. It seems not unlikely that different
systems will have different gpg commands that support different algorithms.
This also simplifies the code by using the --quick-gen-key interface rather
than the experimental batch interface. It seems less likely that
--quick-gen-key will break than an experimental interface (whose
documentation I can no longer find).
--quick-gen-key is supported since gpg 2.1.0 (2014).
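For reference, a generic invocation of that interface looks something like
this (not necessarily the exact parameters git-annex passes):
    # generate a key with gpg's default algorithm and usage, no expiration,
    # and no passphrase prompt
    gpg --batch --passphrase '' --quick-gen-key "example shared key" default default never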
Sponsored-by: Graham Spencer on Patreon
The old code traversed the list of addtreeitems once per subdirectory in
the tree, so could get quite slow. Converting to Map lookups sped it up
significantly.
In my test case, git-annex import used to take about 2 minutes, when
calling adjustTree to add back excluded files to the imported tree. This
dropped it down to 6 seconds. Of which 4 seconds are the actual
enumeration of the contents of the remote, so really only 2 seconds for
this.
The path prefix map is a bit suboptimal memory-wise, since items get
stored in the map once per subdirectory on the path to the item. It
would perhaps be better to use a tree data structure.
Also it's suboptimal memory-wise that it builds two maps, as well
as retaining a reference to addtreeitems. I could not see a way around
that though.
Sponsored-by: Luke T. Shumaker on Patreon
I saw a nearly 2 minute speedup from this, in a repo with 56000 files, some
of which are preferred content of the special remote and others not. In
such a case, addBackExportExcluded has to do a lot of work, which is
unnecessary when the tree is unchanged.
When using sync --content, preferred content checking of that many files
takes about 1 minute. So this speeds up sync --content by 3x.
When using git-annex import, the speed up is much larger.
Sponsored-by: Nicholas Golder-Manning on Patreon
Thanks to previous work in 11cc9f1933,
this is almost entirely free, it only needs to do some additional map
lookups and math.
The strictness annotations keep the memory use from blowing up.
Sponsored-by: unqueued on Patreon
Fix a crash opening sqlite databases when run in a non-unicode locale,
with a remote that uses a non-unicode filepath. In that situation
converting to Text fails.
The fix needs git-annex to be built with persistent-sqlite 2.13.3.
Building against older versions still works, but 2.13.3 is the version used
when building with stack.
Database.RawFilePath is a lot of code copied from persistent-sqlite and
lightly modified, since only 1 function in persistent-sqlite was made to
support RawFilePath. This is a bit of a pain, and I hope that
persistent-sqlite will eventually switch to using OsPath, allowing this
module to be removed from git-annex.
Sponsored-by: k0ld on Patreon
When importing from a special remote, support preferred content expressions
that use terms that match on keys (eg "present", "copies=1"). Such terms
are ignored when importing, since the key is not known yet.
When "standard" or "groupwanted" is used, the terms in those
expressions also get pruned accordingly.
This does allow setting preferred content to "not (copies=1)" to make a
special remote into a "source" type of repository. Importing from it will
import all files. Then exporting to it will drop all files from it.
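For instance, that "source" setup could be configured like this (myremote is
a placeholder remote name):
    # import everything from the remote, then let exports drop content from it
    git-annex wanted myremote 'not (copies=1)'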
In the case of setting preferred content to "present", it's pruned on
import, so everything gets imported from it. Then on export, it's applied,
and everything in it is left on it, and no new content is exported to it.
Since the old behavior on these preferred content expressions was for
importtree to error out, there's no backwards compatibility to worry about.
Except that sync/pull/etc will now import where before it errored out.
This can reduce the size of the branch by up to 8%. My test was
running git-annex add 1000 times, adding one file each time.
Lots of different high-resolution timestamps were recorded before;
eliminating those made the git repo 8% smaller after packing.
Due to the use of vector clocks, high resolution timestamps are
not necessary to make clear which information is most recent when
eg, a value is changed repeatedly in the same second. In such a
case, the vector clock will be advanced to the next second after
the last modification. For example, running
    git-annex numcopies 1; git-annex numcopies 2
the first command will record the current second, while the second command
records the second after that, even if both run within the same second.
As for conflicting information written to two different clones of the
repository, this will make git-annex sometimes pick information that
was written earlier in a second over information written later in the
same second. Usually git-annex does not write conflicting information,
but there are some cases where it could. Eg, storing an object on a remote
can update the remote state log with some state. If two repos both store the
same object, and end up storing different remote state for some reason,
this can result in one that ran a tiny bit later winning. Such a situation
seems unlikely to be user visible. And a small amount of clock skew could
already result in such things.
The only case I can think of where this might be a user visible change
is if a configuration command like git-annex numcopies is being run
in 2 clones of a repository on the same machine at very
close to the same time. Then the user will know which they ran last,
and git-annex won't.
If that did become a problem, this could be dialed back to eg logging
milliseconds, which would still give some space saving.
migrate: Support adding size to URL keys that were added with --relaxed, by
running eg: git-annex migrate --backend=URL foo
Since url keys cannot be generated, that used to fail. Make it notice that
the backend is not changed, and just get the size of the content.
Sponsored-by: Brock Spratlen on Patreon
pull, sync: When operating on content, automatically hard link objects
that have been migrated.
Added annex.syncmigrations config that can be set to false to prevent
pull and sync from migrating object content.
I think that true is a good default for this config, because it avoids
users having to re-download migrated content or learning about migration.
But, some users will surely not like it, whether because it does take some
time (especially for the first git-annex branch scan when there is a long
history), or because they want to deal with it manually, or because their
filesystem doesn't support hard links and they don't want it to copy
objects.
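Users who don't want that can turn it off:
    # prevent pull and sync from hard linking migrated object content
    git config annex.syncmigrations false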
Sponsored-by: k0ld on Patreon
And avoid migrate --update/--apply migrating when the new key was already
present in the repository, and got dropped. Luckily, the location log
allows distinguishing that from the new key never having been present!
That is mostly useful for --apply because otherwise dropped files would
keep coming back until the old objects were reaped as unused. But it
seemed to make sense to also do it for --update, for consistency in edge
cases if nothing else. One case where --update can use it is when one
branch got migrated earlier, and we dropped the file, and now another
branch has migrated the same file.
Sponsored-by: Jack Hill on Patreon
This is most of the way there, but not quite working.
The layout of migrate.tree/ needs to be changed to follow this approach.
git log will list all the files in tree order, so the new layout needs
to alternate old and new keys. Can that be done? git may not document
tree order, or may not preserve it here.
Alternatively, change to using git log --format=raw and extract
the tree header from that, then use
    git diff --raw $tree:migrate.tree/old $tree:migrate.tree/new
That will be a little more expensive, but only when there are lots of
migrations.
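A rough sketch of that alternative (assuming the migration trees are recorded
in commits on the git-annex branch; the details here are illustrative):
    # extract the tree of the most recent commit recording a migration
    tree=$(git log -n 1 --format=raw git-annex | awk '/^tree /{print $2; exit}')
    # diff the old and new key trees recorded under migrate.tree/
    git diff --raw "$tree:migrate.tree/old" "$tree:migrate.tree/new"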
Sponsored-by: Joshua Antonishen on Patreon
Avoid a problem with temp file names ending in "." on certain filesystems
that have problems with such filenames.
relatedTemplate is quite an ugly hack really; since it doesn't know the max
filename length of the filesystem it can only assume that the filename is at
the maximum allowed length. When given the input "lh.aparc.DKTatlas.annot", it
wants to reserve 20 characters for tempfile so it truncates to "lh.". That
ending period is apparently a problem on some filesystem (FAT eats it, but
does not throw EINVAL; ntfs does not seem bothered by it, I don't know what
FUSE filesystem the bug reporter was really using).
Sponsored-by: Brett Eisenberg on Patreon
Presumably git merge sometimes needs to verify if a worktree file is
modified, and so will then run git-annex filter-process which would try to
take the pid lock. And for whatever reason, git-annex sync already had the
pidlock held. I have not replicated that, but it does make enough sense to
deploy the workaround.
Like I said back in commit 7bdb0cdc0d, arguably it would be better
to have a way to make any process git-annex
runs have the env var set. But then it would need to take the pid lock
when running any and all processes, and that would be a problem when
git-annex runs two processes concurrently. So, I'm left doing it ad-hoc
in places where git-annex really does run a child process, directly
or indirectly via a particular git command.
Sponsored-by: KDM on Patreon
Implementation was simple because it's equivalent to
--from=foo --to remote for each other remote, followed by
--to remote when there's a local copy.
(Or, in the edge case of --from-anywhere --to=here,
it's the same as --to=here.)
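As a usage sketch (assuming this lands in the move command; r2 is a
placeholder remote name):
    # move the content of foo from every repository that has it to r2,
    # subject to numcopies
    git-annex move foo --from-anywhere --to r2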
Note that, when the local repo does not have a copy,
fromToPerform gets it from a remote, sends it to the destination,
and drops the local copy. Another call to that for a second remote
will notice that the dest now has a copy, and simply drop from the
second remote, avoiding a second transfer.
Also note that, when numcopies doesn't allow dropping it from
everywhere, it will drop it from the cheapest remotes first
(maybe not ideal) up to more expensive remotes, and finally from the local
repo. So the local repo will generally end up holding a copy. Maybe not
ideal in all cases either, but it seems no worse to do that than to end up
with a copy undropped from a remote.
And I'm not entirely happy with the output, eg:
    copy bigfile (from r3...) ok
    copy bigfile ok
That makes sense if you think of the second line as being
the same as what is output by `git-annex copy bigfile --to bar`,
but it's less clear in this context. Maybe add "(from here...)"?
Also the --json output doesn't have a machine-readable field for
the "from" uuid, and maybe it should?
Sponsored-by: Dartmouth College's DANDI project
Make git-annex get/copy/move --from foo override configuration of
remote.foo.annex-ignore, as documented.
This already worked for remotes supporting hasKeyCheap. For others though,
git-annex copy --from foo would silently not do anything, while
git-annex copy --to foo would use the annex-ignored remote.
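With this change, an explicit --from overrides the config as documented; for
example (foo is a placeholder remote name):
    git config remote.foo.annex-ignore true
    # --from explicitly names the remote, so annex-ignore is overridden
    git-annex get --from foo somefile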
Also improved the annex-ignore docs, to reflect that `git-annex get`
without --from will skip using annex-ignored remotes, for example.
Sponsored-by: Dartmouth College's DANDI project
Eg when the destination is logged as containing a file, skip
actively checking that it does contain it.
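For example (r1 and r2 are placeholder remote names):
    # trust the location log for the destination, skipping files it is
    # already recorded as containing
    git-annex copy --fast --from r1 --to r2 .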
Note that --fast does not prevent other verifications of content
location that are done in a copy --from --to. Perhaps it could, but this
change will already avoid the real unnecessary work of operating on
files that are already in the remote.
And avoiding other verifications
might cause it to fail if the location log thinks that --to does not
contain the content but it actually does. Such complications with `git-annex copy
--to remote --fast` led to commit d006586cd0
which added a note that gets displayed when that fails, mentioning it
might be due to --fast being enabled.
copy --from --to is already complicated enough without needing to worry
about such edge cases, so continuing to do some verification of
content location after the initial --fast filtering seems ok.
Sponsored-by: Dartmouth College's DANDI project