Commit graph

1583 commits

Joey Hess
bad444342e
reorder and condense 2023-06-21 13:48:31 -04:00
Joey Hess
3cec932bb5
changelog 2023-06-21 12:51:33 -04:00
Joey Hess
a861d56428
httpalso: Support being used with special remotes that use chunking.
Sponsored-by: k0ld on Patreon
2023-06-20 13:35:28 -04:00
Joey Hess
958c2fa6d2
Improve resuming interrupted download when using yt-dlp or youtube-dl
Fixes a failure like this:

curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.

That happens because the whole web page has already been downloaded
previously, and kept, so now addurl tries to download it, and curl asks the
server to resume from the last byte. And youtube.com can't, for whatever
stupid reason.

So, delete the temp file after determining that youtube-dl can be used.
2023-06-19 15:01:47 -04:00
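
For reference (not part of the commit message), the failure above is plain curl behavior when a complete file is already present and the server cannot do byte ranges; the URL here is illustrative:

    curl -C - -O 'https://www.youtube.com/watch?v=example'
    curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.
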
Joey Hess
a36a81dea3
Improve resuming interrupted download when using yt-dlp
Sometimes resuming an interrupted download will fail to resume and download
more files with different names. That resulted in the workdir having
multiple files at the end, which causes git-annex to give up because it
does not know what was downloaded.

To fix this, use a yt-dlp feature that appends the name of each file to a log
file once it has finished downloading it. So the presence of other cruft in
the workdir will not confuse git-annex.
2023-06-19 14:39:08 -04:00
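
The commit does not name the yt-dlp option used; it reads like yt-dlp's --print-to-file with the after_move event, which appends each finished file's path to a given file. A hedged sketch (log file name hypothetical):

    yt-dlp --print-to-file 'after_move:filepath' finished-files.log 'https://example.com/video'
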
Joey Hess
217a6abb19
assistant: Fix a crash when a small file is deleted immediately after being created
git add will fail if the file got deleted in the meantime. And since it was
queued, there was a window until the queue flushed where a deletion of the
file would cause a crash.

Instead, reuse Command.Add.addFile, which sha1 hashes the file itself
immediately, and then queues the index update. Ignore exceptions that will
happen if the file got deleted already.

Sponsored-by: k0ld on Patreon
2023-06-19 12:44:56 -04:00
Joey Hess
114a2d7504
Fix display when run with -J1
Commit b6642dde8a broke it by enabling
non-concurrent display mode while leaving concurrency set in the config
and having already started concurrency earlier.

(I don't actually know if that commit was a good idea.)

Sponsored-By: Brett Eisenberg on Patreon
2023-06-15 10:07:54 -04:00
Joey Hess
64738ea157
config: Added the --show-origin and --for-file options
* config: Added the --show-origin and --for-file options.
* config: Support annex.numcopies and annex.mincopies.

There is a little bit of redundancy here with other code elsewhere that
combines the various configs and selects which to use. But really only
for the special case of annex.numcopies, which is a git config that does
not override the annex branch setting and for annex.mincopies, which does
not have a git config but does have gitattributes settings as well as the
annex branch setting.

That seems small enough, and unlikely enough to grow into a mess, that it was
worth supporting annex.numcopies and annex.mincopies in git-annex config
--show-origin, because these settings are a prime thing that someone might
get confused about and want to know where they were configured.

And, it followed that git-annex config might as well support those two
for --set and --get as well. While this is redundant with the specialized
commands, it's only a little code and it makes it more consistent.

Note that --set does not have as nice output as numcopies/mincopies
commands in some special cases like setting to 0 or a negative number.
It does avoid setting to a bad value thanks to the smart
constructors (eg configuredNumCopies).

As for other git-annex branch configurations that are not set by git-annex
config, things like trust and wanted that are specific to a repository
don't map to a git config name, so don't really fit into git-annex config.
And they are only configured in the git-annex branch with no local override
(at least so far), so --show-origin would not be useful for them.

Sponsored-by: Dartmouth College's DANDI project
2023-06-12 16:24:31 -04:00
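
For illustration of the options added above (the file path is hypothetical, and --for-file names the file the query should apply to):

    git annex config --show-origin annex.numcopies
    git annex config --show-origin --for-file=path/to/file annex.numcopies
    git annex config --set annex.numcopies 2
    git annex config --get annex.numcopies
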
Joey Hess
38153ad340
assistant: Add dotfiles to git by default, unless annex.dotfiles is configured
The same as git-annex add does.

Sponsored-by: Luke Shumaker on Patreon
2023-06-12 13:25:04 -04:00
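
For illustration, the configuration the assistant now honors, the same way git-annex add does:

    git config annex.dotfiles true    # dotfiles are then annexed rather than added directly to git
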
Joey Hess
c33c226abd
fixed 2023-06-09 16:13:52 -04:00
Joey Hess
a0ab425c95
add ContentIndentifiersCidRemoteKeyIndex
Optimise database to further speed up importing large trees from special
remotes.

See comment for details of why the other index didn't help cid queries.

It would probably be better to manually create an index on only cid, rather
than adding a second uniqueness constraint that is a larger index. But
persistent does not support creating indexes, and an attempt to manually add
it to the migration failed.

Sponsored-by: Nicholas Golder-Manning on Patreon
2023-06-09 15:12:33 -04:00
Joey Hess
6821ba8dab
sync: use log to track when adjusted branch needs updating
Speeds up sync in an adjusted branch by avoiding re-adjusting the branch
unnecessarily, particularly when it is adjusted with --hide-missing or
--unlock-present.

When there are a lot of files, that was the majority of the time of a
--no-content sync.

Uses a log file, which is updated when content presence changes. This
adds a little bit of overhead to every file get/drop when on such an
adjusted branch. The overhead is minimal for get of any size of file,
but might be noticeable for drop in some cases. It seems like a reasonable
trade-off. It would be possible to update the log file only at the end, but
then it would not happen if the command is interrupted.

When not in an adjusted branch, there should be no additional overhead.
(getCurrentBranch is an MVar read, and it avoids the MVar read of
getGitConfig.)

Note that this does not deal with situations such as:
git checkout master, git-annex get, git checkout adjusted branch,
git-annex sync. The sync won't know that the adjusted branch needs to be
updated. Dealing with that would add overhead to operation in non-adjusted
branches, which I don't like. Also, there are other situations like having
two adjusted branches that both need to be updated like this, and switching
between them and sync not updating.

This does mean a behavior change to sync, since it did previously deal
with those situations. But, the documentation did not say that it did.
The man pages only talk about sync updating the adjusted branch after
it transfers content.

I did consider making sync keep track of content it transferred (and
dropped) and only update the adjusted branch then, not to catch up to other
changes made previously. That would perform better. But it seemed rather
hard to implement, and also it would have problems with races with a
concurrent get/drop, which this implementation avoids.

And it seemed pretty likely someone had gotten used to get/drop followed by
sync updating the branch. It seems much less likely someone is switching
branches, doing get/drop, and then switching back and expecting sync to update
the branch.

Re-running git-annex adjust still does a full re-adjusting of the branch,
for anyone who needs that.

Sponsored-by: Leon Schuermann on Patreon
2023-06-08 14:35:41 -04:00
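
A concrete version of the uncovered sequence described above, with the re-adjust fallback; the adjusted branch name is illustrative:

    git checkout master
    git annex get .
    git checkout 'adjusted/master(unlockpresent)'   # an adjusted branch made earlier with --unlock-present
    git annex sync                                  # will not notice the adjusted branch needs updating
    git annex adjust --unlock-present               # re-running adjust still does a full re-adjustment
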
Joey Hess
3c15e0f7a0
cache negative lookups of global numcopies and mincopies
Speeds up eg git-annex sync --content by up to 50%. When it does not need
to transfer or drop anything, it now noops a lot more quickly.

I didn't see anything else in sync --content noop loop that could really
be sped up. It has to cat git objects to keys, stat object files, etc.

Sponsored-by: unqueued on Patreon
2023-06-06 14:43:25 -04:00
Joey Hess
cfad0def18
wrap 2023-06-05 15:15:20 -04:00
Joey Hess
fe1b2dfb4b
speed up very first tree import by 25%
Reading from the cidsdb is responsible for about 25% of the runtime of
an import. Since the cidmap is used to store the same information in
ram, the cidsdb is not written to during an import any longer. And so,
if it started off empty (and updateFromLog wasn't needed), those reads
can just be skipped.

This is kind of a cheesy optimisation, since after any import from any
special remote, the database will no longer be empty, so it's a single
use optimisation. But it's probably not uncommon to start by importing a
lot of files, and it can save a lot of time then.

Sponsored-by: Brock Spratlen on Patreon
2023-06-02 13:30:30 -04:00
Joey Hess
40017089f2
use importChanges optimisation
Large speed up to importing trees from special remotes that contain a lot
of files, by only processing changed files.

Benchmarks:

Importing from a special remote that has 10000 files, that have all been
imported before, and 1 new file sped up from 26.06 to 2.59 seconds.

An import with no change and 10000 unchanged files sped up from 24.3 to
1.99 seconds.

Going up to 20000 files, an import with no changes sped up from
125.95 to 3.84 seconds.

Sponsored-by: k0ld on Patreon
2023-06-01 13:47:00 -04:00
Joey Hess
f6aa097a39
avoid import writing to cidsdb initially
Speed up importing trees from special remotes somewhat by avoiding
redundant writes to sqlite database.

Before, import would write to both the git-annex branch and also to the
sqlite database. But then the next time it was run, needsUpdateFromLog
would see the branch had changed, so run updateFromLog, which would make
the same writes to the sqlite database a second time.

Now import writes only to the git-annex branch. The next time it's run,
needsUpdateFromLog sees that the branch has changed and so calls
updateFromLog, which updates the sqlite database.

Why defer the write to the sqlite database like this? It seems that it
could write to the database as it goes, and at the end call
recordAnnexBranchTree to indicate that the information in the git-annex
branch has all been written to the cidsdb. That would avoid the second
import doing extra work.

But, there could be other processes running at the same time, and one of
them may update the git-annex branch, eg merging a remote git-annex branch
into it. Any cids logs on that merged git-annex branch would not be
reflected in the cidsdb yet. If the import then called
recordAnnexBranchTree, the cidsdb would never get updated with that merged
information.

I don't think there's a good way to prevent, or even to detect, that situation.
So, it can't call recordAnnexBranchTree at the end. So it might as well
wait until the next run and do updateFromLog then. It could instead do
updateFromLog at the end, but it's going to check needsUpdateFromLog
at the beginning anyway.

Note that the database writes were queued, so there is already a cidmap
that is used to remember changes that the current process has made.
So, omitting database writes can't change the behavior of the current
process.

Also note that thirdpartypopulatedimport uses recordcidkeyindb, which
reflects what it already did. That code path does not use the cidmap,
but does not need to query it either. It might be possible to make that
code path also only update the git-annex branch and not the db, but I
haven't checked.

Sponsored-by: Noam Kremen on Patreon
2023-05-30 17:05:28 -04:00
Joey Hess
5070087a63
repair: Fix handling of git ref names on Windows
Sponsored-by: Kevin Mueller on Patreon
2023-05-30 16:09:13 -04:00
Joey Hess
f2db6da938
default to yt-dlp and fix progress parsing bugs
I noticed git-annex was using a lot of CPU when downloading from youtube,
and was not displaying progress. Turns out that yt-dlp (and I think also
youtube-dl) sometimes only knows an estimated size, not the actual size,
and displays the progress output slightly differently for that. That broke
the parser. And, the parser was feeding chunks that failed to parse back
as a remainder, which caused it to try to re-parse the entire output each
time, so it got slower and slower.

Using --progress-template like this should avoid parsing problems as well
as future-proof against output changes. But it will only work with yt-dlp.

So, this seemed like the right time to deprecate youtube-dl, and default
to yt-dlp when available.

git-annex will still use youtube-dl if that's all that's available.
However, since the progress parser for youtube-dl was buggy, and I don't
want to maintain two different progress parsers (especially since
youtube-dl is no longer in debian unstable, having been replaced by
yt-dlp), I made git-annex no longer try to parse youtube-dl's progress.

Also, updated docs for yt-dlp being default. It did not seem worth
renaming annex.youtube-dl-options and annex.youtube-dl-command.

Note that yt-dlp does not seem to document the fields available in the
progress template. I found them by reading the source and looking at
the templates it uses internally. Also note that the use of "i" (rather
than "s") in progressTemplate makes it display floats rounded to integers;
particularly the estimated total size can be a float. That also does not
seem to be documented but I assume is a python thing?

Sponsored-by: Joshua Antonishen on Patreon
2023-05-27 13:04:53 -04:00
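
A hedged illustration of the approach, not necessarily the exact template git-annex uses; the progress.* field names come from yt-dlp's progress hooks, and as noted above the "i" conversion rounds floats to integers:

    yt-dlp --progress-template '%(progress.downloaded_bytes)i %(progress.total_bytes_estimate)i' 'https://example.com/video'
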
Joey Hess
0f89d221bd
version: Avoid error message when entire output is not read
Sponsored-by: Dartmouth College's Datalad project
2023-05-19 15:00:57 -04:00
Joey Hess
c4ad9b1446
Fix bug in -z handling of trailing NUL in input
The obvious way to fix this would be to adapt lines to split on null.

However, it's actually nontrivial to rewrite lines. In particular it has a
weird implementation to avoid a space leak. See:
https://gitlab.haskell.org/ghc/ghc/-/issues/4334

Also, while that is a small amount of code, it's covered by a rather
complex copyright and I'd have to include that copyright in git-annex.

So, I opted to filter out the trailing empty string instead.

Sponsored-by: Dartmouth College's Datalad project
2023-05-19 14:34:02 -04:00
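
For context, -z makes batch input NUL-delimited; input produced as below ends with a trailing NUL, which previously yielded a spurious empty item (the command chosen is just one example that accepts --batch):

    printf '%s\0' foo bar | git annex whereis --batch -z
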
Joey Hess
e955912ad0
git-annex assist
assist: New command, which is the same as git-annex sync but with
new files added and content transferred by default.

(Also this fixes another reversion in git-annex sync:
--commit, --no-commit, and --message were not enabled, oops.)

See added comment for why git-annex assist does commit staged
changes elsewhere in the work tree, but only adds files under
the cwd.

Note that it does not support --no-commit, --no-push, --no-pull
like sync does. My thinking is, why should it? If you want that
level of control, use git commit, git annex push, git annex pull.
Sync only got those options because pull and push were not split
out.

Sponsored-by: k0ld on Patreon
2023-05-18 14:37:43 -04:00
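
Roughly, then, per the note above about finer-grained control:

    git annex assist     # add new files under the cwd, commit, pull and push, transferring content
    # the more controllable equivalent:
    git annex add
    git commit
    git annex pull
    git annex push
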
Joey Hess
f93a7fce1d
sync: Started transition to --content being enabled by default
When used without --content or --no-content, warn about the upcoming
transition, and suggest using one of the options, or setting
annex.synccontent.

Sponsored-by: Brett Eisenberg on Patreon
2023-05-17 13:23:42 -04:00
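
Either of these makes the choice explicit and avoids the transition warning:

    git annex sync --content       # or --no-content
    git config annex.synccontent true
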
Joey Hess
40731ff9fd
sync: Added -g as a short option for --no-content
I anticipate that if sync is transitioned to syncing content by default,
people will want a short option. And in repositories where
annex.synccontent = true, they already would. And pull and push sync
content by default, so a short option is useful with them too.

Mnemonic: -g makes only git data be synced
Also, -a makes only annex data be synced.

Would have preferred -c, which would complement -C, but it
was already taken to set git configs.

Sponsored-by: Noam Kremen on Patreon
2023-05-17 12:34:26 -04:00
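
For illustration:

    git annex sync -g    # short for --no-content: only git data is synced
    git annex sync -a    # only annex data is synced
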
Joey Hess
5df89d58c7
git-annex pull and push
Split out two new commands, git-annex pull and git-annex push. Those plus a
git commit are equivalent to git-annex sync.

In a sense, git-annex sync conflates 3 things, and it would have been
better to have push and pull from the beginning and not sync. Although
note that git-annex sync --content is faster than a pull followed by a
push, because it only has to walk the tree once, look at preferred
content once, etc. So there is some value in git-annex sync in speed, as
well as user convenience.

And it would be hard to split out pull and push from sync, as far as the
implementation goes. The implementation inside sync was easy, just adjust
SyncOptions so it does the right thing.

Note that the new commands default to syncing content, unless
annex.synccontent is explicitly set to false. I'd like sync to also do
that, but that's a hard transition to make. As a start to that
transition, I added a note to git-annex-sync.mdwn that it may start to
do so in a future version of git-annex. But a real transition would
necessarily involve displaying warnings when sync is used without
--content, and time.

Sponsored-by: Kevin Mueller on Patreon
2023-05-16 16:51:07 -04:00
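
So, per the equivalence noted above:

    git annex pull
    git annex push
    # these two, plus a git commit, are roughly equivalent to:
    git annex sync --content
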
Joey Hess
2e984c51b6
sync --no-pull and --no-push affect download and upload of content
The man page is somewhat vague about this, but I do think it was a bug
that these options didn't already behave that way. The options are
documented to disable imports and exports, which are the same operations,
just with a special remote that uses trees.

The real motivation for this is that I'm adding git-annex pull and
git-annex push, and I want these options to turn off the equivalent of
those commands. And git-annex pull will certainly download, and git-annex
push will upload.

Sponsored-by: Nicholas Golder-Manning on Patreon
2023-05-16 16:25:23 -04:00
Joey Hess
212442dd9b
pullOption should be pushOption in seekExportContent
sync: Fix bug that made --no-pull, rather than --no-push, prevent exporting
trees to special remotes.

Sponsored-by: Joshua Antonishen on Patreon
2023-05-16 15:55:24 -04:00
Joey Hess
271f3b1ab4
uninit: Support --json and --json-error-messages
Had to convert uninit to do everything that can error out inside a
CommandStart. This was harder than feels nice.

(Also, in passing, converted CommandCheck to use a data type, not a
weird number that it was not clear how it managed to be unique.)

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-11 13:43:02 -04:00
Joey Hess
02cfef1f91
uninit: Avoid buffering the names of all annexed files in memory
Oops, using the same list twice does prevent streaming in constant memory.

Sponsored-by: unqueued on Patreon
2023-05-11 13:25:55 -04:00
Joey Hess
de84abb210
configremote: Support --json and --json-error-messages
Seems unlikely to be too useful, but who knows.

Moved the checkSafeConfig call to happen after an action is started, so
it will be captured by --json-error-messages.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-10 14:21:42 -04:00
Joey Hess
a242eabc7a
enableremote: Support --json and --json-error-messages
Seems unlikely to be too useful, but who knows. Was trivial anyway.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-10 14:09:27 -04:00
Joey Hess
b3cc8dbacb
initremote: Support --json and --json-error-messages
Including special --whatelse handling.

Otherwise, it seems unlikely to be too useful, but who knows.

Refactored code to call starting before displaying error messages, so
that the error messages are captured by --json-error-messages.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-10 14:03:40 -04:00
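
A hedged example of the new options on initremote; the remote name and parameters are illustrative:

    git annex initremote mydir type=directory directory=/mnt/annex encryption=none --json --json-error-messages
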
Joey Hess
8d8e044458
upgrade: Support --json and --json-error-messages and --json-progress
Seems unlikely to be very useful, but trivial.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-10 12:54:48 -04:00
Joey Hess
c98fb0b637
merge: Support --json and --json-error-messages and --json-progress
Seems unlikely to be very useful, but trivial.
And, this completes the story that git-annex sync does not need json,
since every sub-operation is available in a command that does support json.
(Well, except for committing, but that's not a git-annex command.)

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-10 12:34:19 -04:00
Joey Hess
7919349cee
importfeed: Support --json and --json-error-messages and --json-progress
Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-09 16:51:16 -04:00
Joey Hess
04ee6c4c6b
importfeed: Support -J (and work toward supporting --json)
Both -J and --json needed importfeed to be refactored to use commandAction.

That was difficult, because of the interrelated nature of downloading feeds
and then downloading files from feeds, both of which needed to use
commandAction. And then checking for problems in feeds has to come after
these actions, which may be run as background jobs.

As for --json support, it's most of the way there, but still has some
warts, so I didn't enable jsonOptions yet. The warts include:

- An initial empty json record is displayed by getCache.
- Input is not populated, should be feed url
- feedProblem at end will not be captured by --json-error-messages
  (see FIXME)

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-09 16:13:56 -04:00
Joey Hess
a71c831949
renameremote: Support --json and --json-error-messages
Seems unlikely to be useful, but it works, so why not.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-08 16:25:40 -04:00
Joey Hess
3d8f93dc0a
reinject: Support --json and --json-error-messages
Also fix support for operating on multiple pairs of files and keys.

Moved notAnnexed to inside starting, so error message will get into the json.

Cannot include the key in the starting as it's not known yet, so instead
add it to the json later.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-08 15:43:37 -04:00
Joey Hess
91b9915b09
reinit: Support --json and --json-error-messages
Basically the same concerns as init.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-08 15:07:40 -04:00
Joey Hess
f09a248fe2
init: Support --json and --json-error-messages
Dunno how useful this will be, since about all that's accessible from
the json is whether it succeeded or failed, and the error messages
which were already on stderr.

Note that, when autoenabling a special remote, it would be possible for
one to stop and prompt or output not using Messages and so not output as
part of the json. I don't think that happens, but I'm not 100% sure
something doesn't manage to break it. Of course, the same could be the
case for commands that transfer objects. Using Annex.Init.autoEnableSpecialRemotes
in --json mode would avoid the problem, but I've chosen to wait until I
know it's needed to use it.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-08 14:58:08 -04:00
Joey Hess
c208442292
unused: Support --json and --json-error-messages
Generalized AddJSONActionItemField to allow it to add several fields. Not entirely
happy with that, since the names of the fields have to be carefully chosen to
not conflict with other json fields. And fields added that way can't be parsed
back in FromJSON, except for the "fields" field that is special cased for metadata.
Still, I couldn't see another way to do it.

Also, omit file:null from the json output. This does affect other commands,
eg git-annex whereis --all --json. Hopefully that won't break something that expects
a null file. If it did, that could be reverted, but it would be ugly to have
file:null in the unused --json.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-08 14:39:57 -04:00
Joey Hess
365dbc89dc
expire, trust et al, dead, describe: Support --json and --json-error-messages
For expire, the normal output is unchanged, but the --json output includes the uuid
in machine parseable form. Which could be very useful for this somewhat obscure
command. That needed ActionItemUUID to be implemented, which seemed like a lot
of work, but then ---

I had been going to skip implementing them for trust, untrust, dead, semitrust,
and describe, but putting the uuid in the json is useful information, it tells
what uuid git-annex picked given the input. It was not hard to support
these once ActionItemUUID was implemented.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-05 15:33:30 -04:00
Joey Hess
1a9af823bc
addunused, dropunused: Support --json and --json-error-messages
This also changes addunused to display the names of the files that it adds.
That seems like a general usability improvement, and not displaying the input
number does not seem likely to be a problem to a user, since the filename
is based on the key. Displaying the filename was necessary to get it and the key
included in the json.

dropunused does not include the key in the json. It would be possible to
add, but would need more changes. And I doubt that dropunused --json
would be used in a situation where a program cared which keys were
dropped. Note that drop --unused does have the key in its json, so such
a program could just use it. Or could just dropkey --batch with the
specific keys it wants to drop if it cares about specific keys.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-05 14:01:40 -04:00
Joey Hess
1d4bd2dcb8
migrate, undo: Support --json and --json-error-messages
Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-04 16:34:35 -04:00
Joey Hess
38fc5d3fc7
rekey, setpresentkey: Support --json and --json-error-messages
Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-04 16:03:54 -04:00
Joey Hess
f20c8b087e
fix: Support --json and --json-error-messages
And triaged out some commands that don't need to support these options.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-04 14:28:21 -04:00
Joey Hess
46c7c30140
log: Support --json and --json-error-messages
Also in passing the --all display was fixed up to not quote keys like filenames.

Note that the check added to compareChanges was needed to avoid logging when
nothing changed.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-04 12:36:31 -04:00
Joey Hess
f56f6140fa
remove trailing 's' from log --raw-date
log: When --raw-date is used, display only seconds from the epoch, as
documented, omitting a trailing "s" that was included in the output
before.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-04 11:53:38 -04:00
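
For illustration (the file name is hypothetical):

    git annex log --raw-date somefile
    # dates now print as bare seconds from the epoch, eg 1683216818, with no trailing "s"
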
Joey Hess
c235488e2d
rmurl: Support --json and --json-error-messages
The json does not include an url field, but it does have an input field that is
"file url" when using --batch and ["file", "url"] when using the command line.
I chose not to change that because it would complicate batchInput.
An url field could be added if it turns out to be useful.

Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-04 11:28:27 -04:00
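
Illustrating the two input forms described above (file name and url hypothetical):

    git annex rmurl --json somefile 'https://example.com/somefile'
    # with --batch, each input line is "file url":
    echo 'somefile https://example.com/somefile' | git annex rmurl --batch --json
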
Joey Hess
6cbcba484c
unannex: Support --json and --json-error-messages
Sponsored-By: the NIH-funded NICEMAN (ReproNim TR&D3) project
2023-05-03 15:56:20 -04:00