Commit graph

1292 commits

Joey Hess
1a586e473b
releasing package git-annex version 8.20210903 2021-09-03 12:01:12 -04:00
Joey Hess
ec12537774
defer write permissions checking in import until after copy to repo
This should complete the fix started in
6329997ac4, fixing the actual cause of the
test suite failure this time.

Sponsored-by: Dartmouth College's Datalad project
2021-09-02 13:45:21 -04:00
Joey Hess
4f42292b13
improve url download failure display
* When downloading urls fail, explain which urls failed for which
  reasons.
* web: Avoid displaying a warning when downloading one url failed
  but another url later succeeded.

Some other uses of downloadUrl involve urls that are effectively internal,
and should not all be displayed to the user on failure. Eg, Remote.Git
tries different urls where content could be located depending on how the
remote repo is set up. Exposing those urls to the user would lead to wild
goose chases. So had to parameterize it to control whether it displays urls
or not.

A side effect of this change is that when there are some youtube urls
and some regular urls, it will try regular urls first, even if the
youtube urls are listed first. This seems like an improvement if
anything, but in any case there's no defined order of urls that it's
supposed to use.

Sponsored-by: Dartmouth College's Datalad project
2021-09-01 15:33:38 -04:00
Joey Hess
837116ef1e
Fix support for readonly git remotes
Boolean blindness oops.

(Reversion in version 8.20210621)

Sponsored-by: Dartmouth College's Datalad project
2021-08-30 12:34:19 -04:00
Joey Hess
a99a84f342
add: Detect when xattrs or perhaps ACLs prevent locking down a file's content
And fail with an informative message.

I don't think ACLs can prevent removing the write bit, but I'm not sure,
so kept it mentioning them as a possibility.

Should git-annex lock also check if the write bits are able to be removed?
Maybe, but the case I know about with xattrs involves cp -a copying NFS
xattrs, and it's the copy of the file that is the problem. So when locking
a file, I guess it will not be the copy.

Sponsored-by: Dartmouth College's Datalad project
2021-08-27 14:33:01 -04:00
Joey Hess
e17342b2a0
Run cp -a with --no-preserve=xattr, to avoid problems with copied xattrs
Including them breaking permissions setting on some NFS servers.
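
For example (file names hypothetical), the flag in question:

	cp -a --no-preserve=xattr srcfile destfile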

Sponsored-by: Dartmouth College's Datalad project
2021-08-27 13:09:34 -04:00
Joey Hess
6d4a728455
Added annex.youtube-dl-command config
This can be used to run some forks of youtube-dl.
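
Eg, to use the yt-dlp fork (assuming it is installed in PATH):

	git config annex.youtube-dl-command yt-dlp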

Sponsored-by: Brett Eisenberg on Patreon
2021-08-27 09:44:23 -04:00
Joey Hess
ab7b5a492c
--batch-keys
New --batch-keys option added to these commands:  get, drop, move, copy, whereis

git-annex-matching-options had to be reworded since some of its options
can be used to match on keys, not only files.
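
Presumably used like the other batch options, reading one key per line
from stdin (file name hypothetical):

	git annex get --batch-keys < keys.txt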

Sponsored-by: Luke Shumaker on Patreon
2021-08-25 14:21:12 -04:00
Joey Hess
4ed36b2634
Fix test suite failure on Windows
It would be better if the Arbitrary instance avoided generating impossible
filenames like "foo/c:bar", but probably this is the only place that splits
the file from the directory and then uses the file without the directory,
at least in the quickcheck properties.

Sponsored-by: Svenne Krap on Patreon
2021-08-24 14:03:29 -04:00
Joey Hess
f9b92c81f6
unused: Skip the refs/annex/last-index ref that git-annex recently started creating
This was unlikely to cause any problem, but it is unsightly to mention
normally hidden refs, and it might have done a bit of unnecessary work to
check that ref.

Sponsored-by: Noam Kremen on Patreon
2021-08-24 12:58:14 -04:00
Joey Hess
53744e132d
incremental verification for gitlfs and httpalso
And that should be all the special remotes supporting it on linux now,
except in the odd edge case here and there.

Sponsored-by: Dartmouth College's DANDI project
2021-08-18 15:17:10 -04:00
Joey Hess
f5e09a1dbe
incremental verification for S3
Sponsored-by: Dartmouth College's DANDI project
2021-08-18 15:07:00 -04:00
Joey Hess
d154e7022e
incremental verification for web special remote
Except when configuration makes curl be used. It did not seem worth
trying to tail the file when curl is downloading.

But when an interrupted download is resumed, it does not read the whole
existing file to hash it. Same reason discussed in
commit 7eb3742e4b76d1d7a487c2c53bf25cda4ee5df43; that could take a long
time with no progress being displayed. And also there's an open http
request, which needs to be consumed; taking a long time to hash the file
might cause it to time out.

Also in passing implemented it for git and external special remotes when
downloading from the web. Several others like S3 are within striking
distance now as well.

Sponsored-by: Dartmouth College's DANDI project
2021-08-18 15:02:22 -04:00
Joey Hess
8613770b06
incremental verify for webdav special remote
Sponsored-by: Dartmouth College's DANDI project
2021-08-16 17:29:32 -04:00
Joey Hess
b1622eb932
incremental verify for directory special remote
Added fileRetriever', which will let the remaining special remotes
eventually also support incremental verify.

Sponsored-by: Dartmouth College's DANDI project
2021-08-16 16:51:33 -04:00
Joey Hess
ec82299730
status update
I was wrong about S3 supporting tailVerify.
2021-08-16 15:15:32 -04:00
Joey Hess
dadbb510f6
incremental hashing for fileRetriever
It uses tailVerify to hash the file while it's being written.

This is able to sometimes avoid a separate checksum step. Although
if the file gets written quickly enough, tailVerify may not see it
get created before the write finishes, and the checksum still happens.

Testing with the directory special remote, incremental checksumming did
not happen. But then I disabled the copy CoW probing, and it did work.
What's going on with that is the CoW probe creates an empty file on
failure, then deletes it, and then the file is created again. tailVerify
will open the first, empty file, and so fails to read the content that
gets written to the file that replaces it.

The directory special remote really ought to be able to avoid needing to
use tailVerify, and while other special remotes could do things that
cause similar problems, they probably don't. And if they do, it just
means the checksum doesn't get done incrementally.

Sponsored-by: Dartmouth College's DANDI project
2021-08-13 15:43:29 -04:00
Joey Hess
7eb3742e4b
incremental verify for chunked remotes
Simply feed each chunk in turn to the incremental verifier.

When resuming an interrupted retrieve, it does not do incremental
verification. That would need to read the file, up to the resume point,
and feed it to the incremental verifier. That seems easy to get wrong.
Also it would mean extra work done before the transfer can start. Which
would complicate displaying progress, and would perhaps not appear to the
user as if it was resuming from where it left off. Instead, in that
situation, return UnVerified, and let the verification be done in a
separate pass.

Granted, Annex.CopyFile does manage all that, but it's not complicated
by dealing with chunks too.

Sponsored-by: Dartmouth College's DANDI project
2021-08-11 14:42:49 -04:00
Joey Hess
c20358b671
incremental verify for byteRetriever special remotes
Several special remotes verify content while it is being retrieved,
avoiding a separate checksum pass. They are: S3, bup, ddar, and
gcrypt (with a local repository).

Not done when using chunking, yet.

Complicated by Retriever needing to change to be polymorphic. Which in turn
meant RankNTypes is needed, and also needed some code changes. The
change in Remote.External does not change behavior at all but avoids
the type checking failing because of a "rigid, skolem type" which
"would escape its scope". So I refactored slightly to make the type
checker's job easier there.

Unfortunately, directory uses fileRetriever (except when chunked),
so it is not among the improved ones. Fixing that would need a way for
FileRetriever to return a Verification. But, since the file retrieved
may be encrypted or chunked, it would be extra work to always
incrementally checksum the file while retrieving it. Hm.

Some other special remotes use fileRetriever, and so don't get incremental
verification, but could be converted to byteRetriever later. One is
GitLFS, which uses downloadConduit, which writes to the file, so could
verify as it goes. Other special remotes like web could too, but don't
use Remote.Helper.Special and so will need to be addressed separately.

Sponsored-by: Dartmouth College's DANDI project
2021-08-11 14:20:38 -04:00
Joey Hess
f1176f82a5
rsync special remote: Stop displaying rsync progress, and use git-annex's own progress display
Reasons are same as in commit cee14f147a.
(It was already done when using -J.)

Sponsored-by: Mark Reidenbach on Patreon
2021-08-09 12:06:10 -04:00
Joey Hess
1acdd18ea8
deal better with clock skew situations, using vector clocks
* Deal with clock skew, both forwards and backwards, when logging
  information to the git-annex branch.
* GIT_ANNEX_VECTOR_CLOCK can now be set to a fixed value (eg 1)
  rather than needing to be advanced each time a new change is made.
* Misuse of GIT_ANNEX_VECTOR_CLOCK will no longer confuse git-annex.

When changing a file in the git-annex branch, the vector clock to use is now
determined by first looking at the current time (or GIT_ANNEX_VECTOR_CLOCK
when set), and comparing it to the newest vector clock already in use in
that file. If a newer time stamp was already in use, advance it forward by
a second instead.
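
For instance, pinning the clock via the environment, per the bullet
above (the command and file name are arbitrary):

	GIT_ANNEX_VECTOR_CLOCK=1 git annex add somefile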

When the clock is set to a time in the past, this avoids logging with
an old timestamp, which would risk that log line later being ignored in
favor of a "newer" line that is really not newer.

When a log entry has been made with a clock that was set far ahead in the
future, this avoids newer information being logged with an older timestamp
and so being ignored in favor of that future-timestamped information.
Once all clocks get fixed, this will result in the vector clocks being
incremented, until finally enough time has passed that time gets back ahead
of the vector clock value, and then it will return to usual operation.

(This latter situation is not ideal, but it seems the best that can be done.
The issue with it is, since all writers will be incrementing the last
vector clock they saw, there's no way to tell when one writer made a write
significantly later in time than another, so the earlier write might
arbitrarily be picked when merging. This problem is why git-annex uses
timestamps in the first place, rather than pure vector clocks.)

Advancing forward by 1 second is somewhat arbitrary. setDead
advances a timestamp by just 1 picosecond, and the vector clock could
too. But then it would interfere with setDead, which wants to be
overruled by any change. So it could use 2 picoseconds or something,
but that seems weird. It could just as well advance it forward by a
minute or whatever, but then it would be harder for real time to catch
up with the vector clock when forward clock slew had happened.

A complication is that many log files contain several different pieces of
information, and it may be best to only use vector clocks for the same piece
of information. For example, a key's location log file contains
InfoPresent/InfoMissing for each UUID, and it only looks at the vector
clocks for the UUID that is being changed, and not other UUIDs.

Although exactly where the dividing line is can be hard to determine.
Consider metadata logs, where a field "tag" can have multiple values set
at different times. Should it advance forward past the last tag?
Probably. What about when a different field is set, should it look at
the clocks of other fields? Perhaps not, but currently it does, and
this does not seem like it will cause any problems.

Another one I'm not entirely sure about is the export log, which is
keyed by (fromuuid, touuid). So if multiple repos are exporting to the
same remote, different vector clocks can be used for that remote.
It looks like that's probably ok, because it does not try to determine
what order things occurred when there was an export conflict.

Sponsored-by: Jochen Bartl on Patreon
2021-08-04 12:33:46 -04:00
Joey Hess
899983058f
add: When adding a dotfile, avoid treating its name as an extension. 2021-08-03 12:22:58 -04:00
Joey Hess
9cae7c5bbf
releasing package git-annex version 8.20210803 2021-08-03 12:20:45 -04:00
Joey Hess
b3c4579c79
work around strange auto-init bug
git-annex get when run as the first git-annex command in a new repo did not
populate unlocked files. (Reversion in version 8.20210621)

I am not entirely happy with this, because I don't understand how
428c91606b caused the problem in the first
place, and I don't fully understand how skipping calling scanAnnexedFiles
during autoinit avoids the problem.

Kept the explicit call to scanAnnexedFiles during git-annex init,
so that when reconcileStaged is expensive, it can be made to run then,
rather than at some later point when the information is needed.

Sponsored-by: Brock Spratlen on Patreon
2021-07-30 18:36:03 -04:00
Joey Hess
66089e97de
Fix a rounding bug in display of data sizes
Eg, showImprecise 1 1.99 returned "1.1" rather than "2". The 9 rounded
upward to 10, and that was wrongly used as the decimal, rather than
carrying the 1.

Sponsored-by: Jack Hill on Patreon
2021-07-30 09:56:04 -04:00
Joey Hess
d2aead67bd
fsck: Detect and correct stale or missing inode caches for object files
An easy way to see this in action is to have an unlocked file, and touch the
object file.

While all code that compares inode caches for object files needs to be
prepared for this kind of problem and fall back to verification, having
fsck notice it and correct it is cheap (as long as fsck is being run
anyway) and ensures that if it happens for some unusual reason, there's a
way for the user to notice that it's happening.

Note that, when annex.thin is in use, the earlier call to isUnmodified
(and also potentially earlier calls to inAnnex in eg, verifyLocationLog)
will fix up the same problem silently. That might prevent the warning
being displayed, although probably it still will be, because the
Database.Keys write of the InodeCache will be queued but will not have
happened yet. I can't see a way to improve this, but it's not great.

Sponsored-by: Dartmouth College's Datalad project
2021-07-29 14:06:42 -04:00
Joey Hess
73e0cbbb19
fix problem populating pointer files
This is a result of an audit of every use of getInodeCaches,
to find places that misbehave when the annex object is not in the inode
cache, despite pointer files for the same key being in the inode cache.

Unfortunately, that is the case for objects that were in v7 repos that
upgraded to v8. Added a note about this gotcha to getInodeCaches.

Database.Keys.reconcileStaged, when annex.thin is set, would fail to
populate pointer files in this situation. Changed it to check if the
annex object is unmodified the same way inAnnex does, falling back to a
checksum if the inode cache is not recorded.

Sponsored-by: Dartmouth College's Datalad project
2021-07-27 14:26:49 -04:00
Joey Hess
3b5a3e168d
check if object is modified before starting to send it
Fix bug that caused some transfers to incorrectly fail with "content
changed while it was being sent", when the content was not changed.

While I don't know how to reproduce the problem that several people
reported, it is presumably due to the inode cache somehow being stale.
So check isUnmodified', and if it's not modified, include the file's
current inode cache in the set to accept, when checking for modification
after the transfer.

That seems like the right thing to do for another reason: The failure
says the file changed while it was being sent, but if the object file was
changed before the transfer started, that's wrong. So it needs to check
before allowing the transfer at all if the file is modified.

(Other calls to sameInodeCache or elemInodeCaches, when operating on inode
caches from the database, could also be problematic if the inode cache is
somehow getting stale. This does not address such problems.)

Sponsored-by: Dartmouth College's Datalad project
2021-07-26 17:33:49 -04:00
Joey Hess
3d50b47ded
sync, merge: Added --allow-unrelated-histories option
Which is the same as the git merge option.
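
Eg, to opt back in to merging unrelated histories:

	git annex sync --allow-unrelated-histories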

After last commit, this turns out to be needed in the test suite, and when
doing git-annex import from special remote, followed by a git-annex merge.

Sponsored-by: Svenne Krap on Patreon
2021-07-19 12:14:26 -04:00
Joey Hess
b6bea0d3f2
remove direct mode remnant of merging unrelated histories
sync, merge, post-receive: Avoid merging unrelated histories, which used to
be allowed only to support direct mode repositories.

(However, sync does still merge unrelated histories when importing trees
from special remotes, and the assistant still merges unrelated histories
always.)

See 556b2ded2b for why this was added
back in 2016, for direct mode.

This is a behavior change, which might break something that was relying
on sync merging unrelated histories, but git had a good reason to
prevent it, since it's easy to foot shoot with it, and git-annex should
follow suit.

Sponsored-by: Noam Kremen on Patreon
2021-07-19 11:41:26 -04:00
Joey Hess
33a80d083a
sync --quiet
* sync: When --quiet is used, run git commit, push, and pull without
  their usual output.
* merge: When --quiet is used, run git merge without its usual output.

This might also make --quiet work better for some other commands
that make commits, like git-annex adjust.

Sponsored-by: Kevin Mueller on Patreon
2021-07-19 11:28:47 -04:00
Joey Hess
c952c485c8
Fix retrieval of content from borg repos accessed over ssh
It was making the borgrepo path absolute, even when it was a ssh
repository.

Made BorgRepo a newtype, to guard against accidentally treating it like a
FilePath.

Sponsored-by: Graham Spencer on Patreon
2021-07-15 12:39:24 -04:00
Joey Hess
dd31fe7b9e
fall back to checking lower case hash directories in normal repo
Fix a bug that prevented getting content from a repository that started out
as a bare repository, or had annex.crippledfilesystem set, and was
converted to a non-bare repository.

This unfortunately means that the inAnnex check gets slowed down by a stat call
in normal repos when the content is not present. Oh well, such is the cost
of backwards compatibility with old mistakes.

Sponsored-by: Mark Reidenbach on Patreon
2021-07-15 12:16:31 -04:00
Joey Hess
47d3dccf19
whereused implemented
except --historical
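
Typical usage might look like this (assuming git-annex unused was run
first; see the man page for the exact options):

	git annex whereused --unused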

Sponsored-by: Jack Hill on Patreon
2021-07-14 14:27:21 -04:00
Joey Hess
065db484e0
releasing package git-annex version 8.20210714 2021-07-14 12:23:24 -04:00
Joey Hess
8885bd3c5b
assistant: honor annex.delayadd for non-large files
assistant: When adding non-large files to git, honor annex.delayadd
configuration.

Also, don't add non-large files to git when they are still
being written to. This came for free, since the changes to non-large
files get queued up with the ones to large files, and run through the lsof
check.

Sponsored-by: Luke Shumaker on Patreon
2021-07-13 12:17:00 -04:00
Joey Hess
a6767ca81f
close bug
and mention another aspect of the reversion in changelog
2021-07-12 10:45:57 -04:00
Joey Hess
6a581f8b8b
fix init reversion when core.sharedRepository = group
init: Fix misbehavior when core.sharedRepository = group that caused it to
enter an adjusted branch. (Reversion in version 8.20210630)

Commit 4b1b9d7a83 made init call
freezeContent in case there was a hook that could prevent writing in
situations where perms don't. But with the above git config, freezeContent
does not prevent write at all. So init needs to do what freezeContent does
with a non-shared git config.

Or init could check for that config, and skip the probing, since it
won't actually be preventing write to any files. But that would make init
too aware of details of Annex.Perms, and also would break if the git config
were changed after init.

Sponsored-by: Dartmouth College's Datalad project
2021-07-12 10:15:49 -04:00
Joey Hess
b885007f0e
--debug output goes to stderr again, not stdout
Reversion in version 8.20210428

Sponsored-by: Dartmouth College's Datalad project
2021-07-12 09:40:38 -04:00
Joey Hess
b9db859221
addurl: Avoid crashing when used on beegfs.
Sponsored-by: Dartmouth College's DANDI project
2021-07-05 13:02:40 -04:00
Joey Hess
d2c48404a8
assistant: Avoid unnecessary git repository repair
In a situation where git fsck gets confused about a commit that is made
while it's running.

Sponsored-by: Graham Spencer on Patreon
2021-06-30 18:00:16 -04:00
Joey Hess
fd99ce6c95
releasing package git-annex version 8.20210630 2021-06-30 11:48:33 -04:00
Joey Hess
73ccf34763
closing 2021-06-30 11:47:39 -04:00
Joey Hess
6b0d732746
repair: Fix reversion in version 8.20200522 that prevented fetching missing objects from remotes
In commit dfc4e641b5 git repair was changed
to use remote name, not url, when fetching. But it fetches into a temporary
git repo, which doesn't have remotes configured. Oops.

(In my defense, that commit was made just as covid lockdown started. But
testing? Urk.)

Sponsored-by: Mark Reidenbach on Patreon
2021-06-29 13:15:15 -04:00
Joey Hess
199391befe
make repair interruption safe
Fixed bug that interrupting git-annex repair (or assistant) while it was
fixing repository corruption would lose objects that were contained in pack
files.

Unpack all pack files and move objects into place *before* deleting the
pack files. The old approach moved the pack files to a temp directory
before unpacking them, which was not interruption safe.

Sponsored-By: Jochen Bartl on Patreon
2021-06-29 13:14:28 -04:00
Joey Hess
b8e32e200e
addurl, importfeed: Added --no-raw option
Forces eg, download with youtube-dl without falling back to raw download.

Since youtube-dl failing due to an url not being supported is difficult to
distinguish from it failing due to being blocked in some way, this can be
useful to avoid the fallback of git-annex downloading the raw web page and
adding that.
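
Eg (url hypothetical), this now fails rather than adding the raw web page
when youtube-dl cannot handle the url:

	git annex addurl --no-raw https://example.com/video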

Since --raw also prevents using special remotes, --no-raw also
allows special remote downloads. Although it's always possible that some
special remote may claim an url and fall back to raw download of the
content, which --no-raw cannot prevent.

Sponsored-by: Boyd Stephen Smith Jr. on Patreon
2021-06-27 11:14:51 -04:00
Joey Hess
3a14648142
dropping unused marks as dead
Dropping an object with drop --unused or dropunused will mark it as
dead, preventing fsck --all from complaining about it after it's been
dropped from all repositories.

If another repository still has a copy, it won't be treated as dead
until it's also dropped from there.

The drop has to use --unused, can't be --key or something else, because
this indicates that the user has recently run git-annex unused. If it
checked the unused log on every drop, bad things would happen when the
unused log was out of date, eg a file used to be unused but then got
re-added. Marking such a file as dead could be confusing. When the user
uses --unused/dropunused, they must consider the unused information to be
up-to-date.

The particular workflow this enables is:

	git annex add foo
	git annex unannex foo
	git annex unused
	git annex drop --unused / dropunused
	git annex fsck --all # no warnings

The docs for git-annex unannex say to use git-annex unused and dropunused,
so the user should be pointed in this direction when they want to undo an
accidental add.

Sponsored-by: Brock Spratlen on Patreon
2021-06-25 15:22:26 -04:00
Joey Hess
df2001aa88
Improve display of errors when transfers fail
Transfers from or to a local git repo could fail without a reason being
given, if the content failed to verify, or if the object file's stat
changed while it was being copied. Now display messages in these cases.

Sponsored-by: Jack Hill on Patreon
2021-06-25 13:17:04 -04:00
Joey Hess
4b1b9d7a83
Added annex.freezecontent-command and annex.thawcontent-command configs
Freeze first sets the file perms, and then runs
freezecontent-command. Thaw runs thawcontent-command before
restoring file permissions. This is in case the freeze command
prevents changing file perms, as eg setting a file immutable does.
Also, changing file perms tends to mess up previously set ACLs.
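
A sketch of the immutability example mentioned above; the %path
placeholder is an assumption here, so consult the man page for the
actual interface:

	git config annex.freezecontent-command 'chattr +i %path'
	git config annex.thawcontent-command 'chattr -i %path'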

git-annex init's probe for crippled filesystem uses them, so if file perms
don't work, but freezecontent-command manages to prevent write to a file,
it won't treat the filesystem as crippled.

When the filesystem has been probed as crippled, the hooks are not
used, because there seems to be no point then; git-annex won't be relying
on locking annex objects down. Also, this avoids them being run when the
file perms have not been changed, in case they somehow rely on
git-annex's setting of the file perms in order to work.

Sponsored-by: Dartmouth College's Datalad project
2021-06-21 14:40:52 -04:00
Joey Hess
1cc7b2661e
push synced/master before synced/git-annex
sync: Partly work around github behavior that first branch to be pushed to
a new repository is assumed to be the head branch, by not pushing
synced/git-annex first.

github expects master (or whatever the name is) to be pushed first, but
git-annex sync can't, because it's got to also support pushes to non-bare
repos where pushing master fails, as explained in the big comment. So
pushing synced/master is not entirely a fix, but at least it makes github
default to a branch with the stuff the user expects in it, not a bunch of
annex log files.

Aside from fixing github to not make this assumption, or improving
the git push protocol to include what the current HEAD is, the only other
approach I can think of is to identify git push's progress messages and
display those when pushing master, while filtering out error messages
about non-fast-forward etc. But git doesn't provide a way to separate out
or identify its progress messages.

Sponsored-by: Luke Shumaker on Patreon
2021-06-21 12:32:21 -04:00
Joey Hess
a6e281e008
releasing package git-annex version 8.20210621 2021-06-21 12:17:46 -04:00
Joey Hess
d2be68907c
drop, move, mirror: when two files have the same content, honor the max numcopies and requiredcopies
Eg, before with a .gitattributes like:

*.2 annex.numcopies=2
*.1 annex.numcopies=1

And foo.1 and foo.2 having the same content and key, git-annex drop foo.1 foo.2
would succeed, leaving just 1 copy, despite foo.2 needing 2 copies.
It dropped foo.1 first and then skipped foo.2 since its content was gone.

Now that the keys database includes locked files, this longstanding wart
can be fixed.

Sponsored-by: Noam Kremen on Patreon
2021-06-15 11:38:44 -04:00
Joey Hess
af9fdf5dba
verify associated files when checking numcopies
Most of this is just refactoring. But, handleDropsFrom
did not verify that associated files from the keys db were still
accurate, and has now been fixed to do so.

A minor improvement to this would be to avoid calling catKeyFile
twice on the same file, when getting the numcopies and mincopies value,
in the common case where the same file has the highest value for both.
But, it avoids checking every associated file, so it will scale well to
lots of dups already.

Sponsored-by: Kevin Mueller on Patreon
2021-06-15 11:14:52 -04:00
Joey Hess
3af4c9a29a
fix exponential blowup when adding lots of identical files
This was an old problem when the files were being added unlocked,
so the changelog mentions that being fixed. However, recently it's also
affected locked files.

The fix for locked files is kind of stupidly simple. moveAnnex already
handles populating unlocked files, and only does it when the object file
was not already present. So remove the redundant populateUnlockedFiles
call. (That call was added all the way back in
cfaac52b88, and has always been
unnecessary.)

Sponsored-by: Dartmouth College's Datalad project
2021-06-15 09:45:55 -04:00
Joey Hess
78da00c7a6
Future proof activity log parsing
When the log has an activity that is not known, eg added by a future
version of git-annex, it used to be treated as no activity at all,
which would make git-annex expire think it should expire the repository,
despite it having some kind of recent activity.

Hopefully there will be no reason to add a new activity until enough
time has passed that this commit is in use everywhere.

Sponsored-by: Jake Vosloo on Patreon
2021-06-14 14:18:19 -04:00
Joey Hess
771a122c9e
add --size-limit option
When this option is not used, there should be effectively no added
overhead, thanks to the optimisation in
b3cd0cc6ba.

When an action fails on a file, the size of the file still counts toward
the size limit. This was necessary to support concurrency, but also
generally seems like the right choice.

Most commands that operate on annexed files support the option.
export and import do not, and I don't know if it would make sense for
export to. Why would you want an incomplete export? sync doesn't, and
while it would be easy to make it support it for transferring files,
it's not clear if dropping files should also take the size limit into
account. Commands like add that don't operate on annexed files don't
support the option either.
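
Eg (size syntax assumed to follow git-annex's usual units):

	git annex get --size-limit=10gb .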

Exiting 101 not yet implemented.

Sponsored-by: Denis Dzyubenko on Patreon
2021-06-04 16:16:53 -04:00
Joey Hess
189fb05ffb
Added annex.adviceNoSshCaching config.
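
Presumably set like other advice toggles, eg to quiet the ssh caching
advice:

	git config annex.adviceNoSshCaching false
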
Sponsored-by: Brock Spratlen on Patreon
2021-05-27 12:37:49 -04:00
Joey Hess
b5f5475ed6
New matching options --excludesamecontent and --includesamecontent
The normalisation of filenames turns out to be the tricky part here,
because the associated files coming out of the keys db may look like
"./foo/bar" or "../bar". For the former to match a glob like "foo/*",
it needs to be normalised.

Note that, on windows, normalise "./foo/bar" = "foo\\bar"
which a glob like "foo/*" won't match. So the glob is matched a second
time, on the toInternalGitPath, so allowing the user to provide a glob
with the slashes in either direction. However, this still won't support
some wacky edge cases like the user providing a glob of "foo/bar\\*"
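
So usage along these lines should work with the slashes in either
direction (glob hypothetical):

	git annex find --includesamecontent='foo/*'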

Sponsored-by: Dartmouth College's Datalad project
2021-05-25 13:08:18 -04:00
Joey Hess
cedc28a783
prevent dropping required content of other file using same content
When two files have the same content, and a required content expression
matches one but not the other, dropping the latter file will fail as it
would also remove the content of the required file.

This will slow down drop (w/o --auto), dropunused, mirror, and move, by one
keys db lookup per file. But I did include an optimisation to avoid a
double db lookup in the drop --auto / sync --content case. I suspect that
dropunused could also use PreferredContentChecked True, but haven't
entirely thought it through and it's rarely used with enough files for the
optimisation to matter.

Sponsored-by: Dartmouth College's Datalad project
2021-05-25 11:34:06 -04:00
Joey Hess
7029ef1c3d
improve changelog 2021-05-25 10:08:29 -04:00
Joey Hess
5d18994736
clearer language 2021-05-24 14:54:51 -04:00
Joey Hess
a56b151f90
fix longstanding indeterminate preferred content for duplicated file problem
* drop: When two files have the same content, and a preferred content
  expression matches one but not the other, do not drop the file.
* sync --content, assistant: Fix an edge case where a file that is not
  preferred content did not get dropped.

The sync --content edge case is that handleDropsFrom loaded associated files
and used them without verifying that the information from the database was
not stale.

It seemed best to avoid changing --want-drop's behavior, this way when
debugging a preferred content expression with it, the files matched will
still reflect the expression. So added a note to the --want-drop documentation,
to make clear it may not behave identically to git-annex drop --auto.

While it would be possible to introspect the preferred content
expression to see if it matches on filenames, and only look up the
associated files when it does, it's generally fairly rare for 2 files to
have the same content, and the database lookup is already avoided when
there's only 1 file, so I did not implement that further optimisation.

Note that there are still some situations where the associated files
database does not get locked files recorded in it, which will prevent
this fix from working.

Sponsored-by: Dartmouth College's Datalad project
2021-05-24 14:07:05 -04:00
Joey Hess
c525d18cf7
filter-branch: New command, useful to produce a filtered version of the git-annex branch, eg when splitting a repository 2021-05-17 14:16:46 -04:00
Joey Hess
8b6dad11a2
add createMessage
init: When annex.commitmessage is set, use that message for the commit
that creates the git-annex branch.

This will be used by filter-branch too, and it seems to make sense to let
annex.commitmessage affect it.
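
Eg (message arbitrary):

	git config annex.commitmessage 'create git-annex branch'
	git annex init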
2021-05-17 13:07:47 -04:00
Joey Hess
947d2a10bc
assistant: Fix a crash on startup by avoiding using forkProcess
ghc 8.8.4 seems to have changed something that broke code that has been
successfully using forkProcess since 2012. Likely a change to GC internals.

Since forkProcess has never had clear documentation about how to
use it safely, avoid using it at all. Instead, when git-annex needs to
daemonize itself, re-run the git-annex command, in a new process group
and session.

This commit was sponsored by Luke Shumaker on Patreon.
2021-05-12 15:08:03 -04:00
Joey Hess
675556fd9a
smudge: check for known annexed inodes before checking annex.largefiles
smudge: Fix a case where an unlocked annexed file that annex.largefiles
does not match could get its unchanged content checked into git, due to git
running the smudge filter unnecessarily.

When the file has the same inodecache as an already annexed file,
we can assume that the user is not intending to change how it's stored in
git.

Note that checkunchangedgitfile already handled the inverse case, where the
file was added to git previously. That goes further and actually sha1
hashes the new file and checks if it's the same hash in the index.

It would be possible to generate a key for the file and see if it's the
same as the old key, however that could be considerably more expensive than
sha1 of a small file is, and it is not necessary for the case I have, at
least, where the file is not modified or touched, and so its inode will
match the cache.

git-annex add was changed, when adding a small file, to remove the inode
cache for it. This is necessary to keep the recipe in
doc/tips/largefiles.mdwn for converting from annex to git working.
It also avoids bugs/case_where_using_pathspec_with_git-commit_leaves_s.mdwn
which the earlier try at this change introduced.
2021-05-10 13:20:10 -04:00
Joey Hess
72a8bbce12
Revert "smudge: check for known annexed inodes before checking annex.largefiles"
This reverts commit 424bef6b6f.

This commit caused other buggy behavior unfortunately.
2021-05-10 12:20:13 -04:00
Joey Hess
921753ac44
reinject: Error out when run on a file that is not annexed
rather than silently skipping it
2021-05-07 13:31:03 -04:00
Joey Hess
4bf7940d6b
fileRef: make paths relative and simplified
Fix behavior of several commands, including reinject, addurl, and rmurl
when given an absolute path to an unlocked file, or a relative path that
leaves and re-enters the repository.

To avoid slowing down all the cases where the paths are already ok
with an unnecessary call to getCurrentDirectory, put in an optimisation
in relPathCwdToFile. That will probably also speed up other parts of
git-annex by some small amount, but I have not benchmarked.

Note that I did not convert branchFileRef, because it seems likely that
it will be used with a file that is not provided by the user, so is already
in a sane format. This is certainly true for the way git-annex uses it,
though maybe arguable to the extent Git.Ref is a reusable library.
2021-05-07 13:25:59 -04:00
Joey Hess
424bef6b6f
smudge: check for known annexed inodes before checking annex.largefiles
smudge: Fix a case where an unlocked annexed file that annex.largefiles
does not match could get its unchanged content checked into git, due to git
running the smudge filter unnecessarily.

When the file has the same inodecache as an already annexed file,
we can assume that the user is not intending to change how it's stored in
git.

Note that checkunchangedgitfile already handled the inverse case, where the
file was added to git previously. That goes further and actually sha1
hashes the new file and checks if it's the same hash in the index.

It would be possible to generate a key for the file and see if it's the
same as the old key, however that could be considerably more expensive than
sha1 of a small file is, and it is not necessary for the case I have, at
least, where the file is not modified or touched, and so its inode will
match the cache.
2021-05-03 13:26:32 -04:00
Joey Hess
4588668a12
fromkey unlocked files support
fromkey: Create an unlocked file when used in an adjusted branch where the
file should be unlocked, or when configured by annex.addunlocked.

There is some overlap with code in Annex.Ingest, however it's not quite the
same because ingesting has a temp file with the content, whereas here the
content, if any, is in the annex object file. So it makes sense for, eg,
Annex.Ingest to copy the execute mode of the content file, but it does not make
sense for fromkey to do that.

Also changed in passing to stage the file in git directly, rather than
using git add. One consequence of that is that if the file is gitignored,
it will still get added, rather than the old behavior:

	The following paths are ignored by one of your .gitignore files:
	ignored
	hint: Use -f if you really want to add them.
	hint: Turn this message off by running
	hint: "git config advice.addIgnoredFile false"
	git-annex: user error (xargs ["-0","git","--git-dir=.git","--work-tree=.","--literal-pathspecs","add","--"] exited 123)

That old behavior was a surprise to me, and so I consider it a bug, and doubt
anyone would have relied on it.

Note that, when on an --hide-missing branch, it is possible to fromkey a key
that is not present (needs --force). The annex link or pointer file still gets
written in this case. It doesn't seem to make any sense not to write it,
because then fromkey would not do anything useful in this case, and this way
the file can be committed and synced to master, and the branch re-adjusted to
hide the new missing file.

This commit was sponsored by Noam Kremen on Patreon.
2021-05-03 11:26:18 -04:00
Joey Hess
27e5f3cd52
releasing package git-annex version 8.20210428 2021-04-28 12:16:45 -04:00
Joey Hess
0f73b6d03a
Avoid more than 1 gpg password prompt at the same time
Which could happen occasionally before when concurrency is enabled.
While not much of a problem when it did happen, better to avoid it. Also,
since it seems likely the gpg-agent sometimes fails in such a situation,
this makes it not happen when running a single git-annex command with
concurrency enabled.

This commit was sponsored by Jake Vosloo on Patreon.
2021-04-27 16:36:44 -04:00
Joey Hess
a166d2520b
check mincopies is satisfied even when numcopies is known to be satisfied
I had been assuming that numcopies would be larger than, or at least
equal to, mincopies, so no need to check both. But users get confused and
use configs that don't really make sense, so make sure to handle mincopies
being larger than numcopies.

Also add something to the mincopies man page to discourage this
misconfiguration.

This commit was sponsored by Denis Dzyubenko on Patreon.
2021-04-27 13:37:18 -04:00
Joey Hess
d3e49b210a
git-annex-config: Allow setting annex.securehashesonly
Which has otherwise been supported since 2019, but was missing from the
list of allowed repo-global configs.
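
So this now works, setting it repo-globally:

	git annex config --set annex.securehashesonly true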

Reordered the list to match the order in the git-annex-config man page, to
make them easy to cross-compare.

This commit was sponsored by Boyd Stephen Smith Jr. on Patreon.
2021-04-26 13:50:37 -04:00
Joey Hess
8e24fb3507
update 2021-04-26 13:12:51 -04:00
Joey Hess
2b264b3edf
initremote --private 2021-04-23 14:47:46 -04:00
Joey Hess
0547884eb2
importfeed: fix bug while also speeding up 12x!
* Fix bug that could make git-annex importfeed not see recently recorded
  state when configured with annex.alwayscommit=false.
* importfeed: Made "checking known urls" phase run 12 times faster.

The massive speedup is because it no longer queries for metadata
accompanying each url. Instead it processes the whole git-annex branch and
checks all metadata files for feed item ids, and uses any it finds.

This could result in a behavior change, in an unlikely situation: If a feed
id is recorded in a key's metadata, but the url gets removed, the old code
would not see that item id and would re-download it if it finds an url for
it in a feed, while the new code will see the item id. I don't think
the old behavior was intentional, and it may be that the new behavior is
better. Not gonna worry about this.
2021-04-23 12:36:56 -04:00
Joey Hess
6eb3c0a6b4
fix branch precaching bug by checking journal
Fix bug caused by recent optimisations that could make git-annex not see
recently recorded status information when configured with
annex.alwayscommit=false.

When not using --all, precaching only gets triggered when the
command actually needs location logs, and so there's no speed hit there.

This is a minor speed hit for --all, because it precaches even when the
location log is not actually going to be used, and so checking the journal
is not necessary. It would have been possible to defer checking the journal
until the cache gets used. But that would complicate the usual Branch.get
code path with two different kinds of caches, and the speed hit is really
minimal. A better way to speed up --all, later, would be to avoid
precaching at all when the location log is not going to be used.
2021-04-21 14:02:15 -04:00
Joey Hess
e1a9b79fa6
fix hardcoded origin name in checkAdjustedClone
init: Fix a crash when the repo was cloned from a repo that had an
adjusted branch checked out, and the origin remote is not named "origin".

The only other hardcoding of the name of origin is in:

- Upgrade.V2, which can be ignored probably
- Annex.Branch, which doesn't fail if it has some other name, but just
  doesn't set up the git-annex branch with quite as linear a history in
  that case.
2021-04-14 18:53:27 -04:00
Joey Hess
4b048ca042
directory CoW on store
Not for exports to directory yet though.
2021-04-14 15:11:00 -04:00
Joey Hess
7bb93896af
directory CoW on retrieve
directory: When cp supports reflinks, use it when getting content from a
directory special remote.
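
Roughly, the CoW probe amounts to whether this succeeds on the
filesystem (GNU cp assumed; paths hypothetical):

	cp --reflink=always srcfile destfile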

Not yet for imports from directory though, and not for store.

Note that, when it's chunked, using cp --reflink would not speed it up, and
when reflink was not supported, would unnecessarily write the chunk to a
file before reading it back in. So, only using a fileRetriever in the
NoChunks case is necessary to keep chunking fast.

fileCopier is told not to verify, because the special remote interface
does not yet support verification in passing. AFAICS, fileCopier can
never return False when not verifying so the added giveup should never
actually happen.
2021-04-14 15:05:12 -04:00
Joey Hess
5783a8d081
fsck: avoid redundant checksum when transfer is Verified
When downloading content from a remote, if the content is able to be
verified during the transfer, skip checksumming it a second time.

Note that in this case, the fsck output does not include "(checksum)"
which it does when the checksumming is done separately from the download.

This commit was sponsored by Brock Spratlen on Patreon.
2021-04-14 13:22:54 -04:00
Joey Hess
8e7dc958d2
forget: Preserve currently exported trees
Avoiding problems with exporttree remotes in some unusual circumstances.

This commit was sponsored by Brett Eisenberg on Patreon.
2021-04-13 15:00:23 -04:00
Joey Hess
805d325a8d
diffdriver: Support unlocked files 2021-04-08 14:32:09 -04:00
Joey Hess
1b645e1ace
added --debugfilter (and annex.debugfilter) 2021-04-05 15:31:10 -04:00
Joey Hess
ced91b3fbd
Avoid excess commits to the git-annex branch when stall detection is enabled
When git-annex transferrer started up, and the journal contained something,
it would commit it to the git-annex branch. This caused excess commits to
the branch, in cases where normally several changes would be journalled and
committed together. That generated some excess git objects and was also
just noisy on stdout.

Since transferrer uses enableInteractiveBranchAccess, it does not need to
commit journalled changes, since the optimisation that avoids checking
the journal when reading from the branch is disabled for processes that
call that.

This commit was sponsored by Svenne Krap on Patreon.
2021-04-02 11:57:18 -04:00
Joey Hess
8868a3a4c7
Fix build with persistent-2.12.0.1
persistent stopped using askLogFunc, and the thing to use is askLoggerIO
from monad-logger. Bumped the dep to the first version that contained that.

Note that the i386ancient build uses a newer monad-logger than 0.3.10,
so the new versioned dep should not break it, and presumably nothing else
either.

This commit was sponsored by Noam Kremen on Patreon.
2021-04-01 12:21:02 -04:00
Joey Hess
315a81e3c6
releasing package git-annex version 8.20210330 2021-03-30 14:33:28 -04:00
Joey Hess
4611813ef1
Fix bug importing from a special remote into a subdirectory more than one level deep
Which generated unusual git trees that could confuse git merge,
since they incorrectly had 2 subtrees with the same name.

Root of the bug was a) not testing that at all! but also
b) confusing graftdirs, which contains eg "foo/bar" with
non-recursively read trees, which would contain eg "bar"
when reading a subtree of "foo".

It's worth noting that Annex.Import uses graftTree, but it really
shouldn't have needed to. Eg, when importing into foo/bar from a remote,
it's enough to generate a tree of foo/bar/x, foo/bar/y, and does not
include other files that are at the top of the master branch. It uses
graftTree, so it does include the other files, as well as the foo/bar
tree. git merge will do the same thing for both trees. With that said,
switching it away from graftTree would result in another import
generating a new commit that seems to delete files that were there in a
previous commit, so it probably has to keep using graftTree since it
used it before.

This commit was sponsored by Kevin Mueller on Patreon.
2021-03-26 16:04:36 -04:00
Joey Hess
f085ae4937
borg: Support importing files that are hard linked in the borg backup
Note that a key with no size field that is hard linked will
result in listImportableContents reporting a file size of 0,
rather than the actual size of the file. One result is that
the progress meter when getting the file will seem to get stuck
at 100%. Another is that the remote's preferred content expression,
if it tries to match against file size, will treat it as an empty file.
I don't see a way to improve the latter behavior, and the former behavior
is a minor enough problem.

This commit was sponsored by Jake Vosloo on Patreon.
2021-03-26 13:29:34 -04:00
Joey Hess
31eb5fddf3
borg: Fix a bug that prevented importing keys of type URL and WORM
Keys stored on the filesystem are mangled by keyFile to avoid problem
chars. So, that mangling has to be reversed when parsing files from a
borg backup back to a key.

The directory special remote also so mangles them. Some other special
remotes do not; eg S3 just serializes the key -- but S3 object names are
not limited to filesystem valid filenames anyway, so a S3 server must
not map them directly to files in any case. It seems unlikely that a
borg backup of some such special remote will get broken by this change.

This commit was sponsored by Graham Spencer on Patreon.
2021-03-26 12:07:00 -04:00
Joey Hess
537f9d9a11
Improved display of errors when accessing a git http remote fails.
New error message:

  Remote foo not usable by git-annex; setting annex-ignore

  http://localhost/foo/config download failed: Configuration of annex.security.allowed-ip-addresses does not allow accessing address ::1

If git config parse fails, or the git config file is not available at the url,
a better error message for that is also shown.

This commit was sponsored by Mark Reidenbach on Patreon.
2021-03-24 14:19:32 -04:00
Joey Hess
4631d1ab56
Fix build with attoparsec-0.14
It changed parseOnly in the ByteString.Lazy module to take a lazy, not
strict ByteString. In all these cases though, we actually had a strict
ByteString, so the most efficient fix, which also happens to avoid needing
ifdefs, is to use the non-lazy module instead.

This commit was sponsored by Denis Dzyubenko on Patreon.
2021-03-24 12:11:50 -04:00
Joey Hess
5d78cd9d08
Sped up git-annex init in a clone of an existing repository
Seems that hasOrigin was never finding origin's git-annex branch, so a new
one got created each time. And so then it later needed to merge the two
branches, which is expensive.

Added --no-track to git branch to avoid it displaying a message about
setting up tracking branches. Of course there's no reason to make the
git-annex branch a tracking branch since git-annex auto-merges it.
2021-03-23 15:23:13 -04:00
Joey Hess
798f685077
New annex.supportunlocked config
Can be set to false to avoid some expensive things needed to support unlocked
files.

See my comment for why this only controls what init sets up, and not other
behavior.

I didn't bother with making the v5 upgrade code path look at this, though
it easily could, because the docs say to run git-annex init after setting
it to make it take effect.
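
Per that docs note, the setting takes effect via a re-init:

	git config annex.supportunlocked false
	git annex init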
2021-03-23 14:04:34 -04:00
Joey Hess
c68ba7d893
whereis: Don't include yt: prefix when showing url to content retrieved with youtube-dl
I don't think this was really intentional behavior. It may be that it was
useful to include it so it could be passed to rmurl, since without it rmurl
would not actually remove the url. Since that was changed earlier today,
now seems like a good time to clean up the display of these urls.

This commit was sponsored by Jochen Bartl on Patreon.
2021-03-22 19:56:24 -04:00
Joey Hess
637229c593
fix fsck --from --all to not fall over trying to check required content
fsck: When --from is used in combination with --all or similar options, do
not verify required content, which can't be checked properly when operating
on keys.
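
Ie, this combination no longer falls over trying to verify required
content (remote name hypothetical):

	git annex fsck --from myremote --all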

This commit was sponsored by Boyd Stephen Smith Jr. on Patreon.
2021-03-22 15:08:07 -04:00
Joey Hess
5545e78a1e
Make --debug also enable debugging in child git-annex processes
Especially necessary with stalldetection using child processes for
transfers.

This commit was sponsored by Jack Hill on Patreon.
2021-03-22 14:25:28 -04:00
Joey Hess
5d75cbcdcf
webdav: deal with buggy webdav servers in renameExport
box.com already had a special case, since its renaming was known to be buggy.
In its case, renaming to the temp file succeeds, but then renaming the temp
file to final destination fails.

Then this 4shared server has buggy handling of renames across directories.
While that was already worked around by making the temp files used when
storing exports be in the same directory as the final filename, it also
affected renameExport when the file moves between directories.

I'm not entirely clear what happens on the 4shared server when it fails
this way. It kind of looks like it may rename the file to destination and
then still fail.

To handle both, when rename fails, delete both the source and the
destination, and fall back to uploading the content again. In the box.com
case, the temp file is the source, and deleting it makes sure the temp file
gets cleaned up. In the 4shared case, the file may have been renamed to the
destination and so cleaning that up avoids any interference with the
re-upload to the destination.
2021-03-22 13:08:18 -04:00