That's the version in Debian stable now. And this removes a lot of ifdefs.
Also I'm pretty sure a recent commit broke building with older versions of
aws, although that could be fixed with sufficient testing.
S3: When initremote is given the name of a bucket that already exists,
automatically set datacenter to the right value, rather than needing it to
be explicitly set.
This needs aws-0.23. But, initremote stores the datacenter value, so
a remote set up this way can be used with git-annex built with an older aws.
This is not done when signature=anonymous, because in that case,
using AWS.defaultRegion works fine for accessing buckets on other
datacenters.
It feels a bit roundabout to need to do this probing. But without it,
the problem seems to be that, with a v4 signature, the location constraint
is included in the Authorization header. When that is the wrong location,
AWS S3 rejects it. I do wonder though if there is an easier way that I
am currently missing.
Sponsored-by: Dartmouth College's DANDI project
Commit 215640096f caused the default
region for S3 to change to us-east-2. This was due to relying on an
undocumented property of regionInfo, that the first item in the list is
the default region.
Avoid relying on regionInfo for defaultRegion.
Sponsored-by: Dartmouth College's DANDI project
* S3: Default to signature=v4 when using an AWS endpoint, since some
AWS regions need v4 and all support it. When host= is used to specify
a different S3 host, the default remains signature=v2.
* webapp: Support setting up S3 buckets in regions that need v4
signatures.
For the webapp, went ahead and added all current S3 regions
(except govcloud, which is not usable by everyone).
Sponsored-by: Dartmouth College's DANDI project
p2p: Added --enable option, which can be used to enable P2P networks
provided by external commands git-annex-p2p-<netname>.
Made git-annex p2p --enable tor behave the same as git-annex enable-tor,
to make tor a bit less of a special case. However, it cannot be run as root,
since it cannot take the user id parameter.
When using the new generic P2P transport to open an outgoing connection
to a peer, connProcess will hold the pid of the git-annex-p2p-<netname>
command. closeConnection simply waits for it, rather than relying on
garbage collection of the closed handles to close it.
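A minimal sketch of the idea (the record and field layout here is
illustrative, not git-annex's actual P2P connection types): keep the
ProcessHandle of the transport command alongside the connection handles,
and reap it explicitly on close.

    import Control.Monad (void)
    import System.IO (Handle, hClose)
    import System.Process (ProcessHandle, waitForProcess)

    -- Illustrative connection record: handles to the peer, plus the
    -- transport command's process when one was spawned.
    data Connection = Connection
        { connIhdl :: Handle
        , connOhdl :: Handle
        , connProcess :: Maybe ProcessHandle
        }

    -- Close the handles, then wait for the git-annex-p2p-<netname>
    -- process, rather than relying on garbage collection to reap it.
    closeConnection :: Connection -> IO ()
    closeConnection conn = do
        hClose (connIhdl conn)
        hClose (connOhdl conn)
        maybe (return ()) (void . waitForProcess) (connProcess conn)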
In Remote.Helper.Ssh, connProcess is set to Nothing, even though there
is a similar process being used there. That code stores the pid in
OpenConnection instead, and handles waiting for it itself. A bit ugly,
but not worth cleaning up at this point, maybe later.
Fix bug in handling of linked worktrees on filesystems not supporting
symlinks, that caused annexed file content to be stored in the wrong
location inside the git directory, and also caused pointer files to not get
populated.
This parameterizes functions in Annex.Locations with a GitLocationMaker.
The uses of standardGitLocationMaker are in cases where the path returned
by a function should not change when in a linked worktree. For example,
gitAnnexLink uses standardGitLocationMaker because symlink targets should
always be to ".git/annex/objects" paths, even when in a linked worktree.
Hopefully I have gotten all uses of standardGitLocationMaker right.
This also assumes that all path construction to the annex directory
is done via the functions in Annex.Locations, and there is no other,
ad-hoc construction elsewhere. Thankfully, Annex.Locations has been around
since the beginning, and has been used consistently. I think.
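A rough sketch of the shape of this parameterization (the types here are
simplified illustrations, not the actual Annex.Locations code):

    import System.FilePath ((</>))

    -- Chooses which git directory a constructed path is relative to.
    newtype GitLocationMaker = GitLocationMaker (FilePath -> FilePath)

    -- For paths that must not change when in a linked worktree, such as
    -- symlink targets, which always point at the main repository's
    -- ".git/annex/objects".
    standardGitLocationMaker :: FilePath -> GitLocationMaker
    standardGitLocationMaker maingitdir = GitLocationMaker (maingitdir </>)

    -- A location function parameterized this way builds its path with
    -- whichever maker the caller supplies.
    gitAnnexObjectDir :: GitLocationMaker -> FilePath
    gitAnnexObjectDir (GitLocationMaker mk) = mk ("annex" </> "objects")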
---
In fixupUnusualRepos, when symlinks are supported, the .git file is replaced
with a symlink to the linked worktree git directory. And in that directory,
an "annex" symlink points to the main annex directory. In that case,
it's not necessary to set mainWorkTreePath. It would be ok to set it,
but not setting it in that case allows an optimisation of avoiding reading
the "commondir" file.
The change to make fixupUnusualRepos set mainWorkTreePath when the
repository is not initialized yet is done in case the initialization itself
writes to the annex directory. If that were the case, without setting
mainWorkTreePath, the annex symlink would not be set up yet, and so
the annex directory might be created in the wrong place. Currently that
does not happen, but now that mainWorkTreePath is available, using it
here avoids any such later problem.
---
This commit does not deal with the mess of a worktree that has
experienced this bug before. In particular, if `git-annex get` were
run in such a worktree, it would have stored the object files in the
linked worktree's git directory, rather than in the main git directory.
Such misplaced object files need to be dealt with; the plan is to make
git-annex fsck notice and fix them.
A worktree that has experienced this bug before will contain unpopulated
pointer files. Those may eventually get fixed up in regular usage of
git-annex, but git-annex fsck will also fix them up.
---
Finally, this has me pondering if all of git-annex's state files should
really be stored in one common place across all linked worktrees. Should
perhaps state files that are specific to the worktree be stored per-worktree?
That has not been the case when using git-annex on filesystems supporting
symlinks, but it *has* been the case on filesystems not supporting
symlinks. Perhaps this leads to some other buggy behavior in some cases.
Or perhaps to extra work being done.
For example, the keys database has an associated files table, which
depends on the worktree. But reconcileStaged updates that table, so when
git-annex is used first in one worktree and then in another one,
reconcileStaged will update the table to reflect the current worktree.
That is extra work each
time a different worktree is used. But also, what if two git-annex
processes are running at the same time, in separate worktrees? Probably
this needs more thought and investigation.
So there is a risk that this commit exposes such buggy behavior in a
situation where it didn't happen before, due to the filesystem not
supporting symlinks. But, given how much this bug crippled using linked
worktrees in such a situation, I doubt that many people have been doing
that.
Added annex.fastcopy and remote.name.annex-fastcopy config settings. When
set, this allows the copy_file_range syscall to be used, which can eg allow
for server-side copies on NFS. (For fastest copying, also disable
annex.verify or remote.name.annex-verify.)
This is a simple implementation that does not handle resuming as well as
it possibly could.
It can be used with both local git remotes (including on NFS), and
directory special remotes. Other types of remotes could in theory also
support it, so I've left the config documented as a general thing.
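For reference, a minimal sketch of calling copy_file_range(2) from Haskell
via the FFI (not the code git-annex uses, and it assumes a libc that
exports the function):

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.Error (throwErrnoIfMinus1)
    import Foreign.C.Types (CInt (..), CSize (..), CUInt (..))
    import Foreign.Ptr (Ptr, nullPtr)
    import System.Posix.Types (COff (..), CSsize (..), Fd (..))

    -- ssize_t copy_file_range(int fd_in, off_t *off_in,
    --                         int fd_out, off_t *off_out,
    --                         size_t len, unsigned int flags);
    foreign import ccall unsafe "copy_file_range"
        c_copy_file_range
            :: CInt -> Ptr COff -> CInt -> Ptr COff -> CSize -> CUInt -> IO CSsize

    -- Copy up to len bytes between two open file descriptors at their
    -- current offsets, letting the kernel (or eg an NFS server) move the
    -- data without it passing through the process.
    copyFileRange :: Fd -> Fd -> CSize -> IO CSsize
    copyFileRange (Fd infd) (Fd outfd) len =
        throwErrnoIfMinus1 "copy_file_range" $
            c_copy_file_range infd nullPtr outfd nullPtr len 0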
It was treating paths on a remote repo as if they were local paths,
and so trying to expand git directories and so forth on them. That led to
bad results, including a path like "foo.git" getting turned into
"foo.git.git".
Sponsored-by: Dartmouth College's OpenNeuro project
This is a per-remote version of the annex.web-options config.
Had to plumb RemoteGitConfig through to getUrlOptions. In cases where a
special remote does not use curl, there was no need to do that and I used
Nothing instead.
In the case of the addurl and importfeed commands, it seemed best to say
that running these commands is not using the web special remote per se,
so the config is not used for those commands.
If an input file has been lost from all repositories, it is no longer
possible to compute the output. This will avoid dropping content that
was computed in such a situation, as well as making git-annex fsck --from
the compute remote do its usual thing when content has gone missing.
This implementation avoids recursing forever if there is a cycle,
which should not be possible anyway.
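A minimal sketch of that guard (illustrative only, not the actual
Remote.Compute code): carry the set of keys already visited while walking
a computed key's inputs, and stop when one repeats.

    import qualified Data.Set as S

    -- A key is considered available if it is present somewhere, or if it
    -- can be recomputed because all of its recorded inputs are available.
    inputsAvailable
        :: Ord key
        => (key -> IO [key])  -- inputs recorded for a computed key
        -> (key -> IO Bool)   -- is the key present in some repository?
        -> key
        -> IO Bool
    inputsAvailable inputsOf present = go S.empty
      where
        go seen k
            | k `S.member` seen = return True -- cycle; stop recursing
            | otherwise = do
                here <- present k
                if here
                    then return True
                    else do
                        ins <- inputsOf k
                        if null ins
                            then return False -- lost and not computable
                            else and <$> mapM (go (S.insert k seen)) ins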
Note that the use of RemoteStateHandle as a constructor here suggests that
this may not handle sameas remotes right, since usually a
RemoteStateHandle is constructed using the sameas uuid for a sameas
remote. That assumes a compute remote can even have or be a sameas remote,
which doesn't seem to make sense, so I have not thought through what might
happen here in detail.
This avoids a potential problem where the program sends several INPUT
before reading responses, so flushing the response to the pipe could
block. It's unlikely, but seemed worth making sure it can't happen.
This improves eg `git-annex move --to` with a compute remote that does not
contain the key. Rather than erroring with "Missing compute state" when
it checks if the key is in the remote, it proceeds to trying to store to
it, which has a nice error message.
Used by git-annex-compute-singularity to make addcomputed --fast work.
Also, simplified git-annex-compute-singularity; there is no need to hard
link the container into place. singularity does not care about the
extension of the container, so it can just be passed the annex object file.
The use case where this came up is a compute program using singularity,
where the process inside the container is allowed to write to the temp
directory, and so could make eg a symlink to /etc/shadow, which could then
be used to exfiltrate that file from the system to wherever the annex
object might be pushed.
It seemed better to fix this once in git-annex rather than in any such
compute program.
This allows rejecting output filenames that are outside the repository,
and also handles converting eg "-foo" to "./-foo" to prevent a command
it's passed to from interpreting the output filename as a dashed option.
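A minimal sketch of that normalization (the helper name is hypothetical,
not the actual git-annex function):

    import Data.List (isPrefixOf)

    -- Prefix "./" so a filename like "-foo" cannot be parsed as a dashed
    -- option by a command it is later passed to.
    guardDashedFilename :: FilePath -> FilePath
    guardDashedFilename f
        | "-" `isPrefixOf` f = "./" ++ f
        | otherwise = f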
Rather than use the filename provided by INPUT, which could come from user
input, and so could be something that looks like a dashed parameter,
use a .git/object/<sha> filename.
This avoids user input passing through INPUT and back out, with the file
path then passed to a command, which could do something unexpected with
a dashed parameter, or other special parameter.
Added a note in the design about being careful of passing user input to
commands. They still have to be careful of that in general, just not in
this case.
In this case, the compute program is run the same as if addcomputed --fast
were used, so it should succeed, without outputting a computed file.
computeInputsUnavailable is in ComputeState for simplicity, but it is
not serialized with the rest of the ComputeState.
This needed some refactoring to avoid cycles, since Remote.Compute
cannot import Remote.List. Instead, it uses Annex.remotes, which must be
populated by something else; but we know it has been, because something
is using Remote.Compute, which it must have found in the remote list,
and generating that list populates Annex.remotes.
In Remote.Compute, keyPossibilities' is called with all loggedLocations,
without the trustExclude DeadTrusted filtering that keyLocations does,
since using that would be another import cycle. This may be a problem if
a dead repository is still a remote.
This is missing cycle prevention, and it's certainly possible to make 2
files in the compute remote co-depend on one another. Hopefully that would
not happen in a real world situation, but an attacker could certainly do
it. Cycle prevention will need to be added to this.
And require it for enable as well as autoenable.
It seemed like asking for trouble for `git-annex enable foo` to use whatever
compute program is stored in the git config, without verifying that the
user wants that program to be used.
Note that it would be good to allow `git-annex enable foo program=...`
to be used without the program being in the git config. Not implemented yet
though.
Added annex.security.autoenable-compute-programs and only allow
autoenabling special remotes that use compute programs on that list.
The reason this is needed is that a user might have some compute programs
that are less safe to use than others. They might want to use an unsafe
one only with one repository, where they are the only committer or other
committers are trusted. They might be ok with others being used by any
repository, and if so they can add them to the list.
Another reason would be a user who has installed a compute program by
accident. Eg, it might be included with git-annex at some point, or
pulled in by some dependency. That user doesn't necessarily want that
compute program to be used in an autoenabled special remote.
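A minimal sketch of the resulting check (illustrative; the function name
is made up, and it assumes the config value is a space-separated list of
program names):

    -- A compute special remote may only be autoenabled when its program
    -- appears in annex.security.autoenable-compute-programs.
    allowedToAutoEnable
        :: String  -- value of annex.security.autoenable-compute-programs
        -> String  -- compute program recorded for the special remote
        -> Bool
    allowedToAutoEnable configval program = program `elem` words configval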
Using GIT keys, like those used when exporting git files to special
remotes. Except here the GIT key refers to a file checked into the git
repo.
Note that, since the compute remote uses catObject to get the content,
a symlink that is checked into git does not get followed. This is important
for security, because following a symlink and adding the content to the
repo as an annex object would allow exfiltrating content from outside
the repository.
Instead, the behavior with a symlink is to run the computation on the
symlink target. This may turn out to be confusing, and it might be worth
making addcomputed check whether the file in git is a symlink and error
out. Or it could follow symlinks as long as the destination is a file in
the repository.
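To illustrate why reading the input via the object store is safe, here is
a sketch using git cat-file directly (a stand-in for git-annex's
catObject): the blob stored for a symlink is just its target path, so
nothing outside the repository gets read.

    import System.Process (readProcess)

    -- Read a file's content from the git object store. For a symlink
    -- checked into git, this yields the link target as text, not the
    -- content of whatever the link points at.
    catBlob :: String -> IO String
    catBlob ref = readProcess "git" ["cat-file", "blob", ref] ""

For example, catBlob "HEAD:inputfile" returns the committed content of
inputfile, or its target path if it is a symlink.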
Like when getting from the web special remote, when the output of the
computation has changed, record the new hash of the content as an
equivalent key for the VURL key.
Still needs to be done for addcomputed and recompute.
I've lost track of them all, but they include:
* Using the same key backend as was used in the original computation.
* Fixing a bug that prevented updating the source file key in the compute
  state.
* Handling --reproducible and --unreproducible.
* Making recompute --original of a file using VURL, when the result is
  different but the key remains the same, update the object file with the
  new content.
* Detecting some other ways the program behavior can change, just for
  completeness.
* Also, adding --backend to addcomputed.