For these, use VURL and URL keys, with an "annex-compute:" URI prefix.
These URL keys will look something like this:
URL--annex-compute&cbar4,63pconvert,3-f4d3d72cf3f16ac9c3e9a8012bde4462
Generally it's too long so most of it gets md5summed. It's a little
ugly, but it's what fell out of the existing URL key generation
machinery. I did consider special casing this to eg
"URL--annex-compute&c4d3d72cf3f16ac9c3e9a8012bde4462". But it seems at
least possibly useful that the name of the file that was computed
remains visible, along with perhaps a word or two of the git-annex
compute command's parameters.
Note that two different output files from the same computation will get
the same URL key. And these keys should remain stable.
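To illustrate the truncate-and-md5sum scheme, here is a minimal sketch
(not the actual key generation code; the length limits are assumed for
illustration):

    import qualified Data.ByteString.Lazy.Char8 as L
    import Data.Digest.Pure.MD5 (md5)

    -- Keep a short readable prefix of an over-long "annex-compute:"
    -- uri, and append an md5sum of the whole thing.
    keyName :: String -> String
    keyName u
        | length u <= maxlen = u
        | otherwise = take keeplen u ++ "-" ++ show (md5 (L.pack u))
      where
        maxlen = 64   -- assumed limit, for illustration only
        keeplen = 20  -- keeps the filename and a word or two visible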
Working pretty well. Mostly. But:
* Does not yet support inputs that are non-annexed files checked into git
* --fast is currently broken (will need something like VURL keys)
* --unreproducible still uses a checksumming backend, so drop and get
  again will likely fail (it probably needs to use an URL key or
  something like one)
The compute special remote seems to work pretty well too. Eg,
getting from it works, and dropping content that is present in it works.
Eg, a computation might be run in "foo/" and refer to "../bar" as an
input or output.
So, the subdir is part of the computation state.
Also, prevent input or output of files that are outside the git
repository. Of course, the program can access any file on disk if it
wants to; this is just a guard against mistakes. It may also be useful
when the program communicates with something less trusted than itself,
eg a container image, so that input/output files communicated by it
cannot become the source of security problems.
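A sketch of such a guard (helper name and details are assumed, not the
actual implementation): resolve the file relative to the subdir the
computation runs in, and reject anything that escapes the top of the
repository.

    import System.FilePath ((</>), isAbsolute, normalise, splitDirectories)

    -- True when f, relative to the subdir, stays inside the repository.
    insideRepo :: FilePath -> FilePath -> Bool
    insideRepo subdir f = not (isAbsolute f) && go (0 :: Int) parts
      where
        parts = splitDirectories (normalise (subdir </> f))
        go depth ("..":rest)
            | depth <= 0 = False
            | otherwise = go (depth - 1) rest
        go depth (_:rest) = go (depth + 1) rest
        go _ [] = True

So a computation run in "foo/" may use "../bar", but "../../evil" is
rejected.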
This leaves out some of the hard parts: progress displays, incremental
verification, and getting inputs before running a computation.
Untested! In order to test this, git-annex addcomputed needs to be
implemented.
git-lfs: Added an optional apiurl parameter.
This needs version 1.2.5 of the haskell git-lfs library; stack.yaml has
been updated to use it.
Note that git-annex enableremote can be used to add apiurl= to an existing
git-lfs special remote. To allow unsetting the apiurl and going back to
the probed url, enableremote supports setting apiurl to an empty string.
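For example (the remote name and url here are hypothetical):

    git annex enableremote mylfs apiurl=https://example.com/foo/bar/info/lfs
    git annex enableremote mylfs apiurl=

The second command clears apiurl, so the url is probed again.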
Sponsored-by: Luke T. Shumaker
Note that the additional use of System.FilePath.Posix likely fixes a
problem when this is used on Windows. An AndroidPath uses "/" directory
separators; before this, on Windows, "\" would have been used.
The change to a newtype for AndroidPath serves only as documentation.
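A sketch of the pattern (the path-building helper is assumed, for
illustration): System.FilePath.Posix guarantees "/" separators even
when the build host is Windows.

    import qualified System.FilePath.Posix as P

    -- AndroidPath values always use "/" separators, so they must be
    -- built with the Posix flavor of the filepath operators.
    newtype AndroidPath = AndroidPath { fromAndroidPath :: FilePath }
        deriving (Show)

    inAndroidDir :: AndroidPath -> FilePath -> AndroidPath
    inAndroidDir (AndroidPath d) f = AndroidPath (d P.</> f)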
And follow-on changes.
Note that relatedTemplate was changed to operate on a RawFilePath, so
when it counts the length, it now counts bytes, not code points. That
just makes it truncate to somewhat shorter strings in some cases; the
truncation is still unicode aware.
When not building with the OsPath flag, toOsPath . fromRawFilePath and
fromRawFilePath . fromOsPath do extra conversions back and forth between
String and ByteString. That overhead could be avoided, but that's the
non-optimised build mode, so I didn't bother.
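A sketch of those extra conversions (encodings simplified; the real
functions go through the filesystem encoding):

    import qualified Data.ByteString.Char8 as B8
    import System.OsPath (OsPath)
    import qualified System.OsPath as OP

    -- RawFilePath (a ByteString) goes through String on the way to
    -- OsPath and back, copying each time.
    rawToOs :: B8.ByteString -> IO OsPath
    rawToOs = OP.encodeUtf . B8.unpack

    osToRaw :: OsPath -> IO B8.ByteString
    osToRaw p = B8.pack <$> OP.decodeUtf p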
Sponsored-by: unqueued
By using System.Directory.OsPath, which takes and returns OsString,
which is a ShortByteString. So things like dirContents currently have
the overhead of copying that to a ByteString, but that should be less
than the overhead of using Strings, which often in turn were converted
to RawFilePaths.
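For example, a dirContents built on System.Directory.OsPath can stay in
OsString the whole way (a sketch, not the actual implementation):

    import System.Directory.OsPath (listDirectory)
    import System.OsPath (OsPath, (</>))

    -- List a directory's contents with the directory prepended,
    -- without ever converting to String.
    dirContents :: OsPath -> IO [OsPath]
    dirContents d = map (d </>) <$> listDirectory d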
Added Utility.OsString and the OsString build flag. That flag is turned
on in the stack.yaml, and will be turned on automatically by cabal when
built with new enough libraries. The stack.yaml change is a bit ugly,
and that could be reverted for now if it causes any problems.
Note that Utility.OsString.toOsString on Windows avoids only an encoding
check that is documented as being unlikely to fail. I don't think it can
fail in git-annex; if it could, git-annex didn't contain such an
encoding check before, so at worst this is a wash.
Previously, when the git config could not be read from an ssh remote,
git-annex would try to git fetch from it to determine if the remote was
otherwise accessible. That was unnecessary work, since exit status 255
indicates a connection problem.
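A sketch of the distinction (the helper is hypothetical): ssh reserves
exit status 255 for its own failures, so a connection problem can be
recognized directly, with no further probing.

    import System.Exit (ExitCode(..))

    -- ssh exits 255 when it could not connect; any other status came
    -- from the remote command itself.
    sshConnectionFailure :: ExitCode -> Bool
    sshConnectionFailure (ExitFailure 255) = True
    sshConnectionFailure _ = False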
As well as avoiding the extra work of the fetch, this also improves
things when a ssh remote cannot be connected to due to a problem with
the git-annex ssh control socket. In that situation, ssh will also exit 255.
Before, the git fetch was tried in that situation, and would succeed, since
it does not use the git-annex ssh control socket. git-annex would conclude
that git-annex-shell was not installed on the remote, which could be wrong.
I suppose it also used to be possible for the user to need to enter an
ssh password on each connection to the remote. If they entered the wrong
password for the git-annex-shell call, but then the right password for
the git fetch, it would also incorrectly set annex-ignore; that
situation is also now fixed.
* Removed the i386ancient standalone tarball build for linux, which
was increasingly unable to support new git-annex features.
* Removed support for building with ghc older than 9.0.2, and with
  versions of haskell libraries older than those in current Debian
  stable.
* stack.yaml: Update to lts-23.2.
Note that i386ancient was targeting linux 2.6.32, which has been EOL for
over 9 years now. Any old system still using such a kernel is certainly highly
insecure. And I suspect i386ancient had its own insecurities due to haskell
libraries and C libraries not having been updated.
Added config `url.<base>.annexInsteadOf` corresponding to git's
`url.<base>.pushInsteadOf`, to configure the urls to use for accessing the
git-annex repositories on a server without needing to configure
`remote.<name>.annexUrl` in each repository.
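For example (the urls are hypothetical), with this configuration:

    [url "https://annex.example.com/"]
        annexInsteadOf = https://git.example.com/

a remote whose url is "https://git.example.com/foo" has its git-annex
repository accessed via "https://annex.example.com/foo".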
While one use case for this would be rewriting urls to use annex+http,
I decided not to add any kind of special case for that. So while
git-annex p2phttp, when serving multiple repositories, needs an url
of eg "annex+http://example.com/git-annex/" for each of them, rewriting an
url like "https://example.com/git/foo/bar" with this config set to
"https://example.com/git/" will result in eg
"annex+http://example.com/git-annex/foo/bar", which p2phttp does not
support.
That seems better dealt with in either git-annex p2phttp or a http
middleware, rather than complicating the config with a special case for
annex+http.
Anyway, there are other use cases for this that don't involve annex+http.
Logically, this should make it need a lot less memory when files have
been changed many times. In my tests, it didn't seem to change memory
use at all. I'm unsure why, since the change is working. It's possible
the Response is not
getting garbage collected due to pinning. But as far as I can see, all
parts of it that are retained get copied in a way that won't keep the
whole thing pinned in memory.
Fix infinite loop and memory blowup when importing from an unversioned S3
bucket that is large enough to need pagination.
I don't think there actually ever will be a Marker element, since a
delimiter is not set.
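A sketch of the fixed pagination loop (the names are hypothetical, not
the aws library's API): when a listing is truncated and no next marker
was provided, continue from the last listed key rather than
re-requesting the same page forever.

    import Control.Applicative ((<|>))

    type S3Key = String

    data Page = Page
        { pageKeys :: [S3Key]
        , pageIsTruncated :: Bool
        , pageNextMarker :: Maybe S3Key
        }

    -- fetch retrieves one page, starting after an optional marker.
    listAll :: (Maybe S3Key -> IO Page) -> IO [S3Key]
    listAll fetch = go Nothing
      where
        go marker = do
            page <- fetch marker
            let ks = pageKeys page
                next = pageNextMarker page <|> lastKey ks
            if pageIsTruncated page && next /= Nothing
                then (ks ++) <$> go next
                else return ks
        lastKey [] = Nothing
        lastKey ks = Just (last ks)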
Probably this code path was never tested with pagination! Also the aws
library's lack of any docs made it easy to mess up.
Versioned buckets seem to not have the same problem. The API docs for
ListObjectVersions say that NextKeyMarker will always be provided when
paginating.
Changed the protocol docs because servant parses "true" and "false" for
booleans in query parameters, not "1" and "0".
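A quick way to see this, using the http-api-data package that servant
builds on (the exact error text for the Left case is elided):

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Text (Text)
    import Web.HttpApiData (parseQueryParam)

    main :: IO ()
    main = do
        print (parseQueryParam "true" :: Either Text Bool)  -- Right True
        print (parseQueryParam "1"    :: Either Text Bool)  -- Left (message elided)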
clientPut with datapresent=True is not used by git-annex, and I don't
anticipate it being used in git-annex, except for testing.
I've tested this by making clientPut be called with datapresent=True;
git-annex copy to a remote then succeeds once the object file has first
been manually copied to the remote. That would be a good test for the
test suite, but running the http server means exposing it to at least
localhost, and the test would fail if something else was already
listening on that port.
I anticipate lots of external special remote programs will neglect
implementing this. Still, it's the right thing to do to assume that some
of them may write files out of order. Probably most external special
remotes will not be used with a proxy. When someone is using one with a
proxy, they can always get it fixed to send ORDERED.
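A sketch of how that might look in the protocol's EXTENSIONS handshake
(the other extension names here are just examples):

    git-annex -> remote: EXTENSIONS INFO ASYNC
    remote -> git-annex: EXTENSIONS ORDERED

A remote that replies with ORDERED is declaring that it writes the
files it retrieves in order, from start to finish.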
The problem was that when the proxy requests a key be retrieved to its
own temp file, fileRetriever was retrieving it to the key's temp
location, and then moving it at the end, which broke streaming.
So, plumb through the path where the key is being retrieved to.
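A sketch of the shape of the change (the signatures are simplified and
assumed, not the real types):

    newtype Key = Key String

    type MeterUpdate = Integer -> IO ()

    -- The writer callback now receives the actual destination path the
    -- caller asked for, rather than fileRetriever picking a per-key
    -- temp file and renaming it at the end (which defeated streaming).
    fileRetriever
        :: (FilePath -> Key -> MeterUpdate -> IO ())  -- writes the content
        -> Key -> FilePath -> MeterUpdate -> IO ()
    fileRetriever writer key dest meter = writer dest key meter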