Had to add Read instances to Key and NumCopies and some other similar
types. I only expect to use those in serializing a sim. Of course, this
risks that implementation changes break reading old data. For a sim,
that would not be a big problem.
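For illustration only, a minimal sketch (using a hypothetical stand-in
type, not the real Key or NumCopies) of relying on derived Read/Show
instances to round-trip a value through its textual form:

    -- Hypothetical stand-in type, not the real NumCopies.
    newtype SimCopies = SimCopies Int
        deriving (Show, Read, Eq)

    -- Serialize with show, recover with reads; Nothing means the stored
    -- text no longer parses, e.g. after an implementation change.
    parseBack :: String -> Maybe SimCopies
    parseBack s = case reads s of
        [(v, "")] -> Just v
        _         -> Nothing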
Detect when a preferred content expression contains "not present", which
would lead to repeatedly getting and then dropping files, and make it never
match. This also applies to "not balanced" and "not sizebalanced".
--explain will tell the user when this happens.
Note that getMatcher calls matchMrun' and does not check for unstable
negated limits. While there is no --present anyway, if there were,
it would not make sense for --not --present to complain about
instability and fail to match.
Might want to make --rebalance turn balanced=group:N where N > 1
into fullysizebalanced=group:N. Have not yet determined if that will
improve situations enough to be worth the extra work.
This all works fine. But it doesn't check repository sizes yet, and
without repository size checking, once a repository gets full, there
will be no other repository that will want its files.
Use of sha2 seems unnecessary; probably adler32 or md5 or crc would have
been enough. Possibly just summing up the bytes of the key mod the number
of repositories would have sufficed. But sha2 is there, and probably
hardware accelerated. I doubt very much there is any security benefit
to using it though. If someone wants to construct a key that will be
balanced onto a given repository, sha2 is certainly not going to stop
them.
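As a rough illustration, a minimal sketch of that simpler alternative
(purely hypothetical, not what the code actually does): sum the bytes of
the key name and take the result modulo the number of repositories:

    import Data.Char (ord)

    -- Pick one of numrepos repositories (numbered 0..numrepos-1) for a
    -- key, by summing the bytes of its name modulo the repository count.
    pickRepository :: String -> Int -> Int
    pickRepository keyname numrepos = sum (map ord keyname) `mod` numrepos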
This is similar to git-annex copy --from --to, in that it downloads a
local copy, locks it for removal, uploads it, and drops it. Removal of
the temporary local copy is done without verifying numcopies for the
same reason as that command.
I do wonder, looking at this, if there's a race where the local copy
gets used as a copy to allow some other drop in the narrow window after
it is downloaded and before it gets locked for removal. That would need
some other repository to have an out of date location log that says the
repository contains a copy of the key, in order for it to try to use it
as a copy. If there is such a race, git-annex copy/move would also be
vulnerable to it. It would be better to lock it for removal before
starting to download it! That is possible in v10 repositories, which do
use a separate content lock file.
Note that, when the exported tree contains several files that use the
same key, it will be downloaded repeatedly, once for each time it needs
to be uploaded. It would be possible to avoid that extra work, but it would
complicate this since the local copy would need to be preserved, locked
for removal, until the end. Also, that would mean that interrupting the
export would leave possibly a lot of temporarily downloaded keys in the
local repository, while currently it can only leave one.
When getting from a P2P HTTP remote, prompt for credentials when required,
instead of failing.
This feels like it might be a bug in servant-client. withClientM's type
suggests it would not throw a ClientError. But it does in this case.
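A minimal sketch of a workaround, under the assumption that servant-client
can throw ClientError as an exception even though withClientM already
passes Either ClientError a to its continuation (the helper name
runClientSafely is made up):

    import Control.Exception (try)
    import Servant.Client (ClientEnv, ClientError, ClientM, withClientM)

    -- Run a ClientM action, catching a ClientError whether it is
    -- returned in the Either or thrown as an exception.
    runClientSafely :: ClientEnv -> ClientM a -> IO (Either ClientError a)
    runClientSafely env action = do
        r <- try (withClientM action env pure)
        pure $ case r of
            Left err -> Left err   -- thrown despite the Either in the type
            Right v  -> v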
The file corruption consists of each chunk of the file being duplicated.
Since chunks are typically a fixed size, it would certainly be possible
to get from a corrupted file back to the original file. But this is still
bad data loss.
Reversion was in commit fcc052bed8.
Luckily that did not make the most recent release.
It works when using git-annex sync/push/assist, or when manually sending
all content to the proxied remote before pushing to the proxy remote.
But when the push comes before the content is sent, sending content does
not update the exported tree.
This fixes a problem with datalad's test suite, where loading the cluster
log happened to cause the git-annex branch commits to take a different
shape, with an additional commit.
It's also faster though, since many commands don't need the cluster log.
Just fill Annex.clusters with a thunk.
Sponsored-by: the NIH-funded NICEMAN (ReproNim TR&D3) project
Primarily because Windows needs a dependency bump to get stm-2.5.1
for the Servant build flag.
This includes Win32-2.13.4.0 and aws-0.24, which add some features
that Windows had been missing out on as well.
Lots of warnings about head and tail will eventually need to be
addressed. Of course, AFAIK the uses of them in git-annex are all safe.
While usually uploading to a special remote does not verify the content,
the content in a repository is assumed to be valid, and there is no trust
boundary. But with a proxied special remote, there may be users who are
allowed to store objects, but are not really trusted.
Another way to look at this is that it's the equivalent of git-annex-shell
checking the hash of received data, which it does (see StoreContent
implementation).
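For illustration, a minimal sketch of that kind of check (assuming the
cryptonite Crypto.Hash API; not the actual StoreContent implementation):
hash the received bytes and compare against the hex digest expected for
the key:

    import Crypto.Hash (Digest, SHA256, hashlazy)
    import qualified Data.ByteString.Lazy as L

    -- Accept received content only if its SHA256 matches the hex digest
    -- the key says it should have.
    verifyReceived :: String -> L.ByteString -> Bool
    verifyReceived expectedhex content =
        show (hashlazy content :: Digest SHA256) == expectedhex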
Only implemented server side, not used client side yet.
And not yet implemented for proxies/clusters, for which there's a build
warning about unhandled cases.
This is P2P protocol version 3. Probably will be the only change in that
version.
Added a dependency on clock to access a monotonic clock.
On i386-ancient, that is at version 0.2.0.0.
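A minimal sketch (not git-annex code) of using the clock package's
monotonic clock to measure elapsed time; note that diffTimeSpec and
toNanoSecs are from recent versions of the package and may not be
available in 0.2.0.0:

    import System.Clock (Clock(Monotonic), getTime, diffTimeSpec, toNanoSecs)

    -- Time an IO action using the monotonic clock, which is not affected
    -- by changes to the system clock.
    timed :: IO a -> IO (a, Integer)
    timed action = do
        start <- getTime Monotonic
        r <- action
        end <- getTime Monotonic
        pure (r, toNanoSecs (diffTimeSpec end start))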
This was caused by commit fb8ab2469d putting
an isPointerFile check in the wrong place. So if the file was not a pointer
file at that point, but got replaced by one before the file got locked
down, the pointer file would be ingested into the annex.
The fix is simply to move the isPointerFile check to after safeToAdd locks
down the file. Now if the file changes to a pointer file after the
isPointerFile check, ingestion will see that it changed after lockdown,
and will refuse to add it to the annex.
Sponsored-by: the NIH-funded NICEMAN (ReproNim TR&D3) project
When --debugfilter or annex.debugfilter is set, avoid propagating debug
output from git-annex-shell, since it cannot be filtered.
It would be possible to pass --debugfilter on to git-annex-shell,
but it only started accepting that option in 2022. So it would break
interop with older versions.