The error message is not displayed to the user, but this mirrors the
behavior when a regular get from a special remote fails. At least now
there is not a protocol error.
Now that storeKey can have a different object file passed to it, this
complication is not needed. This avoids a lot of strange situations,
and will also be needed if streaming is eventually supported.
Wanted to also list a cluster's nodes when showing info for the cluster,
but that's hard because it requires getting the name of the proxying
remote, which is some prefix of the cluster's name; if the names
contain dashes there's no good way to know which prefix it is.
Still needs some work.
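To illustrate the ambiguity, a sketch (hypothetical function, not part
of git-annex): every dash is a candidate boundary, so "foo-bar-baz"
could split into "foo" + "bar-baz" or "foo-bar" + "baz".

    candidateSplits :: String -> [(String, String)]
    candidateSplits s =
        [ (take i s, drop (i+1) s) | (i, c) <- zip [0..] s, c == '-' ]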
The waitv is necessary because without it, runNet loops back around
and reads the next protocol message before the whole bytestring has
been read, and so it reads some part of the bytestring instead.
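A minimal sketch of the idea, assuming waitv is an MVar (names and
types here are illustrative, not the real runNet code): the consumer
of the lazy bytestring signals once it has read all of it, and only
then is the next protocol message read.

    import Control.Concurrent
    import qualified Data.ByteString.Lazy as L

    relayData :: L.ByteString -> (L.ByteString -> IO ()) -> IO ()
    relayData b consumer = do
        waitv <- newEmptyMVar
        _ <- forkIO $ do
            consumer b        -- lazily reads the rest of the bytestring
            putMVar waitv ()  -- signal the whole bytestring was read
        takeMVar waitv        -- only now read the next protocol message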
Working, but lots of room for improvement...
Without streaming, so there is a delay before download begins as the
file is retrieved from the special remote.
And when resuming it retrieves the whole file from the special remote
*again*.
Also, if the special remote throws an exception, currently it
shows as "protocol error".
This allows an error message from a proxied special remote to be
displayed to the client.
In the case where removal from several nodes of a cluster fails,
there can be several errors. What to do? I decided to only show
the first error to the user. Probably in this case the user is not in a
position to do anything about an error message, so best keep it simple.
If the problem with the first node is fixed, they'll see the error from
the next node.
That error is now rethrown on the client, so it will be displayed.
For example:
$ git-annex fsck x --fast --from AMS-dir
fsck x (special remote reports: directory /home/joey/tmp/bench2/dir is not accessible) failed
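Roughly, the choice to show only the first error amounts to this
(illustrative only, assuming each node's removal yields an Either):

    firstError :: [Either String ()] -> Maybe String
    firstError rs = case [ e | Left e <- rs ] of
        (e:_) -> Just e
        []    -> Nothing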
No protocol version check is needed, because in order to talk to a
proxied special remote, the client has to be running the upcoming
git-annex release, which has this fix in it.
This is early, but already working for CHECKPRESENT.
However, when the special remote throws an exception on checkPresent,
this happens:
[2024-06-28 13:22:18.520884287] (P2P.IO) [ThreadId 4] P2P > ERROR directory /home/joey/tmp/bench2/dir is not accessible
[2024-06-28 13:22:18.521053135] (P2P.IO) [ThreadId 4] P2P < ERROR expected SUCCESS or FAILURE
git-annex: client error: expected SUCCESS or FAILURE
(fixing location log) p2pstdio: 1 failed
** Based on the location log, x
** was expected to be present, but its content is missing.
failed
This will allow having an internal thread speaking P2P protocol,
which will be needed to support proxying to external special remotes.
No serialization is done on the internal P2P protocol of course.
When a ByteString is being exchanged, it may or may not be exactly
the length indicated by DATA. While that has to be carefully managed
for the serialized P2P protocol, here it would require buffering the
whole lazy bytestring in memory to check its length when sending,
so it's better to do length checks on the receiving side.
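For example, a receiving-side check can count bytes as the chunks are
written out and compare the total with the length from DATA, without
buffering (hypothetical function, not the actual protocol code):

    import qualified Data.ByteString as S
    import qualified Data.ByteString.Lazy as L
    import System.IO

    storeChecked :: Integer -> Handle -> L.ByteString -> IO Bool
    storeChecked expectedlen h b = do
        n <- go 0 (L.toChunks b)
        return (n == expectedlen)
      where
        go n [] = return n
        go n (c:cs) = do
            S.hPut h c
            let n' = n + fromIntegral (S.length c)
            n' `seq` go n' cs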
This makes eg git-annex get default to using the cluster rather than an
arbitrary node, which is better UI.
The actual cost of accessing a proxied node vs using the cluster is
basically the same. But using the cluster allows smarter load-balancing
to be done on the cluster.
Before it was using a node that might have had a higher cost.
Also threw in a random selection from among the low cost nodes. Of
course this is a poor excuse for load balancing, but it's better than
nothing. Most of the time...
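Something like this sketch, with remotes and costs simplified to plain
values (illustrative only):

    import System.Random (randomRIO)

    pickLowCost :: [(String, Double)] -> IO (Maybe String)
    pickLowCost [] = return Nothing
    pickLowCost rs = do
        let lowestcost = minimum (map snd rs)
            candidates = [ r | (r, c) <- rs, c == lowestcost ]
        i <- randomRIO (0, length candidates - 1)
        return (Just (candidates !! i))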
When --debugfilter or annex.debugfilter is set, avoid propagating debug
output from git-annex-shell, since it cannot be filtered.
It would be possible to pass --debugfilter on to git-annex-shell,
but it only started accepting that option in 2022. So it would break
interop with older versions.
The VIA extension is still needed to avoid some extra work and ugly
messages, but this is enough that it actually works.
This filters out the RemoteSides that are a proxied connection via a
remote gateway to the cluster.
The VIA extension will not filter those out, but will send VIA to them
on connect, which will cause the ones that are accessed via the listed
gateways to be filtered out.
Walking a tightrope between security and convenience here, because
git-annex-shell needs to only proxy for things when there has been
an explicit, local action to configure them.
In this case, the user has to have run `git-annex extendcluster`,
which now sets annex-cluster-gateway on the remote.
Note that any repositories that the gateway is recorded to
proxy for will be proxied onward. This is not limited to cluster nodes,
because checking the node log would not add any security; someone could
add any uuid to it. The gateway of course then does its own
checking to determine if it will allow proxying for the remote.
When there are multiple gateways to a cluster, this sets up proxying
for nodes that are accessed via a remote gateway.
Eg, when running in nyc and amsterdam is the remote gateway,
and it has node1 and node2, this sets up proxying for
amsterdam-node1 and amsterdam-node2. A client that has nyc as a remote
will see proxied remotes nyc-amsterdam-node1 and nyc-amsterdam-node2.
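The naming is simple concatenation with a dash (hypothetical helper,
shown only to make the example concrete):

    proxiedNodeName :: String -> String -> String
    proxiedNodeName gatewayremote node = gatewayremote ++ "-" ++ node

    -- proxiedNodeName "nyc" "amsterdam-node1" == "nyc-amsterdam-node1"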
Just look at the existing proxied remotes that correspond to already
existing nodes of the cluster, and keep those nodes in the cluster,
while adding any remotes of the local repo that are configured as
cluster nodes. This allows removing cluster nodes from the local repo
and updating, without it also removing nodes provided by other gateways.
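A sketch of that update, treating nodes as plain strings (illustrative,
not the real types):

    import qualified Data.Set as S

    updatedNodes :: S.Set String -> S.Set String -> S.Set String -> S.Set String
    updatedNodes existingnodes proxiednodes localconfigured =
        (existingnodes `S.intersection` proxiednodes)
            `S.union` localconfigured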