This is a summary todo covering several subprojects, which would extend
git-annex to be able to use proxies which sit in front of a cluster of
repositories.

1. [[design/passthrough_proxy]]
2. [[design/p2p_protocol_over_http]]
3. [[design/balanced_preferred_content]]
4. [[todo/track_free_space_in_repos_via_git-annex_branch]]
5. [[todo/proving_preferred_content_behavior]]

## table of contents

[[!toc ]]

## planned schedule

Joey has received funding to work on this.
Planned schedule of work:

* June: git-annex proxies and clusters
* July, part 1: p2p protocol over http
* July, part 2: git-annex proxy support for exporttree
* August: balanced preferred content
* September: streaming through proxy to special remotes (especially S3)
* October: proving behavior of balanced preferred content with proxies

[[!tag projects/openneuro]]
## work notes

* When part of a file has been sent to a cluster via the http server,
  the transfer is interrupted, another node is added to the cluster,
  and the transfer of the file is performed again, there is a failure
  sending to the node that had an incomplete copy. It fails like this:

        //home/joey/tmp/bench/c3/.git/annex/tmp/SHA256E-s1048576000--09d7e19983a65682138fa5944f135e4fc593330c2693c41d22cc7881443d6060: withBinaryFile: illegal operation
        git-annex: transfer already in progress, or unable to take transfer lock
        p2pstdio: 1 failed

  When using ssh rather than the http server, the node that had the
  incomplete copy also doesn't get the file, although no error is displayed.

* When proxying a PUT to a special remote, no verification of the received
  content is done; it's just written to a file, and that file is sent to the
  special remote. This violates the usual invariant that any data being
  received into a repository gets verified in passing. On the other hand,
  when sending data to a special remote normally, there is also no
  verification. (A sketch of how verification could be done incrementally
  is below.)

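For the verification issue above, here is a minimal sketch of how received data could be hashed incrementally while it is written to the temporary file, so that bad content could be rejected before it is ever offered to the special remote. It uses cryptonite's incremental SHA256 API; the function shape and the chunk source are illustrative assumptions, not git-annex's actual internals.

    import Crypto.Hash (Context, Digest, SHA256, hashFinalize, hashInit, hashUpdate)
    import qualified Data.ByteString as B
    import System.IO

    -- Write chunks received from the http client to the temporary object
    -- file, hashing them in passing. An empty chunk signals end of input.
    -- Returns False when the content does not match the expected digest,
    -- in which case it should never be sent on to the special remote.
    sinkAndVerify :: FilePath -> Digest SHA256 -> IO B.ByteString -> IO Bool
    sinkAndVerify tmpfile expected getchunk =
        withBinaryFile tmpfile WriteMode $ \h -> go h hashInit
      where
        go h ctx = do
            chunk <- getchunk
            if B.null chunk
                then return (hashFinalize ctx == expected)
                else do
                    B.hPut h chunk
                    go h (hashUpdate ctx chunk)

In the real protocol the expected digest would come from the key; the point is only that verification can happen in passing, without a second read of the temporary file.
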
## items deferred until later for p2p protocol over http

* `git-annex p2phttp` should support serving several repositories at the same
  time (not as proxied remotes), so that eg, every git-annex repository
  on a server can be served on the same port.

* Support proxying to git remotes that use annex+http urls. This needs a
  translation from the P2P protocol to servant-client calls and back to the
  P2P protocol. (A rough sketch is after this list.)

* It should be possible to use a git-remote-annex annex::$uuid url as
  remote.foo.url, with remote.foo.annexUrl using annex+http, and so
  not need a separate web server to serve the git repository. That doesn't
  work currently because git-remote-annex urls only support special remotes.
  It would need a new form of git-remote-annex url, eg:
  annex::$uuid?annex+http://example.com/git-annex/

* `git-annex p2phttp` could support systemd socket activation. This would
  allow making a systemd unit that listens on port 80. (A sketch of reading
  an activated socket is after this list.)

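For the annex+http proxying item, here is a rough sketch of the shape such a translation could take, using servant-client's ClientEnv and runClientM. The P2P message types and the per-verb client functions are hypothetical stand-ins for whatever the generated HTTP client API provides; only the servant-client plumbing is real.

    import Network.HTTP.Client (defaultManagerSettings, newManager)
    import Servant.Client (ClientEnv, ClientM, mkClientEnv, parseBaseUrl, runClientM)

    -- Hypothetical stand-ins for git-annex's P2P protocol messages.
    data P2PRequest = CheckPresent String | Remove String
    data P2PResponse = Success | Failure String

    -- Hypothetical servant-client functions derived from the HTTP P2P API.
    checkPresentClient :: String -> ClientM Bool
    checkPresentClient = undefined
    removeClient :: String -> ClientM Bool
    removeClient = undefined

    -- Interpret one incoming P2P request as an HTTP call to the annex+http
    -- remote, and translate the result back into a P2P response.
    relay :: ClientEnv -> P2PRequest -> IO P2PResponse
    relay env req = do
        res <- runClientM (translate req) env
        return $ case res of
            Right True -> Success
            Right False -> Failure "remote refused"
            Left err -> Failure (show err)
      where
        translate (CheckPresent key) = checkPresentClient key
        translate (Remove key) = removeClient key

    mkEnv :: String -> IO ClientEnv
    mkEnv url = do
        mgr <- newManager defaultManagerSettings
        base <- parseBaseUrl url
        return (mkClientEnv mgr base)

The interesting part is only the interpreter pattern: each P2P protocol verb maps to one client call and each result maps back, so the proxying code would not need to care whether the far side speaks ssh or http.
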
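For the socket activation item, here is a sketch of picking up a systemd-activated listening socket by hand. The LISTEN_PID/LISTEN_FDS environment variables and the convention that activated descriptors start at fd 3 are part of the systemd protocol; how this would be wired into `git-annex p2phttp` (eg handing the socket to warp's runSettingsSocket) is left open.

    import Network.Socket (Socket, mkSocket)
    import System.Environment (lookupEnv)
    import System.Posix.Process (getProcessID)

    -- Return the first systemd-activated socket, if this process was
    -- started via socket activation: LISTEN_PID names this process and
    -- LISTEN_FDS is at least 1. Activated descriptors start at fd 3.
    systemdSocket :: IO (Maybe Socket)
    systemdSocket = do
        mpid <- lookupEnv "LISTEN_PID"
        mfds <- lookupEnv "LISTEN_FDS"
        mypid <- getProcessID
        case (mpid, mfds) of
            (Just pid, Just fds)
                | pid == show mypid && (read fds :: Int) >= 1 ->
                    Just <$> mkSocket 3
            _ -> return Nothing

With that, a systemd .socket unit listening on port 80 could start the server without the server needing to bind a privileged port itself.
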
## completed items for July's work on p2p protocol over http

* HTTP P2P protocol design [[design/p2p_protocol_over_http]].

* Addressed [[doc/todo/P2P_locking_connection_drop_safety]].

* Implemented server and client for HTTP P2P protocol.

* Added git-annex p2phttp command to serve HTTP P2P protocol.

* Make git-annex p2phttp support https.

* Allow using annex+http urls in remote.name.annexUrl.

* Make http server support proxying.

* Make http server support serving a cluster.

## items deferred until later for [[design/passthrough_proxy]]

* Check annex.diskreserve when proxying for special remotes,
  to avoid the proxy's disk filling up with the temporary object file
  cached there.

* Resuming an interrupted download from a proxied special remote makes the
  proxy re-download the whole content. It could instead keep some of the
  object files around when the client does not send SUCCESS. This would
  use more disk, but without streaming, proxying a special remote already
  needs some disk. And it could minimize that to eg, the last 2 or so.
  The design doc has some more thoughts about this.
  (A sketch of pruning such a cache is after this list.)

* Streaming download from proxied special remotes. See design.
  (Planned for September)

* When an upload to a cluster is distributed to multiple special remotes,
  a temporary file is written for each one, which may even happen in
  parallel. This is a lot of extra work and may use excess disk space.
  It should be possible to only write a single temp file.
  (With streaming this won't be an issue.)

* Indirect uploads when proxying for special remote
  (to be considered). See design.

* Getting a key from a cluster currently picks from among
  the lowest cost remotes at random. This could be smarter,
  eg prefer to avoid using remotes that are doing other transfers at the
  same time. (A sketch is after this list.)

* The cost of a proxied node that is accessed via an intermediate gateway
  is currently the same as a node accessed via the cluster gateway.
  To fix this, there needs to be some way to tell how many hops through
  gateways it takes to get to a node. Currently the only way is to
  guess based on the number of dashes in the node name, which is not
  satisfying. (A sketch of that heuristic is after this list.)

  Even counting hops is not very satisfying; one cluster gateway could
  be much more expensive to traverse than another one.

  If seriously tackling this, it might be worth making enough information
  available to use spanning tree protocol for routing inside clusters.

* Optimise proxy speed. See design for ideas.

* Speed: A proxy to a local git repository spawns git-annex-shell
  to communicate with it. It would be more efficient to operate
  directly on the Remote, especially when transferring content to/from it.
  But: When a cluster has several nodes that are local git repositories,
  and is sending data to all of them, this would need an alternate
  interface to `storeKey`, one which supports streaming of chunks
  of a ByteString. (A sketch of such an interface is after this list.)

* Use `sendfile()` to avoid data copying overhead when
  `receiveBytes` is being fed right into `sendBytes`.
  Library to use:
  <https://hackage.haskell.org/package/hsyscall-0.4/docs/System-Syscall.html>
  (A sketch of calling sendfile directly is after this list.)

* Support using a proxy when its url is a P2P address.
  (Eg tor-annex remotes.)

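For the resumed-download item, here is a minimal sketch of bounding the disk used by kept partial object files: after each failed transfer, prune the cache directory down to the most recently used few. The directory layout and the limit are assumptions for illustration, not what git-annex actually does.

    import Control.Monad (forM, forM_)
    import Data.List (sortOn)
    import Data.Ord (Down(..))
    import System.Directory (getModificationTime, listDirectory, removeFile)
    import System.FilePath ((</>))

    -- Keep only the N most recently modified files in the proxy's cache of
    -- partially downloaded objects, deleting the rest.
    pruneProxyCache :: Int -> FilePath -> IO ()
    pruneProxyCache keep dir = do
        files <- map (dir </>) <$> listDirectory dir
        stamped <- forM files $ \f -> do
            t <- getModificationTime f
            return (t, f)
        let victims = drop keep (map snd (sortOn (Down . fst) stamped))
        forM_ victims removeFile

Calling something like `pruneProxyCache 2 cachedir` when a client fails to send SUCCESS would keep the last 2 or so partial files around for resuming while bounding disk use.
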
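For the key-retrieval item, a sketch of a slightly smarter pick: still restrict to the lowest cost remotes, but prefer ones with no transfer already running, falling back to the current random choice when all of them are busy. The Remote fields and the busy set are illustrative stand-ins, not git-annex's actual types.

    import qualified Data.Set as S
    import System.Random (randomRIO)

    -- Illustrative stand-in for git-annex's remote type.
    data Remote = Remote { remoteUUID :: String, remoteCost :: Double }

    -- Pick a node to get a key from: among the lowest cost remotes, prefer
    -- idle ones, and break the remaining tie at random.
    pickNode :: S.Set String -> [Remote] -> IO (Maybe Remote)
    pickNode busy rs = case candidates of
        [] -> return Nothing
        cs -> Just . (cs !!) <$> randomRIO (0, length cs - 1)
      where
        lowest = minimum (map remoteCost rs)
        cheapest = filter ((== lowest) . remoteCost) rs
        idle = filter (\r -> remoteUUID r `S.notMember` busy) cheapest
        candidates
            | null rs = []
            | null idle = cheapest
            | otherwise = idle
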
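For the node cost item, the current heuristic and a possible refinement expressed as a sketch: count the dashes in the node name to guess gateway hops, and charge a per-hop penalty on top of the gateway's cost. The numbers are arbitrary placeholders, and as noted above, hop counting is itself unsatisfying.

    -- Guess how many gateway hops away a proxied node is, using the only
    -- information currently available: the dashes in its name
    -- (eg "gateway-othergateway-node" is two hops).
    estimatedHops :: String -> Int
    estimatedHops name = max 1 (length (filter (== '-') name))

    -- Charge a per-hop penalty on top of the cluster gateway's own cost,
    -- so nodes behind an intermediate gateway look more expensive.
    nodeCost :: Double -> String -> Double
    nodeCost gatewaycost name =
        gatewaycost + perhop * fromIntegral (estimatedHops name - 1)
      where
        perhop = 5.0
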
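For the local-repository speed item, a sketch of the shape an alternate, streaming-capable storeKey interface could have. Key and MeterUpdate correspond to git-annex concepts, but these types and names are only an illustration, not a proposal of the actual API.

    import qualified Data.ByteString as B

    -- Illustrative stand-ins.
    newtype Key = Key String
    type MeterUpdate = Integer -> IO ()

    -- Instead of taking a complete file on disk, the store operation pulls
    -- chunks from a producer action (an empty chunk means end of input).
    -- A cluster gateway sending to several local nodes could then read each
    -- chunk once and feed it to every node's store action.
    data StreamableRemote = StreamableRemote
        { storeKeyStream :: Key -> MeterUpdate -> IO B.ByteString -> IO Bool
        }
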
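For the sendfile item, here is what calling the Linux syscall directly looks like via the FFI; I have not checked the hsyscall package linked above, so this binds the C function itself. The prototype is the documented one: ssize_t sendfile(int out_fd, int in_fd, off_t *offset, size_t count).

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.Types (CInt (..), CSize (..))
    import Foreign.Ptr (Ptr, nullPtr)
    import System.Posix.Types (COff, CSsize, Fd (..))

    foreign import ccall unsafe "sys/sendfile.h sendfile"
        c_sendfile :: CInt -> CInt -> Ptr COff -> CSize -> IO CSsize

    -- Ask the kernel to move up to len bytes from infd to outfd without
    -- copying the data through userspace. Returns the number of bytes
    -- actually sent, or -1 on error; a real implementation would loop and
    -- use throwErrnoIfMinus1 rather than ignoring failures.
    sendfileFd :: Fd -> Fd -> Int -> IO Int
    sendfileFd (Fd outfd) (Fd infd) len =
        fromIntegral <$> c_sendfile outfd infd nullPtr (fromIntegral len)

Whether this is actually usable when `receiveBytes` feeds `sendBytes` depends on what kind of file descriptors are on each side, since Linux requires the source descriptor to support mmap.
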
## completed items for June's work on [[design/passthrough_proxy]]:

* UUID discovery via git-annex branch. Add a log file listing UUIDs
  accessible via proxy UUIDs. It also will contain the names
  of the remotes that the proxy is a proxy for,
  from the perspective of the proxy. (done)

* Add `git-annex updateproxy` command (done)

* Remote instantiation for proxies. (done)

* Implement git-annex-shell proxying to git remotes. (done)

* Proxy should update location tracking information for proxied remotes,
  so it is available to other users who sync with it. (done)

* Implement `git-annex initcluster` and `git-annex updatecluster` commands (done)

* Implement cluster UUID insertion on location log load, and removal
  on location log store. (done)

* Omit cluster UUIDs when constructing drop proofs, since lockcontent will
  always fail on a cluster. (done)

* Don't count cluster UUID as a copy in numcopies checking etc. (done)

* Tab complete proxied remotes and clusters in eg --from option. (done)

* Getting a key from a cluster should proxy from one of the nodes that has
  it. (done)

* Implement upload with fanout to multiple cluster nodes and reporting back
  additional UUIDs over P2P protocol. (done)

* Implement cluster drops, trying to remove from all nodes, and returning
  which UUIDs it was dropped from. (done)

* `git-annex testremote` works against proxied remote and cluster. (done)

* Avoid `git-annex sync --content` etc from operating on cluster nodes by
  default, since syncing with a cluster implicitly syncs with its nodes. (done)

* On upload to cluster, send to nodes where it is preferred content, and not
  to other nodes. (done)

* Support annex.jobs for clusters. (done)

* Add `git-annex extendcluster` command and extend `git-annex updatecluster`
  to support clusters with multiple gateways. (done)

* Support proxying for a remote that is proxied by another gateway of
  a cluster. (done)

* Support distributed clusters: Make a proxy for a cluster repeat
  protocol messages on to any remotes that have the same UUID as
  the cluster. Needs extension to P2P protocol to avoid cycles.
  (done)

* Proxied cluster nodes should have slightly higher cost than the cluster
  gateway. (done)

* Basic support for proxying special remotes. (But not exporttree=yes ones
  yet.) (done)

* Tab complete remotes in all relevant commands (done)

* Display cluster and proxy information in git-annex info (done)