This is a summary todo covering several subprojects, which would extend
git-annex to be able to use proxies which sit in front of a cluster of
repositories.

1. [[design/passthrough_proxy]]
2. [[design/p2p_protocol_over_http]]
3. [[design/balanced_preferred_content]]
4. [[todo/track_free_space_in_repos_via_git-annex_branch]]
5. [[todo/proving_preferred_content_behavior]]

## table of contents

[[!toc ]]

## planned schedule

Joey has received funding to work on this.
Planned schedule of work:

* June: git-annex proxies and clusters
* July: p2p protocol over http
* August, part 1: git-annex proxy support for exporttree
* August, part 2: balanced preferred content
* October: proving behavior of balanced preferred content with proxies
* September: streaming through proxy to special remotes (especially S3)

[[!tag projects/openneuro]]

## work notes

* Currently working on streaming download via proxy from special remote.

* Remotes using fileRetriever retrieve to the temp object file,
  before it is renamed to the requested file. In the case of a proxy,
  that is a different file, and so the proxy won't see the file until it
  has all been transferred and renamed.

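To make the streaming problem concrete, the sketch below tails a file that another process is still writing and forwards each new chunk as soon as it appears. This is a minimal, hypothetical illustration, not git-annex code: it assumes the temp object file already exists and that the expected size of the key is known, and the names and polling strategy are made up for the example.

```haskell
-- Hypothetical sketch: follow a growing file and forward bytes as they
-- arrive, stopping once the expected number of bytes has been seen.
-- Not git-annex internals; names and the polling strategy are illustrative.
import qualified Data.ByteString as B
import Control.Concurrent (threadDelay)
import System.IO

streamGrowingFile :: FilePath -> Integer -> (B.ByteString -> IO ()) -> IO ()
streamGrowingFile path expectedsize send =
    withBinaryFile path ReadMode $ \h -> go h 0
  where
    go _ sent | sent >= expectedsize = return ()
    go h sent = do
        chunk <- B.hGetSome h 65536
        if B.null chunk
            then do
                -- The writer has not caught up yet; wait briefly,
                -- reposition, and try again.
                threadDelay 100000
                hSeek h AbsoluteSeek sent
                go h sent
            else do
                send chunk
                go h (sent + fromIntegral (B.length chunk))
```

In git-annex itself this would presumably hook into the proxy's data path rather than a callback, but the shape of the problem is the same: the proxy needs a way to observe the temp file growing before the rename happens.
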
## completed items for September's work on proving behavior of preferred content

* Static analysis to detect "not present", "not balanced", and similar
  unstable preferred content expressions and avoid problems with them.
* Implemented `git-annex sim` command.
* Simulated a variety of repository networks, and random preferred content
  expressions, checking that a stable state is always reached.
* Fixed a bug that prevented anything from being stored in an empty
  repository whose preferred content expression uses sizebalanced.
  (Identified via `git-annex sim`)

## items deferred until later for balanced preferred content and maxsize tracking

* The assistant is using NoLiveUpdate, but it should be possible to plumb
  a LiveUpdate through it from preferred content checking to location log
  updating.

* `git-annex info` in the limitedcalc path in cachedAllRepoData
  double-counts redundant information from the journal due to using
  overLocationLogs. The other path does not (any more; it used to),
  and this should be fixed for consistency and correctness.

* getLiveRepoSizes has a filterM getRecentChange over the live updates.
  This could be optimised to a single SQL join. There are usually not many
  live updates, but sometimes there will be a great many recent changes,
  so the optimisation might be worth doing. Persistent is not capable
  of this; it would need a dependency added on esqueleto.

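For reference, a single join of the shape described above could look roughly like the following esqueleto sketch. This is a hypothetical illustration only: the LiveUpdate and RecentChange entities and their key columns are placeholder names assumed to be defined with persistent elsewhere, not git-annex's actual database schema.

```haskell
-- Hypothetical esqueleto sketch of replacing a filterM over live updates
-- with one SQL join. Entity and column names are placeholders and do not
-- match git-annex's real schema.
import Database.Esqueleto
import Database.Persist.Sql (SqlPersistM)

liveUpdatesWithRecentChanges :: SqlPersistM [(Entity LiveUpdate, Entity RecentChange)]
liveUpdatesWithRecentChanges =
    select $ from $ \(lu `InnerJoin` rc) -> do
        on (lu ^. LiveUpdateKeyName ==. rc ^. RecentChangeKeyName)
        return (lu, rc)
```

Whether the saving justifies the extra dependency is the open question noted above; with few live updates the filterM is cheap, and the join only pays off when there are many recent changes.
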
## completed items for August's work on balanced preferred content

* Balanced preferred content basic implementation, including --rebalance
  option.
* Implemented [[track_free_space_in_repos_via_git-annex_branch]]
* Implemented tracking of live changes to repository sizes.
* `git-annex maxsize`
* annex.fullybalancedthreshhold

## completed items for August's work on git-annex proxy support for exporttree

* Special remotes configured with exporttree=yes annexobjects=yes
  can store objects in .git/annex/objects, as well as an exported tree.

* Support proxying to special remotes configured with
  exporttree=yes annexobjects=yes.

* post-retrieve: When proxying is enabled for an exporttree=yes
  special remote and the configured remote.name.annex-tracking-branch
  is received, the tree is exported to the special remote.

* When getting from a P2P HTTP remote, prompt for credentials when
  required, instead of failing.

* Prevent `updateproxy` and `updatecluster` from adding
  an exporttree=yes special remote that does not have
  annexobjects=yes, to avoid foot shooting.

* Implement `git-annex export treeish --to=foo --from=bar`, which
  gets from bar as needed to send to foo. Make post-retrieve use
  `--to=r --from=r` to handle the multiple files case.

## items deferred until later for p2p protocol over http

* `git-annex p2phttp` should support serving several repositories at the same
  time (not as proxied remotes), so that eg, every git-annex repository
  on a server can be served on the same port.

* Support proxying to git remotes that use annex+http urls. This needs a
  translation from P2P protocol to servant-client to P2P protocol.

* Should be possible to use a git-remote-annex annex::$uuid url as
  remote.foo.url with remote.foo.annexUrl using annex+http, and so
  not need a separate web server to serve the git repository. Doesn't work
  currently because git-remote-annex urls only support special remotes.
  It would need a new form of git-remote-annex url, eg:
  annex::$uuid?annex+http://example.com/git-annex/

* `git-annex p2phttp` could support systemd socket activation. This would
  allow making a systemd unit that listens on port 80.

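For illustration, the socket handoff for systemd socket activation is small. Below is a hedged Haskell sketch (not git-annex code; it assumes network >= 3.0 and a unix-style environment) of checking the LISTEN_PID/LISTEN_FDS protocol and adopting the first passed socket:

```haskell
-- Hypothetical sketch of adopting a listening socket passed in by systemd
-- socket activation. Not git-annex code; error handling and support for
-- multiple passed sockets are elided.
import Network.Socket (Socket, mkSocket)
import System.Environment (lookupEnv)
import System.Posix.Process (getProcessID)
import Text.Read (readMaybe)

systemdActivatedSocket :: IO (Maybe Socket)
systemdActivatedSocket = do
    mpid <- lookupEnv "LISTEN_PID"
    mfds <- lookupEnv "LISTEN_FDS"
    mypid <- getProcessID
    let mypidInt = fromIntegral mypid :: Int
    case (mpid >>= readMaybe, mfds >>= readMaybe) of
        (Just pid, Just nfds)
            | pid == mypidInt, nfds >= (1 :: Int) ->
                -- systemd passes activated sockets starting at fd 3.
                Just <$> mkSocket 3
        _ -> return Nothing
```

The returned socket is already bound and listening on whatever the unit's .socket file configured (eg, port 80), so it could be handed to the HTTP server, for example via warp's runSettingsSocket, instead of binding a port directly.
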
## completed items for July's work on p2p protocol over http

* HTTP P2P protocol design [[design/p2p_protocol_over_http]].

* Addressed [[doc/todo/P2P_locking_connection_drop_safety]].

* Implemented server and client for HTTP P2P protocol.

* Added git-annex p2phttp command to serve HTTP P2P protocol.

* Make git-annex p2phttp support https.

* Allow using annex+http urls in remote.name.annexUrl.

* Make http server support proxying.

* Make http server support serving a cluster.

## items deferred until later for [[design/passthrough_proxy]]

* Check annex.diskreserve when proxying for special remotes
  to avoid the proxy's disk filling up with the temporary object file
  cached there.

* Resuming an interrupted download from a proxied special remote makes the
  proxy re-download the whole content. It could instead keep some of the
  object files around when the client does not send SUCCESS. This would
  use more disk, but without streaming, proxying a special remote already
  needs some disk. And it could be limited to eg, the last 2 or so.
  The design doc has some more thoughts about this.

* Streaming download from proxied special remotes. See design.
  (Planned for September)


* When an upload to a cluster is distributed to multiple special remotes,
  a temporary file is written for each one, which may even happen in
  parallel. This is a lot of extra work and may use excess disk space.
  It should be possible to only write a single temp file.
  (With streaming this won't be an issue.)

* Indirect uploads when proxying for a special remote
  (to be considered). See design.

* Getting a key from a cluster currently picks from among
  the lowest cost remotes at random. This could be smarter,
  eg prefer to avoid using remotes that are doing other transfers at the
  same time.


* The cost of a proxied node that is accessed via an intermediate gateway
  is currently the same as a node accessed via the cluster gateway.
  To fix this, there needs to be some way to tell how many hops through
  gateways it takes to get to a node. Currently the only way is to
  guess based on the number of dashes in the node name, which is not
  satisfying.

  Even counting hops is not very satisfying; one cluster gateway could
  be much more expensive to traverse than another one.

  If seriously tackling this, it might be worth making enough information
  available to use spanning tree protocol for routing inside clusters.

* Optimise proxy speed. See design for ideas.


* Speed: A proxy to a local git repository spawns git-annex-shell
  to communicate with it. It would be more efficient to operate
  directly on the Remote, especially when transferring content to/from it.
  But: When a cluster has several nodes that are local git repositories,
  and is sending data to all of them, this would need an alternate
  interface to `storeKey`, one which supports streaming of chunks
  of a ByteString.

* Use `sendfile()` to avoid data copying overhead when
  `receiveBytes` is being fed right into `sendBytes`.
  Library to use:
  <https://hackage.haskell.org/package/hsyscall-0.4/docs/System-Syscall.html>
  (A sketch of the idea follows this list.)

* Support using a proxy when its url is a P2P address.
  (Eg tor-annex remotes.)

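As a rough illustration of the sendfile() idea above, here is a hedged FFI sketch. It is not the binding git-annex would necessarily use (the item above points at the hsyscall package), and it assumes Linux; sendfile(2) requires the source descriptor to be something mmap-able such as a regular file, so it fits the cases where the proxied data is already on disk (for pure socket-to-socket forwarding, splice(2) is the analogous call).

```haskell
-- Hypothetical sketch of a zero-copy send: a direct FFI binding to Linux
-- sendfile(2), copying from a file descriptor that supports mmap (eg, a
-- regular file) to a connected socket without passing through userspace.
-- Not the binding git-annex would necessarily use.
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Error (throwErrnoIfMinus1)
import Foreign.C.Types (CInt, CSize)
import Foreign.Ptr (Ptr, nullPtr)
import System.Posix.Types (COff, CSsize, Fd (Fd))

foreign import ccall "sendfile"
    c_sendfile :: CInt -> CInt -> Ptr COff -> CSize -> IO CSsize

-- Send up to n bytes from the current offset of inFd to outFd,
-- returning how many bytes the kernel actually transferred.
sendfileChunk :: Fd -> Fd -> Int -> IO Int
sendfileChunk (Fd outFd) (Fd inFd) n =
    fromIntegral <$> throwErrnoIfMinus1 "sendfile"
        (c_sendfile outFd inFd nullPtr (fromIntegral n))
```

Whether wiring this into the P2P code paths is worthwhile is the open question in the item above; it only helps when one side of the transfer is a file rather than another socket.
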
## completed items for June's work on [[design/passthrough_proxy]]:

* UUID discovery via git-annex branch. Add a log file listing UUIDs
  accessible via proxy UUIDs. It also will contain the names
  of the remotes that the proxy is a proxy for,
  from the perspective of the proxy. (done)

* Add `git-annex updateproxy` command (done)

* Remote instantiation for proxies. (done)

* Implement git-annex-shell proxying to git remotes. (done)

* Proxy should update location tracking information for proxied remotes,
  so it is available to other users who sync with it. (done)

* Implement `git-annex initcluster` and `git-annex updatecluster` commands (done)

* Implement cluster UUID insertion on location log load, and removal
  on location log store. (done)

* Omit cluster UUIDs when constructing drop proofs, since lockcontent will
  always fail on a cluster. (done)

* Don't count cluster UUID as a copy in numcopies checking etc. (done)

* Tab complete proxied remotes and clusters in eg --from option. (done)

* Getting a key from a cluster should proxy from one of the nodes that has
  it. (done)

* Implement upload with fanout to multiple cluster nodes and reporting back
  additional UUIDs over P2P protocol. (done)

* Implement cluster drops, trying to remove from all nodes, and returning
  which UUIDs it was dropped from. (done)

* `git-annex testremote` works against proxied remote and cluster. (done)

* Avoid `git-annex sync --content` etc operating on cluster nodes by
  default, since syncing with a cluster implicitly syncs with its nodes. (done)

* On upload to cluster, send to nodes where it is preferred content, and not
  to other nodes. (done)

* Support annex.jobs for clusters. (done)

* Add `git-annex extendcluster` command and extend `git-annex updatecluster`
  to support clusters with multiple gateways. (done)

* Support proxying for a remote that is proxied by another gateway of
  a cluster. (done)

* Support distributed clusters: Make a proxy for a cluster repeat
  protocol messages on to any remotes that have the same UUID as
  the cluster. Needs extension to P2P protocol to avoid cycles.
  (done)

* Proxied cluster nodes should have slightly higher cost than the cluster
  gateway. (done)

* Basic support for proxying special remotes. (But not exporttree=yes ones
  yet.) (done)

* Tab complete remotes in all relevant commands (done)

* Display cluster and proxy information in git-annex info (done)