This is a summary todo covering several subprojects, which would extend
git-annex to be able to use proxies which sit in front of a cluster of
repositories.
1. [[design/passthrough_proxy]]
2. [[design/p2p_protocol_over_http]]
3. [[design/balanced_preferred_content]]
4. [[todo/track_free_space_in_repos_via_git-annex_branch]]
5. [[todo/proving_preferred_content_behavior]]
## table of contents
[[!toc ]]
## planned schedule
Joey has received funding to work on this.
Planned schedule of work:
* June: git-annex proxies and clusters
* July: p2p protocol over http
* August, part 1: git-annex proxy support for exporttree
* August, part 2: [[track_free_space_in_repos_via_git-annex_branch]]
* September, part 1: balanced preferred content
* September, part 2: streaming through proxy to special remotes (especially S3)
* October, part 1: streaming through proxy continued
* October, part 2: proving behavior of balanced preferred content with proxies
[[!tag projects/openneuro]]
## work notes
* `git-annex assist --rebalance` of `balanced=foo:2`
sometimes needs several runs to stabilize.
May not be a bug, needs reproducing and analysis.
* Test that live repo size data is correct and really works.
* Avoid using checkLiveUpdate except when checking a preferred content
expression that does use balanced preferred content. No reason to pay
its time penalty otherwise.
* When loading the live update table, check if the PIDs in it are still
running (and are still git-annex processes), and if not, remove the
stale entries, which can accumulate when processes are interrupted.
Note that it is ok if a different git-annex process that happens to be
running at the same pid keeps a stale item in the live update table,
because that is unlikely, and exponentially unlikely to happen
repeatedly, so stale information will only be used for a short time.
But then, how to check whether a PID belongs to git-annex? /proc of
course, but what about other OSes? Windows?
How? Possibly have a thread that
waits on an empty MVar, with the MVar threaded through somehow to the
location log update. (It seems this would need checking preferred
content to return the MVar? Or alternatively, the MVar could be passed
into it, which seems better.) Fill the MVar on location log update. If
the MVar gets GCed without being filled, the thread will get an
exception and can remove the entry from the table and cache then. This
does rely on GC behavior, but if the GC takes some time, it will only
cause a failed upload to take longer to be removed from the table and
cache, which will just prevent another upload of a different key from
running immediately.
(Need to check if MVar GC behavior operates like this.
See https://stackoverflow.com/questions/10871303/killing-a-thread-when-mvar-is-garbage-collected )
Perhaps stale entries can be found in a different way. Require the live
update table to be updated with a timestamp every 5 minutes. The thread
that waits on the MVar can do that, as long as the transfer is running.
If interrupted, the entry will become stale in 5 minutes, which is
probably good enough? Could do it every minute, depending on overhead.
To avoid sqlite overhead, this could also be done by repeatedly
touching a file whose name includes the process's pid.
(A sketch of the /proc check and the MVar watcher thread follows this
list.)
* The assistant is using NoLiveUpdate, but it should be possible to plumb
a LiveUpdate through it from preferred content checking to location log
updating.
* `git-annex info` in the limitedcalc path in cachedAllRepoData
double-counts redundant information from the journal due to using
overLocationLogs. In the other path it does not, and this should be fixed
for consistency and correctness.
* getLiveRepoSizes has a filterM getRecentChange over the live updates.
This could be optimised to a single sql join. There are usually not many
live updates, but sometimes there will be a great many recent changes,
so it might be worth doing this optimisation.
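
For the stale live update table entry item above, here is a minimal
sketch of the two checks it discusses, assuming Linux's /proc and GHC's
behavior of delivering BlockedIndefinitelyOnMVar to a thread blocked on
an MVar that has become unreachable. LiveUpdateId and
removeFromLiveUpdateTable are hypothetical placeholders, not
git-annex's actual internals.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (MVar, readMVar)
    import Control.Exception (BlockedIndefinitelyOnMVar(..), handle)
    import System.Directory (doesDirectoryExist)
    import System.FilePath ((</>))
    import System.IO.Error (catchIOError)
    import System.Posix.Types (ProcessID)

    -- Hypothetical handle for a row in the live update table.
    newtype LiveUpdateId = LiveUpdateId Int

    -- Hypothetical; stands in for the real sqlite removal.
    removeFromLiveUpdateTable :: LiveUpdateId -> IO ()
    removeFromLiveUpdateTable _ = return ()

    -- Linux-only check of whether a pid is a still-running git-annex,
    -- done by reading the command name from /proc/<pid>/comm.
    -- Other OSes, and Windows, would need a different implementation.
    pidIsGitAnnex :: ProcessID -> IO Bool
    pidIsGitAnnex pid = catchIOError check (const (return False))
      where
        procdir = "/proc" </> show pid
        check = do
            exists <- doesDirectoryExist procdir
            if exists
                then do
                    comm <- readFile (procdir </> "comm")
                    return $! takeWhile (/= '\n') comm == "git-annex"
                else return False

    -- Watcher thread that blocks on an MVar which the location log
    -- update is expected to fill. If the MVar is instead GCed without
    -- being filled (eg the transfer failed), GHC raises
    -- BlockedIndefinitelyOnMVar in this thread, which then removes the
    -- stale entry.
    watchLiveUpdate :: LiveUpdateId -> MVar () -> IO ()
    watchLiveUpdate lid done = do
        _ <- forkIO $
            handle (\BlockedIndefinitelyOnMVar -> removeFromLiveUpdateTable lid) $
                readMVar done
        return ()

The timestamp alternative would have the same watcher thread wake up
every minute or few and refresh the entry's timestamp (or touch a
pid-named file) for as long as the transfer is running.
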
## completed items for August's work on balanced preferred content
* Balanced preferred content basic implementation, including --rebalance
option.
* Implemented [[track_free_space_in_repos_via_git-annex_branch]]
* Implemented tracking of live changes to repository sizes.
* `git-annex maxsize`
* annex.fullybalancedthreshhold
## completed items for August's work on git-annex proxy support for exporttree
* Special remotes configured with exporttree=yes annexobjects=yes
can store objects in .git/annex/objects, as well as an exported tree.
* Support proxying to special remotes configured with
exporttree=yes annexobjects=yes.
* post-retrieve: When proxying is enabled for an exporttree=yes
special remote and the configured remote.name.annex-tracking-branch
is received, the tree is exported to the special remote.
* When getting from a P2P HTTP remote, prompt for credentials when
required, instead of failing.
* Prevent `updateproxy` and `updatecluster` from adding
an exporttree=yes special remote that does not have
annexobjects=yes, to avoid foot shooting.
* Implement `git-annex export treeish --to=foo --from=bar`, which
gets from bar as needed to send to foo. Make post-retrieve use
`--to=r --from=r` to handle the multiple files case.
## items deferred until later for p2p protocol over http
* `git-annex p2phttp` should support serving several repositories at the same
time (not as proxied remotes), so that eg, every git-annex repository
on a server can be served on the same port.
* Support proxying to git remotes that use annex+http urls. This needs a
translation from P2P protocol to servant-client to P2P protocol.
* Should be possible to use a git-remote-annex annex::$uuid url as
remote.foo.url with remote.foo.annexUrl using annex+http, and so
not need a separate web server to serve the git repository. Doesn't work
currently because git-remote-annex urls only support special remotes.
It would need a new form of git-remote-annex url, eg:
annex::$uuid?annex+http://example.com/git-annex/
* `git-annex p2phttp` could support systemd socket activation. This would
allow making a systemd unit that listens on port 80.
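
Here is a minimal sketch of how socket activation could be detected,
using only the LISTEN_PID/LISTEN_FDS protocol from sd_listen_fds(3)
(activated file descriptors start at 3). This is an illustration, not
an existing part of p2phttp; it assumes the network-3 mkSocket, which
wraps an already-open fd, and an existing library could be used
instead.

    import Network.Socket (Socket, mkSocket)
    import System.Environment (lookupEnv)
    import System.Posix.Process (getProcessID)
    import Text.Read (readMaybe)

    -- Sockets passed in by systemd socket activation, if any.
    -- LISTEN_PID must match this process's pid and LISTEN_FDS gives
    -- the number of sockets. (A real implementation would also unset
    -- the variables and set close-on-exec on the fds.)
    getActivatedSockets :: IO (Maybe [Socket])
    getActivatedSockets = do
        mpid <- lookupEnv "LISTEN_PID"
        mfds <- lookupEnv "LISTEN_FDS"
        mypid <- getProcessID
        case (mpid >>= readMaybe, mfds >>= readMaybe) of
            (Just pid, Just nfds) | pid == toInteger mypid && nfds > 0 ->
                Just <$> mapM (mkSocket . fromIntegral) [3 .. 3 + nfds - 1]
            _ -> return Nothing

With something like this, a systemd socket unit could own port 80 and
hand the listening socket to `git-annex p2phttp`, so the command itself
would not need to run as root to bind that port.
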
## completed items for July's work on p2p protocol over http
* HTTP P2P protocol design [[design/p2p_protocol_over_http]].
* addressed [[doc/todo/P2P_locking_connection_drop_safety]]
* implemented server and client for HTTP P2P protocol
* added git-annex p2phttp command to serve HTTP P2P protocol
* Make git-annex p2phttp support https.
* Allow using annex+http urls in remote.name.annexUrl
* Make http server support proxying.
* Make http server support serving a cluster.
## items deferred until later for [[design/passthrough_proxy]]
* Check annex.diskreserve when proxying for special remotes
to avoid the proxy's disk filling up with the temporary object file
cached there.
* Resuming an interrupted download from a proxied special remote makes
the proxy re-download the whole content. It could instead keep some of
the object files around when the client does not send SUCCESS. This
would use more disk, but without streaming, proxying a special remote
already needs some disk. And it could limit that to keeping eg, the
last 2 or so.
The design doc has some more thoughts about this.
* Streaming download from proxied special remotes. See design.
(Planned for September)
* When an upload to a cluster is distributed to multiple special remotes,
a temporary file is written for each one, which may even happen in
parallel. This is a lot of extra work and may use excess disk space.
It should be possible to only write a single temp file.
(With streaming this won't be an issue.)
* Indirect uploads when proxying for special remote
(to be considered). See design.
* Getting a key from a cluster currently picks from among
the lowest cost remotes at random. This could be smarter,
eg prefer to avoid using remotes that are doing other transfers at the
same time.
* The cost of a proxied node that is accessed via an intermediate gateway
is currently the same as a node accessed via the cluster gateway.
To fix this, there needs to be some way to tell how many hops through
gateways it takes to get to a node. Currently the only way is to
guess based on number of dashes in the node name, which is not satisfying.
Even counting hops is not very satisfying, since one cluster gateway
could be much more expensive to traverse than another one.
If seriously tackling this, it might be worth making enough information
available to use spanning tree protocol for routing inside clusters.
* Optimise proxy speed. See design for ideas.
* Speed: A proxy to a local git repository spawns git-annex-shell
to communicate with it. It would be more efficient to operate
directly on the Remote, especially when transferring content to/from
it. But: When a cluster has several nodes that are local git
repositories, and is sending data to all of them, this would need an
alternative to the `storeKey` interface, one that supports streaming
chunks of a ByteString. (One possible shape is sketched after this
list.)
* Use `sendfile()` to avoid data copying overhead when
`receiveBytes` is being fed right into `sendBytes`.
Library to use:
<https://hackage.haskell.org/package/hsyscall-0.4/docs/System-Syscall.html>
* Support using a proxy when its url is a P2P address.
(Eg tor-annex remotes.)
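
For the `storeKey` streaming item above, here is one possible shape for
such an interface. It is purely illustrative; the types and names are
hypothetical stand-ins, not git-annex's actual Remote API.

    import qualified Data.ByteString as B

    -- Hypothetical stand-ins; not git-annex's real types.
    data Key = Key
    data MeterUpdate = MeterUpdate

    -- Starting a store returns a sink for chunks plus a finalizer.
    -- A cluster gateway uploading to several local nodes can open one
    -- sink per node and feed each received chunk to all of them as it
    -- arrives, so no per-node temporary object file is needed.
    data StreamingStore = StreamingStore
        { storeChunk :: B.ByteString -> IO ()  -- feed the next chunk
        , finishStore :: IO Bool               -- commit; True on success
        }

    type StartStore = Key -> MeterUpdate -> IO StreamingStore

    -- Fan one stream of chunks out to several nodes' sinks.
    fanout :: [StreamingStore] -> IO (Maybe B.ByteString) -> IO Bool
    fanout sinks getchunk = go
      where
        go = do
            mc <- getchunk
            case mc of
                Just c -> mapM_ (`storeChunk` c) sinks >> go
                Nothing -> and <$> mapM finishStore sinks

A push-style sink like this is what makes the fanout cheap: the gateway
reads each chunk once and hands it to every node before reading the
next.
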
## completed items for June's work on [[design/passthrough_proxy]]:
* UUID discovery via git-annex branch. Add a log file listing UUIDs
accessible via proxy UUIDs. It will also contain the names
of the remotes that the proxy is a proxy for,
from the perspective of the proxy. (done)
* Add `git-annex updateproxy` command (done)
* Remote instantiation for proxies. (done)
* Implement git-annex-shell proxying to git remotes. (done)
* Proxy should update location tracking information for proxied remotes,
so it is available to other users who sync with it. (done)
* Implement `git-annex initcluster` and `git-annex updatecluster` commands (done)
* Implement cluster UUID insertion on location log load, and removal
on location log store. (done)
* Omit cluster UUIDs when constructing drop proofs, since lockcontent will
always fail on a cluster. (done)
* Don't count cluster UUID as a copy in numcopies checking etc. (done)
* Tab complete proxied remotes and clusters in eg --from option. (done)
* Getting a key from a cluster should proxy from one of the nodes that has
it. (done)
* Implement upload with fanout to multiple cluster nodes and reporting back
additional UUIDs over P2P protocol. (done)
* Implement cluster drops, trying to remove from all nodes, and returning
which UUIDs it was dropped from. (done)
* `git-annex testremote` works against proxied remote and cluster. (done)
* Keep `git-annex sync --content` etc. from operating on cluster nodes by
default since syncing with a cluster implicitly syncs with its nodes. (done)
* On upload to a cluster, send to nodes where it is preferred content, and not
to other nodes. (done)
* Support annex.jobs for clusters. (done)
* Add `git-annex extendcluster` command and extend `git-annex updatecluster`
to support clusters with multiple gateways. (done)
* Support proxying for a remote that is proxied by another gateway of
a cluster. (done)
* Support distributed clusters: Make a proxy for a cluster repeat
protocol messages on to any remotes that have the same UUID as
the cluster. Needs extension to P2P protocol to avoid cycles.
(done)
* Proxied cluster nodes should have slightly higher cost than the cluster
gateway. (done)
* Basic support for proxying special remotes. (But not exporttree=yes ones
yet.) (done)
* Tab complete remotes in all relevant commands (done)
* Display cluster and proxy information in git-annex info (done)