Once files are added (or removed or moved), those changes need to be sent
to all the other git clones, at both the git level and the key/value level.

## git syncing

1. Can use `git annex sync`, which already handles bidirectional syncing.
   When a change is committed, launch the part of `git annex sync` that
   pushes out changes. **done**; changes are pushed out to all remotes
   in parallel.
2. Watch `.git/refs/remotes/` for changes (which would be pushed in from
   another node via `git annex sync`), and run the part of `git annex sync`
   that merges in received changes, followed by the part that pushes out
   changes (sending them on to any other remotes).
   [The watching can be done with the existing inotify code! This avoids
   needing any special mechanism to notify a remote that it's been synced
   to.] **done**
3. Periodically retry pushes that failed. **done** (every half an hour)
4. Also, detect if a push failed due to not being up-to-date; pull and
   repush. **done**
5. Use a git merge driver that adds both conflicting files, so conflicts
   never break a sync. **done**
6. Investigate the XMPP approach used by dvcs-autosync, or other ways of
   signaling a change out of band.
7. Add a hook, so when there's a change to sync, a program can be run
   to do its own signaling.
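As a point of reference only (not git-annex's actual driver): git ships a
built-in `union` merge driver that keeps the lines from both sides of a
conflict, enabled with a one-line `.gitattributes` entry. The driver
described above goes further, keeping both conflicting files themselves
rather than merging their lines.

```
# .gitattributes (sketch): use git's built-in union merge driver,
# which keeps both sides' lines instead of leaving conflict markers.
* merge=union
```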
## data syncing

There are two parts to data syncing. First, map the network; second,
decide what to sync when.

Mapping the network can reuse code in `git annex map`. Once the map is
built, we want to find paths through the network that eventually reach all
nodes, with the least cost. This is a minimum spanning tree problem, except
with a directed graph, so really an arborescence problem.

With the map, we can determine which nodes to push new content to. Then we
need to control those data transfers, sending to the cheapest nodes first,
with appropriate rate limiting and control facilities.

This will probably need lots of refinement to work well.
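The minimum-cost arborescence problem is normally solved with Edmonds'
algorithm; purely as an illustration of what "arborescence" means here,
this is a brute-force sketch (exponential, only usable on tiny maps).
`Node`, `Edge`, and the `Int` cost are stand-ins, not git-annex types.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)
import qualified Data.Set as S

type Node = String
type Edge = (Node, Node, Int)  -- (from, to, cost)

-- Brute-force minimum-cost arborescence rooted at r: every non-root
-- node picks exactly one incoming edge, and the chosen edges must
-- reach every node starting from the root.
arborescence :: Node -> [Node] -> [Edge] -> Maybe [Edge]
arborescence r nodes edges
  | null candidates = Nothing
  | otherwise       = Just (minimumBy (comparing cost) candidates)
  where
    nonRoot    = filter (/= r) nodes
    incoming v = [e | e@(_, t, _) <- edges, t == v]
    -- every way of giving each non-root node one parent edge
    choices    = sequence (map incoming nonRoot)
    candidates = filter reachesAll choices
    cost       = sum . map (\(_, _, c) -> c)
    reachesAll es = S.fromList nodes == grow (S.singleton r) es
    grow seen es =
      let next = S.fromList [ t | (f, t, _) <- es
                                , f `S.member` seen
                                , not (t `S.member` seen) ]
      in if S.null next then seen else grow (S.union seen next) es
```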
### first pass: flood syncing

Before mapping the network, the best we can do is flood all files out to
every reachable remote. This is worth doing first, since it's the simplest
way to get the basic functionality of the assistant working. And we'll
need it anyway.
### transfer tracking
	data ToTransfer = ToUpload Key | ToDownload Key
	type ToTransferChan = TChan [ToTransfer]

* `ToUpload` added by the watcher thread when it adds content.
* `ToDownload` added by the watcher thread when it sees new symlinks
  that lack content.

Transfer threads are started and stopped as necessary to move data.
It may sometimes make sense to have multiple threads downloading, or
uploading, or even both.

	data TransferID = TransferThread ThreadId | TransferProcess Pid
	data Direction = Uploading | Downloading
	data Transfer = Transfer Direction Key TransferID EpochTime Integer
	-- add [Transfer] to DaemonStatus
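A minimal runnable sketch of the queue-plus-worker shape described above,
using an STM `TChan`. For simplicity it queues single `ToTransfer` items
rather than lists, and `Key` is a stand-in newtype, not the real type;
the `run` action stands in for the actual data-moving code.

```haskell
import Control.Concurrent.STM

newtype Key = Key String deriving (Eq, Show)

data ToTransfer = ToUpload Key | ToDownload Key deriving (Eq, Show)

type ToTransferChan = TChan ToTransfer

-- The watcher thread queues work as it notices new content or
-- new content-less symlinks.
queueTransfer :: ToTransferChan -> ToTransfer -> IO ()
queueTransfer chan = atomically . writeTChan chan

-- A transfer worker blocks until work arrives, runs the transfer
-- action on it, and loops.  Several of these can be forked to get
-- multiple concurrent uploads/downloads.
transferWorker :: ToTransferChan -> (ToTransfer -> IO ()) -> IO ()
transferWorker chan run = do
  t <- atomically (readTChan chan)
  run t
  transferWorker chan run
```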
The assistant needs to find out when `git-annex-shell` is receiving or
sending (triggered by another remote), so it can add data for those
transfers too. This is important to avoid uploading content to a remote
that is already downloading it from us (or vice versa), and will also
later let the web app manage transfers as the user desires.

For files being received, it can see the temp file, but other than lsof
there's no good way to find the pid (and I'd rather not kill blindly).
For files being sent, there's no filesystem indication at all. So
`git-annex-shell` (and other git-annex transfer processes) should write a
status file to disk.

File locking on these status files can be used to claim upload/download
rights, which avoids races.

The status file can also be updated periodically to show how much of the
transfer is complete (necessary for tracking uploads).
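A small sketch of the periodic status update, under the assumption that
the file simply holds a byte count; the file format, paths, and function
names are illustrative, and the lock-based claiming of transfer rights is
not sketched here. Writing to a temp file and renaming keeps each update
atomic, so a concurrent reader never sees a half-written file.

```haskell
import System.Directory (doesFileExist, renameFile)
import Text.Read (readMaybe)

-- Record how many bytes have been transferred so far.  rename(2) is
-- atomic on POSIX filesystems, so readers see either the old count
-- or the new one, never a partial write.
writeStatus :: FilePath -> Integer -> IO ()
writeStatus path bytes = do
  let tmp = path ++ ".tmp"
  writeFile tmp (show bytes)
  renameFile tmp path

-- Read back the byte count; Nothing if the status file is absent
-- or unparseable.
readStatus :: FilePath -> IO (Maybe Integer)
readStatus path = do
  present <- doesFileExist path
  if present
    then readMaybe <$> readFile path
    else return Nothing
```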
## other considerations

It would be nice if, when a USB drive is connected, syncing starts
automatically. Use dbus on Linux?

This assumes the network is connected. It's often not, so the
[[cloud]] needs to be used to bridge between LANs.