support annex.jobs for clusters
This commit is contained in:
parent 818030e4d3
commit cec2848e8a
6 changed files with 65 additions and 12 deletions
@@ -83,6 +83,13 @@ in the git-annex branch. That tells other repositories about the cluster.
     Started proxying for node2
     Started proxying for node3
 
+Operations that affect multiple nodes of a cluster can often be sped up by
+configuring annex.jobs in the repository that will serve the cluster to
+clients. In the example above, the nodes are all disk bound, so operating
+on more than one at a time will likely be faster.
+
+    $ git config annex.jobs cpus
+
 ## preferred content of clusters
 
 The preferred content of the cluster can be configured. This tells
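As a supplementary sketch (not part of the commit itself): the setting goes in the gateway repository that serves the cluster to clients. The checkout path below is hypothetical, and annex.jobs also accepts a fixed job count instead of cpus.

    # On the gateway, i.e. the repository that serves the cluster to clients.
    $ cd /srv/bigserver/repo   # hypothetical path
    # One transfer job per CPU core, as in the tip above:
    $ git config annex.jobs cpus
    # Alternatively, a fixed job count also works:
    $ git config annex.jobs 4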
@@ -94,8 +101,8 @@ to do the configuration in a repository that has the cluster as a remote.
 
 For example:
 
-    git-annex wanted bigserver-mycluster standard
-    git-annex group bigserver-mycluster archive
+    $ git-annex wanted bigserver-mycluster standard
+    $ git-annex group bigserver-mycluster archive
 
 By default, when a file is uploaded to a cluster, it is stored on every node of
 the cluster. To control which nodes to store to, the [[preferred_content]] of
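To make the per-node control concrete, here is a hedged sketch. The node remote names are made up for illustration; wanted and group are the standard git-annex commands for configuring preferred content.

    # Hypothetical node remote names; substitute the actual nodes of your cluster.
    # Keep only ISO images on the first node:
    $ git-annex wanted bigserver-node1 "include=*.iso"
    # Let the second node follow the standard rules of the backup group:
    $ git-annex group bigserver-node2 backup
    $ git-annex wanted bigserver-node2 standard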
@@ -31,8 +31,6 @@ For June's work on [[design/passthrough_proxy]], remaining todos:
   round-robin among remotes, and prefer to avoid using remotes that
   other git-annex processes are currently using.
 
-* Support annex.jobs for clusters.
-
 * Basic proxying to special remote support (non-streaming).
 
 * Support proxies-of-proxies better, eg foo-bar-baz.
@@ -104,3 +102,6 @@ For June's work on [[design/passthrough_proxy]], remaining todos:
 
 * On upload to cluster, send to nodes where it is preferred content, and not
   to other nodes. (done)
+
+* Support annex.jobs for clusters. (done)