Merge remote-tracking branch 'branchable/master'

This commit is contained in:
Joey Hess 2011-04-25 14:57:34 -04:00
commit e1bc704a91
7 changed files with 88 additions and 0 deletions

I would like to be able to name a few remotes that must retain *all* annexed
files. `git-annex fsck` should warn me if any files are missing from those
remotes, even if `annex.numcopies` has been satisfied by other remotes.
I imagine this could also be useful for bup remotes, but I haven't actually
looked at those yet.

Based on existing output, this is what a warning message could look like:

    fsck FILE
    3 of 3 trustworthy copies of FILE exist.
    FILE is, however, still missing from these required remotes:
    UUID -- Backup Drive 1
    UUID -- Backup Drive 2
    Back it up with git-annex copy.
    Warning

What do you think?
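git-annex already records which remotes hold each file, so such a check is mostly a matter of deciding what is "required" and rendering the warning. A toy sketch of the proposed message (the function, UUIDs, and data shapes are invented for illustration, not git-annex API):

```python
def required_remote_warning(filename, present, required):
    """Render the warning proposed above, or None if nothing is missing.

    present:  set of remote UUIDs that currently hold the file
    required: dict mapping UUID -> description, for remotes that must
              hold *all* files (names here are made up)
    """
    missing = {u: d for u, d in required.items() if u not in present}
    if not missing:
        return None
    lines = [
        "fsck %s" % filename,
        "%d of %d trustworthy copies of %s exist."
        % (len(present), len(present), filename),
        "%s is, however, still missing from these required remotes:"
        % filename,
    ]
    for uuid, desc in sorted(missing.items()):
        lines.append("%s -- %s" % (uuid, desc))
    lines.append("Back it up with git-annex copy.")
    return "\n".join(lines)
```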

[[!comment format=mdwn
username="http://joey.kitenet.net/"
nickname="joey"
subject="comment 1"
date="2011-04-23T16:27:13Z"
content="""
This seems to have a scalability problem: what happens when such a repository becomes full?

Another way to accomplish the same thing, I think, is to pick the repositories you would include in such a set and make all other repositories untrusted, setting numcopies as desired. Then git-annex will never remove files from the set of non-untrusted repositories, and fsck will warn if a file is present only on an untrusted repository.
"""]]

[[!comment format=mdwn
username="gernot"
ip="87.79.209.169"
subject="comment 2"
date="2011-04-24T11:20:05Z"
content="""
Right, I have thought before about untrusting all but a few remotes to achieve
something similar, and I'm sure it would kind of work. It would be more of an
ugly workaround, however, because I would have to untrust remotes that are, in
reality, at least semi-trusted. That's why an extra option/attribute for that
kind of purpose/remote would be nice.

Obviously I didn't see the scalability problem, though. Good point. Maybe I can
achieve the same thing by writing a log-parsing script for myself?
"""]]

I'd like to be able to do something like the following:
* Create encrypted git-annex remotes on a couple of semi-trusted machines - ones that have good connectivity, but non-redundant hardware
* set numcopies=3
* run `git-annex replicate` and have git-annex run the appropriate copy commands to make sure every file is on at least 3 machines
There would also likely be a `git annex rebalance` command which could be used if remotes were added or removed. If possible, it should copy files between servers directly, rather than proxying through a potentially slow client.

There might need to be a 'replication_priority' option for each remote that configures which machines would be preferred. That way you could set your local server to a high priority to ensure that it is always 1 of the 3 machines used and files are distributed across 2 of the remaining remotes. Besides priority, other options that might help:

* maxspace - A self-imposed quota per remote machine. `git-annex replicate` should try to replicate files first to machines with more free space. maxspace would change the free space calculation to be `min(actual_free_space, maxspace - space_used_by_git_annex)`
* bandwidth - when replicating files, copies should be done between machines with the highest available bandwidth. (I think this option could be useful for `git-annex get` in general.)
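The maxspace rule above amounts to a greedy placement: cap each remote's usable space by its quota, then pick the highest-priority, emptiest remotes until numcopies is reached. A sketch of that policy (field names, priorities, and sizes are invented for illustration, not git-annex configuration):

```python
# Greedy placement sketch for the hypothetical `git-annex replicate`:
# prefer higher replication_priority, then more effective free space,
# while respecting a per-remote maxspace quota.

def effective_free(remote):
    # maxspace caps how much space git-annex may use on this remote
    quota_left = remote["maxspace"] - remote["annex_used"]
    return min(remote["free"], quota_left)

def choose_remotes(remotes, filesize, numcopies):
    """Pick up to numcopies remotes that can hold a file of filesize."""
    candidates = [r for r in remotes if effective_free(r) >= filesize]
    candidates.sort(key=lambda r: (-r["priority"], -effective_free(r)))
    return [r["name"] for r in candidates[:numcopies]]
```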

[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawl9sYlePmv1xK-VvjBdN-5doOa_Xw-jH4U"
nickname="Richard"
subject="comment 1"
date="2011-04-22T18:27:00Z"
content="""
While having remotes redistribute files introduces some obvious security concerns, I might use it.

As remotes already support a cost factor, you can basically implement bandwidth through that.
"""]]

[[!comment format=mdwn
username="http://joey.kitenet.net/"
nickname="joey"
subject="comment 2"
date="2011-04-23T16:22:07Z"
content="""
Besides the cost values, annex.diskreserve was recently added. (But it is not available for special remotes.)

I have held off on adding high-level management stuff like this to git-annex, as it's hard to make it generic enough to cover all use cases.

A low-level way to accomplish this would be to have a way for `git annex get` and/or `copy` to skip files when `numcopies` is already satisfied. Then cron jobs could be used.
"""]]

[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawmBUR4O9mofxVbpb8JV9mEbVfIYv670uJo"
nickname="Justin"
subject="comment 3"
date="2011-04-23T17:54:42Z"
content="""
Hmm, so it seems there is almost a way to do this already.

I think the one thing that isn't currently possible is to have 'plain' ssh remotes: basically something just like the directory remote, but able to take an ssh user@host/path URL. Something like sshfs could be used to fake this, but for things like fsck you would want to do the SHA1 calculations on the remote host.
"""]]