Merge remote-tracking branch 'branchable/master'

Joey Hess 2011-04-06 19:12:38 -04:00
commit 000247a379
6 changed files with 65 additions and 0 deletions


@@ -0,0 +1,10 @@
[[!comment format=mdwn
username="http://joey.kitenet.net/"
nickname="joey"
subject="comment 2"
date="2011-04-05T18:41:49Z"
content="""
I see no use case for verifying encrypted object files without access to the encryption key. And there are possible use cases for not allowing anyone to verify your data.
If there are to be multiple encryption keys usable within a single encrypted remote, then they would need to be given some kind of name (since a symmetric key is used, there is no pubkey to provide a name), and the name encoded in the files stored in the remote. While certainly doable, I'm not sold that adding a layer of indirection is worthwhile. It only seems worthwhile if setting up a new encrypted remote were expensive to do. Perhaps that could be the case for some type of remote other than S3 buckets.
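Concretely, the naming would have to look something like this hypothetical sketch (made-up names; nothing like this exists in git-annex today):

    import hashlib, hmac

    # Hypothetical: each symmetric key gets a short name, and that name is
    # folded into the filename stored on the remote, next to an HMAC of the
    # git-annex key so the stored names still leak nothing about the content.
    def stored_filename(key_name, key_secret, annex_key):
        mac = hmac.new(key_secret, annex_key.encode(), hashlib.sha256).hexdigest()
        return "%s--%s" % (key_name, mac)

    # e.g. stored_filename("laptop", b"shared secret", "SHA1--d0be2dc4...")
    #   -> "laptop--<hmac>"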
"""]]


@@ -0,0 +1,12 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawl9sYlePmv1xK-VvjBdN-5doOa_Xw-jH4U"
nickname="Richard"
subject="comment 3"
date="2011-04-05T23:24:17Z"
content="""
Assuming you're storing your encrypted annex with me and I with you, our regular cron jobs to verify all data will catch corruption in each other's annexes.
Checksums of the encrypted objects could be optional, mitigating any potential attack scenarios.
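Roughly, that is all it would take (a sketch with made-up names, not an existing git-annex feature): store a hash of the ciphertext as it sits on the remote, and the hosting side can verify it without ever holding the encryption key.

    import hashlib

    # Rough sketch: verify an encrypted object against a checksum of the
    # ciphertext itself, so the hosting repository never needs the key.
    def verify_encrypted_object(path, expected_sha256):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == expected_sha256
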
It's not only about the cost of setting up new remotes. It would also be a way to keep all the data in one annex while making some of it accessible only in a subset of the repositories. For example, I might need some private letters at work, but I don't want my work machine to be able to access them all.
"""]]


@@ -0,0 +1,29 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawkhdKAhe3l_UyGt5SdfRBPYVwe-9f8P2dM"
nickname="Justin"
subject="comment 4"
date="2011-04-05T21:14:12Z"
content="""
@joey
OK, I'll try increasing the stack size and see if that helps.
For reference, I was running:

    git annex add .

on a directory containing about 100k files spread over many nested subdirectories. I actually have more than a dozen projects like this that I plan to keep in git annex, possibly in separate repositories if necessary. I could probably tar the data and then archive that, but I like the idea of being able to see the structure of my data even though the contents of the files are on a different machine.
After the crash, running:

    git annex unannex

does nothing and returns instantly. What exactly is 'git annex add' doing? I know that it's moving files into the key-value store and adding symlinks, but I don't know what else it does.
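My rough mental model of the add step is something like the sketch below (made-up names, certainly not the real implementation): checksum the file, move it into the object store, and leave a symlink behind.

    import hashlib, os, shutil

    # Simplified mental model of an add: checksum the file, move it under
    # the key-value store, leave a relative symlink in its place.  (A sketch
    # only; presumably the real command also stages the symlink in git,
    # records tracking info, etc.)
    def annex_add(path, objdir=".git/annex/objects"):
        with open(path, "rb") as f:
            key = "SHA256-" + hashlib.sha256(f.read()).hexdigest()
        stored = os.path.join(objdir, key)
        os.makedirs(objdir, exist_ok=True)
        shutil.move(path, stored)
        os.symlink(os.path.relpath(stored, os.path.dirname(path) or "."), path)
        return key
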
--Justin
"""]]


@@ -0,0 +1,14 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawl9sYlePmv1xK-VvjBdN-5doOa_Xw-jH4U"
nickname="Richard"
subject="comment 2"
date="2011-04-05T20:52:52Z"
content="""
Not-so-subtle sarcasm taken and acknowledged :)
Arguably, git-annex should know about any local limits and not have them implemented via mr from the outside. I guess my concern boils down to having git-annex do the right thing all by itself with minimal user interaction. And while I really do appreciate the flexibility of chaining commands, I am a firm believer in exposing the common use cases as easily as possible.
And yes, I am fully aware that not all annexes are created equal. Case in point: I would never use git annex pull on my laptop, but I would use git annex push extensively.
"""]]