Merge branch 'master' of ssh://git-annex.branchable.com
commit 1fe486813f
7 changed files with 70 additions and 0 deletions

@@ -0,0 +1,10 @@
[[!comment format=mdwn
 username="http://joeyh.name/"
 ip="4.154.4.90"
 subject="comment 20"
 date="2013-07-17T19:06:31Z"
 content="""
@frioux the webapp has a \"ssh server\" option that will set up an ssh key and use it for passwordless data transfer to an ssh server. You have to enter your password twice in the git-annex terminal app, and then it's set up.

The openssh included in the git-annex app fully supports everything you can usually do with ssh keys, so you can also set this up by hand.
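
For example, a by-hand setup might look something like this (a sketch; the hostname and key path are placeholders):

    # generate a dedicated key with no passphrase and install it on the server
    ssh-keygen -t rsa -f ~/.ssh/git-annex-key -N ''
    ssh-copy-id -i ~/.ssh/git-annex-key.pub user@example.com

    # tell ssh to use that key for the host
    {
        echo 'Host example.com'
        echo '    IdentityFile ~/.ssh/git-annex-key'
    } >> ~/.ssh/config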

"""]]

@@ -0,0 +1,12 @@
[[!comment format=mdwn
 username="http://joeyh.name/"
 ip="4.154.4.90"
 subject="comment 1"
 date="2013-07-17T18:54:47Z"
 content="""
Trying again one time does not seem like it would help, given the example you show. And retrying multiple times by default would, I think, be annoying in many use cases where one just wants to get whatever is available: having it get stuck retrying a download from a remote that is offline is not desirable when it could move on and get another file from a remote that is online.

I'm willing to consider some kind of option to control how much it retries on error. But I'm not 100% sold on it being better than a simple loop. At least in most cases, using a gpg agent and a loop would work. I suppose the case where it would not work as well is if enough time has elapsed for the gpg agent to re-lock the key.
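
For instance, a simple retry loop (a sketch, assuming a gpg agent is keeping the key unlocked):

    # re-run git annex get until it exits successfully,
    # i.e. until no file transfer has failed
    while ! git annex get .; do
        sleep 60
    done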

One approach that might work well is to add a --retry-failures-at-end option. It turns out that all failed downloads are already logged (the assistant uses this to automatically retry them), so it would be easy to add. And rather than retrying immediately after a failure when transferring multiple files, this puts some space between attempts, in which the problem may correct itself.
"""]]

@@ -0,0 +1,8 @@
[[!comment format=mdwn
 username="http://joeyh.name/"
 ip="4.154.4.90"
 subject="comment 3"
 date="2013-07-17T19:12:49Z"
 content="""
I don't see how git annex fsck could resolve the corruption, which appeared to be in data from the git repository itself, not the git-annex content store. Did you try `git fsck`?
"""]]

@@ -0,0 +1,10 @@
[[!comment format=mdwn
 username="http://joeyh.name/"
 ip="4.154.4.90"
 subject="comment 1"
 date="2013-07-17T19:06:59Z"
 content="""
We don't currently have a way to store a git repository on box.com, and you need such a git repo on a server somewhere if you're not using Jabber.

Of course, you can mount the box.com storage, using davfs2 or something else, and put a bare git repository in its directory. If you set this up on multiple computers, it might just work (or they might both try to write to it at the same time and fail... I have not tried).
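
A rough sketch of that by-hand setup (the WebDAV url and mount point are examples; check box.com's current WebDAV details, and davfs2 must be installed):

    # mount box.com's webdav interface with davfs2
    sudo mount -t davfs https://dav.box.com/dav /mnt/box

    # put a bare git repository on it and use it as a remote
    git init --bare /mnt/box/annex.git
    git remote add box /mnt/box/annex.git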

"""]]

@@ -0,0 +1,12 @@
[[!comment format=mdwn
 username="http://joeyh.name/"
 ip="4.154.4.90"
 subject="comment 1"
 date="2013-07-17T19:11:38Z"
 content="""
How many files are in the directory tree you're copying?

`copy --fast --to` does indeed avoid the check to see if the remote already has the file before copying it.

However, it still needs to look in the location log to see which files are already present on the remote, whereas `copy --from` can do a single stat of the file on disk to see if it's present in the local repo. Location log lookups are about as fast as I can make them, but they still require requesting info from the git repository. If you have a lot of files, this otherwise minor difference in speed can stack up.
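
For reference, the two cases being compared (remote name and path are placeholders):

    # --to still consults the location log for each file
    git annex copy --fast --to myremote .

    # --from can just stat files in the local repository
    git annex copy --from myremote .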

"""]]

@@ -0,0 +1,10 @@
[[!comment format=mdwn
 username="http://joeyh.name/"
 ip="4.154.4.90"
 subject="comment 6"
 date="2013-07-17T18:57:12Z"
 content="""
You can use `git annex fsck --from remote` to verify that every file that location tracking thinks is on the remote still is. It's inefficient though -- it has to download the whole file to check that the special remote still has the right content! That transfer can be avoided by adding --fast.
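
For example (with a placeholder remote name):

    # downloads each file to verify the special remote's content
    git annex fsck --from myremote

    # adding --fast avoids the download
    git annex fsck --fast --from myremote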

This is documented in the man page. :)
"""]]

@@ -0,0 +1,8 @@
[[!comment format=mdwn
 username="http://joeyh.name/"
 ip="4.154.4.90"
 subject="comment 7"
 date="2013-07-17T18:59:03Z"
 content="""
I don't see any reason why squashing git-annex branch history would not work. If you squash it to the same sha in each clone, things would be very happy, but even if you squash it to different shas, the union merge should result in those different versions of the same data automatically merging together.
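
A sketch of one way to squash it (not an official procedure; it rewrites the git-annex branch to a single parentless commit with the same tree):

    # make one commit with no parents holding the branch's current tree
    new=$(git commit-tree -m 'squash git-annex history' git-annex^{tree})
    # point the git-annex branch at it
    git branch -f git-annex $new

Note that getting the same sha in each clone would require identical author, committer, and timestamps; otherwise the union merge has to reconcile the differing commits, as described above.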

"""]]