Merge branch 'master' of ssh://git-annex.branchable.com
commit 4149265b34
10 changed files with 94 additions and 0 deletions
@ -0,0 +1,10 @@
[[!comment format=mdwn
username="http://joeyh.name/"
ip="4.154.6.49"
subject="comment 13"
date="2012-12-01T18:50:31Z"
content="""
So it works with a controlling console, and git commands are somehow misbehaving without a controlling console. Very strange.

Any chance you can `dtrace -p` the stuck git processes to see what they're doing or what resource they're blocked on?
"""]]
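For anyone hitting the same hang, the inspection step can be sketched in shell. The dtrace line is the OS X/Solaris route suggested above; the runnable part is a Linux stand-in that reads the kernel wait channel from /proc, with a `sleep` playing the role of the stuck git process:

```sh
# Sketch: inspecting what a hung git process is blocked on.
# On OS X/Solaris, dtrace can attach to the live process, e.g.
# (pid taken from `pgrep git`):
#   sudo dtrace -n 'syscall:::entry /pid == $target/ { @[probefunc] = count(); }' -p <pid>
# Portable Linux stand-in below: read the kernel wait channel from /proc.
sleep 30 &                      # stand-in for the stuck git process
pid=$!
sleep 1                         # give it time to block
wchan=$(cat /proc/$pid/wchan)   # kernel function the process sleeps in
echo "pid $pid is blocked in: $wchan"
kill $pid
```

On a real hang you would substitute the pid of the stuck git process; an empty or `0` wait channel usually means the process is runnable rather than blocked on anything.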

@ -0,0 +1,8 @@
[[!comment format=mdwn
username="spwhitton"
ip="163.1.167.50"
subject="comment 2"
date="2012-12-01T18:34:10Z"
content="""
Thanks for the log advice. I looked and it wants to drop a file `ttmik/TTMIK-Lesson-L1L1.mp3` from the encrypted rsync remote ma. So I did `git annex whereis ttmik/TTMIK-Lesson-L1L1.mp3` and learnt that the file is not on ma. I tried `git annex drop --from ma ttmik` to be sure, and the command was successful, but it still tries to drop the file from ma on startup. Presumably it would try all the other files in the ttmik directory if I gave it the chance to try to drop this first one. The only thing special about the ttmik directory is that every file in there was added using addurl, so I guess the problem has something to do with that.
"""]]

@ -0,0 +1,10 @@
[[!comment format=mdwn
username="http://joeyh.name/"
ip="4.154.6.49"
subject="comment 3"
date="2012-12-01T18:59:45Z"
content="""
How is this rsync remote configured? Is it configured as a transfer remote? If not, the assistant's default behavior is to sync all files to it, and since you say the file is not there, it would make sense that it starts transferring it there on startup.

OTOH, this could be a complete red herring. You haven't shown me the log file. Perhaps the drop you're seeing in the log is before the operation that is asking for the GPG passphrase. I can't tell until you show me the log.
"""]]
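For reference, designating a repository as a transfer remote is done with git-annex's standard groups and preferred content. The commands below are a sketch using the remote name `ma` from this thread; whether they match the poster's actual setup is an assumption:

```sh
# Assumed sketch, not the poster's actual configuration: put the remote
# in the standard "transfer" group so the assistant only uses it to move
# files between clients instead of syncing all content to it.
git annex group ma transfer
git annex wanted ma standard
```

With no such preferred content configured, the assistant's default is to sync everything to the remote, which matches the behavior described above.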

@ -0,0 +1,10 @@
[[!comment format=mdwn
username="http://joeyh.name/"
ip="4.154.6.49"
subject="comment 1"
date="2012-12-01T18:40:46Z"
content="""
Well, first of all, after copying a repository like this, you need to edit its .git/config and delete the annex.uuid setting. Otherwise, you will have two repositories with the same UUID, which is not good.

Once you've done that, run `git annex fsck` in the new repository and it will do what you want.
"""]]
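The two steps can be sketched as follows. To keep the sketch self-contained and safe to run, a throwaway `git init` repository with a fake UUID stands in for the copied annex; the paths and UUID are hypothetical:

```sh
# Self-contained sketch of the two steps; the path and UUID are fake.
set -e
repo=$(mktemp -d)/annex-copy           # stand-in for the copied repository
mkdir -p "$repo" && cd "$repo"
git init -q
git config annex.uuid 0000-fake-uuid   # simulate the UUID the copy inherited
# Step 1: delete the duplicated annex.uuid; git-annex will generate a
# fresh one, so the two repositories stop sharing an identity.
git config --unset annex.uuid
git config annex.uuid || echo "annex.uuid cleared"
# Step 2, in the real copied repository:
#   git annex fsck    # re-registers the content actually present here
```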

@ -0,0 +1,8 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawlgyVag95OnpvSzQofjyX0WjW__MOMKsl0"
nickname="Sehr"
subject="comment 2"
date="2012-12-01T19:27:59Z"
content="""
Ok, that did the trick, except that it recalculates all checksums, which is exactly what I do not want, as it is unnecessary and, at 40 MB/s, takes just too long. Any other way? I am thinking about adding the whereis info by hand, which simply feels wrong!
"""]]

@ -0,0 +1,8 @@
[[!comment format=mdwn
username="http://joeyh.name/"
ip="4.154.6.49"
subject="comment 3"
date="2012-12-01T19:37:26Z"
content="""
git annex fsck --fast will skip the checksumming.
"""]]

@ -0,0 +1,10 @@
[[!comment format=mdwn
username="http://joeyh.name/"
ip="4.154.6.49"
subject="comment 2"
date="2012-12-01T18:34:05Z"
content="""
I tried signing up for livedrive, but I cannot log into it with WebDav at all. Do they require a Pro account to use WebDav?

When it's \"testing webdav server\", it tries to make a collection (a subdirectory), and uploads a file to it, and sets the file's properties, and deletes the file. One of these actions must be failing, perhaps because the webdav server implementation does not support it. Or perhaps because the webdav client library is doing something wrong. I've instrumented the test, so it'll say which one.
"""]]
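For readers debugging their own server, the four operations described can be approximated as raw WebDAV requests with curl. The URL and credentials are placeholders, and this is only a rough stand-in for what the Haskell DAV library actually sends:

```sh
# Rough curl equivalents of the four probes; the URL and credentials
# are placeholders, and the exact requests the DAV library sends differ.
URL=https://webdav.example.com/annex-test     # hypothetical server path
AUTH=user:password                            # hypothetical credentials
curl -u "$AUTH" -X MKCOL "$URL/"              # 1. make a collection
echo test > probe.txt
curl -u "$AUTH" -T probe.txt "$URL/probe.txt" # 2. upload a file into it
curl -u "$AUTH" -X PROPPATCH "$URL/probe.txt" \
  -d '<propertyupdate xmlns="DAV:"><set><prop/></set></propertyupdate>'  # 3. set properties
curl -u "$AUTH" -X DELETE "$URL/probe.txt"    # 4. delete the file again
```

Whichever request the server rejects (often MKCOL or PROPPATCH on limited providers) points at the unsupported operation.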

@ -0,0 +1,10 @@
[[!comment format=mdwn
username="http://joeyh.name/"
ip="4.154.6.49"
subject="comment 3"
date="2012-12-01T20:13:36Z"
content="""
I've identified the problem keeping it from working with livedrive. Once a patch I've written is applied to the Haskell DAV library, I'll be able to update git-annex to support it.

I don't know about sd2dav.1und1.de. The error looks like it doesn't support WebDAV file locking.
"""]]

@ -0,0 +1,10 @@
[[!comment format=mdwn
username="http://joeyh.name/"
ip="4.154.6.49"
subject="comment 1"
date="2012-12-01T19:22:03Z"
content="""
There are several difficulties with reordering the queue that way. One is that the failure may be intermittent; another is that the queue is fed by a scanning process, so doesn't always have a well-defined end.

Another way to deal with this problem, which I think I prefer, is to allow multiple actions from the queue to run at once. Then slow or unreachable remotes don't block it from using other remotes.
"""]]
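The scheduling idea can be illustrated with a toy script (this is not git-annex's internals, just the principle): two queued "transfers" run concurrently, so the slow remote no longer holds up the fast one:

```sh
# Toy illustration of the proposal: run queued actions concurrently so a
# slow or unreachable remote does not block transfers to other remotes.
log=$(mktemp)
transfer() {                     # $1 = remote name, $2 = simulated delay
    sleep "$2"
    echo "transfer to $1 done" | tee -a "$log"
}
transfer slow-remote 2 &         # would stall a strictly serial queue
transfer fast-remote 0 &
wait                             # fast-remote finishes long before slow-remote
echo "queue drained"
```

In a serial queue the fast transfer would have waited the full two seconds behind the slow one; run concurrently, it completes immediately.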

@ -0,0 +1,10 @@
[[!comment format=mdwn
username="EskildHustvedt"
ip="84.48.83.221"
subject="comment 2"
date="2012-12-01T19:31:18Z"
content="""
I agree your method might be preferable; the end result is the same, and it would have avoided the issues I had (and, of course, running multiple transfers at once has other benefits as well).

An alternate way would be to push every transfer NOT from host X to the front of the queue (avoiding most of the \"no defined end\" issue and largely solving the problem), but if multiple actions at once is feasible then that'd still be much nicer.
"""]]