Merge remote-tracking branch 'branchable/master'
commit 7630cf4f84
5 changed files with 51 additions and 0 deletions
@ -0,0 +1,9 @@
[[!comment format=mdwn
 username="http://joey.kitenet.net/"
 nickname="joey"
 subject="comment 17"
 date="2011-04-03T16:53:51Z"
 content="""
@gernot step 0 is to upgrade git-annex to the current version from git, on all systems where you use it, in case that wasn't clear.

"""]]
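(A hedged aside on that upgrade step: one possible way to do it, assuming git-annex is being built from a source checkout; the clone URL and build steps below are assumptions, not something stated in the thread.)

    # fetch the latest git-annex source and rebuild it (assumed URL and steps)
    git clone git://git-annex.branchable.com/ git-annex
    cd git-annex
    make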
@ -0,0 +1,14 @@
[[!comment format=mdwn
 username="http://joey.kitenet.net/"
 nickname="joey"
 subject="comment 1"
 date="2011-04-03T16:49:01Z"
 content="""
How remote is REMOTE? If it's a directory on the same computer, then git-annex copy --to is actually quickly checking that each file is present on the remote, and skipping the copy when it is.

If the remote is accessed over ssh, git-annex copy talks to the remote to see whether it has each file. That makes copy --to slow, as Rich [[complained_before|forum/batch_check_on_remote_when_using_copy]]. :)

So, copy --to does not trust location tracking information (unless --fast is specified), which means it should be doing exactly what you want in your situation -- transferring every file that is not already present in the destination repository.

Neither does copy --from, by the way. It always checks whether each file is present in the current repository's annex before trying to download it.
"""]]
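(A short illustration of the difference described in that comment; both --to and --fast appear in the thread, and REMOTE is a placeholder for your remote's name:)

    # checks the remote for each file before copying; slower but authoritative
    git annex copy --to REMOTE .

    # trusts the location tracking information instead of checking; faster
    git annex copy --fast --to REMOTE .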
@ -0,0 +1,12 @@
[[!comment format=mdwn
 username="https://www.google.com/accounts/o8/id?id=AItOawkSq2FDpK2n66QRUxtqqdbyDuwgbQmUWus"
 nickname="Jimmy"
 subject="comment 2"
 date="2011-04-03T16:59:47Z"
 content="""
Remote as in \"another physical machine\". I assumed that

    git annex copy --force --to REMOTE .

would not have trusted the contents of the current directory (or of the remote being copied to), and would just go off and re-download/upload all the files, overwriting what is already there. I expected that the combination of *--force* and copy *--to* would not bother to check whether the files are there, and would just copy them regardless.
"""]]
@ -0,0 +1,8 @@
[[!comment format=mdwn
 username="https://www.google.com/accounts/o8/id?id=AItOawkSq2FDpK2n66QRUxtqqdbyDuwgbQmUWus"
 nickname="Jimmy"
 subject="comment 3"
 date="2011-04-03T17:12:35Z"
 content="""
On second thought, maybe the current behaviour is better than what I am suggesting --force should do. I guess it's better to be safe than sorry.
"""]]
@ -0,0 +1,8 @@
[[!comment format=mdwn
 username="http://joey.kitenet.net/"
 nickname="joey"
 subject="comment 1"
 date="2011-04-03T16:39:35Z"
 content="""
I dunno about parallel downloads -- eek! -- but there is at least room for improvement in what \"git annex get\" does when multiple remotes have a file and the one it decides to use is unavailable, or very slow, or whatever.
"""]]
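(A hedged aside: until such an improvement exists, the remote can be chosen by hand with get's --from option; otherremote and somefile below are placeholders, not names from the thread:)

    # force the download to come from a particular remote
    # instead of whichever one git-annex picks
    git annex get --from otherremote somefile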