Merge branch 'master' of ssh://git-annex.branchable.com

This commit is contained in:
Joey Hess 2012-11-02 11:59:44 -04:00
commit 70b9d3ae68
4 changed files with 70 additions and 0 deletions

@@ -0,0 +1,28 @@
A special remote (encrypted rsync) that was copied to long ago (not sure when; there are old files that already have sizes in their unencrypted file names) seems to use the aa/bb/GPGHMACSHA1-- format instead of aaa/bbb/GPGHMACSHA1--. Running ``git annex fsck`` over such files produces very irritating output:
<code>
fsck L1100423.JPG (gpg) (checking …remote…...)
rsync: change_dir "…somewhere…/0a0/8cd/GPGHMACSHA1--91234b770b34eeff811d09c97ce94bb2398b3d72" failed: No such file or directory (2)
sent 8 bytes received 12 bytes 40.00 bytes/sec
total size is 0 speedup is 0.00
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1536) [Receiver=3.0.9]
rsync failed -- run git annex again to resume file transfer
GPGHMACSHA1--91234b770b34eeff811d09c97ce94bb2398b3d72
3922730 100% 623.81kB/s 0:00:06 (xfer#1, to-check=0/1)
sent 30 bytes received 3923328 bytes 523114.40 bytes/sec
total size is 3922730 speedup is 1.00
(checksum...) ok
</code>
(observed with debian's git-annex 3.20121017).
While this does output an "ok" at the end and exits with status zero, having such messages in an fsck is highly irritating.
I see two ways to improve the situation:

* silence the "not found" error when the file is found in another location
* provide a way to rename the files in the remote (I guess the aaa/bbb part can be derived from the file name; in that case, it could even be done without network interaction)
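The second option could be prototyped outside git-annex. A rough sketch, under the unverified assumption that the lower-case aaa/bbb prefix is simply the first six hex digits of the md5 of the key name, split 3/3 (git-annex's real hashing scheme may differ):

```shell
# Sketch of deriving the directory prefix locally from the key name.
# ASSUMPTION (unverified): the lower-case aaa/bbb prefix is the first
# 6 hex digits of the md5 of the key, split 3/3.
key="GPGHMACSHA1--91234b770b34eeff811d09c97ce94bb2398b3d72"
h=$(printf '%s' "$key" | md5sum | cut -c1-6)
d1=$(printf '%s' "$h" | cut -c1-3)
d2=$(printf '%s' "$h" | cut -c4-6)
dir="$d1/$d2"
echo "$dir/$key"   # where the object would live under the new layout
```

If that assumption holds, the rename on the remote would indeed need no network round-trip to compute the new paths, only to execute the moves.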

@@ -0,0 +1,8 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawn7gQ1zZDdWhXy9H51W2krZYShNmKL3qfM"
nickname="Karsten"
subject="comment 1"
date="2012-11-02T07:21:15Z"
content="""
I might be thinking too simple, but can't you just put another annex repository on a USB drive and use it to carry the metadata around? Add it as a remote to both computers' annex repositories and sync when you arrive or leave. As it does not have to carry the actual data, some 100M will usually suffice. Just don't use any special remotes; a plain cloned git repository is enough.
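A rough sketch of that layout, using plain git to stand in for the annex repositories (all paths are made up; with git-annex installed you would finish with `git annex sync usbdrive` rather than the raw fetch):

```shell
# Sketch of the metadata-only sneakernet repo. Paths are illustrative;
# with git-annex, "git annex sync usbdrive" would replace the plain fetch.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/laptop"
(cd "$tmp/laptop" &&
  git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit')
git clone -q --bare "$tmp/laptop" "$tmp/usbdrive"   # repo on the USB drive
(cd "$tmp/laptop" &&
  git remote add usbdrive "$tmp/usbdrive" &&
  git fetch -q usbdrive)
```

The bare clone on the stick carries only git history (branches, and in the git-annex case the location-tracking metadata), never the annexed content itself.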
"""]]

@@ -0,0 +1,18 @@
Imagine this situation:
You have a laptop and a NAS.
On your laptop you want to consume a large media file located on the NAS.
So you type:

    git annex get --from nas mediafile

But now you have to wait for the download to complete, unless either

* rsync is pointed directly to the file in the object storage (`--inplace`), or
* the symlink temporarily points to the partial file during a transfer,

which would allow you instantaneous consumption of your media.
It might make sense to make this behavior configurable, because not everyone will agree with having partial content (that mismatches its key) lying around.
So what do you say?
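The second idea can be illustrated with plain shell; the paths below are invented for the demo and do not match git-annex's real object layout:

```shell
# Illustration of the second proposal: point the symlink at the partial
# file during transfer, then atomically retarget it once the content is
# complete and verified. Paths are invented, not git-annex's real layout.
set -e
d=$(mktemp -d)
mkdir -p "$d/.annex/tmp" "$d/.annex/objects"
echo "partial data" > "$d/.annex/tmp/KEY"       # the in-flight download
ln -sfn .annex/tmp/KEY "$d/mediafile"           # readable while transferring
mv "$d/.annex/tmp/KEY" "$d/.annex/objects/KEY"  # transfer done, key verified
ln -sfn .annex/objects/KEY "$d/mediafile"       # final, key-matching location
cat "$d/mediafile"                              # -> partial data
```

The `ln -sfn` retarget is what a player following the symlink would ride through: it reads the growing temp file during the transfer and the verified object afterwards.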

@@ -0,0 +1,16 @@
[[!comment format=mdwn
username="http://hands.com/~phil/"
nickname="hands"
subject="and you can specify ranges to dropunused"
date="2012-11-02T09:07:48Z"
content="""
so having run:

    git annex unused

you can then run:

    git annex dropunused 1-10000

or whatever, and it deletes the items in that range from the most recent <tt>unused</tt> invocation.
"""]]