Merge branch 'master' of ssh://git-annex.branchable.com
73540f3e16
3 changed files with 32 additions and 0 deletions
@ -0,0 +1,11 @@
[[!comment format=mdwn
username="https://www.google.com/accounts/o8/id?id=AItOawnx8kHW66N3BqmkVpgtXDlYMvr8TJ5VvfY"
nickname="Yaroslav"
subject="great to see such a large scale effort ongoing"
date="2015-03-06T04:47:30Z"
content="""
and I would still maintain my view that removing the intermediate directory within .git/annex/objects, whose current role is simply to provide read-only protection, might halve the burden on the underlying file system, whether the annex repos are many or a single one [1]. The lean view [2] could also be of good use. Similar exercises with simulated annexes holding >5M files also \"helped\" to identify problems with ZOL (ZFS on Linux) caching, suggesting that merely handling such vast arrays of tiny files (such as dead symlinks) gives filesystems a good test, so the leaner the impact, the better.
[1] e.g. https://github.com/datalad/datalad/issues/32#issuecomment-70523036
[2] https://github.com/datalad/datalad/issues/25
"""]]
@ -29,12 +29,17 @@ Why doesn't the UUID work? :/
I even [tried renaming the remote to the UUID... didn't work](http://ix.io/gJI)
**Solution**: Neither the UUID nor the description is used by `get`. I also should not have resorted to a [[special_remotes]] setup for setting up a git remote.
# Issue 1
I keep getting `git-annex-shell: user error (git ["config","--null","--list"] exited 126)` even though when I run `git config` myself it exits with status 0: <http://ix.io/gJG>
**Solution**: This was because my ssh git URL was incorrect. A better error message has been implemented: <http://source.git-annex.branchable.com/?p=source.git;a=commitdiff;h=3439ea4>
# Issue 2
I can't work out the [git-annex remote type for ssh, in order to rename the remote](http://ix.io/gJH). I think the issue here is that my ssh remote name "Jamie's bible" doesn't match the `git remote` name bible.
**Solution**: A _rw_ git URL configured with `git remote` is not one of the [[special_remotes]]; I confused the two. If you need to define a public git URL ([[time capsule use case|future_proofing]]), it is possible with an undocumented `git annex initremote foo type=git location=url`. So to summarise: just manually set up the git remote with `git remote add <name> ssh://someplace/path/to/repo` (don't worry about the name) and git-annex will find it!
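A sketch of that setup (the remote name and ssh path are placeholders in the spirit of the post, not real values):

```shell
# A plain git remote is all git-annex needs; no special remote setup.
git remote add bible ssh://someplace/path/to/repo
# git-annex learns the remote's annex UUID when the git-annex branch
# is fetched from it:
git fetch bible
# After that, get can locate content on the new remote by itself:
git annex get somefile
```

The remote name is purely local; git-annex matches repositories by UUID, not by name or description.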
doc/todo/Facilitate_public_pretty_S3_URLs.mdwn (new file, 16 lines)
@ -0,0 +1,16 @@
I archive all my photos/video to a bucket CNAMED to http://s.natalian.org/ with a simple YYYY-MM-DD prefix.
E.g. <http://s.natalian.org/2015-03-06/1425615579_1918x1060.png>
I'm not doing a great job of backing up the S3 bucket to another S3 compatible host, since `s3cmd sync`/`aws sync` is so slow, but that's beside the point. Ideally it could be tracked by **git-annex**!
Adding all the objects into git-annex would, IIUC, currently require me:
* to download the ~80GB and then add them to git-annex
* there is no way to keep my current S3 URLs with the [[special_remotes/S3]], since `git-annex` has its own way of naming objects in a bucket, e.g. https://s3-ap-southeast-1.amazonaws.com/s3-10418340-834d-41c2-b38f-7ee84bf6a23a/SHA256E-s1034208123--235e4f288d094c2e1870bc3d9d353abf34542c04c1d26905e882718a7ccf74cf.mp4 - and I'd rather not have HTTP redirects
* AFAICT there is no way currently with git-annex to mark the [[special_remotes/S3]] as public, which is needed for public URLs to work
* AFAICT there is currently no automated method of creating the mapping via `git-annex addurl` to the public URL of each file in the bucket
The ideal solution in my mind is for git-annex to track the contents of S3 as they are now, preserving the URLs and tracking the checksums in a separate index file.
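A hedged sketch of what could be done today without downloading the ~80GB, assuming the `aws` CLI and the CNAMEd bucket above; note `addurl --fast` records URL-based keys rather than checksums, so this only approximates the ideal solution:

```shell
# List every key in the bucket and register its existing public URL
# in git-annex. --fast records the URL without downloading content.
aws s3 ls s3://s.natalian.org --recursive \
  | awk '{print $4}' \
  | while read -r key; do
      git annex addurl --fast --file "$key" "http://s.natalian.org/$key"
    done
```

This preserves the pretty URLs, but the missing pieces from the list above (real checksums, and marking the S3 remote public) would still need support in git-annex itself.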
Thank you!