From 199f2942dc8a95dbedf03f04745cb8eaac59bafb Mon Sep 17 00:00:00 2001
From: "https://www.google.com/accounts/o8/id?id=AItOawkRGMQkg9ck_pr47JXZV_C2DJQXrO8LgpI"
Date: Sat, 13 Sep 2014 06:28:01 +0000
Subject: [PATCH 1/3] Added a comment: Hard linking on local clone

---
 ...comment_1_16b13b2510183a9da5f960ae5765e581._comment | 10 ++++++++++
 1 file changed, 10 insertions(+)
 create mode 100644 doc/devblog/day_219__catching_up_and_looking_back/comment_1_16b13b2510183a9da5f960ae5765e581._comment

diff --git a/doc/devblog/day_219__catching_up_and_looking_back/comment_1_16b13b2510183a9da5f960ae5765e581._comment b/doc/devblog/day_219__catching_up_and_looking_back/comment_1_16b13b2510183a9da5f960ae5765e581._comment
new file mode 100644
index 0000000000..5b839b55cb
--- /dev/null
+++ b/doc/devblog/day_219__catching_up_and_looking_back/comment_1_16b13b2510183a9da5f960ae5765e581._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="https://www.google.com/accounts/o8/id?id=AItOawkRGMQkg9ck_pr47JXZV_C2DJQXrO8LgpI"
+ nickname="Michael"
+ subject="Hard linking on local clone"
+ date="2014-09-13T06:28:01Z"
+ content="""
+Thanks for this feature. It will save a lot of space when working on one-off projects with big scientific datasets.
+
+Unfortunately, there is probably no easy way to achieve similar savings across file systems. On our shared cluster, individual labs keep their data in separate ZFS volumes (to ease individual backup handling), but data is often shared (i.e. copied) across volumes when cloning an annex. We need expensive de-duplication on the backup server to at least prevent this kind of waste from hitting the backups -- but the master file server still suffers (the de-duplication ratio sometimes approaches a factor of 2.0).
+"""]]

From 911d05bd3db294fd5f9140277fc45316b448653f Mon Sep 17 00:00:00 2001
From: Petter_petterson
Date: Sat, 13 Sep 2014 07:54:58 +0000
Subject: [PATCH 2/3] Added a comment: addition

---
 .../comment_1_fc914b5998a09943fc8c1917a0e36096._comment | 8 ++++++++
 1 file changed, 8 insertions(+)
 create mode 100644 doc/forum/How_do_I_sync_files_from_mobile_to_a_repo__63__/comment_1_fc914b5998a09943fc8c1917a0e36096._comment

diff --git a/doc/forum/How_do_I_sync_files_from_mobile_to_a_repo__63__/comment_1_fc914b5998a09943fc8c1917a0e36096._comment b/doc/forum/How_do_I_sync_files_from_mobile_to_a_repo__63__/comment_1_fc914b5998a09943fc8c1917a0e36096._comment
new file mode 100644
index 0000000000..ac49670b9e
--- /dev/null
+++ b/doc/forum/How_do_I_sync_files_from_mobile_to_a_repo__63__/comment_1_fc914b5998a09943fc8c1917a0e36096._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="Petter_petterson"
+ ip="89.160.15.173"
+ subject="addition"
+ date="2014-09-13T07:54:58Z"
+ content="""
+I understand that copies of the cellphone's photos are stored on the server too; when I type `git annex whereis` I see that they exist on the server. But I need to be able to copy out the jpg files at will, for editing and use in other places.
+"""]]

From 971f9924e0d5b8ee3f22c2bb21c29e09daca63ca Mon Sep 17 00:00:00 2001
From: "https://www.google.com/accounts/o8/id?id=AItOawmH7o6q2l99M-PQolOfbR3_i5B_jtTIcAE"
Date: Sat, 13 Sep 2014 15:29:18 +0000
Subject: [PATCH 3/3] Added a comment: How to publish your files to the public

---
 ..._cf6755d88463878f2ea6e4c300899027._comment | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 doc/tips/using_Amazon_S3/comment_7_cf6755d88463878f2ea6e4c300899027._comment

diff --git a/doc/tips/using_Amazon_S3/comment_7_cf6755d88463878f2ea6e4c300899027._comment b/doc/tips/using_Amazon_S3/comment_7_cf6755d88463878f2ea6e4c300899027._comment
new file mode 100644
index 0000000000..3c7b817f72
--- /dev/null
+++ b/doc/tips/using_Amazon_S3/comment_7_cf6755d88463878f2ea6e4c300899027._comment
@@ -0,0 +1,33 @@
+[[!comment format=mdwn
+ username="https://www.google.com/accounts/o8/id?id=AItOawmH7o6q2l99M-PQolOfbR3_i5B_jtTIcAE"
+ nickname="Giovanni"
+ subject="How to publish your files to the public"
+ date="2014-09-13T15:29:18Z"
+ content="""
+I don't know if this is what Jack wanted, but you can upload your files to S3 and let them be accessible through a public URL.
+
+First, go to (or create) the bucket you will use at [S3](https://console.aws.amazon.com/s3/) and add a public read policy to it:
+
+```
+{
+    \"Version\": \"2008-10-17\",
+    \"Statement\": [
+        {
+            \"Sid\": \"AllowPublicRead\",
+            \"Effect\": \"Allow\",
+            \"Principal\": {
+                \"AWS\": \"*\"
+            },
+            \"Action\": \"s3:GetObject\",
+            \"Resource\": \"arn:aws:s3:::BUCKETNAME/*\"
+        }
+    ]
+}
+```
+
+Then set up your special remote with the options `encryption=none`, `bucket=BUCKETNAME`, and `chunk=0` (plus any others you want).
+
+Your files will then be accessible at `http://BUCKETNAME.s3-website-LOCATION.amazonaws.com/KEY`, where LOCATION is the region you specified through the `datacenter` option and KEY is the key (e.g. a SHA256-based key) that git-annex created for the file, which you can find by running `git annex lookupkey FILEPATH`.
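+
+Putting those pieces together, the public URL can be assembled like this (a sketch only; the bucket name, datacenter location, and key below are made-up placeholders, not values from a real repository):

```shell
# Illustrative placeholders -- substitute your own values.
BUCKET=BUCKETNAME
LOCATION=eu-west-1                    # whatever you passed as datacenter=
KEY=SHA256E-s1234--abc123.jpg         # as printed by: git annex lookupkey FILEPATH
URL="http://${BUCKET}.s3-website-${LOCATION}.amazonaws.com/${KEY}"
echo "$URL"
```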
+
+This way you can share a link to each file you have at your S3 remote.
+"""]]