S3: Publically accessible buckets can be used without creds.
This commit is contained in:
parent
4acd28bf21
commit
5f0f063a7a
8 changed files with 115 additions and 61 deletions
@@ -50,8 +50,12 @@ the S3 remote.
 * `public` - Set to "yes" to allow public read access to files sent
   to the S3 remote. This is accomplished by setting an ACL when each
-  file is uploaded to the remote. So, it can be changed but changes
-  will only affect subsequent uploads.
+  file is uploaded to the remote. So, changes to this setting will
+  only affect subsequent uploads.

 * `publicurl` - Configure the URL that is used to download files
   from the bucket when they are available publicly.
   (This is automatically configured for Amazon S3 and the Internet Archive.)

 * `partsize` - Amazon S3 only accepts uploads up to a certain file size,
   and storing larger files requires a multipart upload process.
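The options above are all passed at `initremote` time. A minimal sketch of how they might be combined — the remote name, bucket, and URL here are hypothetical placeholders, and `encryption=none` is just one possible choice:

```shell
# Hypothetical sketch: create an S3 special remote with public read access.
# "cloud" and "my-public-bucket" are placeholder names, not from the source.
git annex initremote cloud type=S3 encryption=none \
    bucket=my-public-bucket \
    public=yes \
    publicurl=https://my-public-bucket.s3.amazonaws.com/ \
    partsize=1GiB
```

Since `public=yes` only affects the ACL set at upload time, it is worth deciding on it before the first `git annex copy` to the remote.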
@@ -22,16 +22,17 @@ Next, create the S3 remote, and describe it.
The configuration for the S3 remote is stored in git, so making another
repository use the same S3 remote is easy:

    # cd /media/usb/annex
    # export AWS_ACCESS_KEY_ID="08TJMT99S3511WOZEP91"
    # export AWS_SECRET_ACCESS_KEY="s3kr1t"
    # git pull laptop
    # git annex enableremote cloud
    enableremote cloud (gpg) (checking bucket) ok

Now the remote can be used like any other remote.
Notice that to enable an existing S3 remote, you have to provide the AWS
credentials, because they were not stored in the repository. (It is
possible to configure git-annex to store them, but that is not the default.)

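The S3 special remote's `embedcreds=yes` option is the way to opt in to storing the credentials in the repository. A sketch of what that might look like, reusing the example credentials above (with `encryption=shared` as one possible choice, so the embedded credentials are encrypted):

```shell
# Hypothetical sketch: embed the AWS credentials (encrypted) in the git
# repository at initremote time, so a later "git annex enableremote" in a
# clone does not need the environment variables to be set again.
export AWS_ACCESS_KEY_ID="08TJMT99S3511WOZEP91"
export AWS_SECRET_ACCESS_KEY="s3kr1t"
git annex initremote cloud type=S3 encryption=shared embedcreds=yes
```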
    # git annex copy my_cool_big_file --to cloud
    copy my_cool_big_file (gpg) (checking cloud...) (to cloud...) ok
    # git annex move video/hackity_hack_and_kaxxt.mov --to cloud
    move video/hackity_hack_and_kaxxt.mov (checking cloud...) (to cloud...) ok

See [[public_Amazon_S3_remote]] for how to set up an Amazon S3 remote that
can be used by the public, without them needing AWS credentials.

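For such a public remote, a download URL is simply the `publicurl` followed by the object's key name. A sketch of constructing one — the bucket name and key below are hypothetical placeholders, not real values from the source:

```shell
# Hypothetical sketch: build the public download URL for one object.
# Anyone could then fetch it with e.g. curl or a browser, no AWS creds needed.
bucket="my-public-bucket"                        # placeholder bucket name
key="SHA256E-s12345--0123456789abcdef.mov"       # placeholder git-annex key
url="https://${bucket}.s3.amazonaws.com/${key}"
echo "$url"
# prints https://my-public-bucket.s3.amazonaws.com/SHA256E-s12345--0123456789abcdef.mov
```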
-See [[special_remotes/S3]] for details.
+See [[special_remotes/S3]] for details about configuring S3 remotes.

@@ -9,3 +9,5 @@ Besides, you never know if and when the file really is available on s3, so runni
How hard would it be to fix that in the s3 remote?

Thanks! --[[anarcat]]
+
+> [[done]] --[[Joey]]
@@ -0,0 +1,10 @@
[[!comment format=mdwn
 username="joey"
 subject="""comment 3"""
 date="2015-06-05T20:17:38Z"
 content="""
The remote can indeed fall back when there are no creds.

Also, git-annex can set an ACL on files it uploads, if the remote is
configured with public=yes, so no manual ACL setting will be needed.
"""]]
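For files that were uploaded before `public=yes` was set, the equivalent manual step would be changing the ACL per object. A sketch of doing that with the AWS CLI — the bucket and key names are hypothetical placeholders:

```shell
# Hypothetical sketch: grant public read access to one already-uploaded
# object, which is what git-annex's public=yes does automatically on upload.
aws s3api put-object-acl \
    --bucket my-public-bucket \
    --key "SHA256E-s12345--0123456789abcdef.mov" \
    --acl public-read
```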
@@ -1,11 +0,0 @@
[[!comment format=mdwn
 username="joey"
 subject="""comment 3"""
 date="2015-06-05T17:28:52Z"
 content="""
Based on
<http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html>
and my testing, S3 does not default to allowing public access to buckets. So,
this seems like something that it makes sense for the user to
configure manually when setting up an S3 remote.
"""]]