Commit graph

46 commits

Joey Hess
0e44c252c8
avoid getting creds from environment during autoenable
When autoenabling special remotes of type S3, webdav, or glacier, do not
take login credentials from environment variables, as the user may not be
expecting the autoenable to happen, and may have those set for other
purposes.
2021-03-17 09:41:12 -04:00
meribold
52412b773b AWS tells me that the Reduced Redundancy storage class is less cost-effective than Standard and is not recommended 2020-06-08 14:58:48 +00:00
Joey Hess
1532d67c3e
S3: Support signature=v4
Set signature=v4 to use S3 Signature Version 4. Some S3 services seem to
require v4, while others may only support v2, which remains the default.

I'm also not sure if v4 works correctly in all cases; there is this
upstream bug report: https://github.com/aristidb/aws/issues/262
I've only tested it against the default S3 endpoint.
2020-05-07 13:18:11 -04:00
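For example, a sketch of setting up a remote against an S3-compatible service that requires Signature Version 4 (the remote name, host, and bucket here are placeholders):

    git annex initremote mys3 type=S3 encryption=shared host=s3.example.com bucket=mybucket signature=v4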
Joey Hess
2be4122bfc
include passthrough params in --describe-other-params 2020-01-20 16:53:27 -04:00
Joey Hess
7264b9aa8c
tweak wording 2019-05-01 14:29:10 -04:00
Joey Hess
67d6280242
document importtree for S3 2019-04-23 13:19:08 -04:00
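A sketch of what the documented setup looks like (the remote, bucket, and branch names are placeholders); importtree=yes is normally combined with exporttree=yes, and the tree is then pulled in with git annex import:

    git annex initremote mys3 type=S3 encryption=none bucket=mybucket exporttree=yes importtree=yes
    git annex import master --from mys3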
Joey Hess
39d55a8541
back-compat warning for protocol=https 2019-03-22 12:22:34 -04:00
Joey Hess
7d37011a11
S3: Added protocol= initremote setting, to allow https to be used on a non-standard port
protocol=https implies port=443 and
port=443 implies protocol=https
-- this was necessary because the existing configs set port=443, but
with a protocol setting, users will naturally want to use it, and then
there's no need for them to supply the default https port. So we keep
back-compat, add a nicer way to enable https, and also add support for
non-standard https ports.
2019-03-22 12:17:05 -04:00
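As a sketch of how this can be used (the remote name and port are placeholders), https on a non-standard port becomes:

    git annex enableremote mys3 protocol=https port=8443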
Joey Hess
7dd2672bd2
remove manual versioning mention from here
git-annex will tell the user if they need to manually enable versioning
2019-01-29 14:13:14 -04:00
Joey Hess
b7daf2685f
support public versioned S3 access
Makes git annex whereis display the versionId urls.

And, when an S3 remote is enabled without creds, git-annex will use the
versionId urls to access its contents.

This commit was sponsored by Fernando Jimenez on Patreon.
2018-09-06 14:31:41 -04:00
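A sketch of the resulting workflow for someone cloning the repository without any AWS credentials (the remote and file names are placeholders):

    git annex enableremote mys3
    git annex whereis myfile
    git annex get myfile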
Joey Hess
7407a80c27
S3: Support AWS_SESSION_TOKEN
This commit was sponsored by Boyd Stephen Smith Jr. on Patreon.
2018-09-05 15:53:57 -04:00
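A sketch of using temporary credentials via the standard AWS environment variables (all values and names are placeholders):

    export AWS_ACCESS_KEY_ID=myaccesskeyid
    export AWS_SECRET_ACCESS_KEY=mysecretkey
    export AWS_SESSION_TOKEN=mysessiontoken
    git annex initremote mys3 type=S3 encryption=none bucket=mybucket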
Joey Hess
0ff5a41311
S3 versioning=yes config
Not yet used.

This commit was supported by the NSF-funded DataLad project.
2018-08-30 13:45:28 -04:00
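A sketch of setting it at initremote time (the remote and bucket names are placeholders):

    git annex initremote mys3 type=S3 encryption=none bucket=mybucket versioning=yes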
Joey Hess
b8780da832
hint about when requeststyle=path is needed 2018-08-01 16:06:34 -04:00
Joey Hess
ba0745b5c2
S3: fix documentation of publicurl
5f0f063a7a documented it as being
configured automatically, but the code never did that. Rather than trying
to hard-code whatever urls Amazon uses for its buckets, it seems better
to ask the user to find the url and set it.
2018-07-02 12:30:39 -04:00
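A sketch of setting publicurl by hand (the url and remote name are placeholders):

    git annex enableremote mys3 publicurl=https://mybucket.s3.amazonaws.com/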
Joey Hess
44cd5ae313
S3 export (untested)
It opens an http connection per file exported, but then so does git
annex copy --to s3.

Decided not to munge exported filenames for IA. There is too large a chance of
the munging having confusing results. Instead, export of files not
supported by IA, e.g. with spaces in their names, will fail.

This commit was supported by the NSF-funded DataLad project.
2017-09-08 15:46:24 -04:00
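A sketch of what this enables (the remote, bucket, and branch names are placeholders): an exporttree remote is created and a tree is exported to it:

    git annex initremote mys3 type=S3 encryption=none bucket=mybucket exporttree=yes
    git annex export master --to mys3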
http://xgm.de/oid/
61f362b4ff 2017-02-14 18:14:24 +00:00
Joey Hess
5d05aad74c
S3: Allow configuring with requeststyle=path to use path-style bucket access instead of the default DNS-style access.
untested
2016-02-09 15:36:36 -04:00
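A sketch for an S3-compatible server that only understands path-style requests (the host, bucket, and remote name are placeholders):

    git annex initremote mys3 type=S3 encryption=shared host=s3.example.com requeststyle=path bucket=mybucket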
Joey Hess
4153507864
Fix failure to build with aws-0.13.0 and finish nearline support.
* Fix failure to build with aws-0.13.0.
* When built with aws-0.13.0, the S3 special remote can be used to create
  google nearline buckets, by setting storageclass=NEARLINE.
2015-11-02 11:14:03 -04:00
Joey Hess
26d6566307 S3 storage classes expansion
Added support for storageclass=STANDARD_IA to use Amazon's
new Infrequent Access storage.

Also allows using storageclass=NEARLINE to use Google's NearLine storage.

The necessary changes to aws to support this are in
https://github.com/aristidb/aws/pull/176
2015-09-17 17:20:01 -04:00
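For example, a sketch of selecting Infrequent Access storage at initremote time (the remote and bucket names are placeholders):

    git annex initremote mys3 type=S3 encryption=shared bucket=mybucket storageclass=STANDARD_IA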
Joey Hess
5f0f063a7a S3: Publicly accessible buckets can be used without creds. 2015-06-05 16:23:35 -04:00
Joey Hess
4acd28bf21 public=yes config to send AclPublicRead
In my tests, this has to be set when uploading a file to the bucket;
then the file can be accessed using the bucketname.s3.amazonaws.com
url.

Setting it when creating the bucket didn't seem to make the whole bucket
public, or allow accessing files stored in it. But I have gone ahead and
also sent it when creating the bucket just in case that is needed in some
case.
2015-06-05 14:38:01 -04:00
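A sketch of a publicly readable remote (the bucket, url, and remote name are placeholders):

    git annex initremote mys3 type=S3 encryption=none bucket=mybucket public=yes publicurl=https://mybucket.s3.amazonaws.com/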
anarcat
abe4181067 remove too much time 2015-05-20 22:19:37 +00:00
https://id.koumbit.net/anarcat
9fa6841e55 2015-02-13 00:27:31 +00:00
Joey Hess
2b138f684e add link to google cloud storage tip 2015-02-04 14:54:16 -04:00
Joey Hess
bc633954e4 remove obsolete note; s3-aws was merged already 2015-01-05 16:45:22 -04:00
https://www.google.com/accounts/o8/id?id=AItOawnWvnTWY6LrcPB4BzYEBn5mRTpNhg5EtEg
4136300f64 2014-12-08 17:24:31 +00:00
Joey Hess
2fbaf6d89c reorder 2014-11-06 14:26:01 -04:00
Joey Hess
83901c6c17 better partsize docs
The minimum allowed size actually refers to the part size!
2014-11-04 16:38:46 -04:00
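A sketch of configuring multipart uploads with a part size comfortably above S3's minimum (the remote name, bucket, and size are placeholders):

    git annex initremote mys3 type=S3 encryption=shared bucket=mybucket partsize=1GiB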
Joey Hess
93feefae05 Revert "work around minimum part size problem"
This reverts commit a42022d8ff.

I misunderstood the cause of the problem.
2014-11-04 16:21:55 -04:00
Joey Hess
a42022d8ff work around minimum part size problem
When uploading the last part of a file, which was 640229 bytes, S3 rejected
that part: "Your proposed upload is smaller than the minimum allowed size"

I don't know what the minimum is, but the fix is just to include the last
part into the previous part. Since this can result in a part that's
double-sized, use half-sized parts normally.
2014-11-04 16:06:13 -04:00
Joey Hess
6e89d070bc WIP multipart S3 upload
I'm a little stuck on getting the list of etags of the parts.
This seems to require taking the md5 of each part locally,
which doesn't get along well with lazily streaming in the part from the
file. It would need to read the file twice, or lose laziness and buffer a
whole part -- but parts might be quite large.

This seems to be a problem with the API provided; S3 is supposed to return
an etag, but that is not exposed. I have filed a bug:
https://github.com/aristidb/aws/issues/141
2014-10-28 14:17:30 -04:00
Joey Hess
57872b457b pass metadata headers and storage class to S3 when putting objects 2014-08-09 14:44:53 -04:00
Joey Hess
6fcca2f13e WIP converting S3 special remote from hS3 to aws library
Currently, initremote works, but not the other operations. They should be
fairly easy to add from this base.

Also, https://github.com/aristidb/aws/issues/119 blocks internet archive
support.

Note that since http-conduit is used, this also adds https support to S3,
although git-annex encrypts everything anyway, so that may not be extremely
useful. It is not enabled by default, because existing S3 special remotes
have port=80 in their config. Setting port=443 will enable it.

This commit was sponsored by Daniel Brockman.
2014-08-08 19:00:53 -04:00
Joey Hess
32e4368377 S3: support chunking
The assistant defaults to 1MiB chunk size for new S3 special remotes.
This works around a couple of bugs:
  http://git-annex.branchable.com/bugs/S3_memory_leaks/
  http://git-annex.branchable.com/bugs/S3_upload_not_using_multipart/
2014-08-02 15:51:58 -04:00
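A sketch of configuring chunking by hand rather than relying on the assistant's default (the names and size are placeholders):

    git annex initremote mys3 type=S3 encryption=shared bucket=mybucket chunk=1MiB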
Joey Hess
47d0219ac8 update docs for changed initremote encryption syntax 2013-09-04 23:46:50 -04:00
Joey Hess
85d83e7756 To enable an existing special remote, the new enableremote command must be used. The initremote command now is used only to create new special remotes. 2013-04-26 18:22:52 -04:00
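A sketch of the two commands side by side (the remote name and settings are placeholders):

    # create a brand new special remote
    git annex initremote mys3 type=S3 encryption=shared bucket=mybucket
    # in another clone, enable the already-existing special remote
    git annex enableremote mys3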
Joey Hess
9221e62d87 Allow controlling whether login credentials for S3 and webdav are committed to the repository, by setting embedcreds=yes|no when running initremote. 2012-11-19 17:32:58 -04:00
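A sketch of keeping the credentials out of the repository (the remote and bucket names are placeholders):

    git annex initremote mys3 type=S3 encryption=shared bucket=mybucket embedcreds=no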
Joey Hess
e4bf74a965 store S3 creds in a 600 mode file inside the local git repo 2012-09-26 14:42:32 -04:00
Joey Hess
ad4e152fd6 S3: Add fileprefix setting. 2012-08-09 13:54:54 -04:00
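A sketch of storing all keys under a prefix inside the bucket (the prefix, remote, and bucket names are placeholders):

    git annex initremote mys3 type=S3 encryption=shared bucket=mybucket fileprefix=annex/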
https://www.google.com/accounts/o8/id?id=AItOawnoUOqs_lbuWyZBqyU6unHgUduJwDDgiKY
8cca7616ae 2012-05-30 00:25:22 +00:00
Joey Hess
f97009fffe correct S3 environment variable names 2012-05-29 15:01:36 -04:00
Joey Hess
67c9f84a1f fix broken links 2011-11-08 12:23:03 -04:00
Joey Hess
27285adc20 link 2011-05-16 11:55:07 -04:00
Joey Hess
1d2984441c add a few tweaks to make it easy to use the Internet Archive's variant of S3
In particular, munge key filenames to comply with the IA's filename limits,
disable encryption, support their nonstandard way of creating buckets, and
allow x-amz-* headers to be specified in initremote to set item metadata.

Still TODO: initremote does not handle multiword metadata headers right.
2011-05-16 11:20:35 -04:00
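A sketch of the Internet Archive variant (the remote name, item name, and metadata value are placeholders); host selects the IA endpoint, encryption is disabled, and an x-amz-* header sets item metadata:

    git annex initremote archive type=S3 host=s3.us.archive.org bucket=my-test-item encryption=none x-amz-meta-mediatype=texts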
Joey Hess
e259c86975 cleanup 2011-05-16 02:21:40 -04:00
Joey Hess
647f7cf47c added documentation for using the Internet Archive as a remote via S3
Renamed Amazon_S3 page to just S3.
2011-05-16 02:07:59 -04:00
Renamed from doc/special_remotes/Amazon_S3.mdwn