Added a comment
commit c7684de78e (parent 23bd3564d1)
1 changed file with 12 additions and 0 deletions
@@ -0,0 +1,12 @@
[[!comment format=mdwn
username="http://joeyh.name/"
ip="209.250.56.96"
subject="comment 3"
date="2014-10-24T16:02:23Z"
content="""
The OOM is [[S3_memory_leaks]]; fixed in the s3-aws branch.

Yeah, GET of a bucket is doable. Another problem with it, though: if the bucket has a lot of contents, such as many files or large files split into many chunks, all of that listing has to be buffered in memory or processed as a stream. It would make sense in operations where git-annex knows it wants to check every key in a bucket; `git annex unused --from $s3remote` is the case that springs to mind where doing that could be quite useful. Integrating it with `get`, not so much.
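
To illustrate the streaming side of that, here's a minimal sketch of paging through a bucket listing with the `aws` Hackage library (the one the s3-aws branch builds on). Each marker-based GET returns at most 1000 keys, which are handed to an action one page at a time, so memory use stays flat however large the bucket is. The bucket name and per-key action are placeholders, not git-annex's actual code:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Aws
import qualified Aws.S3 as S3
import Control.Monad (when)
import qualified Data.Text as T
import qualified Data.Text.IO as T

-- Walk every key in a bucket one page at a time (S3 returns at most
-- 1000 keys per GET), feeding each key to an action instead of
-- accumulating the whole listing in memory.
forEachKey :: Aws.Configuration
           -> S3.S3Configuration Aws.NormalQuery
           -> S3.Bucket
           -> (T.Text -> IO ())
           -> IO ()
forEachKey cfg s3cfg bucket action = go Nothing
  where
    go marker = do
        rsp <- Aws.simpleAws cfg s3cfg
            ((S3.getBucket bucket) { S3.gbMarker = marker })
        let keys = map S3.objectKey (S3.gbrContents rsp)
        mapM_ action keys
        -- Follow the pagination marker until the listing is complete.
        when (S3.gbrIsTruncated rsp && not (null keys)) $
            go (Just (last keys))

main :: IO ()
main = do
    -- Credentials come from the environment (AWS_ACCESS_KEY_ID etc).
    cfg <- Aws.baseConfiguration
    let s3cfg = Aws.defServiceConfig :: S3.S3Configuration Aws.NormalQuery
    -- "my-annex-bucket" is a placeholder bucket name.
    forEachKey cfg s3cfg "my-annex-bucket" T.putStrLn
```

An `unused --from $s3remote` implementation could feed each key into the unused calculation as it arrives, instead of materializing the whole listing first.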

I'd be inclined to demote this to a wishlist todo item: try using bucket GET for `unused`. And/or rethink whether it makes sense for `copy --to` to run in --fast mode by default. I've been back and forth on that question before, but only from a runtime perspective, not from a 13-cent perspective. ;)
"""]]