From c7684de78eaa8bb8516f21716084e1425ca72843 Mon Sep 17 00:00:00 2001
From: "http://joeyh.name/" <http://joeyh.name/@web>
Date: Fri, 24 Oct 2014 16:02:24 +0000
Subject: [PATCH] Added a comment

---
 ...mment_3_6ccbb1cff7bc6b4640220d98f7ce21c3._comment | 12 ++++++++++++
 1 file changed, 12 insertions(+)
 create mode 100644 doc/bugs/Issue_fewer_S3_GET_requests/comment_3_6ccbb1cff7bc6b4640220d98f7ce21c3._comment

diff --git a/doc/bugs/Issue_fewer_S3_GET_requests/comment_3_6ccbb1cff7bc6b4640220d98f7ce21c3._comment b/doc/bugs/Issue_fewer_S3_GET_requests/comment_3_6ccbb1cff7bc6b4640220d98f7ce21c3._comment
new file mode 100644
index 0000000000..353281db65
--- /dev/null
+++ b/doc/bugs/Issue_fewer_S3_GET_requests/comment_3_6ccbb1cff7bc6b4640220d98f7ce21c3._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="http://joeyh.name/"
+ ip="209.250.56.96"
+ subject="comment 3"
+ date="2014-10-24T16:02:23Z"
+ content="""
+The OOM is [[S3_memory_leaks]]; fixed in the s3-aws branch.
+
+Yeah, GET of a bucket is doable. Another problem with it, though, is that if the bucket has a lot of contents (many files, or large files split into many chunks), all of that has to be buffered in memory or processed as a stream. It would make sense in operations where git-annex knows it wants to check every key in a bucket; `git annex unused --from $s3remote` is the case that springs to mind where it could be quite useful. Integrating it with `get`, not so much.
+
+I'd be inclined to demote this to a wishlist todo item to try to use bucket GET for `unused`. And/or rethink whether it makes sense for `copy --to` to run in `--fast` mode by default. I've been back and forth on that question before, but just from a runtime perspective, not from a 13 cents perspective. ;)
+"""]]