From ca68beaa64cb051bd2d8c2e78c8f63bf8fc4cf0a Mon Sep 17 00:00:00 2001
From: Joey Hess
Date: Tue, 4 Nov 2014 17:18:20 -0400
Subject: [PATCH] add todo item so I don't forget; it will only come into
 effect when this branch is merged

---
 doc/todo/S3_multipart_interruption_cleanup.mdwn | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
 create mode 100644 doc/todo/S3_multipart_interruption_cleanup.mdwn

diff --git a/doc/todo/S3_multipart_interruption_cleanup.mdwn b/doc/todo/S3_multipart_interruption_cleanup.mdwn
new file mode 100644
index 0000000000..adb5fd2cb0
--- /dev/null
+++ b/doc/todo/S3_multipart_interruption_cleanup.mdwn
@@ -0,0 +1,14 @@
+When a multipart S3 upload is being made, and gets interrupted,
+the parts remain in the bucket, and S3 may charge for them.
+
+I am not sure what happens if the same object gets uploaded again. Is S3
+nice enough to remove the old parts? I need to find out..
+
+If not, this needs to be dealt with somehow. One way would be to configure an
+expiry of the uploaded parts, but this is tricky as a huge upload could
+take arbitrarily long. Another way would be to record the uploadid and the
+etags of the parts, and then resume where it left off the next time the
+object is sent to S3. (Or at least cancel the old upload; resume isn't
+practical when uploading an encrypted object.)
+
+It could store that info in either the local FS or the git-annex branch.
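
A note on the S3 mechanics behind this todo (not part of the patch itself): each initiated multipart upload gets its own UploadId, so parts from an interrupted upload are not cleaned up by simply uploading the same key again; they stay in the bucket until that upload is completed or aborted. The sketch below is illustrative only, using Python/boto3 and a hypothetical bucket name rather than git-annex's actual Haskell code, and shows the two cleanup approaches the todo mentions: aborting a leftover upload by its uploadid (listing its parts and etags first, which is also what a resume would need), and configuring an expiry of incomplete uploads via a bucket lifecycle rule.

    # Illustrative sketch only -- git-annex's S3 remote is Haskell (the aws
    # library); this uses Python/boto3 and a made-up bucket name to show the
    # underlying S3 operations the todo refers to.
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-annex-bucket"  # hypothetical bucket name

    # Find multipart uploads that were started but never completed or aborted;
    # these are the leftover parts the bucket keeps (and charges for).
    for upload in s3.list_multipart_uploads(Bucket=BUCKET).get("Uploads", []):
        key, upload_id = upload["Key"], upload["UploadId"]
        # The parts that made it before the interruption; their ETags are what
        # would have to be recorded (locally or in the git-annex branch) to
        # resume instead of restarting.
        parts = s3.list_parts(Bucket=BUCKET, Key=key, UploadId=upload_id)
        print(key, upload_id, len(parts.get("Parts", [])), "parts left behind")
        # The simpler cleanup: cancel the old upload, discarding its parts.
        s3.abort_multipart_upload(Bucket=BUCKET, Key=key, UploadId=upload_id)

    # The expiry approach: a lifecycle rule that aborts incomplete multipart
    # uploads after a number of days. As the todo notes, this is tricky,
    # because a legitimate upload that takes longer than the cutoff would be
    # aborted out from under the uploader.
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "abort-stale-multipart-uploads",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
                }
            ]
        },
    )

Either way, the uploadid (plus, for a resume, the part etags) has to be recorded at upload time, which is the local-FS-versus-git-annex-branch question at the end of the todo.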