fix S3 upload buffering problem

Provide file size to new version of hS3.

parent d8329731c6
commit 6fcd3e1ef7
4 changed files with 43 additions and 30 deletions
@@ -1,4 +1,4 @@
-S3 has two memory leaks.
+S3 has memory leaks
 
 ## with encryption
 
@@ -8,16 +8,10 @@ not yet for S3, in 5985acdfad8a6791f0b2fc54a1e116cee9c12479.
 
 ## always
 
-The other occurs independent of encryption use. Copying a 100 MB
-file to S3 causes an immediate sharp memory spike to 119 MB.
 Copying the file back from S3 causes a slow memory increase toward 119 MB.
 
-It's likely that this memory is used by the hS3 library, if it does not
-construct the message to Amazon lazily. (And it may not be possible to
-construct it lazily, if it includes checksum headers.)
-
-I have emailed the hS3 author about this. He wrote back quickly; it seems
-only getting the size of the file is causing it to be buffered, and a quick
-fix should be forthcoming. Update: 0.5.6 has been released, which will
-allow providing the file size out of band to avoid buffering when uploading.
-Downloading will take further work in hS3.
+The author of hS3 is aware of the problem, and working on it.
 
 --[[Joey]]
+
+## fixed
+
+memory leak while uploading content to S3
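The upload fix the commit relies on can be sketched in plain Haskell. This is an illustrative sketch of the "file size out of band" idea, not the actual hS3 API (the helper names below are hypothetical): the size sent as Content-Length comes from filesystem metadata, and the body is read lazily, so the file is never forced into memory just to measure it.

```haskell
import qualified Data.ByteString.Lazy as L
import System.IO (IOMode (ReadMode), hFileSize, withFile)

-- Hypothetical helpers illustrating the hS3 0.5.6 fix: the size used
-- for the upload's Content-Length header is obtained out of band,
-- instead of by buffering the whole file to count its bytes.

-- Ask the filesystem for the size; the file contents are never read.
sizeOutOfBand :: FilePath -> IO Integer
sizeOutOfBand path = withFile path ReadMode hFileSize

-- Read the body lazily; chunks are pulled in only as the uploader
-- consumes them, keeping memory use flat during the transfer.
lazyBody :: FilePath -> IO L.ByteString
lazyBody = L.readFile
```

An uploader built this way can send the size header first and then stream the lazy body, avoiding the sharp memory spike described above. As the page notes, downloading without buffering is a separate problem inside hS3.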