Revert "work around minimum part size problem"

This reverts commit a42022d8ff.

I misunderstood the cause of the problem.
Joey Hess 2014-11-04 16:21:55 -04:00
parent a42022d8ff
commit 93feefae05
2 changed files with 8 additions and 19 deletions

@@ -181,16 +181,9 @@ store r h = fileStorer $ \k f p -> do
 			}
 		uploadid <- S3.imurUploadId <$> sendS3Handle h startreq
-		{- The actual part size will be a even multiple of the
-		 - 32k chunk size that hGetUntilMetered uses.
-		 -
-		 - Also, half-size parts are used. This is so that
-		 - the final part of a file can be rolled into the
-		 - last full-size part, which avoids a problem when the
-		 - final part could otherwise be too small for S3 to accept
-		 - it.
-		 -}
-		let partsz' = (partsz `div` toInteger defaultChunkSize `div` 2) * toInteger defaultChunkSize
+		-- The actual part size will be a even multiple of the
+		-- 32k chunk size that hGetUntilMetered uses.
+		let partsz' = (partsz `div` toInteger defaultChunkSize) * toInteger defaultChunkSize
 
 		-- Send parts of the file, taking care to stream each part
 		-- w/o buffering in memory, since the parts can be large.
@@ -202,7 +195,7 @@ store r h = fileStorer $ \k f p -> do
 			else do
 				-- Calculate size of part that will
 				-- be read.
-				let sz = if fsz - pos < partsz' * 2
+				let sz = if fsz - pos < partsz'
 					then fsz - pos
 					else partsz'
 				let p' = offsetMeterUpdate p (toBytesProcessed pos)

@@ -21,14 +21,10 @@ the S3 remote.
 * `chunk` - Enables [[chunking]] when storing large files.
   `chunk=1MiB` is a good starting point for chunking.
-* `partsize` - Amazon S3 only accepts uploads up to a certian file size,
-  and storing larger files requires a multipart upload process.
-  Setting `partsize=1GiB` is recommended for Amazon S3; this will
-  cause multipart uploads to be done using parts up to 1GiB in size.
-  This is not enabled by default, since other S3 implementations may
-  not support multipart uploads, but can be enabled or changed at any
-  time.
+* `partsize` - Specifies the largest object to attempt to store in the
+  bucket. Multipart uploads will be used when storing larger objects.
+  This is not enabled by default, but can be enabled or changed at any
+  time. Setting `partsize=1GiB` is reasonable for S3.
 * `keyid` - Specifies the gpg key to use for [[encryption]].
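
As a usage example for the reverted documentation above, enabling these
options when creating an S3 special remote might look like the
following (the remote name mys3 and the encryption=none choice are
illustrative only; pick an encryption setting appropriate for your
setup):

    git annex initremote mys3 type=S3 encryption=none chunk=1MiB partsize=1GiB

Per the doc text above, partsize can be enabled or changed at any time,
for example by re-running the remote setup with git annex enableremote.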