Did not keep backwards compat for sticky bit records. An incremental fsck
that is already in progress will start over on upgrade to this version.
This is not yet ready for merging. The autobuilders need to have sqlite
installed.
Also, interrupting a fsck --incremental does not commit the database,
so resuming with fsck --more restarts from the beginning.
Memory: Constant during a fsck of tens of thousands of files.
(But it does seem to buffer the whole transaction in memory, so
it may really scale with the number of files.)
CPU: ?
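As a rough illustration of both points above (nothing is committed until the
end, and the pending transaction grows with the number of files), here is a
minimal sketch of a single long-running transaction. It uses sqlite-simple and
an invented table layout purely for brevity; it is not git-annex code.

    {-# LANGUAGE OverloadedStrings #-}
    import Database.SQLite.Simple

    main :: IO ()
    main = do
        db <- open "fsck.db"
        execute_ db "CREATE TABLE IF NOT EXISTS fscked (key TEXT PRIMARY KEY)"
        -- Every insert happens inside one transaction; if the process is
        -- interrupted before withTransaction finishes, none of them are
        -- committed, so a resumed fsck would start over from the beginning.
        withTransaction db $
            mapM_ (\n -> execute db "INSERT OR IGNORE INTO fscked VALUES (?)"
                             (Only ("key" ++ show n)))
                  [1 .. 10000 :: Int]
        close db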
With the following flags:
$ cabal configure --ghc --prefix=/usr --with-compiler=/usr/bin/ghc --with-hc-pkg=/usr/bin/ghc-pkg --prefix=/usr --libdir=/usr/lib64 --libsubdir=git-annex-5.20141203/ghc-7.8.3.20141119 --datadir=/usr/share/ --datasubdir=git-annex-5.20141203/ghc-7.8.3.20141119 --ghc-option=-O2 --ghc-option=+RTS --ghc-option=-H64M --ghc-option=-M4G --ghc-option=-RTS --ghc-option=-O0 --ghc-option=-j4 --ghc-option=-optl-Wl,-O1 --ghc-option=-optl-Wl,--as-needed --ghc-option=-optl-Wl,--hash-style=gnu --disable-executable-stripping --docdir=/usr/share/doc/git-annex-5.20141203 --verbose --sysconfdir=/etc --disable-library-stripping --flags=-android --flags=-androidsplice --flags=-assistant --flags=cryptohash --flags=dbus --flags=-desktop-notify --flags=dns --flags=-ekg --flags=-feed --flags=-inotify --flags=pairing --flags=production --flags=-quvi --flags=s3 --flags=tahoe --flags=tdfa --flags=-testsuite --flags=-webapp --flags=-webapp-secure --flags=-webdav --flags=-xmpp
ghc detects missing module (used directly by Remote.S3):
Remote/Helper/Http.hs:16:8:
Could not find module ‘Network.HTTP.Client’
It is a member of the hidden package ‘http-client-0.3.8.2’.
Perhaps you need to add ‘http-client’ to the build-depends in your .cabal file.
Use -v to see a list of the files searched for.
Signed-off-by: Sergei Trofimovich <siarheit@google.com>
Now that deps are sorted out in hackage, cabal is unlikely to try to
install a too old AWS, so I don't think this flag is worth the bother of
being completely correct with the dependency versioning.
This avoids me needing to enable the flag on the autobuilders.
I'm a little stuck on getting the list of etags of the parts.
This seems to require taking the md5 of each part locally,
which doesn't get along well with lazily streaming in the part from the
file. It would need to read the file twice, or lose laziness and buffer a
whole part -- but parts might be quite large.
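For concreteness, here is a sketch of the second option, giving up laziness
and buffering a whole part; it is not git-annex code, the part size is a
caller-supplied assumption, and it uses cryptohash's Crypto.Hash.MD5 with hex
encoding of the digest left out:

    import qualified Crypto.Hash.MD5 as MD5
    import qualified Data.ByteString as B
    import System.IO

    -- Read each part strictly into memory so its md5 (the per-part etag
    -- that S3 computes) can be taken locally. Each part is buffered
    -- whole, which is the downside discussed above.
    md5OfParts :: FilePath -> Int -> IO [B.ByteString]
    md5OfParts file partSize = withFile file ReadMode go
      where
        go h = do
            part <- B.hGet h partSize
            if B.null part
                then return []
                else fmap (MD5.hash part :) (go h)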
This seems to be a problem with the API provided; S3 is supposed to return
an etag, but that is not exposed. I have filed a bug:
https://github.com/aristidb/aws/issues/141
Didn't know that this library existed!
This includes making git-annex not re-exec itself on start on Windows, and
making the test suite on Windows run tests without forking.
This needs optparse-applicative 0.10. Dropped support for 0.9 and older,
but kept 0.9.1 working since the autobuilders and Debian testing still use it.
(The display is not perfect with 0.9.1.)
This is needed only because of the new MonadMask constraint that bracket
has in the new version. Ifdefing it everywhere is not practical, since
Setup.hs uses it.
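For reference, the constraint in question looks roughly like this; a sketch
assuming the MonadMask meant here is the class from the exceptions package's
Control.Monad.Catch, with withTmpHandle as a made-up helper just to show the
constraint propagating to callers:

    import Control.Monad.Catch (MonadMask, bracket)
    import Control.Monad.IO.Class (MonadIO, liftIO)
    import System.IO

    -- In the new version, bracket is roughly
    --   bracket :: MonadMask m => m a -> (a -> m b) -> (a -> m c) -> m c
    -- so any monad it is used in must have a MonadMask instance.
    withTmpHandle :: (MonadMask m, MonadIO m) => FilePath -> (Handle -> m a) -> m a
    withTmpHandle f = bracket (liftIO (openFile f ReadWriteMode)) (liftIO . hClose)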
The hoary old HTTP library was only used when checking if a url exists,
when curl was not available. It had many problems, including not supporting
https at all.
Now, this is done using http-conduit for all urls that it supports. It falls
back to curl for any url that http-conduit doesn't like (probably ftp etc,
but could also be a url that its parser chokes on for whatever reason).
This adds a new dependency on http-conduit, but webdav support already
indirectly depended on that, and the s3-aws branch also uses it.
This opens up the possibility of using http-conduit for large file
downloads, but for now I've left it using wget/curl.
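A minimal sketch of that kind of existence check, using http-client (the
library underneath http-conduit) and assuming a reasonably recent version of
it; this is not git-annex's actual implementation, and it treats any
exception, including a parse failure, as "fall back to curl":

    import Control.Exception (SomeException, try)
    import Network.HTTP.Client
    import Network.HTTP.Client.TLS (tlsManagerSettings)
    import Network.HTTP.Types (methodHead, statusIsSuccessful)

    -- HEAD the url; Just True/False means http-client handled it and the
    -- url does/doesn't exist, Nothing means fall back to curl.
    urlExists :: String -> IO (Maybe Bool)
    urlExists url = do
        mgr <- newManager tlsManagerSettings
        r <- try $ do
            req <- parseRequest url
            httpNoBody (req { method = methodHead }) mgr
        return $ case (r :: Either SomeException (Response ())) of
            Left _     -> Nothing
            Right resp -> Just (statusIsSuccessful (responseStatus resp))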
This commit was sponsored by Paul Tötterman.