Known problems:
1. Tries to run tahoe start when the daemon is already running.
2. If multiple tahoe remotes are set up on the same computer,
they will have the same node.url configured by default,
and this confuses tahoe commands.
This commit was sponsored by LeastAuthority.com
Fixed up a number of things that had previously worked around there not
being a way to get that information.
Most notably, transfer info files on windows now include the process id,
since no locking is currently done. This means the file format varies
between windows and unix.
The gcc response file should make it build with webdav (fingers crossed).
The webapp is waiting on a Haskell Platform upgrade on the autobuilder;
the current one has too old a version of network for hxt to install.
This used to work, but now hsc2hs is failing with a usage message.
Since I have not changed my windows build environment at all, it must be
due to some change in the cabal file. Perhaps too many flags are
causing it to hit a windows command line length limit?
Anyway, these hsc files did nothing on Windows, so they can be omitted and
not built, to work around yet another epic windows weirdness.
The thought was that this would be faster than a map, since a vector can be
updated more efficiently. It turns out not to matter; runtime and
memory usage are basically identical.
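
A comparison along these lines is easy to run with criterion. The sketch
below is purely illustrative: the size, update pattern, and names are
invented for the example, not the data structure from this change.

import Criterion.Main
import Data.List (foldl')
import qualified Data.Map.Strict as M
import qualified Data.Vector as V

-- Illustrative only: rewrite every slot of a map vs. a vector of the same size.
n :: Int
n = 10000

updateMap :: M.Map Int Int -> M.Map Int Int
updateMap m0 = foldl' (\m i -> M.insert i (i * 2) m) m0 [0 .. n - 1]

updateVector :: V.Vector Int -> V.Vector Int
updateVector v = v V.// [(i, i * 2) | i <- [0 .. n - 1]]

main :: IO ()
main = defaultMain
    [ bench "map"    $ nf updateMap    (M.fromList [(i, i) | i <- [0 .. n - 1]])
    , bench "vector" $ nf updateVector (V.fromList [0 .. n - 1])
    ]
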
This is a massive win on OSX, which doesn't have a sha256sum normally.
Only use external hash commands when the file is > 1 mb,
since cryptohash is quite close to them in speed.
SHA is still used to calculate HMACs. I don't quite understand
cryptohash's API for those.
Used the following benchmark to arrive at the 1 mb number.
1 mb file:

benchmarking sha256/internal
mean: 13.86696 ms, lb 13.83010 ms, ub 13.93453 ms, ci 0.950
std dev: 249.3235 us, lb 162.0448 us, ub 458.1744 us, ci 0.950
found 5 outliers among 100 samples (5.0%)
  4 (4.0%) high mild
  1 (1.0%) high severe
variance introduced by outliers: 10.415%
variance is moderately inflated by outliers

benchmarking sha256/external
mean: 14.20670 ms, lb 14.17237 ms, ub 14.27004 ms, ci 0.950
std dev: 230.5448 us, lb 150.7310 us, ub 427.6068 us, ci 0.950
found 3 outliers among 100 samples (3.0%)
  2 (2.0%) high mild
  1 (1.0%) high severe

2 mb file:

benchmarking sha256/internal
mean: 26.44270 ms, lb 26.23701 ms, ub 26.63414 ms, ci 0.950
std dev: 1.012303 ms, lb 925.8921 us, ub 1.122267 ms, ci 0.950
variance introduced by outliers: 35.540%
variance is moderately inflated by outliers

benchmarking sha256/external
mean: 26.84521 ms, lb 26.77644 ms, ub 26.91433 ms, ci 0.950
std dev: 347.7867 us, lb 210.6283 us, ub 571.3351 us, ci 0.950
found 6 outliers among 100 samples (6.0%)
import Crypto.Hash
import qualified Data.ByteString.Lazy as L
import Criterion.Main
import Common -- git-annex's Common, which provides readProcess and separate

testfile :: FilePath
testfile = "/run/shm/data" -- on ram disk

main :: IO ()
main = defaultMain
    [ bgroup "sha256"
        [ bench "internal" $ whnfIO internal
        , bench "external" $ whnfIO external
        ]
    ]

sha256 :: L.ByteString -> Digest SHA256
sha256 = hashlazy

-- hash in-process with cryptohash
internal :: IO String
internal = show . sha256 <$> L.readFile testfile

-- hash by running the external sha256sum command
external :: IO String
external = do
    s <- readProcess "sha256sum" [testfile]
    return $ fst $ separate (== ' ') s
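
For illustration only, the resulting cutoff could be expressed along these
lines. The names sha256File and threshold are made up for the sketch; the
real code uses git-annex's own hashing and process helpers.

import qualified Data.ByteString.Lazy as L
import Crypto.Hash (Digest, SHA256, hashlazy)
import System.Directory (getFileSize)
import System.Process (readProcess)

-- Hypothetical sketch of the 1 mb cutoff described above.
threshold :: Integer
threshold = 1048576 -- 1 mb

sha256File :: FilePath -> IO String
sha256File f = do
    sz <- getFileSize f
    if sz <= threshold
        -- small files: cryptohash is about as fast as forking a command
        then show . (hashlazy :: L.ByteString -> Digest SHA256) <$> L.readFile f
        -- larger files: shell out to sha256sum and keep the digest field
        else takeWhile (/= ' ') <$> readProcess "sha256sum" [f] ""
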
Merge tag 'debian/4.20130827' into debian-wheezy-backport
Release 4.20130827 for sid [dgit]
Merge tag '4.20130815' into debian-wheezy-backport
tagging version 4.20130815
When quvi is installed, git-annex addurl automatically uses it to detect
when a page is a video, and downloads the video file.
web special remote: Also support using quvi, for getting files,
or for checking whether files exist on the web.
This commit was sponsored by Mark Hepburn. Thanks!
Cabal does not seem to have a way to check whether flag A is set and then,
if flag B is also set, add a dependency. Instead, it unsets flag B if the
dependency is not available.
This is a compromise. I would like to nice every thread except for the
webapp thread, but it's not practical to do so. That would need every
thread to run as a bound thread, which could add significant overhead.
And any forkIO would escape the nice level.
As seen in this bug report, the lifted exception handling using the StateT
monad throws away state changes when an action throws an exception.
http://git-annex.branchable.com/bugs/git_annex_fork_bombs_on_gpg_file/
This can result in cached values being redundantly recalculated, or other,
possibly worse bugs when the annex state gets out of sync with reality.
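
The state-discarding behavior is easy to see in isolation. This sketch uses
the exceptions package's handle rather than the lifted-base machinery the
code actually used, but it shows the same semantics:

import Control.Monad.State
import Control.Monad.Catch (SomeException (..), handle, throwM)

-- The modify happens inside the protected action, so when the exception is
-- caught the handler resumes with the state from *before* the action, and
-- the change is silently lost.
program :: StateT Int IO ()
program = handle (\(SomeException _) -> return ()) $ do
    modify (+ 1)
    throwM (userError "boom")

main :: IO ()
main = do
    ((), final) <- runStateT program 0
    print final -- prints 0, not 1
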
This switches from a StateT AnnexState to a ReaderT (MVar AnnexState).
All changes to the state go via the MVar. So when an Annex action is
running inside an exception handler, and it makes some changes, they
immediately take effect in the MVar. If it then throws an exception
(or even crashes its thread!), the state changes are still in effect.
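
Roughly, the new shape looks like this. It is a simplified sketch with a
toy AnnexState field; the real Annex newtype carries more state and derives
more instances.

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Concurrent.MVar
import Control.Monad.Reader

-- Toy stand-in for the real AnnexState record.
data AnnexState = AnnexState { cache :: [String] }

-- A reader over a shared MVar instead of StateT: changes are written to the
-- MVar as they are made, so a later exception cannot roll them back.
newtype Annex a = Annex (ReaderT (MVar AnnexState) IO a)
    deriving (Functor, Applicative, Monad, MonadIO)

changeState :: (AnnexState -> AnnexState) -> Annex ()
changeState f = Annex $ do
    mvar <- ask
    liftIO $ modifyMVar_ mvar (return . f)

getState :: (AnnexState -> a) -> Annex a
getState f = Annex $ do
    mvar <- ask
    liftIO $ fmap f (readMVar mvar)

runAnnex :: AnnexState -> Annex a -> IO a
runAnnex st (Annex a) = do
    mvar <- newMVar st
    runReaderT a mvar
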
The MonadCatchIO-transformers change is actually only incidental.
I could have kept on using lifted-base for the exception handling.
However, I'd have needed to write a new instance of MonadBaseControl
for the new monad.. and I didn't write the old instance.. I begged Bas
and he kindly sent it to me. Happily, MonadCatchIO-transformers is
able to derive a MonadCatchIO instance for my monad.
This is a deep level change. It passes the test suite! What could it break?
Well.. The most likely breakage would be to code that runs an Annex action
in an exception handler, and *wants* state changes to be thrown away.
Perhaps the state changes leave the state inconsistent, or wrong. Since
there are relatively few places in git-annex that catch exceptions in the
Annex monad, and the AnnexState is generally just used to cache calculated
data, this is unlikely to be a problem.
Oh yeah, this change also makes Assistant.Types.ThreadedMonad a bit
redundant. It's now entirely possible to run concurrent Annex actions in
different threads, all sharing access to the same state! The ThreadedMonad
just adds some extra work on top of that, with its own MVar, and avoids
such actions possibly stepping on one another's toes. I have not gotten
rid of it, but might try that later. Being able to run concurrent Annex
actions would simplify parts of the Assistant code.