For an unknown reason, getAddrInfo is currently segfaulting. Note that
in February, I had used warpDebug, which uses getAddrInfo, and it worked.
I don't know if my toolchain has changed and broken it, or if it's due to
having a different Android device now. Anyway, work around it by
hardcoding the address to use.
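A minimal sketch of the workaround, assuming the hardcoded address is the
loopback interface; the helper name, address, and port handling here are
illustrative, not taken from the actual code:

    import Network.Socket

    -- Build the SockAddr directly instead of calling getAddrInfo,
    -- which segfaults on this Android setup.
    hardcodedAddr :: PortNumber -> SockAddr
    hardcodedAddr port = SockAddrInet port (tupleToHostAddress (127, 0, 0, 1))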
This caused the code to fail on Android, which doesn't know that flag.
It seems best to let the OS default flags be used. Tested working ok on
Linux; the only difference there is that it might sometimes return a
v4-mapped ipv6 address.
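The flag is not named in this excerpt; a minimal sketch of the resulting
call, assuming it was something like AI_ADDRCONFIG in the addrFlags of
the hints:

    import Network.Socket

    -- Use defaultHints (empty addrFlags) so the OS picks its own
    -- defaults, instead of passing a flag Android doesn't support.
    resolve :: HostName -> ServiceName -> IO [AddrInfo]
    resolve host port = getAddrInfo (Just defaultHints) (Just host) (Just port)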
This better handles error messages formatted for console display, by
adding a <br> after each line.
Hmm, I wonder if it'd be worth pulling in a markdown formatter, and running
the messages through it?
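Roughly what the change amounts to (the function name is illustrative):

    -- Append a <br> to each line of a console-formatted message so the
    -- line breaks survive HTML display.
    brify :: String -> String
    brify = concatMap (++ "<br>\n") . lines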
I have a report that this is necessary for freebsd.
It did not seem necessary for OSX, but even there the kqueue
man page says to include this header (along with sys/time.h,
which is already included).
This looks at the string one char at a time, which is hardly efficient...
but it's more than good enough for expanding variables in
relatively short command lines.
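A rough sketch of the approach, assuming $var style variables and an
association list of values; the real variable syntax and lookup may
differ:

    import Data.Char (isAlphaNum)
    import Data.Maybe (fromMaybe)

    -- Walk the command line one char at a time, substituting variables
    -- as they are encountered; unknown variables are left as-is.
    expand :: [(String, String)] -> String -> String
    expand vars = go
      where
        go ('$':rest) =
            let (name, rest') = span isAlphaNum rest
            in fromMaybe ('$' : name) (lookup name vars) ++ go rest'
        go (c:rest) = c : go rest
        go [] = []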
Unless highRandomQuality=false (or --fast) is set, use Libgcrypt's
'GCRY_VERY_STRONG_RANDOM' level by default for cipher generation, as is
done for OpenPGP key generation.
On the assistant side, the random quality is left at the old (lower)
level, in order not to scare the user with an endless page load while
the blocking PRNG waits for IO actions.
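A sketch of how the level might be selected, assuming cipher generation
shells out to gpg --gen-random, where level 2 corresponds to
GCRY_VERY_STRONG_RANDOM and level 1 to the weaker pool; the function and
parameter names are illustrative:

    -- Map the configuration to a gpg --gen-random quality level.
    randomQuality :: Bool -> Int
    randomQuality highRandomQuality
        | highRandomQuality = 2  -- GCRY_VERY_STRONG_RANDOM, as for key generation
        | otherwise         = 1  -- lower quality; used with --fast and by the assistant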
There was confusion in different parts of the progress bar code about
whether an update contained the total number of bytes transferred, or the
number of bytes transferred since the last update. One way this bug
showed up was progress bars that seemed to stick at zero for a long time.
In order to fix it comprehensively, I added a new BytesProcessed data type,
which is explicitly a total quantity of bytes, not a delta.
Note that this doesn't necessarily fix every problem with progress bars.
Particularly, buffering can now cause progress bars to seem to run ahead
of transfers, reaching 100% when data is still being uploaded.
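A minimal sketch of the type, assuming it just wraps the running total;
the real definition may carry more instances or helpers:

    -- Always the total number of bytes processed so far, never a delta
    -- since the last update.
    newtype BytesProcessed = BytesProcessed Integer
        deriving (Eq, Ord, Show)

    -- Fold a delta-style update into the running total.
    addBytesProcessed :: BytesProcessed -> Integer -> BytesProcessed
    addBytesProcessed (BytesProcessed total) delta = BytesProcessed (total + delta)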
When a page is loaded, the javascript requests a notification url, and
does long polling on the url to be informed of changes. But if a change
occurred before the notification url was requested, the page would not be
notified of that change, and so its display would not update.
I fixed this by *always* updating the page display after it gets
the notification url. This is extra work, but the overhead is not noticeable
amid the other overhead of loading a page.
(A nicer way would be to somehow record the version of a page initially
loaded, and then compare it with the current version when getting the
notification url, and only force an update if it's changed. But getting
the "version" of the different parts of the page that use long polling
is difficult.)
There are two types of equality here, and which one is right varies,
so this forces me to consider and choose between them.
Based on this, I learned that the commit in git annex sync was
always doing a strong comparison, even when in a repository where
the inodes had changed. Fixed that.
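A rough sketch of the two comparisons, assuming the cache holds roughly
an inode, a size, and an mtime; the type and field names are
illustrative:

    data InodeCache = InodeCache
        { inode :: Integer
        , size  :: Integer
        , mtime :: Integer
        } deriving (Eq)

    -- Strong equality: everything must match, including the inode.
    compareStrong :: InodeCache -> InodeCache -> Bool
    compareStrong = (==)

    -- Weak equality: ignore the inode, for repositories where inodes
    -- are known to have changed.
    compareWeak :: InodeCache -> InodeCache -> Bool
    compareWeak a b = size a == size b && mtime a == mtime b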
* addurl: Escape invalid characters in urls, rather than failing to
use an invalid url.
* addurl: Properly handle url-escaped characters in file:// urls.
This fixes the issue mentioned in the last commit.
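A sketch of both fixes with Network.URI, assuming that is the library
used; the helper names are illustrative:

    import Data.List (stripPrefix)
    import Network.URI (escapeURIString, isUnescapedInURI, unEscapeString)

    -- Escape characters that are not allowed in a URI instead of
    -- failing on the whole url.
    sanitizeUrl :: String -> String
    sanitizeUrl = escapeURIString isUnescapedInURI

    -- For file:// urls, undo any percent-encoding to recover the path.
    fileUrlToPath :: String -> Maybe FilePath
    fileUrlToPath u = unEscapeString <$> stripPrefix "file://" u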
Turns out just collecting the UUIDs of clients behind an XMPP remote is
insufficient (although I should probably still do it for other reasons),
because a single remote repo might be connected via both XMPP and local
pairing. So a way is needed to know when a push was received over XMPP
from any client using a given XMPP remote, as opposed to via ssh.
This cannot completely guard against a runaway log event, and only runs
every hour anyway, but it should avoid most problems with very
long-running, active assistants using up too much space.
The only thing lost is ./ghci
Speed: make fast used to take 20 seconds here, when rebuilding after
touching Command/Unused.hs. With cabal, it's 29 seconds.