test: When limiting tests to run with -p, work around tasty limitation by
automatically including dependent tests.
This fixes a reversion: it used to not use dependencies, instead forcing
tasty to run the init tests first. That changed when the test suite was
parallelized.
It will sometimes do a little more work than strictly required,
because it adds the init test deps even when limited to eg the quickcheck
tests, which don't depend on them. But this only adds a few seconds of work.
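For reference, tasty's dependency mechanism looks roughly like this
(a minimal sketch with made-up test names, not the real test suite):
`after` makes a test wait for tests matching a pattern, but when -p
filters out the depended-on tests, tasty does not pull them back in
on its own.

    import Test.Tasty
    import Test.Tasty.HUnit

    -- Sketch: "add" depends on "init" via `after`, but running with
    -- -p add would not automatically include the init test.
    tests :: TestTree
    tests = testGroup "all"
        [ testCase "init" (return ())
        , after AllSucceed "init" $
            testCase "add" (return ())
        ]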
Sponsored-by: Dartmouth College's Datalad project
Those segfaults were caused by setEnv, which should have been fixed by
commits 79017c612e16653d00253f6862b925b287102624 and
ebb76f0486.
Sponsored-by: Dartmouth College's Datalad project
setEnv is not thread safe and could cause a getEnv by another thread to
segfault, or perhaps other bad behavior. This is particularly a problem
when using tasty, because tasty runs the test in one thread while a
getEnv can happen in another thread.
The use of top-level TMVars is ugly, but ok because only 1 test actually
runs at a time per process, since each test has to chdir into the test repo.
The setEnv that remains happens before tasty is running.
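A minimal sketch of the TMVar pattern (hypothetical names, not the
actual test suite code):

    import Control.Concurrent.STM
    import System.IO.Unsafe (unsafePerformIO)

    -- Top-level TMVar standing in for a value that would otherwise be
    -- passed via setEnv/getEnv. Only safe because a single test runs
    -- at a time per process.
    testConfig :: TMVar String
    testConfig = unsafePerformIO newEmptyTMVarIO
    {-# NOINLINE testConfig #-}

    setTestConfig :: String -> IO ()
    setTestConfig v = atomically $ do
        _ <- tryTakeTMVar testConfig
        putTMVar testConfig v

    getTestConfig :: IO String
    getTestConfig = atomically (readTMVar testConfig)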
Sponsored-by: Dartmouth College's Datalad project
setEnv is not thread safe and could cause a getEnv by another thread to
segfault, or perhaps other bad behavior.
Sponsored-by: Dartmouth College's Datalad project
This avoids displaying the unexpected exit codes message when
the list is eg [ExitSuccess, ExitFailure 1].
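The check amounts to something like this (hypothetical helper):

    import System.Exit

    -- Only report an exit code that is not in the expected list.
    checkExit :: [ExitCode] -> ExitCode -> Maybe String
    checkExit expected actual
        | actual `elem` expected = Nothing
        | otherwise = Just ("unexpected exit code " ++ show actual
            ++ "; expected one of " ++ show expected)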
Sponsored-by: Dartmouth College's Datalad project
Using removePathForcibly avoids concurrent removal problems.
The i386ancient build still uses old versions of ghc and directory that
do not include removePathForcibly, though.
Sponsored-by: Dartmouth College's Datalad project
Default to the number of CPU cores, which seems about optimal
on my laptop; using one more actually saves me 2 seconds.
Better packing of workers improves speed significantly.
In 2 test runs, I saw segfaulting workers despite my attempt
to work around that issue. So detect when a worker segfaults, and re-run it.
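Roughly the shape of those two pieces (hypothetical helpers; on Unix
a signal-killed process is reported as a negative exit code, eg
ExitFailure (-11) for SIGSEGV):

    import GHC.Conc (getNumProcessors)
    import System.Exit
    import System.Process (rawSystem)

    -- Default the number of concurrent workers to the CPU core count.
    defaultWorkers :: IO Int
    defaultWorkers = getNumProcessors

    -- Re-run a worker once when it appears to have died to a signal.
    runWorker :: FilePath -> [String] -> IO ExitCode
    runWorker cmd args = do
        code <- rawSystem cmd args
        case code of
            ExitFailure n | n < 0 -> rawSystem cmd args
            _ -> return code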
Removed installSignalHandlers again, because I was seeing an
error "lost signal due to full pipe", which I guess was somehow caused
by using it.
Sponsored-by: Dartmouth College's Datalad project
Using concurrent-output, this is easy. Just have to check if tasty has
color enabled, and propagate it into the worker processes, some of which
will be run without a controlling console.
Also added a call to installSignalHandlers; I noticed that interrupting
the test suite could leave the console in a bad state and this fixes
that.
The ansi-terminal dependency is free, since tasty also depends on it.
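A rough sketch of the pieces involved (using ansi-terminal's
hSupportsANSI as a stand-in for however tasty detects color):

    import System.Console.Concurrent (withConcurrentOutput, outputConcurrent)
    import System.Console.ANSI (hSupportsANSI)
    import System.IO (stdout)

    -- concurrent-output serializes output from concurrent sources;
    -- the detected color support would be propagated to the workers,
    -- eg via a hypothetical command-line flag.
    main :: IO ()
    main = withConcurrentOutput $ do
        color <- hSupportsANSI stdout
        outputConcurrent ("color: " ++ show color ++ "\n")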
Sponsored-by: Dartmouth College's Datalad project
Unit tests are the main bulk of runtime, so splitting them into 2 or 3
parts should help.
For now, the number of parts is still 1, because on my 4 core laptop,
2 was a little bit slower, and 3 slower yet. However, this probably does
vary based on the number of cores, so needs to be revisited, and perhaps
made dynamic.
Since each test mode gets split into the specified number of parts,
plus property and remote tests, 2 gives 8 parts, and 3 gives 11 parts.
Load went to maybe 18, so there was probably contention slowing things
down.
So probably it needs to start N workers with some parts, and when a
worker finishes, run it with the next part, until all parts are
processed.
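A sketch of that scheme (not implemented here): N workers drain a
shared queue of parts, so system load stays near N instead of spiking
when every part is started at once.

    import Control.Concurrent.Async (replicateConcurrently_)
    import Control.Concurrent.STM

    -- Run each part with runPart, using at most n workers at a time.
    runParts :: Int -> [a] -> (a -> IO ()) -> IO ()
    runParts n parts runPart = do
        q <- newTVarIO parts
        replicateConcurrently_ n (worker q)
      where
        worker q = do
            next <- atomically $ do
                ps <- readTVar q
                case ps of
                    [] -> return Nothing
                    (p:rest) -> do
                        writeTVar q rest
                        return (Just p)
            case next of
                Nothing -> return ()
                Just p -> runPart p >> worker q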
Sponsored-by: Dartmouth College's Datalad project
Note the very weird workaround for what appears to be some kind of tasty
bug, which causes a segfault. This is not new to this modification;
I was already seeing a segfault, at least intermittently, when limiting
git-annex test -p to only run a single test group.
Also, the path from one test repo to a remote test repo used to be
"../../foo", which somehow broke when moving the test repos from .t to
.t/N. I don't actually quite understand how it used to work, but
"../foo" seems correct and works in the new situation.
Test output from the concurrent processes is not yet serialized.
Should be easy to do using concurrent-output.
More test groups will probably make the speedup larger. It would
probably be best to have a larger number of test groups and divvy them
among a number of subprocesses based on the number of CPU cores, perhaps
times 2 or 3.
Sponsored-by: Dartmouth College's Datalad project
Avoid git-annex test being very slow when run from within the standalone
linux tarball or OSX app.
It may not really be necessary to add to PATH the directory where the
git-annex binary resides, but it can't hurt. Most places where the test
suite or git-annex runs git-annex use programPath, so don't need
a modified PATH. But I'm not sure that's always the case.
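The PATH adjustment amounts to something like this (hypothetical
helper name; this setEnv runs before tasty starts any threads, so the
thread-safety concern noted elsewhere does not apply):

    import System.Environment (getExecutablePath, getEnv, setEnv)
    import System.FilePath (takeDirectory, searchPathSeparator)

    -- Prepend the directory containing the running binary to PATH.
    addGitAnnexDirToPath :: IO ()
    addGitAnnexDirToPath = do
        d <- takeDirectory <$> getExecutablePath
        p <- getEnv "PATH"
        setEnv "PATH" (d ++ [searchPathSeparator] ++ p)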
Sponsored-by: Dartmouth College's Datalad project
This eliminates the distinction between decodeBS and decodeBS', encodeBS
and encodeBS', etc. The old implementation truncated at NUL, and the
primed versions had to do extra work to avoid that problem. The new
implementation does not truncate at NUL, and is also a lot faster.
(Benchmarked at 2x faster for decodeBS and 3x for encodeBS; more for the
primed versions.)
Note that filepath-bytestring 1.4.2.1.8 contains the same optimisation,
and upgrading to it will speed up to/fromRawFilePath.
AFAIK, nothing relied on the old behavior of truncating at NUL. Some
code used the faster versions in places where I was sure there would not
be a NUL. So this change is unlikely to break anything.
Also, moved s2w8 and w82s out of the module, as they do not really
involve filesystem encoding.
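For reference, they are plain Char/Word8 conversions, roughly:

    import Data.Char (chr, ord)
    import Data.Word (Word8)

    s2w8 :: String -> [Word8]
    s2w8 = map (fromIntegral . ord)

    w82s :: [Word8] -> String
    w82s = map (chr . fromIntegral)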
Sponsored-by: Shae Erisson on Patreon
Make sure that tip keeps working.
I tried to go further and touch the file and make sure it stayed what it
was converted to, but struggled with some weird and not entirely
reproducible behavior, so kept the tests simple for now.
Display the transcript as part of the failure message for the assertion.
This avoids scrambling the tasty display.
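The idea, as a hypothetical helper:

    import Test.Tasty.HUnit

    -- Put the transcript in the assertion failure message instead of
    -- printing it directly, so tasty's display is not scrambled.
    assertWithTranscript :: String -> Bool -> Assertion
    assertWithTranscript transcript ok =
        assertBool ("failed; transcript follows\n" ++ transcript) ok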
This commit was sponsored by Ethan Aubin on Patreon.
Only display git-annex and git command output when something goes wrong.
A few could still leak stderr. These include the couple of calls
to readProcess, which captures stdout but lets stderr through. But they
usually don't leak any, so probably only would when failing anyway.
Currently, there is no excess output at all!
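If the remaining stderr leaks ever matter, readProcessWithExitCode
also captures stderr; a sketch:

    import System.Exit (ExitCode)
    import System.Process (readProcessWithExitCode)

    -- Capture stdout and stderr, with empty stdin.
    readBoth :: FilePath -> [String] -> IO (ExitCode, String, String)
    readBoth cmd args = readProcessWithExitCode cmd args ""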
This commit was sponsored by Brock Spratlen on Patreon.
Believed to be no longer needed as I've squashed the last ones.
Note that, in Test.Framework, I can see no reason for the code to have
run it twice. It does not cause running processes to exit after all,
so any process that has leaked and is running and causing problems with
cleanup of the directory won't be helped by running it.
This commit was sponsored by Mark Reidenbach on Patreon.
Not yet 100% done; so far I've grepped for waitForProcess and converted
everything that uses it to start the process with withCreateProcess.
Except for some things like P2P.IO, Assistant.TransferrerPool,
and Utility.CoProcess, which manage a pool of processes. See #2
in https://git-annex.branchable.com/todo/more_extensive_retries_to_mask_transient_failures/#comment-209f8a8c38e63fb3a704e1282cb269c7
for how those will need to be dealt with.
checkSuccessProcess, ignoreFailureProcess, and forceSuccessProcess call
waitForProcess, so callers of them will also need to be dealt with, and
have not been yet.
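The shape of the conversion, as a minimal sketch:

    import System.Exit (ExitCode)
    import System.Process

    -- withCreateProcess waits for and cleans up the process even when
    -- an exception interrupts the inner action.
    runAndWait :: FilePath -> [String] -> IO ExitCode
    runAndWait cmd args =
        withCreateProcess (proc cmd args) $ \_stdin _stdout _stderr pid ->
            waitForProcess pid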
retrieveExport is part of ongoing transition to make remote methods
throw exceptions, rather than silently hide them.
getKey very rarely fails, and when it does it's always for the same reason
(user configured annex.backend to url for some reason). So, this will
avoid dealing with Nothing everywhere it's used.
This commit was sponsored by Ilya Shlyakhter on Patreon.
The parser and looking up config keys in the map should both be faster
due to using ByteString.
I had hoped this would speed up startup time, but any improvement to
that was too small to measure. Seems worth keeping though.
Note that the parser breaks up the ByteString, but a config map ends up
pointing to the config as read, which is retained in memory until every
value from it is no longer used. This can change memory usage
patterns marginally, but won't affect git-annex.
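A loose sketch of the idea (not the real parser), which also shows why
that memory sharing happens: breaking a ByteString yields slices that
share the original buffer.

    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as B8
    import qualified Data.Map as M

    -- Split "key=value" lines into a map; keys and values are slices
    -- of the config as read, so the whole buffer stays in memory
    -- while any of them is still in use.
    parseCfg :: B.ByteString -> M.Map B.ByteString B.ByteString
    parseCfg = M.fromList . map kv . B8.lines
      where
        kv l = let (k, v) = B8.break (== '=') l
               in (k, B.drop 1 v)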
Finally builds (oh the agony of making it build), but still very
unmergeable; only Command.Find is included, and lots of stuff is badly
hacked to make it compile.
Benchmarking vs master, this git-annex find is significantly faster!
Specifically:
num files   old (s)   new (s)   speedup
48500       4.77      3.73      28%
12500       1.36      1.02      66%
20          0.075     0.074     0%   (so startup time is unchanged)
That's without really finishing the optimization. Things still to do:
* Eliminate all the fromRawFilePath, toRawFilePath, encodeBS,
decodeBS conversions.
* Use versions of IO actions like getFileStatus that take a RawFilePath.
* Eliminate some Data.ByteString.Lazy.toStrict, which is a slow copy.
* Use ByteString for parsing git config to speed up startup.
It's likely several of those will speed up git-annex find further.
And other commands will certainly benefit even more.
I just had a test that crashed at cleanup on linux with:
.t/gpgtest/12/S.gpg-agent.browser: removeDirectoryRecursive:removeContentsRecursive:removePathRecursive:removeContentsRecursive:removePathRecursive:removeContentsRecursive:removePathRecursive:getSymbolicLinkStatus: does not exist (No such file or directory)
sleeping 10 seconds and will retry directory cleanup
git-annex: .t/gpgtest/14/S.gpg-agent.browser: removeDirectoryRecursive:removeContentsRecursive:removePathRecursive:removeContentsRecursive:removePathRecursive:removeContentsRecursive:removePathRecursive:getSymbolicLinkStatus: does not exist (No such file or directory)
removePathForcibly is supposed to be more robust to things in the directory
vanishing while it's running, etc., so it will probably avoid such crashes.
It was added to directory-1.2.7, which comes with ghc since 8.0.2.
Since base >= 4.11.1.0 means ghc 8.4.4, I expect all builds will have it,
but I ifdefed it to be sure.
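The ifdef looks something like this (MIN_VERSION_directory is the CPP
macro Cabal provides for version checks):

    {-# LANGUAGE CPP #-}
    import System.Directory

    -- Fall back to removeDirectoryRecursive when directory is too old
    -- to have removePathForcibly.
    removeDirForce :: FilePath -> IO ()
    #if MIN_VERSION_directory(1,2,7)
    removeDirForce = removePathForcibly
    #else
    removeDirForce = removeDirectoryRecursive
    #endif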
Its repeated opening and writing to the sqlite database somehow caused
inode cache information to occasionally be lost.
This loses code coverage, since running git-annex as a child process
prevents tracking what parts of the code are exercised. I have not looked
at the code coverage in a long time. It would probably be possible to
collect code coverage for the child processes and merge it together.
On second thought, the extra time running the test suite is worth it.
It will be gained back once we finally get rid of direct mode.
There are two failing tests, the same two that have been failing on
Windows (though the failure does not look identical). So this should also
spare me the Windows VM while fixing them.
This way a failure to clean up the main repo dir from a previous pass
can't result in reusing that repo, which won't be configured right for the
current pass.
No behavior changes, but this shows everywhere that a progress meter
could be displayed when hashing a file to add to the annex.
In many of the places it doesn't make sense to display a progress meter
though; eg when importing, the copy of the file probably swamps the
hashing of the file.
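A sketch of what a progress-metered hash could look like (not
git-annex's actual code; assumes the cryptonite library and a simple
bytes-processed callback):

    import qualified Crypto.Hash as H
    import qualified Data.ByteString as B
    import System.IO

    -- Hash a file in chunks, reporting cumulative bytes to the meter.
    hashFileWithMeter :: (Integer -> IO ()) -> FilePath -> IO (H.Digest H.SHA256)
    hashFileWithMeter meter f = withFile f ReadMode (go 0 H.hashInit)
      where
        go n ctx h = do
            b <- B.hGetSome h 65536
            if B.null b
                then return (H.hashFinalize ctx)
                else do
                    let n' = n + fromIntegral (B.length b)
                    meter n'
                    go n' (H.hashUpdate ctx b) h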
This does not change the overall license of the git-annex program, which
was already AGPL due to a number of sources files being AGPL already.
Legally speaking, I'm adding a new license under which these files are
now available; I already released their current contents under the GPL
license. Now they're dual licensed GPL and AGPL. However, I intend
for all my future changes to these files to only be released under the
AGPL license, and I won't be tracking the dual licensing status, so I'm
simply changing the license statement to say it's AGPL.
(In some cases, others wrote parts of the code of a file and released it
under the GPL; but in all cases I have contributed a significant portion
of the code in each file and it's that code that is getting the AGPL
license; the GPL license of other contributors allows combining with
AGPL code.)
This way when there's a failure the output is available to understand
the problem. Should help with some intermittent test failures that I
have not been able to reproduce to understand why it's failing.
Does mean the test output is more verbose, but it was already very
verbose.