# Running Performance Tests

## Pre-Requisites

- Python 2.7+ or 3.5+
- `msbuild.exe` (must be on `PATH`)
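
As a quick sanity check (assuming a Windows developer command prompt, where `msbuild.exe` is normally available), you can verify both prerequisites before running the tests:

```
:: Confirm the Python version (2.7+ or 3.5+ required)
python --version

:: Confirm msbuild.exe resolves from PATH
where msbuild
```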
## Single Perf Run

1. Build the CLI repo to get `dotnet.exe`, or otherwise source the CLI. For meaningful perf results, be sure to use release mode.
2. `cd <cli_repo_root>/test/Performance`
3. `python run-perftests.py <dotnet_bin> --name <unique_run_name> --xunit-perf-path <x_repo_path>` (see the example invocation after this list), where:
   - `<dotnet_bin>` is the path to the dotnet binary whose perf you want to measure.
   - `<x_repo_path>` should point either to a non-existent directory or to the root of a local clone of xunit-performance. If a non-existent directory is specified, the repo will automatically be cloned.
     - NOTE: You can also set the environment variable `XUNIT_PERFORMANCE_PATH` to avoid having to pass this argument every time.
4. View the `*.csv`/`*.xml` results in the current directory.
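
For illustration, a single run might look like the following. The directory layout, run name, and paths below are hypothetical placeholders; substitute the actual locations of your `dotnet.exe` and xunit-performance clone.

```
:: Optional: set the xunit-performance location once so --xunit-perf-path can be omitted
set XUNIT_PERFORMANCE_PATH=C:\src\xunit-performance

cd C:\src\cli\test\Performance

:: Measure the release-mode dotnet.exe built from this repo
python run-perftests.py C:\src\cli\artifacts\dotnet.exe --name my-release-run
```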
## Comparison Run

In general, follow the same steps as for a single perf run. The following additional steps are required:

1. In addition to the `dotnet.exe` that you're testing, be sure to also build or otherwise source the baseline `dotnet.exe`. This could be the "stage0" exe, the exe from the last nightly build, the exe built from sources prior to your changes, etc.
2. When invoking `run-perftests.py`, add an additional parameter, `--base <base_bin>`, which points to the baseline `dotnet.exe` mentioned in step 1 (see the example invocation after this list).
3. View the `*.html` file generated for the perf comparison analysis.
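
A comparison run might look like this (again, both `dotnet.exe` paths and the run name are hypothetical placeholders):

```
:: Compare a locally built dotnet.exe against a baseline (e.g. stage0) dotnet.exe
python run-perftests.py C:\src\cli\artifacts\dotnet.exe ^
    --name my-comparison-run ^
    --base C:\src\cli\.dotnet_stage0\dotnet.exe ^
    --xunit-perf-path C:\src\xunit-performance
```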
## Debugging Issues

The output of commands invoked by `run-perftests` is hidden by default. You can see the output after an error by looking in the `logs/run-perftests` directory.

Alternatively, you can rerun `run-perftests` with `--verbose`, which will print all output to the console instead of piping it to log files.
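
For example, to inspect a failed run and then retry it with full console output (same hypothetical paths as above):

```
:: Look at the captured logs from the previous run
dir logs\run-perftests

:: Rerun with all command output echoed to the console
python run-perftests.py C:\src\cli\artifacts\dotnet.exe --name my-release-run --verbose
```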