Running Performance Tests

Pre-Requisites

  • Python 2.7+ or 3.5+
  • msbuild.exe (must be on PATH; a quick check follows below)
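
A quick way to confirm both prerequisites from a Windows command prompt (these are standard OS/Python commands, not part of this repo):

  python --version
  where msbuild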

Single Perf Run

  1. Build the CLI repo to get dotnet.exe, or otherwise source the CLI. For meaningful perf results, be sure to use release mode.

  2. cd <cli_repo_root>/test/Performance

  3. python run-perftests.py <dotnet_bin> --name <unique_run_name> --xunit-perf-path <x_repo_path>
    where (a complete example follows this list):

    • <dotnet_bin> is the path to the dotnet binary whose perf you want to measure.
    • <x_repo_path> should point either to a non-existent directory or to the root of a local clone of xunit-performance. If a non-existent directory is specified, the repo is cloned there automatically.
      • NOTE: You can also set the environment variable XUNIT_PERFORMANCE_PATH to avoid passing this argument on every run.
  4. View the *.csv / *.xml results in the current directory.
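
For example, a complete single run might look like this (the dotnet.exe and xunit-performance paths below are illustrative placeholders; substitute your own):

  cd <cli_repo_root>/test/Performance
  python run-perftests.py C:\builds\cli-release\dotnet.exe --name my-run-01 --xunit-perf-path C:\src\xunit-performance

Or, with the environment variable set once (Windows cmd syntax) so that --xunit-perf-path can be omitted:

  set XUNIT_PERFORMANCE_PATH=C:\src\xunit-performance
  python run-perftests.py C:\builds\cli-release\dotnet.exe --name my-run-01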

Comparison Run

In general, follow the same steps as for a single perf run. The following additional steps are required:

  1. In addition to the dotnet.exe that you're testing, build or otherwise source a baseline dotnet.exe: for example, the "stage0" exe, the exe from the latest nightly build, or the exe built from sources prior to your changes.

  2. When invoking run-perftests.py, add one additional parameter, --base <base_bin>, pointing to the baseline dotnet.exe from step 1 (see the example below).

  3. View the *.html file generated for the perf comparison analysis.
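
For example, assuming C:\builds\cli-baseline holds the baseline dotnet.exe and C:\builds\cli-changed holds the build under test (both paths illustrative):

  python run-perftests.py C:\builds\cli-changed\dotnet.exe --base C:\builds\cli-baseline\dotnet.exe --name my-comparison-01 --xunit-perf-path C:\src\xunit-performance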

Debugging Issues

The output of commands invoked by run-perftests is hidden by default. You can see the output after an error by looking in the logs/run-perftests directory. Alternatively, you can rerun run-perftests with --verbose, which will print all output to the console instead of piping it to log files.
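
For example, to rerun with everything echoed to the console (paths illustrative, as above):

  python run-perftests.py C:\builds\cli-release\dotnet.exe --name my-debug-01 --xunit-perf-path C:\src\xunit-performance --verbose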