# Running Performance Tests

## Pre-Requisites

- Python 2.7+ or 3.5+
- `msbuild.exe` (must be on `PATH`)
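
To sanity-check the prerequisites (a minimal sketch, assuming a Windows command prompt since `msbuild.exe` is required):

```
:: Check the Python version (expect 2.7+ or 3.5+)
python --version

:: Confirm msbuild.exe resolves somewhere on PATH
where msbuild
```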

## Single Perf Run

1. Build the CLI repo to get `dotnet.exe`, or otherwise source the CLI. For meaningful perf results, be sure to use a release-mode build.

2. `cd <cli_repo_root>/test/Performance`

3. Run `python run-perftests.py <dotnet_bin> --name <unique_run_name> --xunit-perf-path <x_repo_path>` (see the example invocation after this list), where:

   - `<dotnet_bin>` is the path to the dotnet binary whose perf you want to measure.
   - `<x_repo_path>` should point either to a non-existent directory or to the root of a local clone of xunit-performance. If a non-existent directory is specified, the repo will be cloned automatically.
     - NOTE: You can also set the environment variable `XUNIT_PERFORMANCE_PATH` to avoid having to pass this argument every time.

4. View the `*.csv` / `*.xml` results in the current directory.
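
For example, a complete single run might look like the following sketch (all paths and the run name are placeholders; substitute your own):

```
cd C:\src\cli\test\Performance

:: Placeholder paths -- point these at your own dotnet.exe and xunit-performance clone
python run-perftests.py C:\builds\release\dotnet.exe --name my_perf_run --xunit-perf-path C:\src\xunit-performance

:: Or set the environment variable once and drop --xunit-perf-path:
set XUNIT_PERFORMANCE_PATH=C:\src\xunit-performance
python run-perftests.py C:\builds\release\dotnet.exe --name my_perf_run
```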

## Comparison Run

In general, follow the same steps as for a single perf run. The following additional steps are required:

1. In addition to the `dotnet.exe` that you're testing, be sure to also build or otherwise source a baseline `dotnet.exe`. This could be the "stage0" exe, the exe from the last nightly build, the exe built from sources prior to your changes, etc.

2. When invoking `run-perftests.py`, add one additional parameter, `--base <base_bin>`, pointing to the baseline `dotnet.exe` from step 1 (see the example invocation after this list).

3. View the `*.html` file generated for the perf comparison analysis.
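
For example (placeholder paths again; the two binaries are your changed build and your baseline build):

```
:: Assumes XUNIT_PERFORMANCE_PATH is set; otherwise add --xunit-perf-path as above
python run-perftests.py C:\builds\changed\dotnet.exe --name my_comparison_run --base C:\builds\baseline\dotnet.exe
```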

## Debugging Issues

The output of commands invoked by `run-perftests.py` is hidden by default. After an error, you can inspect the output in the `logs/run-perftests` directory. Alternatively, you can rerun `run-perftests.py` with `--verbose`, which prints all output to the console instead of piping it to log files.
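
For example, to rerun with full console output (placeholder path as before):

```
python run-perftests.py C:\builds\release\dotnet.exe --name my_perf_run --verbose
```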