dotnet-installer/test/Performance
Bill Wert 785cab3072 Reduce nuget output
This addresses part of #1623. Unfortunately, because the CLI consumes NuGet
as a binary, it is hard to get to where I think we should really be.
This change makes the default verbosity "minimal", which is the first level
where you get any status output. Unfortunately, things like package
downgrade warnings still appear at that level. This does hide all
the "info" and "trace" messages by default.

I also removed the now useless (and previously undocumented)
--quiet.
2016-05-24 21:34:11 -07:00
.gitignore Add script to run and compare CLI perf tests (#2746) 2016-05-02 17:07:10 -07:00
BuildPerformanceTest.cs Revert "Revert "Add performance tests"" (#2820) 2016-05-12 10:33:32 -07:00
HelloWorld.cs Reduce nuget output 2016-05-24 21:34:11 -07:00
Performance.xproj Build-Time Hello World Performance Test (#2681) 2016-04-26 17:52:34 -07:00
project.json Updating core packages to rc3-24123-01 2016-05-23 17:19:11 -07:00
README.md Add README howto for perf tests 2016-05-10 08:38:07 -07:00
run-perftests.py Add script to run and compare CLI perf tests (#2746) 2016-05-02 17:07:10 -07:00

Running Performance Tests

Prerequisites

  • Python 2.7+ or 3.5+
  • msbuild.exe (must be on PATH)
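
A quick way to sanity-check both prerequisites from a Windows command prompt (exact output will vary by installation):

    REM should report Python 2.7+ or 3.5+
    python --version
    REM should resolve to an msbuild.exe somewhere on PATH
    where msbuild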

Single Perf Run

  1. Build the CLI repo to get dotnet.exe, or otherwise source the CLI. For meaningful perf results, be sure to use release mode.

  2. cd <cli_repo_root>/test/Performance

  3. python run-perftests.py <dotnet_bin> --name <unique_run_name> --xunit-perf-path <x_repo_path>
    where (see the example invocation after this list):

    • <dotnet_bin> is the path to the dotnet binary whose perf you want to measure.
    • <x_repo_path> should point either to a non-existent directory or to the root of a local clone of xunit-performance. If a non-existent directory is specified, the repo will be cloned there automatically.
      • NOTE: You can also set the environment variable XUNIT_PERFORMANCE_PATH to avoid having to pass this argument every time.
  4. View the *.csv / *.xml results in the current directory.
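
For example, a single run might look like the following; the dotnet.exe path, the xunit-performance path, and the run name are purely illustrative:

    python run-perftests.py c:\dotnet\dotnet.exe --name my-test-run --xunit-perf-path c:\xunit-performance

If c:\xunit-performance does not exist yet, run-perftests.py clones xunit-performance there before running the tests, and the resulting *.csv / *.xml files land in the directory the script was run from.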

Comparison Run

In general, follow the same steps as for a single perf run. The following additional steps are required:

  1. In addition to the dotnet.exe that you're testing, be sure to also build or otherwise source the baseline dotnet.exe. This could be the "stage0" exe, or the exe from the last nightly build, or the exe built from sources prior to changes you made, etc.

  2. When invoking run-perftests.py, add one additional parameter, --base <base_bin>, which points to the baseline dotnet.exe from step 1 (see the example invocation after this list).

  3. View the *.html file generated for the perf comparison analysis.
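
Reusing the illustrative paths from the single-run example, with the baseline dotnet.exe at c:\dotnet-base\dotnet.exe and the build under test at c:\dotnet\dotnet.exe, a comparison run could be invoked like this:

    python run-perftests.py c:\dotnet\dotnet.exe --base c:\dotnet-base\dotnet.exe --name my-comparison-run --xunit-perf-path c:\xunit-performance

The generated *.html file then contains the comparison between the baseline and the build under test.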

Debugging Issues

The output of commands invoked by run-perftests is hidden by default. You can see the output after an error by looking in the logs/run-perftests directory. Alternatively, you can rerun run-perftests with --verbose, which will print all output to the console instead of piping it to log files.
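
For example, to rerun with all command output printed to the console rather than piped to the log files (paths again illustrative):

    python run-perftests.py c:\dotnet\dotnet.exe --name debug-run --xunit-perf-path c:\xunit-performance --verbose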