# Micro Benchmarks

This folder contains micro benchmarks that test the performance of .NET Runtime(s).

## Tooling

To run the benchmarks, you need to download the dotnet CLI or use the Python script; please see Prerequisites for more information.

We use BenchmarkDotNet as the benchmarking tool; you can read more about it in our short summary (recommended reading). The key thing to remember is that BenchmarkDotNet runs every benchmark in a dedicated process and stops benchmarking once a specified level of precision is reached.

To learn more about designing benchmarks, please read the Microbenchmark Design Guidelines. It's essential to be familiar with these guidelines before modifying or adding any micro benchmarks.
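
To give a feel for what such a benchmark looks like, here is a minimal sketch of a BenchmarkDotNet micro benchmark. The namespace, type, and member names are invented for illustration and do not refer to benchmarks in this repository; setup work is kept out of the measured method via `[GlobalSetup]`, and `[Params]` produces a separate result for each input size.

```csharp
using System;
using BenchmarkDotNet.Attributes;

namespace Illustrative // hypothetical namespace, for illustration only
{
    [MemoryDiagnoser] // also report managed allocations per operation
    public class SpanReverseBenchmarks // hypothetical class, not taken from this repo
    {
        [Params(100, 10_000)] // each value becomes a separate benchmark case
        public int Size;

        private int[] _array = Array.Empty<int>();

        [GlobalSetup] // runs once per case, outside of the measured code
        public void Setup() => _array = new int[Size];

        [Benchmark] // the method whose execution time is measured
        public void Reverse() => _array.AsSpan().Reverse();
    }
}
```

A class like this is typically picked up by the commands below without extra wiring, since the harness discovers the types in the project that contain `[Benchmark]` methods.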

## Quick Start

The first thing that you need to choose is the Target Framework. Available options are: `netcoreapp3.1|net6.0|net7.0|net8.0|net462`. You can specify the target framework using the `-f|--framework` argument. For the sake of simplicity, all examples below use `net8.0` as the target framework.

The following commands are run from the `src/benchmarks/micro` directory.

To run the benchmarks in Interactive Mode, where you will be asked which benchmark(s) to run:

```
dotnet run -c Release -f net8.0
```

To list all available benchmarks (read more):

```
dotnet run -c Release -f net8.0 --list flat|tree
```

To filter the benchmarks using a glob pattern applied to `namespace.typeName.methodName` (read more):

```
dotnet run -c Release -f net8.0 --filter *Span*
```
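
Because the pattern is matched against the fully qualified name, a filter can also target a whole namespace or a single benchmark. The filter below is only illustrative; substitute a namespace that exists in your checkout:

```
dotnet run -c Release -f net8.0 --filter System.Collections*
```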

To profile the benchmarked code and produce an ETW Trace file (read more):

```
dotnet run -c Release -f net8.0 --filter $YourFilter --profiler ETW
```

To run the benchmarks for multiple runtimes (read more):

```
dotnet run -c Release -f net7.0 --filter * --runtimes net7.0 net8.0
```

## Private Runtime Builds

If you contribute to dotnet/runtime and want to benchmark local builds of .NET Core, you need to build dotnet/runtime in Release (including tests, so a command similar to `build clr+libs+libs.tests -rc release -lc release`) and then provide the path(s) to the CoreRun executable(s). Each provided CoreRun will be used to execute every benchmark in a dedicated process:

```
dotnet run -c Release -f net8.0 --filter $YourFilter \
    --corerun C:\git\runtime\artifacts\bin\testhost\net8.0-windows-Release-x64\shared\Microsoft.NETCore.App\8.0.0\CoreRun.exe
```

To make sure that your changes don't introduce any regressions, you can provide paths to CoreRuns with and without your changes and use the Statistical Test feature to detect regressions/improvements (read more):

```
dotnet run -c Release -f net8.0 \
    --filter BenchmarksGame* \
    --statisticalTest 3ms \
    --corerun \
        "C:\git\runtime_upstream\artifacts\bin\testhost\net8.0-windows-Release-x64\shared\Microsoft.NETCore.App\8.0.0\CoreRun.exe" \
        "C:\git\runtime_fork\artifacts\bin\testhost\net8.0-windows-Release-x64\shared\Microsoft.NETCore.App\8.0.0\CoreRun.exe"
```

If you prefer to use the dotnet CLI instead of CoreRun, you need to pass the path to the CLI via the `--cli` argument.
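
For example (a sketch; the path below is hypothetical and should point at the dotnet executable you want to benchmark with):

```
dotnet run -c Release -f net8.0 --filter $YourFilter --cli C:\git\runtime\.dotnet\dotnet.exe
```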

BenchmarkDotNet also allows you to run the benchmarks for private builds of the Full .NET Framework and CoreRT.

We once again encourage you to read the full docs about BenchmarkDotNet.