
Benchmarking in C#

A bit of a change from my normal blog posts this time... In the next few posts I will be talking about benchmark testing, and in particular a NuGet package that allows you to run benchmarking tests in a .NET environment.

So in this post I will demonstrate how you can use a NuGet package to measure the time it takes to open notepad on your local machine. 

This will be the starting point and I am aiming to build up some more complex examples as I learn about the NuGet package. 

So here goes.......

The NuGet package is called BenchmarkDotNet and is a powerful library for benchmarking various tasks.

The GitHub page can be found here:

https://github.com/dotnet/BenchmarkDotNet

So to use this package, the first thing you need to do is create a new Console App (.NET Framework) project in Visual Studio.


You then need to add the NuGet package to the project (you should find it if you search for 'BenchmarkDotNet'):



At the time of writing, the latest version is v0.11.1.
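If you prefer the Package Manager Console to the NuGet UI, the equivalent command is as follows (the version pinned here is simply the one mentioned above):

Install-Package BenchmarkDotNet -Version 0.11.1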

Once you have done this you need to create two classes. One will contain the code that runs the test, while the other will contain the code that defines the benchmark test. So in my example I have the following:

  • Program.cs - This will contain the code that runs the benchmark test.
  • MyFirstBenchMark.cs - This class contains the code that defines the benchmark test.

Below are the contents of my classes:

Program.cs
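The original post shows this class as a screenshot. A minimal sketch of what it might contain is below (the namespace name is an assumption taken from the project name):

using System;
using BenchmarkDotNet.Running;

namespace BenchMarkingExploration
{
    class Program
    {
        static void Main(string[] args)
        {
            // Run the benchmarks defined in the MyFirstBenchMark class
            BenchmarkRunner.Run<MyFirstBenchMark>();

            // Keep the console window open once the run has finished
            Console.ReadKey();
        }
    }
}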




Now this class is very simple and just contains a couple of lines to run the benchmark test. Note that the Console.ReadKey() method is there to make sure that the console window does not disappear when the test is completed. 


MyFirstBenchMark.cs
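Again, the original post shows this class as a screenshot. A minimal sketch of the shape of the class is below; the SimpleJob parameter values and the method names are illustrative assumptions rather than the exact code from the original project:

using System.Diagnostics;
using BenchmarkDotNet.Attributes;

namespace BenchMarkingExploration
{
    [SimpleJob(launchCount: 1, warmupCount: 1, targetCount: 5)]
    public class MyFirstBenchMark
    {
        [GlobalSetup]
        public void Setup()
        {
            // Any one-off preparation would go here; nothing is needed for this example
        }

        [Benchmark]
        public void OpenNotepad()
        {
            // The operation being measured: start a new instance of notepad
            Process.Start("notepad.exe");
        }

        [GlobalCleanup]
        public void Cleanup()
        {
            // Close every notepad instance that was opened during the run
            foreach (var process in Process.GetProcessesByName("notepad"))
            {
                process.Kill();
            }
        }
    }
}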



Now this class is a little more complex and I will delve into the detail in future blog posts, but a few things to note:

1) The SimpleJob attribute is used to parameterise your benchmark test.
2) The GlobalSetup attribute marks a method containing code that runs before the benchmark test is run.
3) The GlobalCleanup attribute marks a method containing code that runs after the benchmark test has been run.
4) The Benchmark attribute marks the method that the benchmark test actually measures.

This project will:
  • Open notepad (a set number of times, defined by the SimpleJob attribute)
  • Clean up the test by closing down all of the instances of notepad.
One thing you need to do is run the project in Release mode; you can do this by changing the Debug drop-down in the Visual Studio toolbar to Release. If you run from a Debug build, the code will not be compiled with optimisations and BenchmarkDotNet will flag this when you try to run the benchmarks.




When you run the project, the console will display details of all the tests that were run, and you should see notepad being opened and closed during the test run. The summary includes Mean, Error and Standard Deviation figures for the tests that were run. These details are also exported as CSV, Markdown and HTML files, which can be found in the \bin\Release\BenchmarkDotNet.Artifacts\results directory. These files contain basically the same information as the console summary but in a format that is easier to import into other tools.

So there you go, a quick example of how to run a benchmark test against opening notepad.

Next time I will explain how the SimpleJob attribute works and what the various parameters do.

Please feel free to download the project from my GitHub page:

https://github.com/daveyboywardlaw/BenchMarkingExploration





