I have a .NET application for which performance issues were reported. My previous question on how to improve its performance was closed. The comments told me that I was not doing the performance measurements correctly and was not following the performance improvement process.
So, what is that process and how do I perform a performance test for .NET correctly?
CodePudding user response:
> So, what is that process and how do I perform a performance test for .NET correctly?
Step 1: read the performance articles by Eric Lippert
Benchmarking mistakes, parts 1 through 4, which cover:
- Mistake #1: Choosing a bad metric
- Mistake #2: Over-focusing on subsystem performance at the expense of end-to-end performance
- Mistake #3: Running your benchmark in the debugger
- Mistake #4: Benchmarking the debug build instead of the release build
- Mistake #5: Using a clock instead of a stopwatch
- Mistake #6: Treating the first run as nothing special when measuring average performance
- Mistake #7: Assuming that runtime characteristics in one environment tell you what the behavior will be in a different environment
- Mistake #8: Forgetting to take garbage collection effects into account
Step 2: understand the performance articles by Eric Lippert. Read them again and figure out how exactly you will apply the concepts to your program, your IDE, etc. How do you compile in release mode? How do you run without a debugger? How do you implement time measurements?
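For illustration, a hand-rolled measurement that avoids mistakes #3, #5, #6 and #8 could look like the sketch below. `WorkUnderTest` is a hypothetical placeholder for your own code; compile and run in release mode without a debugger (e.g. `dotnet run -c Release`) to avoid mistakes #3 and #4:

```csharp
using System;
using System.Diagnostics;

class MeasurementDemo
{
    static void Main()
    {
        // Mistake #3: never measure with a debugger attached.
        if (Debugger.IsAttached)
        {
            Console.WriteLine("Run this without a debugger.");
            return;
        }

        // Mistake #6: one warm-up call so JIT compilation of
        // WorkUnderTest does not count towards the measurement.
        WorkUnderTest();

        // Mistake #8: start from a clean heap so garbage left over
        // from the warm-up does not trigger a collection mid-measurement.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // Mistake #5: use Stopwatch, not DateTime.Now.
        const int runs = 100;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < runs; i++)
            WorkUnderTest();
        sw.Stop();

        Console.WriteLine($"Average: {sw.Elapsed.TotalMilliseconds / runs:F3} ms per run");
    }

    // Hypothetical placeholder for the code you actually want to measure.
    static void WorkUnderTest()
    {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        if (sum == 0) Console.WriteLine(sum); // keep the loop from being optimized away
    }
}
```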
Step 3: make a checklist to ensure that you are following all of this advice
Step 4: get your performance requirements. How fast does it need to be, exactly? In absolute values, on which hardware?
It is OK to use your own machine as a reference, but we need to know what machine that is in order to understand its features (CPU model, cache levels, turbo boost enabled or not, hyper-threading enabled or not).
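To document the reference machine, you can let the program print the basics itself. A minimal sketch using only BCL APIs follows; note that the CPU model, cache levels, and turbo boost / hyper-threading state are not exposed by the BCL, so you have to get those from the OS (or from BenchmarkDotNet's environment header, see below):

```csharp
using System;
using System.Runtime.InteropServices;

class MachineInfo
{
    static void Main()
    {
        Console.WriteLine($"OS:      {RuntimeInformation.OSDescription}");
        Console.WriteLine($"Runtime: {RuntimeInformation.FrameworkDescription}");
        Console.WriteLine($"Arch:    {RuntimeInformation.ProcessArchitecture}");
        Console.WriteLine($"Logical processors: {Environment.ProcessorCount}");
        Console.WriteLine($"64-bit process:     {Environment.Is64BitProcess}");
    }
}
```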
Step 5: profile your application. Use an appropriate tool.
Many companies have JetBrains dotTrace as part of ReSharper Ultimate.
Step 6: Analyze the results. Where are hotspots and bottlenecks? How did you find them? Were they suggested by a tool? If so, which one?
Step 7: Think of possible optimizations
Introducing multithreading to a previously sequential computation may bring you an 8x speedup (on eight cores); a better algorithm can easily bring you 20x without any threading. Think hard in this step.
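As a sketch of what the multithreading option could look like, here is a hypothetical sequential loop next to a `Parallel.For` version with per-thread partial sums; the actual speedup depends on your core count and memory bandwidth, so 8x is a best case on eight cores:

```csharp
using System;
using System.Threading.Tasks;

class ParallelDemo
{
    // Sequential baseline (hypothetical workload: sum of square roots).
    static double SumSequential(double[] data)
    {
        double sum = 0;
        for (int i = 0; i < data.Length; i++)
            sum += Math.Sqrt(data[i]);
        return sum;
    }

    // Parallel version. Each thread accumulates a local partial sum;
    // the partial sums are merged under a lock at the end.
    static double SumParallel(double[] data)
    {
        double sum = 0;
        object gate = new object();
        Parallel.For(0, data.Length,
            () => 0.0,                                   // per-thread initial value
            (i, _, local) => local + Math.Sqrt(data[i]), // per-thread work
            local => { lock (gate) sum += local; });     // merge partial sums
        return sum;
    }

    static void Main()
    {
        var data = new double[10_000_000];
        for (int i = 0; i < data.Length; i++) data[i] = i;
        Console.WriteLine(SumSequential(data));
        Console.WriteLine(SumParallel(data)); // may differ in the last digits (FP order)
    }
}
```

Note the merge under a lock: a naive `sum += ...` inside the parallel body would be exactly the kind of data race that step 9 warns about.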
Step 8: Forecast the different options. How much performance gain do you expect for each individual option?
If you don't reach the required performance with all options together, you might not even want to start touching the code.
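A back-of-the-envelope forecast in the spirit of Amdahl's law can tell you whether an option is even worth implementing (the numbers below are hypothetical):

```csharp
using System;

// Hypothetical forecast: the profiler attributes 70 % of the runtime to
// one hot loop, and you expect an 8x speedup there (e.g. from
// multithreading). Amdahl's law gives the end-to-end gain; even an
// infinitely fast fix of that loop could only yield 1 / (1 - 0.7) = 3.3x.
double hotFraction  = 0.70; // share of total runtime in the hot spot
double localSpeedup = 8.0;  // expected speedup of the hot spot alone
double overall = 1.0 / ((1.0 - hotFraction) + hotFraction / localSpeedup);
Console.WriteLine($"Expected end-to-end speedup: {overall:F2}x"); // ~2.58x
```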
Step 9: Set up version control, because you'll need it.
See step 13. You must be able to roll back: you may get your multithreading wrong and introduce data races, or an optimization may turn out to be slower than before. You must be able to return to a clean state.
Step 10: Implement the (next) most promising option
Step 11: Verify the result, i.e. profile again (on the same machine, same settings, ...)
Step 12: Compare the new result against the initial result.
Step 13: Keep the changes if they are good, rollback if not.
Step 14: If the performance goal is reached, stop. If not, go back to step 7 or 8 and implement one of the other options.
Stop means: do not implement more. You have reached the requirement; do not waste your time on further optimization. "Premature optimization is the root of all evil" [Donald Knuth]
> My previous question on how to improve performance was closed.
When asking a performance question here on Stack Overflow:
- provide evidence that you have the basic knowledge.
- provide evidence that you followed the rules (e.g. release build, ...).
- provide the metric you are optimizing.
- provide your hardware details.
- do not post a link to hundreds of lines of code. Reduce the affected code to a minimum.
- run that minimized code in BenchmarkDotNet (see the sketch after this list), because
  - out of the box, it does a lot of things right (like averaging)
  - it's free, so everybody can use it to verify your results
- provide the numbers - all numbers (!) - from steps 4, 5 and 11.
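A minimal BenchmarkDotNet setup could look like this; the string-building workload, class names, and `[Params]` values are made up for illustration, as a stand-in for your minimized code:

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // also report allocations and GC collections
public class StringBuildingBenchmarks // hypothetical workload
{
    [Params(100, 1_000)]
    public int N;

    [Benchmark(Baseline = true)]
    public string Concat()
    {
        string s = "";
        for (int i = 0; i < N; i++) s += 'x';
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < N; i++) sb.Append('x');
        return sb.ToString();
    }
}

public class Program
{
    // Run with: dotnet run -c Release
    public static void Main() => BenchmarkRunner.Run<StringBuildingBenchmarks>();
}
```

BenchmarkDotNet does the warm-up and averaging for you, complains if you try to benchmark a debug build, and prints the hardware and runtime details asked for in step 4 at the top of its report.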