Adapt dotnet memory allocations #705
Conversation
Force-pushed from 2841ffc to 8edf525
epompeii left a comment
@ricardoboss thank you for all of your work so far!
I've made a few suggestions and had a couple of questions.
Would it be possible to get a smaller example file or are they all this large?
Yeah, of course, it was just easily accessible for me. I can provide a smaller one.
```rust
pub mod dotnet {
    use bencher_valid::{BYTES, NANOSECONDS};

    create_measure!(Latency, "Latency", "latency", NANOSECONDS);
```
We shouldn't need to create a new Latency Measure; we should just use the existing one.
Oh ok, I thought every adapter should define its own set. Will switch to the default measure then.
```rust
    Some(results_map.into())
}

pub fn new_dotnet(benchmark_metrics: Vec<(BenchmarkName, Vec<DotNetMeasure>)>) -> Option<Self> {
```
For consistency, let's move this method above `new_iai`. Adapters are (mostly) sorted alphabetically across the code base.
```rust
}

#[test]
fn adapter_c_sharp_dot_net_memory() {
```
Can the harness collect both memory and wall-clock time measurements at the same time or are they mutually exclusive?
That is above my pay grade... I have no idea 😅
It seems like it does:
Yes, BenchmarkDotNet collects both simultaneously (well, technically in separate runs to avoid interference, but in the same invocation). When you add
[MemoryDiagnoser], the harness:
- Runs timing measurements normally
- Performs a separate run for memory/GC metrics (so they don't skew latency numbers)
- Combines everything into one unified JSON output per benchmark
In the sample file, there are both:
```json
{
  "Method": "Tokenize",
  "Parameters": "ExampleFileName=expressions.step",
  "Statistics": {
    "Mean": 106.338...,
    "Median": 105.607...,
    "StandardDeviation": 3.223...,
    ...
  },
  "Memory": {
    "Gen0Collections": 168,
    "Gen1Collections": 0,
    "Gen2Collections": 0,
    "TotalOperations": 4194304,
    "BytesAllocatedPerOperation": 672
  }
}
```
With that being the case, we're going to want to make sure we:
- Always collect the wall-clock time measurements
- Optionally collect the memory measurements (if they are present)
Right now, it seems like we are only collecting the memory measurements if they are present (skipping 1.).
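To make point 1 concrete, here is a minimal Rust sketch of the intended control flow. The types (`Statistics`, `Memory`, `BenchmarkEntry`) and the `collect_metrics` function are simplified stand-ins invented for illustration, not the actual bencher adapter API:

```rust
// Stand-in types; the real adapter works with bencher's JsonNewMetric and
// Measure types. This only illustrates "always latency, optionally memory".
struct Statistics {
    mean: f64, // mean wall-clock time in nanoseconds
}

struct Memory {
    bytes_allocated_per_operation: u64,
}

struct BenchmarkEntry {
    statistics: Statistics,
    memory: Option<Memory>, // absent unless [MemoryDiagnoser] was used
}

fn collect_metrics(entry: &BenchmarkEntry) -> Vec<(&'static str, f64)> {
    // 1. Always collect the wall-clock time measurement.
    let mut metrics = vec![("latency", entry.statistics.mean)];
    // 2. Optionally collect the memory measurement, if present.
    if let Some(memory) = &entry.memory {
        metrics.push(("allocated", memory.bytes_allocated_per_operation as f64));
    }
    metrics
}

fn main() {
    let entry = BenchmarkEntry {
        statistics: Statistics { mean: 106.338 },
        memory: Some(Memory { bytes_allocated_per_operation: 672 }),
    };
    // Both measures are collected when "Memory" is present in the JSON.
    println!("{:?}", collect_metrics(&entry));
}
```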
I'm not sure I follow... How is the statistics object relevant for the memory allocations? Isn't this used for the latency measurement? That part should mostly be untouched by my changes (latency is still being collected)
Great! Then we just need to make sure we are testing that the latency measures are still being collected properly.
That is, this `adapter_c_sharp_dot_net_memory` test should assert on each expected measure (including Latency).
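A self-contained sketch of the shape such an assertion could take. `parsed_measures` is a hypothetical stand-in for the adapter output (measure slug mapped to value); the real test would run the dotnet adapter against the sample file instead:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for parsed adapter output: measure slug -> value.
fn parsed_measures() -> HashMap<&'static str, f64> {
    let mut measures = HashMap::new();
    measures.insert("latency", 106.338); // always present
    measures.insert("allocated", 672.0); // present when Memory was diagnosed
    measures
}

fn main() {
    let measures = parsed_measures();
    // Assert on each expected measure, including Latency.
    assert!(measures.contains_key("latency"), "latency must always be collected");
    assert!(measures.contains_key("allocated"), "allocated expected when Memory is present");
}
```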
```rust
    );
}

pub mod dotnet {
```
For consistency, let's move this below `json` and above `iai`.
```rust
if memory.is_some() {
    let json_allocated_metric = JsonNewMetric {
        value: memory.unwrap().bytes_allocated_per_operation.into(),
    let m = memory.unwrap();
```
Never unwrap in production code. Try this instead:
```diff
- let m = memory.unwrap();
+ if let Some(memory) = memory {
```
Note this would replace the `if memory.is_some() {` above, so we wouldn't need the `let m = memory.unwrap();` at all.
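For illustration, here are the two patterns side by side in generic Rust (not the actual adapter code), showing why `if let` is preferred over `is_some()` plus `unwrap()`:

```rust
// Risky pattern: the unwrap panics at runtime if `memory` is None,
// and the check and the extraction are two separate steps.
fn bytes_unwrap(memory: Option<u64>) -> Option<u64> {
    if memory.is_some() {
        Some(memory.unwrap()) // unwrap can panic in production code
    } else {
        None
    }
}

// Idiomatic pattern: `if let` both tests and binds the inner value,
// so no unwrap is needed and the compiler enforces the None case.
fn bytes_if_let(memory: Option<u64>) -> Option<u64> {
    if let Some(bytes) = memory {
        Some(bytes)
    } else {
        None
    }
}

fn main() {
    assert_eq!(bytes_unwrap(Some(672)), Some(672));
    assert_eq!(bytes_if_let(Some(672)), Some(672));
    assert_eq!(bytes_if_let(None), None);
}
```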
Fixes #699
Adds handling for the `Memory` key in BenchmarkDotNet output, in case C# benchmarks are run with the `[MemoryDiagnoser]` attribute. If found, adds a new metric, `allocated`, measured in bytes.