benchmark

Implements the benchmark module for runtime benchmarking.

You can import these APIs from the benchmark package. For example:

import benchmark
from time import sleep

You can pass any fn as a parameter into benchmark.run[...](); it returns a Report from which you can get the mean, duration, max, and more:

fn sleeper():
    sleep(0.01)

var report = benchmark.run[sleeper]()
print(report.mean())
0.012256487394957985

You can print a full report:

report.print()
---------------------
Benchmark Report (s)
---------------------
Mean: 0.012265747899159664
Total: 1.459624
Iters: 119
Warmup Mean: 0.01251
Warmup Total: 0.025020000000000001
Warmup Iters: 2
Fastest Mean: 0.0121578
Slowest Mean: 0.012321428571428572

Or all the batch runs:

report.print_full()
---------------------
Benchmark Report (s)
---------------------
Mean: 0.012368649122807017
Total: 1.410026
Iters: 114
Warmup Mean: 0.0116705
Warmup Total: 0.023341000000000001
Warmup Iters: 2
Fastest Mean: 0.012295586956521738
Slowest Mean: 0.012508099999999999

Batch: 1
Iterations: 20
Mean: 0.012508099999999999
Duration: 0.250162

Batch: 2
Iterations: 46
Mean: 0.012295586956521738
Duration: 0.56559700000000002

Batch: 3
Iterations: 48
Mean: 0.012380562499999999
Duration: 0.59426699999999999

If you want to use a different time unit, you can import Unit and pass it as an argument:

from benchmark import Unit

report.print(Unit.ms)
---------------------
Benchmark Report (ms)
---------------------
Mean: 0.012312411764705882
Total: 1.465177
Iters: 119
Warmup Mean: 0.012505499999999999
Warmup Total: 0.025010999999999999
Warmup Iters: 2
Fastest Mean: 0.012015649999999999
Slowest Mean: 0.012421204081632654

The units are just aliases for StringLiteral, so you can, for example:

print(report.mean("ms"))
12.199145299145298

benchmark.run takes optional arguments that change its behavior. For example, to set the warmup iterations to 5:

var r = benchmark.run[sleeper](5)
print(r.mean())
0.012004808080808081

To set 1 warmup iteration, 2 max iterations, a min total time of 3 s, and a max total time of 4 s:

var r = benchmark.run[sleeper](1, 2, 3, 4)

Note that the min total time takes precedence over max iterations.
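
Putting those options together, here is a minimal end-to-end sketch (the main entry point and variable names are illustrative):

import benchmark
from time import sleep

fn sleeper():
    sleep(0.01)

fn main():
    # 1 warmup iter, 2 max iters, min total time 3 s, max total time 4 s.
    # The 3 s minimum takes precedence over the 2-iteration cap.
    var r = benchmark.run[sleeper](1, 2, 3, 4)
    r.print()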

Batch

A batch of benchmarks. The benchmark.run() function works out how many iterations to run in each batch based on how long the previous iterations took.

Fields:

  • duration (Int): Total duration of batch stored as nanoseconds.
  • iterations (Int): Total iterations in the batch.

Implemented traits:

AnyType, CollectionElement, Copyable, Movable

Methods:

mean

mean(self: Self, unit: String) -> SIMD[f64, 1]

Returns the average duration of the batch.

Args:

  • unit (String): The time unit to display, for example ns, ms, or s (default s).

Returns:

The average duration of the batch.
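
Continuing the sleeper example above, a report's batches can be inspected individually through the runs field of Report (described below); a minimal sketch, assuming ordinary List indexing:

var report = benchmark.run[sleeper]()
for i in range(len(report.runs)):
    # Each Batch exposes iterations, duration (ns), and mean(unit)
    print("Batch", i + 1, "iters:", report.runs[i].iterations)
    print("mean (s):", report.runs[i].mean("s"))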

Unit

Time Unit used by Benchmark Report.

Aliases:

  • ns = "ns": Nanoseconds
  • ms = "ms": Milliseconds
  • s = "s": Seconds

Implemented traits:

AnyType

Report

Contains the average execution time, iterations, min and max of each batch.

Fields:

  • warmup_iters (Int): The total warmup iterations.
  • warmup_duration (Int): The total duration it took to warmup.
  • runs (List[Batch]): A List of benchmark runs.

Implemented traits:

AnyType, CollectionElement, Copyable, Movable

Methods:

__init__

__init__(inout self: Self)

Default initializer for the Report.

Sets all values to 0.

__copyinit__

__copyinit__(inout self: Self, existing: Self)

Creates a shallow copy (it doesn't copy the data).

Args:

  • existing (Self): The Report to copy.

iters

iters(self: Self) -> Int

The total benchmark iterations.

Returns:

The total benchmark iterations.

duration

duration(self: Self, unit: String) -> SIMD[f64, 1]

The total duration it took to run all benchmarks.

Args:

  • unit (String): The time unit to display, for example ns, ms, or s (default s).

Returns:

The total duration it took to run all benchmarks.

mean

mean(self: Self, unit: String) -> SIMD[f64, 1]

The average duration of all benchmark runs.

Args:

  • unit (String): The time unit to display, for example ns, ms, or s (default s).

Returns:

The average duration of all benchmark runs.

min

min(self: Self, unit: String) -> SIMD[f64, 1]

The batch of benchmarks that was the fastest to run.

Args:

  • unit (String): The time unit to display, for example ns, ms, or s (default s).

Returns:

The fastest duration out of all batches.

max

max(self: Self, unit: String) -> SIMD[f64, 1]

The batch of benchmarks that was the slowest to run.

Args:

  • unit (String): The time unit to display, for example ns, ms, or s (default s).

Returns:

The slowest duration out of all batches.
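
For a quick sense of the spread across batches, min and max can be compared against the overall mean; a minimal sketch, given a report from benchmark.run:

print("mean (ms):", report.mean("ms"))
print("fastest batch mean (ms):", report.min("ms"))
print("slowest batch mean (ms):", report.max("ms"))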

print

print(self: Self, unit: String)

Prints out the shortened version of the report.

Args:

  • unit (String): The time unit to display, for example ns, ms, or s (default s).

print_full

print_full(self: Self, unit: String)

Prints out the full version of the report with each batch of benchmark runs.

Args:

  • unit (String): The time unit to display, for example ns, ms, or s (default s).

run

run[func: fn() -> None](num_warmup: Int, max_iters: Int, min_runtime_secs: SIMD[f64, 1], max_runtime_secs: SIMD[f64, 1], max_batch_size: Int) -> Report

Benchmarks the function passed in as a parameter.

Benchmarking continues until min_runtime_secs has elapsed and either max_runtime_secs or max_iters is reached.

Parameters:

  • func (fn() -> None): The function to benchmark.

Args:

  • num_warmup (Int): Number of warmup iterations to run before starting benchmarking (default 2).
  • max_iters (Int): Max number of iterations to run (default 1_000_000_000).
  • min_runtime_secs (SIMD[f64, 1]): Lower bound on benchmarking time in secs (default 2).
  • max_runtime_secs (SIMD[f64, 1]): Upper bound on benchmarking time in secs (default 60).
  • max_batch_size (Int): The maximum number of iterations to perform per time measurement.

Returns:

A Report containing the collected benchmark statistics.

run[func: fn() capturing -> None](num_warmup: Int, max_iters: Int, min_runtime_secs: SIMD[f64, 1], max_runtime_secs: SIMD[f64, 1], max_batch_size: Int) -> Report

Benchmarks the function passed in as a parameter.

Benchmarking continues until min_runtime_secs has elapsed and either max_runtime_secs or max_iters is reached.

Parameters:

  • func (fn() capturing -> None): The function to benchmark.

Args:

  • num_warmup (Int): Number of warmup iterations to run before starting benchmarking (default 2).
  • max_iters (Int): Max number of iterations to run (default 1_000_000_000).
  • min_runtime_secs (SIMD[f64, 1]): Lower bound on benchmarking time in secs (default 2).
  • max_runtime_secs (SIMD[f64, 1]): Upper bound on benchmarking time in secs (default 60).
  • max_batch_size (Int): The maximum number of iterations to perform per time measurement.

Returns:

A Report containing the collected benchmark statistics.
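
Because this overload accepts a capturing closure, the benchmarked function can read state from its enclosing scope. A minimal sketch, assuming the @parameter nested-fn closure syntax; benchmark.keep is an assumed helper here, used to stop the compiler from optimizing the loop away:

import benchmark

fn main():
    var n = 1000

    @parameter
    fn work():
        var total = 0
        for i in range(n):  # captures n from the enclosing scope
            total += i
        benchmark.keep(total)  # assumed helper; prevents dead-code elimination

    var report = benchmark.run[work]()
    report.print()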