
Mojo struct

Bench

struct Bench

A benchmark harness for running multiple benchmarks and comparing their results.

Example:

from benchmark import (
    Bench,
    BenchConfig,
    Bencher,
    BenchId,
    ThroughputMeasure,
    BenchMetric,
    Format,
)
from utils import IndexList
from gpu.host import DeviceContext
from pathlib import Path

fn example_kernel():
    print("example_kernel")

var shape = IndexList[2](1024, 1024)
var bench = Bench(BenchConfig(max_iters=100))

@parameter
@always_inline
fn example(mut b: Bencher, shape: IndexList[2]) capturing raises:
    @parameter
    @always_inline
    fn kernel_launch(ctx: DeviceContext) raises:
        ctx.enqueue_function[example_kernel](
            grid_dim=shape[0], block_dim=shape[1]
        )

    var bench_ctx = DeviceContext()
    b.iter_custom[kernel_launch](bench_ctx)

bench.bench_with_input[IndexList[2], example](
    BenchId("top_k_custom", "gpu"),
    shape,
    ThroughputMeasure(
        BenchMetric.elements, shape.flattened_length()
    ),
    ThroughputMeasure(
        BenchMetric.flops, shape.flattened_length() * 3  # number of ops
    ),
)
# Add more benchmarks like above to compare results

# Pretty print in table format
print(bench)

# Dump report to csv file
bench.config.out_file = Path("out.csv")
bench.dump_report()

# Print in tabular csv format
bench.config.format = Format.tabular
print(bench)

You can pass arguments when running a program that makes use of Bench:

mojo benchmark.mojo -o out.csv -r 10

This repeats each benchmark 10 times and writes the output to out.csv in CSV format.

Fields

  • config (BenchConfig): Benchmark configuration object that controls the length and frequency of benchmark runs.
  • mode (Mode): Benchmark mode object representing benchmark or test mode.
  • info_vec (List[BenchmarkInfo]): A list containing the benchmark info.
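
These fields can be adjusted after construction, as the example above does (Path and Format imports as in the example):

bench.config.out_file = Path("out.csv")  # where dump_report() writes its output
bench.config.format = Format.tabular     # how results are rendered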

Implemented traits

AnyType, UnknownDestructibility, Writable

Methods

__init__

__init__(out self, config: Optional[BenchConfig] = Optional(None), mode: Mode = Mode(0))

Constructs a Benchmark object based on specific configuration and mode.

Args:

  • config (Optional[BenchConfig]): Benchmark configuration object to control the length and frequency of benchmarks.
  • mode (Mode): Benchmark mode object representing benchmark or test mode.
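
For example, a minimal sketch using only options shown on this page:

from benchmark import Bench, BenchConfig

var default_bench = Bench()                           # default configuration and mode
var capped_bench = Bench(BenchConfig(max_iters=100))  # cap each benchmark at 100 iterations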

bench_with_input

bench_with_input[: origin.set, //, T: AnyType, bench_fn: fn(mut Bencher, T) raises capturing -> None](mut self, bench_id: BenchId, input: T, measures: List[ThroughputMeasure] = List())

Benchmarks an input function with input args of type AnyType.

Parameters:

  • T (AnyType): Benchmark function input type.
  • bench_fn (fn(mut Bencher, T) raises capturing -> None): The function to be benchmarked.

Args:

  • bench_id (BenchId): The benchmark Id object used for identification.
  • input (T): Represents the target function's input arguments.
  • measures (List[ThroughputMeasure]): Optional list of ThroughputMeasure values.

bench_with_input[: origin.set, //, T: AnyType, bench_fn: fn(mut Bencher, T) raises capturing -> None](mut self, bench_id: BenchId, input: T, *measures: ThroughputMeasure)

Benchmarks an input function with input args of type AnyType.

Parameters:

  • T (AnyType): Benchmark function input type.
  • bench_fn (fn(mut Bencher, T) raises capturing -> None): The function to be benchmarked.

Args:

  • bench_id (BenchId): The benchmark Id object used for identification.
  • input (T): Represents the target function's input arguments.
  • *measures (ThroughputMeasure): Variadic list of ThroughputMeasure values.

bench_with_input[: origin.set, //, T: AnyTrivialRegType, bench_fn: fn(mut Bencher, T) raises capturing -> None](mut self, bench_id: BenchId, input: T, measures: List[ThroughputMeasure] = List())

Benchmarks an input function with input args of type AnyTrivialRegType.

Parameters:

  • T (AnyTrivialRegType): Benchmark function input type.
  • bench_fn (fn(mut Bencher, T) raises capturing -> None): The function to be benchmarked.

Args:

  • bench_id (BenchId): The benchmark Id object used for identification.
  • input (T): Represents the target function's input arguments.
  • measures (List[ThroughputMeasure]): Optional list of ThroughputMeasure values.

bench_with_input[: origin.set, //, T: AnyTrivialRegType, bench_fn: fn(mut Bencher, T) raises capturing -> None](mut self, bench_id: BenchId, input: T, *measures: ThroughputMeasure)

Benchmarks an input function with input args of type AnyTrivialRegType.

Parameters:

  • T (AnyTrivialRegType): Benchmark function input type.
  • bench_fn (fn(mut Bencher, T) raises capturing -> None): The function to be benchmarked.

Args:

  • bench_id (BenchId): The benchmark Id object used for identification.
  • input (T): Represents the target function's input arguments.
  • *measures (ThroughputMeasure): Variadic list of ThroughputMeasure values.
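
For example, a minimal CPU-only sketch of the variadic overload (the workload and names here are hypothetical, following the pattern of the example at the top of this page):

from benchmark import Bench, Bencher, BenchId, BenchMetric, ThroughputMeasure

var bench = Bench()
var n = 1024

@parameter
@always_inline
fn bench_fill(mut b: Bencher, size: Int) capturing raises:
    @parameter
    @always_inline
    fn fill():
        # Hypothetical workload: append size elements to a list.
        var xs = List[Int]()
        for i in range(size):
            xs.append(i)
    b.iter[fill]()

bench.bench_with_input[Int, bench_fill](
    BenchId("fill", "cpu"),
    n,
    ThroughputMeasure(BenchMetric.elements, n),
)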

bench_function

bench_function[: origin.set, //, bench_fn: fn(mut Bencher) capturing -> None](mut self, bench_id: BenchId, measures: List[ThroughputMeasure] = List())

Benchmarks or tests an input function.

Parameters:

  • bench_fn (fn(mut Bencher) capturing -> None): The function to be benchmarked.

Args:

  • bench_id (BenchId): The benchmark Id object used for identification.
  • measures (List[ThroughputMeasure]): Optional list of ThroughputMeasure values.

bench_function[: origin.set, //, bench_fn: fn(mut Bencher) capturing -> None](mut self, bench_id: BenchId, *measures: ThroughputMeasure)

Benchmarks or tests an input function.

Parameters:

  • bench_fn (fn(mut Bencher) capturing -> None): The function to be benchmarked.

Args:

  • bench_id (BenchId): The benchmark Id object used for identification.
  • *measures (ThroughputMeasure): Variadic list of ThroughputMeasure values.

bench_function[: origin.set, //, bench_fn: fn(mut Bencher) raises capturing -> None](mut self, bench_id: BenchId, measures: List[ThroughputMeasure] = List())

Benchmarks or tests an input function.

Parameters:

  • bench_fn (fn(mut Bencher) raises capturing -> None): The function to be benchmarked.

Args:

  • bench_id (BenchId): The benchmark Id object used for identification.
  • measures (List[ThroughputMeasure]): Optional list of ThroughputMeasure values.

bench_function[: origin.set, //, bench_fn: fn(mut Bencher) raises capturing -> None](mut self, bench_id: BenchId, *measures: ThroughputMeasure)

Benchmarks or tests an input function.

Parameters:

  • bench_fn (fn(mut Bencher) raises capturing -> None): The function to be benchmarked.

Args:

  • bench_id (BenchId): The benchmark Id object used for identification.
  • *measures (ThroughputMeasure): Variadic list of ThroughputMeasure values.
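
For example, a minimal sketch of the non-raising overload (the workload and names here are hypothetical; keep and Bencher.iter come from the benchmark package):

from benchmark import Bench, Bencher, BenchId, keep

var bench = Bench()

@parameter
@always_inline
fn bench_sum(mut b: Bencher) capturing:
    @parameter
    @always_inline
    fn run():
        var total = 0
        for i in range(1000):
            total += i
        keep(total)  # keep the result alive so the loop is not optimized away
    b.iter[run]()

bench.bench_function[bench_sum](BenchId("sum_1000", "cpu"))
print(bench)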

dump_report

dump_report(mut self)

Prints out the report from a Benchmark execution. If Bench.config.out_file is set, it also writes the output to the file defined in out_file, using the format set in out_file_format.
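
For example, mirroring the example above:

bench.config.out_file = Path("out.csv")
bench.dump_report()  # prints the report and also writes it to out.csv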

pad

pad(self, width: Int, string: String) -> String

Pads a string to a given width.

Args:

  • width (Int): The width to pad the string to.
  • string (String): The string to pad.

Returns:

A string padded to the given width.

__str__

__str__(self) -> String

Returns a string representation of the benchmark results.

Returns:

A string representing the benchmark results.

write_to

write_to[W: Writer](self, mut writer: W)

Writes the benchmark results to a writer.

Parameters:

  • W (Writer): A type conforming to the Writer trait.

Args:

  • writer (W): The writer to write to.
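
For example, a minimal sketch that renders the results into a String (this assumes String conforms to Writer, as it does in current Mojo):

var buffer = String()
bench.write_to(buffer)
print(buffer)  # same output as print(bench)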