simplebench.runners module

Test runners for benchmarking.

class simplebench.runners.SimpleRunner(
*,
case: Case,
kwargs: dict[str, Any],
session: Session | None = None,
runner: Callable[..., Any] | None = None,
)[source]

Bases: object

A class to run benchmarks for various actions.

Parameters:
  • case (Case) – The benchmark case to run.

  • kwargs (dict[str, Any]) – The keyword arguments for the benchmark case.

  • session (Session, optional) – The session in which the benchmark is run.

  • runner (Callable[..., Any], optional) – The function to use to run the benchmark. If None, uses default_runner() from SimpleRunner.

Variables:
  • case (Case) – The benchmark case to run.

  • kwargs (dict[str, Any]) – The keyword arguments for the benchmark case.

  • session (Session, optional) – The session in which the benchmark is run.

  • run (Callable[..., Any]) – The function to use to run the benchmark.

calibrate_rounds(
*,
timer: Callable[[], int],
kwargs: dict[str, Any],
setup: Callable[..., Any] | None = None,
teardown: Callable[..., Any] | None = None,
action: Callable[..., Any],
) → int[source]

Auto-calibrate the number of rounds for the benchmark.

This method estimates an appropriate number of rounds to use for the benchmark based on the precision and overhead of the timer function and the expected execution time of the action being benchmarked.

The goal is to choose a number of rounds such that the total time taken for the action is significantly larger than the timer precision and overhead, to reduce the impact of timer quantization errors on the measurement.

Parameters:
  • timer – The timer function to use for the benchmark.

  • kwargs – Keyword arguments to pass to the action.

  • setup – A setup function to run before each iteration.

  • teardown – A teardown function to run after each iteration.

  • action – The action to benchmark.

Returns:

The calibrated number of rounds for the benchmark.
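The calibration idea described above can be sketched in plain Python. This is a simplified illustration, not the library's implementation: it estimates the timer's resolution from back-to-back reads, then doubles the round count until the total measured time comfortably exceeds that resolution. The helper names, the `min_ratio` threshold, and the round cap are all assumptions for the sketch.

```python
import time


def estimate_resolution(timer, samples: int = 200) -> int:
    """Smallest nonzero delta observed between consecutive timer reads."""
    best = None
    for _ in range(samples):
        start = timer()
        end = timer()
        delta = end - start
        if delta > 0 and (best is None or delta < best):
            best = delta
    return best or 1


def calibrate_rounds_sketch(action, *, timer=time.perf_counter_ns,
                            min_ratio: int = 100) -> int:
    """Double the round count until total action time dwarfs timer resolution.

    This reduces the impact of timer quantization on the measurement,
    as described above. A sketch only: the real calibrate_rounds() also
    accounts for setup/teardown and timer-call overhead.
    """
    resolution = estimate_resolution(timer)
    rounds = 1
    while True:
        start = timer()
        for _ in range(rounds):
            action()
        elapsed = timer() - start
        if elapsed >= min_ratio * resolution:
            return rounds
        if rounds >= 1_000_000:  # safety cap for pathological timers
            return rounds
        rounds *= 2
```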

case: Case

The benchmark Case to run.

default_runner(
*,
n: int | float,
action: Callable[..., Any],
setup: Callable[..., Any] | None = None,
teardown: Callable[..., Any] | None = None,
kwargs: dict[str, Any] | None = None,
) → Results[source]

Run a generic benchmark using the specified action and test case.

Parameters:
  • n –

The O() 'n' weight of the benchmark, used as the input-size weight for O() complexity analysis.

    For example, if the action being benchmarked is a function that sorts a list of length n, then n should be the length of the list. If the action being benchmarked is a function that performs a constant-time operation, then n should be 1.

  • action – The action to benchmark.

  • setup – A setup function to run before each iteration.

  • teardown – A teardown function to run after each iteration.

  • kwargs – Keyword arguments to pass to the action.

Returns:

The results of the benchmark.

Return type:

Results
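The measurement loop a default runner performs can be illustrated with a self-contained sketch. The `SketchResults` type and its fields are hypothetical stand-ins for simplebench's Results; only the shape of the loop (setup, timed action, teardown, per-round timings) reflects the behavior documented above.

```python
import time
from dataclasses import dataclass, field


@dataclass
class SketchResults:
    """Hypothetical stand-in for simplebench's Results type."""
    n: float
    timings_ns: list = field(default_factory=list)


def default_runner_sketch(*, n, action, setup=None, teardown=None,
                          kwargs=None, rounds=10,
                          timer=time.perf_counter_ns) -> SketchResults:
    """Time action(**kwargs) over several rounds.

    Setup runs before and teardown after each iteration; only the
    action itself is inside the timed region.
    """
    kwargs = kwargs or {}
    results = SketchResults(n=n)
    for _ in range(rounds):
        if setup is not None:
            setup()
        start = timer()
        action(**kwargs)
        results.timings_ns.append(timer() - start)
        if teardown is not None:
            teardown()
    return results
```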

kwargs: dict[str, Any]

The keyword arguments for the benchmark function.

run(
*,
n: int | float,
action: Callable[..., Any],
setup: Callable[..., Any] | None = None,
teardown: Callable[..., Any] | None = None,
kwargs: dict[str, Any] | None = None,
) → Results[source]

Enforce a timeout while running the benchmark with the specified runner.

This method wraps the benchmark execution in a Timeout to enforce the timeout specified in the benchmark case.

Warning

If the benchmark exceeds the timeout specified in the case, a SimpleBenchTimeoutError is raised and no results are returned.

Because the worker thread may still be running in the background after a timeout, exit the program cleanly rather than continuing execution: continuing can, and likely will, lead to undefined behavior.

Parameters:
  • n –

The O() 'n' weight of the benchmark, used as the input-size weight for O() complexity analysis.

    For example, if the action being benchmarked is a function that sorts a list of length n, then n should be the length of the list. If the action being benchmarked is a function that performs a constant-time operation, then n should be 1.

  • action – The function to benchmark.

  • setup – A setup function to run before each iteration.

  • teardown – A teardown function to run after each iteration.

  • kwargs – Keyword arguments to pass to the function being benchmarked.

Returns:

The results of the benchmark.

Return type:

Results

Raises:

SimpleBenchTimeoutError – If the benchmark exceeds the specified timeout for the case.
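The timeout behavior described above can be approximated with the standard library's concurrent.futures. This is a sketch, not simplebench's implementation: `SketchTimeoutError` stands in for SimpleBenchTimeoutError, and, as the warning notes, a timed-out worker thread cannot be killed and may keep running in the background.

```python
import concurrent.futures


class SketchTimeoutError(Exception):
    """Hypothetical stand-in for SimpleBenchTimeoutError."""


def run_with_timeout(benchmark, timeout_s: float):
    """Run `benchmark` in a worker thread; raise if it misses the deadline."""
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = executor.submit(benchmark)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError as exc:
        raise SketchTimeoutError(
            f"benchmark exceeded {timeout_s}s timeout") from exc
    finally:
        # wait=False: don't block on the (possibly still-running) worker.
        executor.shutdown(wait=False)
```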

session: Session | None

The session in which the benchmark is run.

property variation_marks: dict[str, Any]

Return the variation marks for the benchmark.

The variation marks are defined by the variation_cols and the current keyword arguments to the function being benchmarked.

They identify, from the kwargs values, which specific variation is being tested in a given run.

Returns:

The variation marks for the benchmark.

Return type:

dict[str, Any]
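As a rough illustration of how variation marks might be derived, the sketch below filters the current kwargs down to the declared variation columns. The mapping shape of `variation_cols` (kwarg name to human-readable label) is an assumption for the sketch, not documented API.

```python
def variation_marks_sketch(variation_cols: dict, kwargs: dict) -> dict:
    """Pick out the kwargs that correspond to declared variation columns.

    The resulting marks identify the specific variation being tested
    in this run, keyed by the varying kwarg names.
    """
    return {name: kwargs[name] for name in variation_cols if name in kwargs}
```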