src.bluetooth_sig.utils.profiling

Profiling and performance measurement utilities for Bluetooth SIG library.

Attributes

| Name | Description |
|------|-------------|
| T | Type variable for the return type of the benchmarked callable |

Classes

| Name | Description |
|------|-------------|
| ProfilingSession | Track multiple profiling results in a session. |
| TimingResult | Result of a timing measurement. |

Functions

| Name | Description |
|------|-------------|
| benchmark_function(→ TimingResult) | Benchmark a function by running it multiple times. |
| compare_implementations(→ dict[str, TimingResult]) | Compare performance of multiple implementations. |
| format_comparison(→ str) | Format comparison results as a human-readable table. |
| timer(→ collections.abc.Generator[dict[str, float], None, None]) | Context manager for timing a single operation. |

Module Contents

class src.bluetooth_sig.utils.profiling.ProfilingSession

Bases: msgspec.Struct

Track multiple profiling results in a session.

add_result(result: TimingResult) → None

Add a timing result to the session.

name: str
results: list[TimingResult]
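The session API can be sketched with plain dataclasses standing in for the msgspec.Struct definitions; field names match the documented attributes, but the example values and everything else here are illustrative:

```python
from dataclasses import dataclass, field

# Minimal stand-ins mirroring the documented fields; the real classes
# are msgspec.Struct subclasses in bluetooth_sig.utils.profiling.
@dataclass
class TimingResult:
    operation: str
    iterations: int
    total_time: float
    avg_time: float
    min_time: float
    max_time: float
    per_second: float

@dataclass
class ProfilingSession:
    name: str
    results: list[TimingResult] = field(default_factory=list)

    def add_result(self, result: TimingResult) -> None:
        """Add a timing result to the session."""
        self.results.append(result)

# Collect results from several benchmark runs under one named session.
session = ProfilingSession(name="parser benchmarks")
session.add_result(
    TimingResult(
        operation="Battery Level parsing",
        iterations=1000,
        total_time=0.5,
        avg_time=0.0005,
        min_time=0.0004,
        max_time=0.0009,
        per_second=2000.0,
    )
)
print(len(session.results))  # 1
```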

class src.bluetooth_sig.utils.profiling.TimingResult

Bases: msgspec.Struct

Result of a timing measurement.

avg_time: float
iterations: int
max_time: float
min_time: float
operation: str
per_second: float
total_time: float

src.bluetooth_sig.utils.profiling.benchmark_function(func: Callable[[], T], iterations: int = 1000, operation: str = 'function') → TimingResult

Benchmark a function by running it multiple times.

Parameters:
  • func – Function to benchmark (should take no arguments)

  • iterations – Number of times to run the function

  • operation – Name of the operation for reporting

Returns:

TimingResult with detailed performance metrics

Example

>>> result = benchmark_function(
...     lambda: translator.parse_characteristic("2A19", b"\x64"),
...     iterations=10000,
...     operation="Battery Level parsing",
... )
>>> print(result)

Note

Uses time.perf_counter() for high-resolution timing. The function includes a warmup run to avoid JIT compilation overhead in the measurements. Individual timings are collected to compute min/max statistics.
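The measurement loop described in the note can be sketched as follows; this is an illustrative stand-in for the documented behavior (warmup run, per-iteration timings for min/max), not the library's actual implementation:

```python
import time
from typing import Any, Callable

def benchmark_sketch(func: Callable[[], Any], iterations: int = 1000) -> dict[str, float]:
    func()  # warmup run, excluded from the measurements
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        func()
        # Keep every per-iteration timing so min/max can be computed.
        timings.append(time.perf_counter() - start)
    total = sum(timings)
    return {
        "total_time": total,
        "avg_time": total / iterations,
        "min_time": min(timings),
        "max_time": max(timings),
        "per_second": iterations / total if total else float("inf"),
    }

stats = benchmark_sketch(lambda: sum(range(1000)), iterations=100)
```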

src.bluetooth_sig.utils.profiling.compare_implementations(implementations: dict[str, Callable[[], Any]], iterations: int = 1000) → dict[str, TimingResult]

Compare performance of multiple implementations.

Parameters:
  • implementations – Dict mapping implementation name to callable

  • iterations – Number of times to run each implementation

Returns:

Dictionary mapping implementation names to their TimingResults

Example

>>> results = compare_implementations(
...     {
...         "manual": lambda: manual_parse(data),
...         "sig_lib": lambda: translator.parse_characteristic("2A19", data),
...     },
...     iterations=10000,
... )
>>> for name, result in results.items():
...     print(f"{name}: {result.avg_time * 1000:.4f}ms")

src.bluetooth_sig.utils.profiling.format_comparison(results: dict[str, TimingResult], baseline: str | None = None) → str

Format comparison results as a human-readable table.

Parameters:
  • results – Dictionary of timing results

  • baseline – Optional name of baseline implementation for comparison

Returns:

Formatted string with comparison table
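The kind of table this function produces can be sketched as below. The column layout, the dict-based stand-in for TimingResult, and the "vs baseline" ratio column are illustrative assumptions; the library's actual output format may differ:

```python
# Illustrative sketch of the compare-then-format workflow.
def format_comparison(results, baseline=None):
    base = results.get(baseline) if baseline else None
    lines = [f"{'Name':<12} {'Avg (ms)':>10} {'Ops/sec':>12} {'vs baseline':>12}"]
    for name, r in results.items():
        # Show each implementation's average time relative to the baseline.
        ratio = f"{r['avg_time'] / base['avg_time']:.2f}x" if base else "-"
        lines.append(
            f"{name:<12} {r['avg_time'] * 1000:>10.4f} {r['per_second']:>12.0f} {ratio:>12}"
        )
    return "\n".join(lines)

# Hypothetical results keyed by implementation name.
results = {
    "manual": {"avg_time": 0.0002, "per_second": 5000.0},
    "sig_lib": {"avg_time": 0.0004, "per_second": 2500.0},
}
print(format_comparison(results, baseline="manual"))
```

Passing a baseline makes the table read as "how much slower than the fastest known implementation", which is usually the question a comparison is answering.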

src.bluetooth_sig.utils.profiling.timer(_operation: str = 'operation') → collections.abc.Generator[dict[str, float], None, None]

Context manager for timing a single operation.

Parameters:

_operation – Name of the operation being timed (currently unused, reserved for future use)

Yields:

Dictionary that will contain an 'elapsed' key with the timing result

Example

>>> with timer("parse") as t:
...     parse_characteristic(data)
>>> print(f"Elapsed: {t['elapsed']:.4f}s")
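A context manager with the documented behavior (yield a dict, fill in 'elapsed' on exit) can be sketched with contextlib; this is a stand-in, not the library's code:

```python
import time
from collections.abc import Generator
from contextlib import contextmanager

@contextmanager
def timer(_operation: str = "operation") -> Generator[dict[str, float], None, None]:
    # Yield the dict first so the caller holds a reference to it,
    # then fill in 'elapsed' when the with-block exits.
    timing: dict[str, float] = {}
    start = time.perf_counter()
    try:
        yield timing
    finally:
        timing["elapsed"] = time.perf_counter() - start

with timer("demo") as t:
    sum(range(100_000))
print(f"Elapsed: {t['elapsed']:.4f}s")
```

Yielding a mutable dict (rather than a float) is what lets the caller read the result after the block finishes, since the elapsed time is only known once the block exits.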

src.bluetooth_sig.utils.profiling.T