Mojo function
max
max(src: Buffer[type, size, address_space=address_space, origin=origin]) -> SIMD[type, 1]
Computes the max element in a buffer.
Args:
- src (Buffer[type, size, address_space=address_space, origin=origin]): The buffer.
Returns:
The maximum of the buffer elements.
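The following is a minimal sketch of calling this overload; the import paths and the Buffer/pointer construction are assumptions that may differ across Mojo versions:

```mojo
from algorithm.reduction import max
from buffer import Buffer
from memory import UnsafePointer

fn main():
    # Hypothetical setup: four float32 values in heap-allocated storage.
    var data = UnsafePointer[Float32].alloc(4)
    for i in range(4):
        data[i] = Float32(i) * 2.0

    # Wrap the storage in a statically sized Buffer and reduce it.
    var buf = Buffer[DType.float32, 4](data)
    var result = max(buf)
    print(result)  # 6.0 for the values 0, 2, 4, 6

    data.free()
```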
max[reduce_axis: Int](src: NDBuffer[type, rank, shape, strides, alignment=alignment, address_space=address_space, exclusive=exclusive], dst: NDBuffer[type, rank, shape])
Computes the max across reduce_axis of an NDBuffer.
Parameters:
- reduce_axis (Int): The axis to reduce across.
Args:
- src (NDBuffer[type, rank, shape, strides, alignment=alignment, address_space=address_space, exclusive=exclusive]): The input buffer.
- dst (NDBuffer[type, rank, shape]): The output buffer.
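A hedged sketch of the axis-reduction overload follows; the NDBuffer construction, the import paths, and the convention that the reduced axis has size 1 in the destination are assumptions inferred from the signature above:

```mojo
from algorithm.reduction import max
from buffer import NDBuffer, DimList
from memory import UnsafePointer

fn main():
    # Hypothetical 2x3 input reduced across axis 1 into a 2x1 output.
    var src_ptr = UnsafePointer[Float32].alloc(6)
    for i in range(6):
        src_ptr[i] = Float32(i)
    var dst_ptr = UnsafePointer[Float32].alloc(2)

    var src = NDBuffer[DType.float32, 2, DimList(2, 3)](src_ptr)
    var dst = NDBuffer[DType.float32, 2, DimList(2, 1)](dst_ptr)

    # Reduce across the innermost axis; dst holds one maximum per row.
    max[reduce_axis=1](src, dst)
    print(dst[0, 0], dst[1, 0])  # expected: 2.0 5.0

    src_ptr.free()
    dst_ptr.free()
```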
max[: origin.set, : origin.set, //, type: DType, input_fn: fn[Int, Int](IndexList[$1]) capturing -> SIMD[$1|2, $0], output_fn: fn[Int, Int](IndexList[$1], SIMD[$1|2, $0]) capturing -> None, /, single_thread_blocking_override: Bool = False, target: StringLiteral = "cpu"](input_shape: IndexList[size], reduce_dim: Int, context: MojoCallContextPtr = MojoCallContextPtr())
Computes the max across the input and output shape.
This performs the max computation on the domain specified by input_shape, loading the inputs using input_fn. The results are stored using output_fn.
Parameters:
- type (DType): The type of the input and output.
- input_fn (fn[Int, Int](IndexList[$1]) capturing -> SIMD[$1|2, $0]): The function to load the input.
- output_fn (fn[Int, Int](IndexList[$1], SIMD[$1|2, $0]) capturing -> None): The function to store the output.
- single_thread_blocking_override (Bool): If True, the operation runs synchronously using a single thread.
- target (StringLiteral): The target to run on.
Args:
- input_shape (IndexList[size]): The input shape.
- reduce_dim (Int): The axis to perform the max on.
- context (MojoCallContextPtr): The pointer to DeviceContext.
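The functional overload can be sketched as follows; the closure parameter shapes (width and rank), the @parameter capturing style, the pointer load/store calls, and the IndexList import path are assumptions inferred from the signature rather than a verified example:

```mojo
from algorithm.reduction import max
from memory import UnsafePointer
from utils import IndexList

fn main():
    alias type = DType.float32

    # Hypothetical 2x4 row-major domain reduced across its last dimension.
    var input = UnsafePointer[Float32].alloc(8)
    var output = UnsafePointer[Float32].alloc(2)
    for i in range(8):
        input[i] = Float32(i)

    var input_shape = IndexList[2](2, 4)

    @parameter
    fn input_fn[width: Int, rank: Int](coords: IndexList[rank]) -> SIMD[type, width]:
        # Load `width` contiguous elements starting at the given coordinates.
        var offset = coords[0] * 4 + coords[1]
        return (input + offset).load[width=width]()

    @parameter
    fn output_fn[width: Int, rank: Int](coords: IndexList[rank], value: SIMD[type, width]):
        # Store the reduced result for the given output coordinates
        # (assumes the reduced axis is indexed as 0 in the output).
        (output + coords[0]).store(value)

    max[type, input_fn, output_fn](input_shape, reduce_dim=1)
    print(output[0], output[1])  # expected: 3.0 7.0

    input.free()
    output.free()
```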