Mojo function
max_pool_gpu
max_pool_gpu[dtype: DType, int_type: DType](ctx: DeviceContext, input: TileTensor[dtype, input.LayoutType, input.origin, address_space=input.address_space, linear_idx_type=input.linear_idx_type, element_size=input.element_size], filter: TileTensor[int_type, filter.LayoutType, filter.origin, address_space=filter.address_space, linear_idx_type=filter.linear_idx_type, element_size=filter.element_size], strides: TileTensor[int_type, strides.LayoutType, strides.origin, address_space=strides.address_space, linear_idx_type=strides.linear_idx_type, element_size=strides.element_size], dilations: TileTensor[int_type, dilations.LayoutType, dilations.origin, address_space=dilations.address_space, linear_idx_type=dilations.linear_idx_type, element_size=dilations.element_size], paddings: TileTensor[int_type, paddings.LayoutType, paddings.origin, address_space=paddings.address_space, linear_idx_type=paddings.linear_idx_type, element_size=paddings.element_size], output: TileTensor[dtype, output.LayoutType, output.origin, address_space=output.address_space, linear_idx_type=output.linear_idx_type, element_size=output.element_size], ceil_mode: Bool = False)
Computes max pooling on GPU.
Args:
- ctx (DeviceContext): The DeviceContext to use for GPU execution.
- input (TileTensor): (On device) Batched image input to the pool2d operator.
- filter (TileTensor): (On host) Filter size on the height and width dimensions, with assumed tuple (filter_h, filter_w).
- strides (TileTensor): (On host) Strides on the height and width dimensions, with assumed tuple (stride_h, stride_w).
- dilations (TileTensor): (On host) Dilations on the height and width dimensions, with assumed tuple (dilation_h, dilation_w).
- paddings (TileTensor): (On host) Paddings on the height and width dimensions, with assumed tuple (pad_h_before, pad_h_after, pad_w_before, pad_w_after).
- output (TileTensor): (On device) Pre-allocated output tensor space.
- ceil_mode (Bool): Ceiling mode defines the output shape and implicit padding.
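To make the parameter semantics concrete, the following is a minimal NumPy reference sketch of the computation max_pool_gpu performs on device. The function name `max_pool2d_ref`, the NHWC layout, and the output-shape formula (floor, or ceil when ceil_mode is set, of (in + pad_before + pad_after - dilation * (filter - 1) - 1) / stride, plus 1) are standard pooling conventions assumed here for illustration, not taken from this API's source:

```python
import math
import numpy as np

def max_pool2d_ref(x, filter_hw, strides, dilations, paddings, ceil_mode=False):
    # Illustrative host-side reference (hypothetical helper, not the GPU kernel).
    # x: batched input in NHWC layout.
    n, h, w, c = x.shape
    fh, fw = filter_hw          # (filter_h, filter_w)
    sh, sw = strides            # (stride_h, stride_w)
    dh, dw = dilations          # (dilation_h, dilation_w)
    ph0, ph1, pw0, pw1 = paddings  # (pad_h_before, pad_h_after, pad_w_before, pad_w_after)
    rnd = math.ceil if ceil_mode else math.floor
    # Output spatial dims; ceil_mode switches floor to ceil (implicit extra padding).
    oh = rnd((h + ph0 + ph1 - dh * (fh - 1) - 1) / sh) + 1
    ow = rnd((w + pw0 + pw1 - dw * (fw - 1) - 1) / sw) + 1
    out = np.full((n, oh, ow, c), -np.inf, dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            for ki in range(fh):
                for kj in range(fw):
                    hi = i * sh - ph0 + ki * dh
                    wj = j * sw - pw0 + kj * dw
                    # Padded positions are skipped: max is over in-bounds taps only.
                    if 0 <= hi < h and 0 <= wj < w:
                        out[:, i, j, :] = np.maximum(out[:, i, j, :], x[:, hi, wj, :])
    return out
```

For example, a 2x2 filter with stride 2 over a 1x4x4x1 input yields a 1x2x2x1 output holding the maximum of each non-overlapping window.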