Mojo function
batched_matmul_dynamic_scaled_fp8_naive
```mojo
batched_matmul_dynamic_scaled_fp8_naive[
    c_type: DType,
    a_type: DType,
    b_type: DType,
    a_scales_type: DType,
    b_scales_type: DType, //,
    *,
    scales_granularity_mnk: IndexList[3],
    transpose_b: Bool = False,
](
    c_: TileTensor[c_type, c_.LayoutType, c_.origin, address_space=c_.address_space, linear_idx_type=c_.linear_idx_type, element_size=c_.element_size],
    a_: TileTensor[a_type, a_.LayoutType, a_.origin, address_space=a_.address_space, linear_idx_type=a_.linear_idx_type, element_size=a_.element_size],
    b_: TileTensor[b_type, b_.LayoutType, b_.origin, address_space=b_.address_space, linear_idx_type=b_.linear_idx_type, element_size=b_.element_size],
    a_scales_: TileTensor[a_scales_type, a_scales_.LayoutType, a_scales_.origin, address_space=a_scales_.address_space, linear_idx_type=a_scales_.linear_idx_type, element_size=a_scales_.element_size],
    b_scales_: TileTensor[b_scales_type, b_scales_.LayoutType, b_scales_.origin, address_space=b_scales_.address_space, linear_idx_type=b_scales_.linear_idx_type, element_size=b_scales_.element_size],
    ctx: DeviceContext,
)
```
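To illustrate what a naive dynamically-scaled batched matmul computes, here is a hedged NumPy reference sketch. It is an assumption about the semantics, not the Mojo kernel itself: `A` and `B` carry block-wise scale factors whose block shape comes from `scales_granularity_mnk = (gm, gn, gk)`, each operand is dequantized by broadcasting its scales over the corresponding blocks, and the batched product is then accumulated at full precision (the function name `scaled_bmm_reference` and the exact scale layouts are illustrative).

```python
import numpy as np

def scaled_bmm_reference(a, b, a_scales, b_scales, granularity_mnk,
                         transpose_b=False):
    """Hypothetical reference for a naive dynamically-scaled batched matmul.

    Assumed layouts (illustrative, not taken from the Mojo source):
      a        : (batch, M, K)
      b        : (batch, K, N), or (batch, N, K) when transpose_b is True
      a_scales : (batch, M // gm, K // gk)  -- one scale per (gm x gk) block of A
      b_scales : (batch, K // gk, N // gn)  -- one scale per (gk x gn) block of B
    """
    gm, gn, gk = granularity_mnk
    if transpose_b:
        b = np.swapaxes(b, -1, -2)  # bring b to (batch, K, N)
    # Broadcast each block scale over its block, then dequantize.
    a_deq = a * np.repeat(np.repeat(a_scales, gm, axis=1), gk, axis=2)
    b_deq = b * np.repeat(np.repeat(b_scales, gk, axis=1), gn, axis=2)
    # Naive accumulation: plain batched matmul at full precision.
    return a_deq @ b_deq

# Small worked example: batch=1, M=2, K=4, N=2, granularity (1, 2, 2).
a = np.arange(8, dtype=np.float32).reshape(1, 2, 4)
b = np.ones((1, 4, 2), dtype=np.float32)
a_scales = np.full((1, 2, 2), 2.0, dtype=np.float32)  # every A block scaled by 2
b_scales = np.ones((1, 2, 1), dtype=np.float32)       # B scales are identity
c = scaled_bmm_reference(a, b, a_scales, b_scales, (1, 2, 2))
```

With identity scales on `B` and a uniform scale of 2 on `A`, the result is simply twice the row sums of `A` in each output column, which makes the dequantize-then-multiply order easy to check by hand.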