# qmatmul

Mojo module

## comptime values

### `K_BATCH_SIZE`

`comptime K_BATCH_SIZE = 512`

Defines the batch size of K used to pack A and unpack B weights.

## Functions

- `matmul_qint4`:
- `matmul_qint4_pack_b`:
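The function names above suggest a two-step flow: pack the int4-quantized B weights once into the kernel's blocked layout, then run the quantized matmul against the packed buffer, with the K dimension processed in `K_BATCH_SIZE` (512) chunks. A minimal sketch of that flow follows; the argument names and types are assumptions for illustration only, not the actual kernel signatures — consult each function's reference page for the real parameters.

```mojo
# Sketch only: signatures here are hypothetical, not the real qmatmul API.
fn run_qint4_matmul(a: Tensor, b_quantized: Tensor, mut c: Tensor):
    # 1) Pack the int4-quantized B weights into the kernel's expected layout.
    #    Packing (and the matching unpack inside the matmul) is K-blocked,
    #    walking K in chunks of K_BATCH_SIZE = 512.
    var b_packed = matmul_qint4_pack_b(b_quantized)

    # 2) Run the quantized matmul against the pre-packed weights.
    #    Packing once and reusing b_packed amortizes the layout cost
    #    across repeated matmul calls (e.g. per-token inference).
    matmul_qint4(a, b_packed, c)
```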