Mojo function
sliced_add
sliced_add[dtype: DType, //, target: StringSlice[StaticConstantOrigin]](c: TileTensor[dtype, address_space=c.address_space, linear_idx_type=c.linear_idx_type, element_size=c.element_size], a: TileTensor[dtype, address_space=a.address_space, linear_idx_type=a.linear_idx_type, element_size=a.element_size], b: TileTensor[dtype, address_space=b.address_space, linear_idx_type=b.linear_idx_type, element_size=b.element_size], lora_end_idx: TileTensor[DType.int64, address_space=lora_end_idx.address_space, linear_idx_type=lora_end_idx.linear_idx_type, element_size=lora_end_idx.element_size], ctx: Optional[DeviceContext])
Adds tensors a and b element-wise for rows < lora_end_idx, otherwise copies a.
This is used for LoRA where only some sequences have LoRA applied. For rows in [0, lora_end_idx): c = a + b. For rows in [lora_end_idx, batch_seq_len): c = a.
Args:
- c (TileTensor[dtype, address_space=c.address_space, linear_idx_type=c.linear_idx_type, element_size=c.element_size]): Output tensor.
- a (TileTensor[dtype, address_space=a.address_space, linear_idx_type=a.linear_idx_type, element_size=a.element_size]): First input tensor.
- b (TileTensor[dtype, address_space=b.address_space, linear_idx_type=b.linear_idx_type, element_size=b.element_size]): Second input tensor.
- lora_end_idx (TileTensor[DType.int64, address_space=lora_end_idx.address_space, linear_idx_type=lora_end_idx.linear_idx_type, element_size=lora_end_idx.element_size]): Scalar tensor holding the end index of the LoRA token portion (the rows to which the add is applied).
- ctx (Optional[DeviceContext]): Device context for GPU operations.
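The row-wise behavior can be sketched in NumPy (an illustrative sketch of the semantics only, not the Mojo TileTensor API; the function name merely mirrors the kernel above):

```python
import numpy as np

def sliced_add(c, a, b, lora_end_idx):
    # Rows in [0, lora_end_idx): element-wise add (sequences with LoRA applied).
    # Rows in [lora_end_idx, batch_seq_len): copy a through unchanged.
    end = int(lora_end_idx)
    c[:end] = a[:end] + b[:end]
    c[end:] = a[end:]

a = np.arange(12, dtype=np.float32).reshape(4, 3)
b = np.ones_like(a)
c = np.empty_like(a)
sliced_add(c, a, b, lora_end_idx=2)  # first 2 rows get a + b, last 2 rows get a
```

This mirrors a batch where only the first `lora_end_idx` sequences have a LoRA delta to add; the remaining rows pass the base projection output through untouched.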