Mojo function

sliced_add

sliced_add[dtype: DType, //, target: StringSlice[StaticConstantOrigin]](c: TileTensor[dtype, c.LayoutType, c.origin, address_space=c.address_space, linear_idx_type=c.linear_idx_type, element_shape_types=c.element_shape_types], a: TileTensor[dtype, a.LayoutType, a.origin, address_space=a.address_space, linear_idx_type=a.linear_idx_type, element_shape_types=a.element_shape_types], b: TileTensor[dtype, b.LayoutType, b.origin, address_space=b.address_space, linear_idx_type=b.linear_idx_type, element_shape_types=b.element_shape_types], lora_end_idx: TileTensor[DType.int64, lora_end_idx.LayoutType, lora_end_idx.origin, address_space=lora_end_idx.address_space, linear_idx_type=lora_end_idx.linear_idx_type, element_shape_types=lora_end_idx.element_shape_types], ctx: Optional[DeviceContext])

Adds tensors a and b element-wise for rows below lora_end_idx; for the remaining rows, copies a into c.

This is used for LoRA where only some sequences have LoRA applied:

  • For rows in [0, lora_end_idx): c = a + b
  • For rows in [lora_end_idx, batch_seq_len): c = a

Args:

  • c (TileTensor): Output tensor.
  • a (TileTensor): First input tensor.
  • b (TileTensor): Second input tensor.
  • lora_end_idx (TileTensor): Scalar tensor holding the end index of the LoRA token portion (the number of rows to which the add is applied).
  • ctx (Optional): Device context for GPU operations.
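The row-sliced behavior described above can be sketched in plain Python. This is an illustrative reference model only, not the Mojo implementation: the nested-list "tensors" and the helper name `sliced_add_ref` are assumptions for the example.

```python
def sliced_add_ref(a, b, lora_end_idx):
    """Reference semantics of sliced_add: rows before lora_end_idx
    get a + b element-wise; remaining rows are copied from a."""
    c = []
    for row_idx, (row_a, row_b) in enumerate(zip(a, b)):
        if row_idx < lora_end_idx:
            # Row in [0, lora_end_idx): element-wise add
            c.append([x + y for x, y in zip(row_a, row_b)])
        else:
            # Row in [lora_end_idx, batch_seq_len): copy of a
            c.append(list(row_a))
    return c

a = [[1, 1], [2, 2], [3, 3]]
b = [[10, 10], [20, 20], [30, 30]]
print(sliced_add_ref(a, b, 2))  # → [[11, 11], [22, 22], [3, 3]]
```

This mirrors the LoRA batching use case: sequences with LoRA adapters occupy the leading rows of the batch and receive the adapter contribution b, while the trailing non-LoRA rows pass a through unchanged.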
