Mojo function

broadcast

broadcast[dtype: DType, in_layout: TensorLayout, in_origin: Origin[mut=in_origin.mut], //, ngpus: Int, pdl_level: PDLLevel = PDLLevel(), use_multimem: Bool = False](input_tensor: TileTensor[dtype, in_layout, in_origin], output_tensor: TileTensor[dtype, in_layout], rank_sigs: InlineArray[UnsafePointer[Signal, MutAnyOrigin], 8], ctx: DeviceContext, root: Int, _max_num_blocks: Optional[Int] = None)

Broadcast data from the root GPU to all participating GPUs, so that every GPU's output tensor holds a copy of the root's input tensor.
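The semantics of a broadcast collective can be illustrated with a small sketch. This is not the Mojo API above; it is a hypothetical Python model in which each rank's device buffer is a plain list, `root` selects the source rank, and the copy loop stands in for the P2P kernel:

```python
# Hypothetical sketch (not the Mojo API): broadcast semantics across
# ngpus ranks, with each rank's buffer modeled as a list.
def broadcast_sketch(buffers, root):
    """Copy the root rank's buffer into every other rank's buffer."""
    src = buffers[root]
    for rank, buf in enumerate(buffers):
        if rank != root:
            buf[:] = src  # overwrite in place, like writing output_tensor
    return buffers

# Four "GPUs": only rank 2 (the root) holds the payload initially.
bufs = [[0, 0], [0, 0], [5, 7], [0, 0]]
broadcast_sketch(bufs, root=2)
# every rank now holds a copy of the root's data
```

After the call, all four buffers are identical to the root's, which is the invariant the real kernel establishes across device memory.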

Parameters:

  • dtype (DType): Data type of the tensor elements.
  • in_layout (TensorLayout): Layout of the input TileTensor.
  • in_origin (Origin[mut=in_origin.mut]): Origin of the input TileTensor.
  • ngpus (Int): Number of GPUs participating in the broadcast.
  • pdl_level (PDLLevel): Controls PDL behavior for P2P kernels.
  • use_multimem (Bool): Whether to use multimem mode for improved performance.

Args: