Mojo function

broadcast

broadcast[
    dtype: DType,
    in_layout: TensorLayout,
    in_origin: Origin[mut=in_origin.mut], //,
    ngpus: Int,
    pdl_level: PDLLevel = PDLLevel(),
    use_multimem: Bool = False
](
    input_tensor: TileTensor[dtype, in_layout, in_origin],
    output_tensor: TileTensor[dtype, in_layout, output_tensor.origin],
    rank_sigs: InlineArray[UnsafePointer[Signal, MutAnyOrigin], 8],
    ctx: DeviceContext,
    root: Int,
    _max_num_blocks: Optional[Int] = None
)

Broadcast data from the root GPU to all participating GPUs.

Parameters:

  • dtype (DType): Data type of the tensor elements.
  • in_layout (TensorLayout): Layout of the input TileTensor.
  • in_origin (Origin): Origin of the input TileTensor.
  • ngpus (Int): Number of GPUs participating in the broadcast.
  • pdl_level (PDLLevel): Controls PDL (Programmatic Dependent Launch) behavior for P2P kernels.
  • use_multimem (Bool): Whether to use multimem (hardware multicast) operations for improved performance.

Args:

  • input_tensor (TileTensor): Input tensor from root GPU as a TileTensor.
  • output_tensor (TileTensor): Output tensor for this GPU as a TileTensor.
  • rank_sigs (InlineArray): Per-GPU Signal pointers.
  • ctx (DeviceContext): Device context for this GPU.
  • root (Int): Root GPU rank (source of broadcast data).
  • _max_num_blocks (Optional): Optional limit on the number of thread blocks launched (grid size).
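A minimal usage sketch of the call shape described above. The surrounding setup (`NUM_GPUS`, the per-GPU context and tensor lists, and the `output_tensors` helper) are illustrative assumptions, not part of this API; only the `broadcast` call itself follows the signature documented here.

```mojo
# Hypothetical sketch: broadcast a tensor from GPU 0 to all GPUs.
# NUM_GPUS, ctxs, output_tensors, and the setup of rank_sigs are
# assumed for illustration and must match your own initialization.
alias NUM_GPUS = 2

fn run_broadcast[
    dtype: DType, in_layout: TensorLayout
](
    input_tensor: TileTensor[dtype, in_layout],       # tensor on the root GPU
    output_tensors: List[TileTensor[dtype, in_layout]],  # one per GPU (assumed)
    rank_sigs: InlineArray[UnsafePointer[Signal, MutAnyOrigin], 8],
    ctxs: List[DeviceContext],                        # one context per GPU (assumed)
) raises:
    # Launch the broadcast for each participating GPU; rank 0 is the root.
    for rank in range(NUM_GPUS):
        broadcast[ngpus=NUM_GPUS](
            input_tensor,           # source data from the root GPU
            output_tensors[rank],   # destination tensor on this GPU
            rank_sigs,              # per-GPU Signal pointers for synchronization
            ctxs[rank],             # device context for this GPU
            root=0,                 # rank that supplies the broadcast data
        )
```

Note that `rank_sigs` is a fixed-size array of 8 Signal pointers regardless of `ngpus`; entries beyond the participating ranks are unused.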
