
# `conv_transpose`

## `ConvTranspose1d` \{#max.nn.conv_transpose.ConvTranspose1d}

> class max.nn.conv_transpose.ConvTranspose1d(length, in_channels, out_channels, dtype, stride=1, padding=0, dilation=1, output_padding=0, device=None, has_bias=False, permute=False, name=None)

A 1D transposed convolution operator over an input image composed of several input planes.

```python
conv = nn.ConvTranspose1d(
    length=kernel_size,
    in_channels=in_channels,
    out_channels=out_channels,
    dtype=dtype,
    stride=stride,
    padding=padding,
    output_padding=output_padding,
    has_bias=False,
    name="conv_transpose1d_weight",
    device=DeviceRef.GPU(),
)
```

Initializes the ConvTranspose1d layer with weights and optional bias.

**Parameters:**

* **length** ([`int`](https://docs.python.org/3/library/functions.html#int)) – The length of the convolution kernel.
* **in_channels** ([`int`](https://docs.python.org/3/library/functions.html#int)) – Number of channels in the input image.
* **out_channels** ([`int`](https://docs.python.org/3/library/functions.html#int)) – Number of channels produced by the convolution.
* **dtype** ([`DType`](../dtype.md#max.dtype.DType)) – The data type for weights and bias.
* **stride** ([`int`](https://docs.python.org/3/library/functions.html#int) | [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)`[int, int]`) – Stride of the convolution. Default: 1.
* **padding** ([`int`](https://docs.python.org/3/library/functions.html#int) | [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)`[int, int, int, int]`) – Padding added to the input. Default: 0.
* **dilation** ([`int`](https://docs.python.org/3/library/functions.html#int) | [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)`[int, int]`) – Spacing between kernel elements. Default: 1.
* **output_padding** ([`int`](https://docs.python.org/3/library/functions.html#int) | [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)`[int, int]`) – Additional size added to the output shape. Default: 0. (See the output-length sketch below.)
* **device** ([`DeviceRef`](../graph/type.md#max.graph.type.DeviceRef) | `None`) – The target device for computation.
* **has_bias** ([`bool`](https://docs.python.org/3/library/functions.html#bool)) – When `True`, adds a bias vector. Default: `False`.
* **permute** ([`bool`](https://docs.python.org/3/library/functions.html#bool)) – Whether to permute weights between PyTorch and MAX format.
* **name** ([`str`](https://docs.python.org/3/library/stdtypes.html#str) | `None`) – Base name for weights.
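As a rough guide to how these parameters interact, the output length of a 1D transposed convolution conventionally follows the PyTorch-style formula sketched below. This is a minimal sketch assuming the MAX kernel follows the same convention and a single symmetric padding value; it is not taken from the MAX source.

```python
def conv_transpose1d_output_length(
    l_in: int,
    kernel_length: int,
    stride: int = 1,
    padding: int = 0,
    dilation: int = 1,
    output_padding: int = 0,
) -> int:
    """Standard transposed-convolution output length (PyTorch-style convention)."""
    return (
        (l_in - 1) * stride
        - 2 * padding
        + dilation * (kernel_length - 1)
        + output_padding
        + 1
    )

# E.g. upsampling a length-64 signal by 2x with a kernel of length 4:
assert conv_transpose1d_output_length(64, kernel_length=4, stride=2, padding=1) == 128
```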

### `bias` \{#max.nn.conv_transpose.ConvTranspose1d.bias}

> bias: [Weight](../graph/Weight.md#max.graph.Weight) | [None](https://docs.python.org/3/library/constants.html#None) = None

The optional bias vector stored on CPU with shape (out_channels,). Model init moves the bias to device if present.

### `device` \{#max.nn.conv_transpose.ConvTranspose1d.device}

> device: [DeviceRef](../graph/type.md#max.graph.type.DeviceRef) | [None](https://docs.python.org/3/library/constants.html#None)

The device where matrix operations are performed.

### `dilation` \{#max.nn.conv_transpose.ConvTranspose1d.dilation}

> dilation: tuple[int, int]

Not implemented yet; a dilation of 1 is currently assumed.

### `output_padding` \{#max.nn.conv_transpose.ConvTranspose1d.output_padding}

> output_padding: tuple[int, int]

Additional size added to one side of the output shape. Default: 0.

### `padding` \{#max.nn.conv_transpose.ConvTranspose1d.padding}

> padding: tuple[int, int, int, int]

Controls the amount of padding applied before and after the input for depth, height, and width dimensions.

### `permute` \{#max.nn.conv_transpose.ConvTranspose1d.permute}

> permute: bool

Controls whether `self.weight` is permuted from PyTorch order to MAX order. PyTorch order: (in_channels, out_channels, kernel_length). MAX API order: (kernel_length, out_channels, in_channels).
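To make the two orderings concrete, the numpy sketch below shows the axis permutation implied by these layouts. It is purely illustrative; when `permute=True` the layer is expected to handle this conversion itself.

```python
import numpy as np

# PyTorch ConvTranspose1d weight layout: (in_channels, out_channels, kernel_length)
torch_weight = np.zeros((16, 32, 4), dtype=np.float32)

# MAX API layout documented above: (kernel_length, out_channels, in_channels)
max_weight = np.transpose(torch_weight, (2, 1, 0))

assert max_weight.shape == (4, 32, 16)
```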

### `stride` \{#max.nn.conv_transpose.ConvTranspose1d.stride}

> stride: tuple[int, int]

Controls the stride for the cross-correlation.

### `weight` \{#max.nn.conv_transpose.ConvTranspose1d.weight}

> weight: [Weight](../graph/Weight.md#max.graph.Weight)

The weight matrix stored on CPU with shape (kernel_length, out_channels, in_channels). Model init moves the weight to device.

## `WeightNormConvTranspose1d` \{#max.nn.conv_transpose.WeightNormConvTranspose1d}

> class max.nn.conv_transpose.WeightNormConvTranspose1d(length, in_channels, out_channels, dtype, stride=1, padding=0, dilation=1, output_padding=0, device=None, has_bias=False, permute=False, name=None)

A 1D transposed convolution operator over an input image composed of several input planes. This version uses weight normalization as described in https://arxiv.org/abs/1602.07868.

Weight normalization reparameterizes weights in terms of a direction vector v and a magnitude scalar g. This can help improve optimization by decoupling the length and direction of weight vectors.

For example:
```python
conv = WeightNormConvTranspose1d(
    length=kernel_size,
    in_channels=in_channels,
    out_channels=out_channels,
    dtype=dtype,
    stride=stride,
    padding=padding,
    output_padding=output_padding,
    has_bias=False,
    device=DeviceRef.GPU(),
)
```

Initializes the WeightNormConvTranspose1d layer.

**Parameters:**

* **length** ([`int`](https://docs.python.org/3/library/functions.html#int)) – The length of the convolution kernel.
* **in_channels** ([`int`](https://docs.python.org/3/library/functions.html#int)) – Number of channels in the input image.
* **out_channels** ([`int`](https://docs.python.org/3/library/functions.html#int)) – Number of channels produced by the convolution.
* **dtype** ([`DType`](../dtype.md#max.dtype.DType)) – The data type for weights and bias.
* **stride** ([`int`](https://docs.python.org/3/library/functions.html#int) | [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)`[int, int]`) – Stride of the convolution. Default: 1.
* **padding** ([`int`](https://docs.python.org/3/library/functions.html#int) | [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)`[int, int, int, int]`) – Padding added to the input. Default: 0.
* **dilation** ([`int`](https://docs.python.org/3/library/functions.html#int) | [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)`[int, int]`) – Spacing between kernel elements. Default: 1.
* **output_padding** ([`int`](https://docs.python.org/3/library/functions.html#int) | [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple)`[int, int]`) – Additional size added to the output shape. Default: 0.
* **device** ([`DeviceRef`](../graph/type.md#max.graph.type.DeviceRef) | `None`) – The target device for computation.
* **has_bias** ([`bool`](https://docs.python.org/3/library/functions.html#bool)) – When `True`, adds a bias vector. Default: `False`.
* **permute** ([`bool`](https://docs.python.org/3/library/functions.html#bool)) – Whether to permute weights between PyTorch and MAX format.
* **name** ([`str`](https://docs.python.org/3/library/stdtypes.html#str) | `None`) – Base name for weights.

### `conv` \{#max.nn.conv_transpose.WeightNormConvTranspose1d.conv}

> conv: [ConvTranspose1d](#max.nn.conv_transpose.ConvTranspose1d)

The underlying ConvTranspose1d layer.

### `device` \{#max.nn.conv_transpose.WeightNormConvTranspose1d.device}

> device: [DeviceRef](../graph/type.md#max.graph.type.DeviceRef) | [None](https://docs.python.org/3/library/constants.html#None)

The device where matrix operations are performed.

### `weight_g` \{#max.nn.conv_transpose.WeightNormConvTranspose1d.weight_g}

> weight_g: [Weight](../graph/Weight.md#max.graph.Weight)

The magnitude parameter g for weight normalization.

### `weight_v` \{#max.nn.conv_transpose.WeightNormConvTranspose1d.weight_v}

> weight_v: [Weight](../graph/Weight.md#max.graph.Weight)

The direction parameter v for weight normalization.
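Per the weight-normalization paper cited above, the effective weight is recovered as w = g · v / ‖v‖. The numpy sketch below only illustrates that reparameterization; the shapes and the axes over which the norm is reduced are assumptions for illustration, not the MAX implementation.

```python
import numpy as np

# Direction parameter v in the MAX layout (kernel_length, out_channels, in_channels)
# and a magnitude g per output channel. Shapes are purely illustrative.
v = np.random.randn(4, 32, 16).astype(np.float32)  # weight_v
g = np.random.randn(1, 32, 1).astype(np.float32)   # weight_g

# w = g * v / ||v||, with the norm taken per output channel here (an assumption;
# the reduction axes used by MAX are an implementation detail).
norm = np.sqrt((v * v).sum(axis=(0, 2), keepdims=True))
w = g * v / norm
```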
