PyTorch config

#include "max/c/pytorch/config.h"

Functions

M_newTorchInputSpec()

M_TorchInputSpec *M_newTorchInputSpec(const int64_t *shape, int64_t rankSize, M_Dtype type)

Creates a TorchScript input specification.

You need the M_TorchInputSpec object returned here as an argument for M_setTorchInputSpecs().

If the model supports inputs with dynamic shapes, pass M_getDynamicDimensionValue() for each dynamic dimension size.

For example:

// Static input shape:
int64_t shape1[] = {100, 200};
M_TorchInputSpec *inputSpec1 = M_newTorchInputSpec(shape1, /*rank=*/2,
                                                   /*dtype=*/M_INT32);

// Dynamic input shape:
int64_t shape2[] = {100, 200, M_getDynamicDimensionValue()};
M_TorchInputSpec *inputSpec2 = M_newTorchInputSpec(shape2, /*rank=*/3,
                                                   /*dtype=*/M_INT32);

M_TorchInputSpec *inputSpecs[2] = {inputSpec1, inputSpec2};
M_setTorchInputSpecs(compileConfig, inputSpecs, 2);

M_AsyncCompiledModel *compiledModel = M_compileModel(context,
                                                     &compileConfig,
                                                     status);

Note: When storing data in memory, we always use a diminishing stride size. That is, earlier dimensions in the shape have larger strides than later dimensions. For example, a C array declared as int arr[1][2][3] would have a shape specified as {1, 2, 3}.
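
For instance, here is a plain-C sketch (no engine calls) of what that layout means for a small array; the variable names are only for illustration:

// Row-major (C-order) layout: earlier dimensions have larger strides.
int arr[1][2][3];                  // 1 x 2 x 3 ints, contiguous in memory
int64_t shape[] = {1, 2, 3};       // matching shape for M_newTorchInputSpec()
// Element strides are {2 * 3, 3, 1} = {6, 3, 1}: advancing the first
// dimension skips 6 ints, the second skips 3, and the last skips 1.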

  • Parameters:

    • shape – The input tensor shape, if the rank is static. If the shape is fully dynamic (the rank is unknown), pass NULL.
    • rankSize – The input tensor rank, if the rank is static. If the rank is unknown (the shape is fully dynamic), pass M_getDynamicRankValue(); see the sketch after this list. Note that the rank can still be static even when some dimension sizes are dynamic (for example, when only the batch size is dynamic); in that case, pass M_getDynamicDimensionValue() for that dimension in the shape.
    • dtype – The datatype for the input.
  • Returns:

    A pointer to the input spec. You are responsible for the memory associated with the returned pointer; deallocate it by calling M_freeTorchInputSpec().
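
As a minimal sketch of the fully dynamic case described in the parameters above (the dtype is arbitrary here, and the surrounding setup is assumed to match the earlier example):

// Fully dynamic input: the rank itself is unknown at compile time,
// so pass NULL for the shape and M_getDynamicRankValue() for the rank.
M_TorchInputSpec *dynamicSpec =
    M_newTorchInputSpec(/*shape=*/NULL,
                        /*rankSize=*/M_getDynamicRankValue(),
                        /*dtype=*/M_INT32);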

M_setTorchInputSpecs()

void M_setTorchInputSpecs(M_CompileConfig *config, M_TorchInputSpec **inputSpecs, size_t inputSpecsSize)

Sets the input specifications for a TorchScript model.

You must call this to set the input specs before you compile a TorchScript model with M_compileModel(). (This is not needed to compile a TensorFlow SavedModel or ONNX model.)

  • Parameters:

    • config – The compilation configuration for your model.
    • inputSpecs – The input specifications, including the shape, rank, and type for each input tensor. These specs are copied into the configuration, so it’s safe to release the M_TorchInputSpec array after this function returns; see the sketch after this list.
    • inputSpecsSize – The number of input specifications to set.
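
A minimal sketch of that lifetime rule, reusing inputSpec1, inputSpec2, and compileConfig from the example above; the heap-allocated array is only for illustration, and the specs themselves are freed later with M_freeTorchInputSpec():

// Needs <stdlib.h> for malloc/free.
// Build the pointer array on the heap, hand it to the config, then
// release the array; the config keeps its own copies of the specs.
M_TorchInputSpec **specs = malloc(2 * sizeof(M_TorchInputSpec *));
specs[0] = inputSpec1;
specs[1] = inputSpec2;
M_setTorchInputSpecs(compileConfig, specs, /*inputSpecsSize=*/2);
free(specs);  // safe: only the array is released here, not the specs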

M_freeTorchInputSpec()

void M_freeTorchInputSpec(M_TorchInputSpec *compileSpec)

Deallocates the memory for the input spec. No-op if spec is NULL.

  • Parameters:

    compileSpec – The input spec to deallocate.
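
A short cleanup sketch, reusing the specs from the earlier examples; the NULL call illustrates the no-op behavior noted above:

// Free each spec once it is no longer needed.
M_freeTorchInputSpec(inputSpec1);
M_freeTorchInputSpec(inputSpec2);
M_freeTorchInputSpec(NULL);  // no-op, safe to call with NULL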