# PyTorch config

`#include "max/c/pytorch/config.h"`

## Functions

### `M_newTorchInputSpec()`

> [M_TorchInputSpec](types.md#_CPPv416M_TorchInputSpec) \*M_newTorchInputSpec(const int64_t \*shape, const char \*const \*dimNames, int64_t rankSize, M_Dtype type, const char \*device, M_Status \*status)
Creates a TorchScript input specification.

You need the `M_TorchInputSpec` object returned here as an argument for `M_setTorchInputSpecs()`.

If the model supports an input with dynamic shapes, use `M_getDynamicDimensionValue()` for that dimension size.

For example:
```c
// Static input shape:
int64_t shape1[] = {100, 200};
M_TorchInputSpec *inputSpec1 = M_newTorchInputSpec(
    /*shape=*/shape1, /*dimNames=*/NULL, /*rankSize=*/2,
    /*dtype=*/M_INT32, /*device=*/"cpu", /*status=*/status);

// Dynamic input shape:
int64_t shape2[] = {100, 200, M_getDynamicDimensionValue()};
M_TorchInputSpec *inputSpec2 = M_newTorchInputSpec(
    /*shape=*/shape2, /*dimNames=*/NULL, /*rankSize=*/3,
    /*dtype=*/M_INT32, /*device=*/"cpu", /*status=*/status);

M_TorchInputSpec *inputSpecs[2] = {inputSpec1, inputSpec2};
M_setTorchInputSpecs(compileConfig, inputSpecs, 2);

M_AsyncCompiledModel *compiledModel = M_compileModel(context,
                                                     &compileConfig,
                                                     status);
```
Note: When storing data in memory, we always use a diminishing stride size. That is, earlier dimensions in the shape have larger strides than later dimensions. For example, a C array declared as `int arr[1][2][3]` would have a shape specified as `{1, 2, 3}`.
Input specs may also optionally name their dimensions. If you wish to name the dimensions, pass an array of size `rankSize` of NUL-terminated strings in the `dimNames` parameter. A dimension's name may be `NULL` if you do not want to name that dimension. Dimension names may only contain alphanumeric characters and underscores, and the initial character of a dimension name must not be numeric.
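As a rough sketch of how named dimensions might look (a hypothetical rank-2 input with a dynamic, named batch dimension; the variable names are illustrative, not from the header):

```c
// Hypothetical example: a rank-2 input whose first (batch) dimension is
// dynamic and named "batch"; the second dimension is left unnamed (NULL).
int64_t namedShape[] = {M_getDynamicDimensionValue(), 512};
const char *dimNames[] = {"batch", NULL};
M_TorchInputSpec *namedSpec = M_newTorchInputSpec(
    /*shape=*/namedShape, /*dimNames=*/dimNames, /*rankSize=*/2,
    /*dtype=*/M_INT32, /*device=*/"cpu", /*status=*/status);
```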
* **Parameters:**
  * **shape** – The input tensor shape, if the rank is static. Otherwise, use `NULL` if the shape is fully dynamic.
  * **dimNames** – Optional array of optional dimension names. The array may be left `NULL` to leave all dimensions unnamed, and elements may be left `NULL` to leave individual dimensions unnamed.
  * **rankSize** – The input tensor rank, if the rank is static. Otherwise, use `M_getDynamicRankValue()` if the rank is unknown (the shape is fully dynamic). Note that the rank can still be static even when some dimension sizes are dynamic (such as when only the batch size is dynamic); in that case, use `M_getDynamicDimensionValue()` for that dimension size (in the `shape`).
  * **type** – The datatype for the input.
  * **device** – The device the input is on. `NULL` or an empty string indicates that the default device should be used. Valid devices are, for example, `cpu`, `cuda`, `cuda:0`.
  * **status** – The status used to report errors.
* **Returns:** A pointer to the input spec, or `NULL` in case of failure. You are responsible for the memory associated with the returned pointer. The memory can be deallocated by calling `M_freeTorchInputSpec()`.
<a id="_CPPv420M_setTorchInputSpecsP15M_CompileConfigPP16M_TorchInputSpec6size_t"></a>
<a id="_CPPv320M_setTorchInputSpecsP15M_CompileConfigPP16M_TorchInputSpec6size_t"></a>
<a id="_CPPv220M_setTorchInputSpecsP15M_CompileConfigPP16M_TorchInputSpec6size_t"></a>
<a id="M_setTorchInputSpecs__M_CompileConfigP.M_TorchInputSpecPP.s"></a>
<a id="config_8h_1a06076d5611b2e8ee80c93151888a139f"></a>
### `M_setTorchInputSpecs()`
> void M_setTorchInputSpecs([M_CompileConfig](../types.md#_CPPv415M_CompileConfig) \*config, [M_TorchInputSpec](types.md#_CPPv416M_TorchInputSpec) \*\*inputSpecs, size_t inputSpecsSize)
Sets the input specifications for a TorchScript model.
You must call this before you compile a TorchScript model with `[M_compileModel()](../model.md#model_8h_1a88afca26a64b945885e1e1a0d09b5750)`, in order to specify the input specs. (This is not needed to compile a TensorFlow SavedModel or ONNX model.)
* **Parameters:**
* **config** – The compilation configuration for your model.
* **inputSpecs** – The input specifications, including the shape, rank, and type for each input tensor. These specs are copied into the configuration, so it’s safe to release the `M_TorchInputSpec` array after this function returns.
* **inputSpecsSize** – The number of input specifications to set.
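Because the specs are copied into the configuration, the array passed here can be a local; only the individual `M_TorchInputSpec` objects need to be freed later. A minimal sketch, reusing the placeholder names from the example above:

```c
// The specs are copied into the compile config, so this local array can go
// out of scope after the call; the M_TorchInputSpec objects themselves are
// released later with M_freeTorchInputSpec().
M_TorchInputSpec *specs[] = {inputSpec1, inputSpec2};
M_setTorchInputSpecs(compileConfig, specs, /*inputSpecsSize=*/2);
```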
<a id="_CPPv421M_setTorchLibraryPathP15M_CompileConfigPKc"></a>
<a id="_CPPv321M_setTorchLibraryPathP15M_CompileConfigPKc"></a>
<a id="_CPPv221M_setTorchLibraryPathP15M_CompileConfigPKc"></a>
<a id="M_setTorchLibraryPath__M_CompileConfigP.cCP"></a>
<a id="config_8h_1af27fb6f707c64daee15704c198db4cc8"></a>
### `M_setTorchLibraryPath()`
> void M_setTorchLibraryPath([M_CompileConfig](../types.md#_CPPv415M_CompileConfig) \*config, const char \*path)
Sets the torch library path for a TorchScript model.
* **Parameters:**
* **config** – The compilation configuration for your model.
* **path** – The torch library path.
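For example, if your TorchScript model relies on custom operators built into a shared library, you might register that library's location before compiling. The path below is purely illustrative:

```c
// Hypothetical path to a shared library containing custom Torch ops
// required by the TorchScript model.
M_setTorchLibraryPath(compileConfig, "/path/to/libcustom_torch_ops.so");
```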
<a id="_CPPv429M_setTorchMetadataLibraryPathP15M_CompileConfigPKc"></a>
<a id="_CPPv329M_setTorchMetadataLibraryPathP15M_CompileConfigPKc"></a>
<a id="_CPPv229M_setTorchMetadataLibraryPathP15M_CompileConfigPKc"></a>
<a id="M_setTorchMetadataLibraryPath__M_CompileConfigP.cCP"></a>
<a id="config_8h_1a9d78d2a261b7de91ab37589174b9f527"></a>
### `M_setTorchMetadataLibraryPath()`
> void M_setTorchMetadataLibraryPath([M_CompileConfig](../types.md#_CPPv415M_CompileConfig) \*config, const char \*path)
Sets the torch metadata library path for a TorchScript model.
* **Parameters:**
* **config** – The compilation configuration for your model.
* **path** – The torch metadata library path.
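As with the torch library path, this amounts to pointing the configuration at a file before compiling. The path below is purely illustrative:

```c
// Hypothetical location of the torch metadata library for this model.
M_setTorchMetadataLibraryPath(compileConfig, "/path/to/torch_metadata.so");
```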
<a id="_CPPv428M_setTorchMetadataLibraryPtrP15M_CompileConfigPv"></a>
<a id="_CPPv328M_setTorchMetadataLibraryPtrP15M_CompileConfigPv"></a>
<a id="_CPPv228M_setTorchMetadataLibraryPtrP15M_CompileConfigPv"></a>
<a id="M_setTorchMetadataLibraryPtr__M_CompileConfigP.voidP"></a>
<a id="config_8h_1a10ed5abf97cc850686fc1a004b9412e8"></a>
### `M_setTorchMetadataLibraryPtr()`
> void M_setTorchMetadataLibraryPtr([M_CompileConfig](../types.md#_CPPv415M_CompileConfig) \*config, void \*ptr)
Sets the torch metadata library pointer for a TorchScript model.
* **Parameters:**
* **config** – The compilation configuration for your model.
* **ptr** – The torch metadata library pointer.
<a id="_CPPv420M_freeTorchInputSpecP16M_TorchInputSpec"></a>
<a id="_CPPv320M_freeTorchInputSpecP16M_TorchInputSpec"></a>
<a id="_CPPv220M_freeTorchInputSpecP16M_TorchInputSpec"></a>
<a id="M_freeTorchInputSpec__M_TorchInputSpecP"></a>
<a id="config_8h_1af88da26c4a62d27f96394490933238a3"></a>
### `M_freeTorchInputSpec()`
> void M_freeTorchInputSpec([M_TorchInputSpec](types.md#_CPPv416M_TorchInputSpec) \*compileSpec)
Deallocates the memory for the input spec. No-op if `compileSpec` is NULL.
* **Parameters:**
**compileSpec** – The input spec to deallocate.
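Continuing the earlier example, each spec created with `M_newTorchInputSpec()` should eventually be released once it is no longer needed (the variable names are the placeholders used above):

```c
// Safe once the specs have been copied into the compile config by
// M_setTorchInputSpecs() and are no longer needed.
M_freeTorchInputSpec(inputSpec1);
M_freeTorchInputSpec(inputSpec2);
```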
<a id="_CPPv420M_setTorchInputSpecsP15M_CompileConfigPP16M_TorchInputSpec6size_t"></a>
<a id="_CPPv320M_setTorchInputSpecsP15M_CompileConfigPP16M_TorchInputSpec6size_t"></a>
<a id="_CPPv220M_setTorchInputSpecsP15M_CompileConfigPP16M_TorchInputSpec6size_t"></a>
<a id="M_setTorchInputSpecs__M_CompileConfigP.M_TorchInputSpecPP.s"></a>
<a id="config_8h_1a06076d5611b2e8ee80c93151888a139f"></a>
### `M_setTorchInputSpecs()`
> void M_setTorchInputSpecs([M_CompileConfig](../types.md#_CPPv415M_CompileConfig) \*config, [M_TorchInputSpec](types.md#_CPPv416M_TorchInputSpec) \*\*inputSpecs, size_t inputSpecsSize)
Sets the input specifications for a TorchScript model.
You must call this before you compile a TorchScript model with `[M_compileModel()](../model.md#model_8h_1a88afca26a64b945885e1e1a0d09b5750)`, in order to specify the input specs. (This is not needed to compile a TensorFlow SavedModel or ONNX model.)
* **Parameters:**
* **config** – The compilation configuration for your model.
* **inputSpecs** – The input specifications, including the shape, rank, and type for each input tensor. These specs are copied into the configuration, so it’s safe to release the `M_TensorSpec` array after this function returns.
* **inputSpecsSize** – The number of input specifications to set.
<a id="_CPPv421M_setTorchLibraryPathP15M_CompileConfigPKc"></a>
<a id="_CPPv321M_setTorchLibraryPathP15M_CompileConfigPKc"></a>
<a id="_CPPv221M_setTorchLibraryPathP15M_CompileConfigPKc"></a>
<a id="M_setTorchLibraryPath__M_CompileConfigP.cCP"></a>
<a id="config_8h_1af27fb6f707c64daee15704c198db4cc8"></a>
### `M_setTorchLibraryPath()`
> void M_setTorchLibraryPath([M_CompileConfig](../types.md#_CPPv415M_CompileConfig) \*config, const char \*path)
Sets the torch library path for a TorchScript model.
* **Parameters:**
* **config** – The compilation configuration for your model.
* **path** – The torch library path.
<a id="_CPPv429M_setTorchMetadataLibraryPathP15M_CompileConfigPKc"></a>
<a id="_CPPv329M_setTorchMetadataLibraryPathP15M_CompileConfigPKc"></a>
<a id="_CPPv229M_setTorchMetadataLibraryPathP15M_CompileConfigPKc"></a>
<a id="M_setTorchMetadataLibraryPath__M_CompileConfigP.cCP"></a>
<a id="config_8h_1a9d78d2a261b7de91ab37589174b9f527"></a>
### `M_setTorchMetadataLibraryPath()`
> void M_setTorchMetadataLibraryPath([M_CompileConfig](../types.md#_CPPv415M_CompileConfig) \*config, const char \*path)
Sets the torch metadata library path for a TorchScript model.
* **Parameters:**
* **config** – The compilation configuration for your model.
* **path** – The torch metadata library path.
<a id="_CPPv428M_setTorchMetadataLibraryPtrP15M_CompileConfigPv"></a>
<a id="_CPPv328M_setTorchMetadataLibraryPtrP15M_CompileConfigPv"></a>
<a id="_CPPv228M_setTorchMetadataLibraryPtrP15M_CompileConfigPv"></a>
<a id="M_setTorchMetadataLibraryPtr__M_CompileConfigP.voidP"></a>
<a id="config_8h_1a10ed5abf97cc850686fc1a004b9412e8"></a>
### `M_setTorchMetadataLibraryPtr()`
> void M_setTorchMetadataLibraryPtr([M_CompileConfig](../types.md#_CPPv415M_CompileConfig) \*config, void \*ptr)
Sets the torch metadata library pointer for a TorchScript model.
* **Parameters:**
* **config** – The compilation configuration for your model.
* **ptr** – The torch metadata library path.
<a id="_CPPv420M_freeTorchInputSpecP16M_TorchInputSpec"></a>
<a id="_CPPv320M_freeTorchInputSpecP16M_TorchInputSpec"></a>
<a id="_CPPv220M_freeTorchInputSpecP16M_TorchInputSpec"></a>
<a id="M_freeTorchInputSpec__M_TorchInputSpecP"></a>
<a id="config_8h_1af88da26c4a62d27f96394490933238a3"></a>
### `M_freeTorchInputSpec()`
> void M_freeTorchInputSpec([M_TorchInputSpec](types.md#_CPPv416M_TorchInputSpec) \*compileSpec)
Deallocates the memory for the input spec. No-op if `spec` is NULL.
* **Parameters:**
**compileSpec** – The input spec to deallocate.