
Model

#include "max/c/model.h"

Functions

M_newCompileConfig()

M_CompileConfig *M_newCompileConfig()

Creates an object you can use to configure model compilation.

You need M_CompileConfig as an argument for several functions, including M_setModelPath(), M_setTorchInputSpecs(), and M_compileModel().

  • Returns:

    A pointer to a new compilation configuration. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeCompileConfig(). This compilation configuration can only be used for a single compilation call. Any subsequent compilations must be passed a new M_CompileConfig (created by calling M_newCompileConfig again).
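For example, a minimal sketch of the intended lifetime (the configuration calls are placeholders for your own setup):

```c
#include "max/c/model.h"

// One config per compilation; never reuse it.
M_CompileConfig *compileConfig = M_newCompileConfig();
// ... configure it (e.g., with M_setModelPath()) and hand it to
// M_compileModel(), which takes ownership. If you abandon compilation
// without calling M_compileModel(), free the config yourself:
M_freeCompileConfig(compileConfig);
```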

M_setModelPath()

void M_setModelPath(M_CompileConfig *compileConfig, const char *path)

Sets the path to a model.

You must call this before you call M_compileModel(). Otherwise, M_compileModel() returns an error in status.

Note: PyTorch models must be in TorchScript format and TensorFlow models must be in SavedModel format; ONNX models can be passed as-is.

  • Parameters:

    • compileConfig – The compilation configuration for your model, from M_newCompileConfig().
    • path – The path to your model. The model does not need to exist on the filesystem at this point. This follows the same semantics and expectations as std::filesystem::path.

M_enableVisualization()

void M_enableVisualization(M_CompileConfig *compileConfig, const char *path)

Enables visualization.

When enabled, a maxviz file is generated and saved to the specified output directory. If no output directory is specified, the output file is saved to the current working directory. The output file can be used as input to Netron to visualize the model graph.

  • Parameters:

    • compileConfig – The compilation configuration for your model, from M_newCompileConfig().
    • path – The path specified for the output visualization directory. This follows the same semantics and expectations as std::filesystem::path.
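For example, a minimal sketch (the model file and output directory names are hypothetical placeholders):

```c
#include "max/c/model.h"

M_CompileConfig *compileConfig = M_newCompileConfig();
M_setModelPath(compileConfig, "my_model.onnx");  // hypothetical model file
// Writes a maxviz file into ./viz_out during compilation:
M_enableVisualization(compileConfig, "viz_out");
```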

M_compileModel()

M_AsyncCompiledModel *M_compileModel(const M_RuntimeContext *context, M_CompileConfig **compileConfig, M_Status *status)

Compiles a model.

This immediately returns an M_AsyncCompiledModel, with compilation happening asynchronously. If you need to block to await compilation, you can then call M_waitForCompilation().

You must call M_setModelPath() before you call this. For example:

M_CompileConfig *compileConfig = M_newCompileConfig();
M_setModelPath(compileConfig, modelPath);
M_AsyncCompiledModel *compiledModel =
    M_compileModel(context, &compileConfig, status);
if (M_isError(status)) {
  logError(M_getError(status));
  return EXIT_FAILURE;
}

When using a TorchScript model, you must also specify the input shapes via M_setTorchInputSpecs() before you compile it.

The M_AsyncCompiledModel returned here is not ready for inference yet. You need to then initialize the model with M_initModel().

  • Parameters:

    • context – The runtime context, from M_newRuntimeContext().
    • compileConfig – The address of the compilation configuration for your model, created with M_newCompileConfig() and with the model set via M_setModelPath(). Ownership of the configuration is handed over to the API.
    • status – The status used to report errors in the case of failures during model compilation.
  • Returns:

    A pointer to an M_AsyncCompiledModel. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeCompiledModel(). If the config is invalid, it returns a NULL pointer. If the model compilation fails, the pointer is NULL and the status parameter contains an error message. compileConfig will be reset to NULL after this call irrespective of status and cannot be reused, and any subsequent calls must take a new M_CompileConfig.

M_waitForCompilation()

void M_waitForCompilation(M_AsyncCompiledModel *compiledModel, M_Status *status)

Blocks execution until the model is compiled.

This waits for the async compiled model to be complete after calling M_compileModel(). When this function returns, the model is resolved to either a compiled model or an error.

  • Parameters:

    • compiledModel – The model received from M_compileModel().
    • status – The status used to report errors in the case of failures.
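For example, reusing the context, status, and logError placeholders from the M_compileModel() example above, a blocking wait looks like this (a sketch, not a complete program):

```c
M_AsyncCompiledModel *compiledModel =
    M_compileModel(context, &compileConfig, status);
// Block until compilation resolves to a compiled model or an error:
M_waitForCompilation(compiledModel, status);
if (M_isError(status)) {
  logError(M_getError(status));
  return EXIT_FAILURE;
}
```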

M_compileModelSync()

M_AsyncCompiledModel *M_compileModelSync(const M_RuntimeContext *context, M_CompileConfig **compileConfig, M_Status *status)

Synchronously compiles a model.

Unlike M_compileModel(), this blocks until model compilation is complete. It returns an M_AsyncCompiledModel without requiring a call to M_waitForCompilation(). All other setup and usage is identical to M_compileModel().

  • Parameters:

    • context – The runtime context, from M_newRuntimeContext().
    • compileConfig – The address of the compilation configuration for your model, created with M_newCompileConfig() and with the model set via M_setModelPath(). Ownership of the configuration is handed over to the API.
    • status – The status used to report errors in the case of failures during model compilation.
  • Returns:

    A pointer to an M_AsyncCompiledModel. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeCompiledModel(). If the config is invalid, it returns a NULL pointer. If the model compilation fails, the pointer is NULL and the status parameter contains an error message. compileConfig will be reset to NULL after this call irrespective of status and cannot be reused, and any subsequent calls must take a new M_CompileConfig.
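As a sketch, the synchronous variant collapses compile-and-wait into one call (context, status, modelPath, and logError are placeholders, as in the M_compileModel() example):

```c
M_CompileConfig *compileConfig = M_newCompileConfig();
M_setModelPath(compileConfig, modelPath);
// Blocks here; no M_waitForCompilation() call is needed afterward.
M_AsyncCompiledModel *compiledModel =
    M_compileModelSync(context, &compileConfig, status);
if (M_isError(status)) {
  logError(M_getError(status));
  return EXIT_FAILURE;
}
```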

M_initModel()

M_AsyncModel *M_initModel(const M_RuntimeContext *context, const M_AsyncCompiledModel *compiledModel, M_Status *status)

Sets up a model for execution.

You can call this immediately after M_compileModel()—you don’t need to wait for the async compilation.

This function also returns immediately with model initialization happening asynchronously. For example:

M_AsyncModel *model = M_initModel(context, compiledModel, status);
if (M_isError(status)) {
  logError(M_getError(status));
  return EXIT_FAILURE;
}

If you want to block until M_AsyncModel is initialized, you can call M_waitForModel(), but that’s not necessary and you can immediately call M_executeModelSync().

  • Parameters:

    • context – The runtime context, from M_newRuntimeContext().
    • compiledModel – The compiled model, from M_compileModel().
    • status – The status used to report errors in the case of failures. The status contains an error only if the given context or compiled model is invalid. Other errors will not surface until the next synchronization point.
  • Returns:

    A pointer to an M_AsyncModel that holds an async value to a compiled model. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeModel(). If model initialization fails, the status parameter contains an error message.

M_getInputNames()

M_TensorNameArray *M_getInputNames(const M_AsyncCompiledModel *model, M_Status *status)

Gets all input tensor names.

  • Parameters:

    • model – The compiled model.
    • status – The status used to report errors in the case of failures. The status contains an error only if the given model is invalid.
  • Returns:

    An array of input tensor names or a NULL pointer if the model is invalid. If NULL, the status parameter contains an error message. Callers are responsible for freeing the returned array by calling M_freeTensorNameArray().

M_getOutputNames()

M_TensorNameArray *M_getOutputNames(const M_AsyncCompiledModel *model, M_Status *status)

Gets all output tensor names.

  • Parameters:

    • model – The compiled model.
    • status – The status used to report errors in the case of failures. The status contains an error only if the given model is invalid.
  • Returns:

    An array of output tensor names or a NULL pointer if the model is invalid. If NULL, the status parameter contains an error message. Callers are responsible for freeing the returned array by calling M_freeTensorNameArray().

M_getTensorNameAt()

const char *M_getTensorNameAt(const M_TensorNameArray *tensorNameArray, size_t index)

Gets the tensor name in tensorNameArray at index.

  • Parameters:

    • tensorNameArray – The tensor name array.
    • index – The index of the tensor name to get.
  • Returns:

    A pointer to the tensor name at index or a NULL pointer if the index is out of bounds, or if tensorNameArray is NULL. The returned string is owned by tensorNameArray. The returned string is null terminated.
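Together with M_getNumModelInputs() (documented below), these functions support iterating over all input tensor names. A sketch, assuming compiledModel and status are already set up as in the earlier examples:

```c
#include <stdio.h>

size_t numInputs = M_getNumModelInputs(compiledModel, status);
M_TensorNameArray *inputNames = M_getInputNames(compiledModel, status);
if (inputNames != NULL) {
  for (size_t i = 0; i < numInputs; i++) {
    // The returned string is owned by inputNames; don't free it.
    printf("input %zu: %s\n", i, M_getTensorNameAt(inputNames, i));
  }
  M_freeTensorNameArray(inputNames);
}
```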

M_getModelInputSpecByName()

M_TensorSpec *M_getModelInputSpecByName(const M_AsyncCompiledModel *model, const char *tensorName, M_Status *status)

Gets the specifications for an input tensor by the tensor’s name.

  • Parameters:

    • model – The compiled model.
    • tensorName – The name of the input tensor.
    • status – The status used to report errors in the case of failures. The status contains an error only if the given model or tensorName is invalid.
  • Returns:

    A pointer to an M_TensorSpec, or a NULL pointer if the model or tensorName is invalid. If NULL, the status parameter contains an error message.

M_getModelOutputSpecByName()

M_TensorSpec *M_getModelOutputSpecByName(const M_AsyncCompiledModel *model, const char *tensorName, M_Status *status)

Gets the specifications for an output tensor by the tensor’s name.

  • Parameters:

    • model – The compiled model.
    • tensorName – The name of the output tensor.
    • status – The status used to report errors in the case of failures. The status contains an error only if the given model or tensorName is invalid.
  • Returns:

    A pointer to an M_TensorSpec, or a NULL pointer if the model or tensorName is invalid. If NULL, the status parameter contains an error message.
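For example, a sketch of looking up a spec by name. Here "input0" is a hypothetical tensor name, and M_freeTensorSpec() is assumed to be the matching deallocator from elsewhere in the API (it is not documented on this page):

```c
M_TensorSpec *spec =
    M_getModelInputSpecByName(compiledModel, "input0", status);
if (spec == NULL) {
  logError(M_getError(status));  // model or tensor name was invalid
} else {
  // ... inspect the spec ...
  M_freeTensorSpec(spec);  // assumed deallocator; not documented here
}
```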

M_waitForModel()

void M_waitForModel(M_AsyncModel *model, M_Status *status)

Blocks execution until the model is initialized.

This waits for the model setup to finish in M_initModel().

  • Parameters:

    • model – The model.
    • status – The status used to report errors in the case of failures.

M_executeModelSync()

M_AsyncTensorMap *M_executeModelSync(const M_RuntimeContext *context, M_AsyncModel *initializedModel, M_AsyncTensorMap *inputs, M_Status *status)

Executes a model synchronously.

The inputs and outputs are M_AsyncTensorMap objects to allow chaining of inference. This operation is blocking and waits until the output results are ready.

For a complete code example, see the guide to Get started in C.

  • Parameters:

    • context – The runtime context.
    • initializedModel – The model to execute, from M_initModel(). Although that function is async, you can pass the M_AsyncModel here immediately.
    • inputs – The tensor inputs.
    • status – The status used to report errors in the case of failures. This includes failures encountered while running the model; there is no need for an explicit synchronization point.
  • Returns:

    A pointer to an M_AsyncTensorMap that holds the output tensors. These tensors are in a resolved state. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeAsyncTensorMap(). In the case that executing the model fails, the status parameter contains an error message.
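A sketch of a single inference call, assuming context, model, inputs, status, and logError are set up as in the earlier examples:

```c
M_AsyncTensorMap *outputs =
    M_executeModelSync(context, model, inputs, status);
if (M_isError(status)) {
  logError(M_getError(status));
  return EXIT_FAILURE;
}
// ... read the resolved output tensors ...
M_freeAsyncTensorMap(outputs);
```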

M_getNumModelInputs()

size_t M_getNumModelInputs(const M_AsyncCompiledModel *model, M_Status *status)

Gets the number of inputs for the model.

If the model is not yet resolved/ready, this function blocks execution.

You should call M_compileModel() before calling this.

  • Parameters:

    • model – The compiled model.
    • status – The status used to report errors in the case of failures.
  • Returns:

    The number of inputs for the model, or 0 if there is an error in getting the model metadata. If 0, the status parameter contains an error message.

M_getNumModelOutputs()

size_t M_getNumModelOutputs(const M_AsyncCompiledModel *model, M_Status *status)

Gets the number of outputs for the model.

If the model is not yet resolved/ready, this function blocks execution.

You should call M_compileModel() before calling this.

  • Parameters:

    • model – The compiled model.
    • status – The status used to report errors in the case of failures.
  • Returns:

    The number of outputs for the model, or 0 if there is an error in getting the model metadata. If 0, the status parameter contains an error message.

M_validateInputTensorSpec()

void M_validateInputTensorSpec(const M_AsyncCompiledModel *model, M_AsyncTensorMap *tensors, M_Status *status)

Validate input tensor specs for compatibility with the compiled model.

The status message shows which validation check failed for the input.

  • Parameters:

    • model – The compiled model.
    • tensors – The tensors whose specs need to be validated.
    • status – The status used to report errors in the case of failures. If validation fails, the status message indicates which check failed.
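For example, a sketch of validating inputs before execution, with compiledModel, inputs, status, and logError set up as in the earlier examples:

```c
M_validateInputTensorSpec(compiledModel, inputs, status);
if (M_isError(status)) {
  // The status message identifies which validation check failed.
  logError(M_getError(status));
  return EXIT_FAILURE;
}
```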

M_freeModel()

void M_freeModel(M_AsyncModel *model)

Deallocates the memory for the model. No-op if model is NULL.

  • Parameters:

    model – The model to deallocate.

M_freeCompiledModel()

void M_freeCompiledModel(M_AsyncCompiledModel *model)

Deallocates the memory for the compiled model. No-op if model is NULL.

  • Parameters:

    model – The compiled model to deallocate.

M_freeCompileConfig()

void M_freeCompileConfig(M_CompileConfig *config)

Deallocates the memory for the compile config. No-op if config is NULL.

  • Parameters:

    config – The compilation configuration to deallocate.
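A sketch of typical teardown at the end of an inference program, roughly the reverse of creation order. M_freeRuntimeContext() and M_freeStatus() are assumed counterparts from elsewhere in the API; the other deallocators are documented above:

```c
M_freeAsyncTensorMap(outputs);
M_freeModel(model);
M_freeCompiledModel(compiledModel);
M_freeRuntimeContext(context);  // assumed deallocator for the runtime context
M_freeStatus(status);           // assumed deallocator for the status object
```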