Model
#include "modular/c/model.h"
Functions
M_CompileConfig *M_newCompileConfig()

Creates an object you can use to configure model compilation.

You need this as an argument for several functions, most importantly for M_setModelPath() and M_compileModel().

- Returns:
  A pointer to a new compilation configuration. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeCompileConfig(). This compilation configuration can only be used for a single compilation call. Any subsequent compilations must be passed a new M_CompileConfig (created by calling M_newCompileConfig() again).
void M_setModelPath(M_CompileConfig *compileConfig, const char *path)

Sets the path to a model.

You must call this before you call M_compileModel().

- Parameters:
  - compileConfig – The compilation configuration for your model, from M_newCompileConfig().
  - path – The path to your model. The model does not need to exist on the filesystem at this point. This follows the same semantics and expectations as std::filesystem::path.
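For example, a minimal sketch of creating a configuration and pointing it at a model file; the path string here is a hypothetical placeholder:

```c
// Create a compile config and set the model path.
// "my_model/saved_model.pb" is a hypothetical placeholder path.
M_CompileConfig *compileConfig = M_newCompileConfig();
M_setModelPath(compileConfig, "my_model/saved_model.pb");
// ...pass compileConfig to M_compileModel() or M_compileModelSync()...
// Create a new config for each compilation; a config cannot be reused.
```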
M_AsyncCompiledModel *M_compileModel(const M_RuntimeContext *context, M_CompileConfig *compileConfig, M_Status *status)

Compiles a model.

This immediately returns an M_AsyncCompiledModel, with compilation happening asynchronously. If you need to block to await compilation, you can then call M_waitForCompilation().

You must call M_setModelPath() before you call this. For example:

```c
M_CompileConfig *compileConfig = M_newCompileConfig();
const char *modelPath = argv[1];
M_setModelPath(compileConfig, modelPath);

M_AsyncCompiledModel *compiledModel =
    M_compileModel(context, compileConfig, status);
if (M_isError(status)) {
  logError(M_getError(status));
  return EXIT_FAILURE;
}
```

The M_AsyncCompiledModel returned here is not ready for inference yet. You need to then initialize the model with M_initModel().

- Parameters:
  - context – The runtime context, from M_newRuntimeContext().
  - compileConfig – The compilation configuration for your model, from M_newCompileConfig(), and with the model set via M_setModelPath().
  - status – The status used to report errors in the case of failures during model compilation.
- Returns:
  A pointer to an M_AsyncCompiledModel. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeCompiledModel(). If the config is invalid, it returns a NULL pointer. If the model compilation fails, the pointer is NULL and the status parameter contains an error message. compileConfig cannot be reused after this call, and any subsequent calls must take a new M_CompileConfig.
void M_waitForCompilation(M_AsyncCompiledModel *compiledModel, M_Status *status)

Blocks execution until the model is compiled.

This waits for the async compiled model to be complete after calling M_compileModel(). When this function returns, the model is resolved to either a compiled model or an error.

- Parameters:
  - compiledModel – The model received from M_compileModel().
  - status – The status used to report errors in the case of failures.
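For example, the following sketch starts asynchronous compilation and then blocks until it resolves. It assumes context, status, and a configured compileConfig were created earlier (as in the M_compileModel() example above), with fprintf standing in for that example's logError() helper:

```c
// Start asynchronous compilation, then block until it resolves.
// Assumes `context`, `status`, and a configured `compileConfig` already exist.
M_AsyncCompiledModel *compiledModel =
    M_compileModel(context, compileConfig, status);
if (M_isError(status)) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}

M_waitForCompilation(compiledModel, status);
if (M_isError(status)) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}
```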
M_AsyncCompiledModel *M_compileModelSync(const M_RuntimeContext *context, M_CompileConfig *compileConfig, M_Status *status)

Synchronously compiles a model.

Unlike M_compileModel(), this blocks until model compilation is complete. It returns an M_AsyncCompiledModel without needing to call M_waitForCompilation(). All other setup and usage is identical to M_compileModel().

- Parameters:
  - context – The runtime context, from M_newRuntimeContext().
  - compileConfig – The compilation configuration for your model, from M_newCompileConfig(), and with the model set via M_setModelPath().
  - status – The status used to report errors in the case of failures during model compilation.
- Returns:
  A pointer to an M_AsyncCompiledModel. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeCompiledModel(). If the config is invalid, it returns a NULL pointer. If the model compilation fails, the pointer is NULL and the status parameter contains an error message. compileConfig cannot be reused after this call, and any subsequent calls must take a new M_CompileConfig.
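A minimal sketch of the synchronous path, assuming context and status were created earlier and using a hypothetical placeholder model path:

```c
// Compile synchronously: no M_waitForCompilation() needed afterward.
// Assumes `context` and `status` already exist; the path is a placeholder.
M_CompileConfig *compileConfig = M_newCompileConfig();
M_setModelPath(compileConfig, "my_model/saved_model.pb");

M_AsyncCompiledModel *compiledModel =
    M_compileModelSync(context, compileConfig, status);
if (M_isError(status)) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}
```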
M_AsyncModel *M_initModel(const M_RuntimeContext *context, const M_AsyncCompiledModel *compiledModel, M_Status *status)

Sets up a model for execution.

You can call this immediately after M_compileModel(); you don't need to wait for the async compilation. This function also returns immediately, with model initialization happening asynchronously. For example:

```c
M_AsyncModel *model = M_initModel(context, compiledModel, status);
if (M_isError(status)) {
  logError(M_getError(status));
  return EXIT_FAILURE;
}
```

If you want to block until the M_AsyncModel is initialized, you can call M_waitForModel(), but that's not necessary and you can immediately call M_executeModelSync().

- Parameters:
  - context – The runtime context, from M_newRuntimeContext().
  - compiledModel – The compiled model, from M_compileModel().
  - status – The status used to report errors in the case of failures. The status contains an error only if the given context or compiled model is invalid. Other errors will not surface until the next synchronization point.
- Returns:
  A pointer to an M_AsyncModel that holds an async value to a compiled model. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeModel(). If model initialization fails, the status parameter contains an error message.
M_TensorNameArray *M_getInputNames(const M_AsyncCompiledModel *model, M_Status *status)

Gets all input tensor names.

- Parameters:
  - model – The compiled model.
  - status – The status used to report errors in the case of failures. The status contains an error only if the given model is invalid.
- Returns:
  An array of input tensor names, or a NULL pointer if the model is invalid. If NULL, the status parameter contains an error message. Callers are responsible for freeing the returned array by calling M_freeTensorNameArray().
M_TensorNameArray *M_getOutputNames(const M_AsyncCompiledModel *model, M_Status *status)

Gets all output tensor names.

- Parameters:
  - model – The compiled model.
  - status – The status used to report errors in the case of failures. The status contains an error only if the given model is invalid.
- Returns:
  An array of output tensor names, or a NULL pointer if the model is invalid. If NULL, the status parameter contains an error message. Callers are responsible for freeing the returned array by calling M_freeTensorNameArray().
const char *M_getTensorNameAt(const M_TensorNameArray *tensorNameArray, size_t index)

Gets the tensor name in tensorNameArray at index.

- Parameters:
  - tensorNameArray – The tensor name array.
  - index – The index of the tensor name to get.
- Returns:
  A pointer to the tensor name at index, or a NULL pointer if the index is out of bounds or if tensorNameArray is NULL. The returned string is null-terminated and owned by tensorNameArray.
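For example, the following sketch prints every input tensor name. It assumes a compiledModel and status created earlier; the number of names is taken from M_getNumModelInputs(), documented below:

```c
// Print the name of every input tensor.
// Assumes `compiledModel` and `status` already exist.
size_t numInputs = M_getNumModelInputs(compiledModel, status);
M_TensorNameArray *inputNames = M_getInputNames(compiledModel, status);
if (M_isError(status)) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}
for (size_t i = 0; i < numInputs; i++) {
  const char *name = M_getTensorNameAt(inputNames, i);
  printf("input %zu: %s\n", i, name);
}
M_freeTensorNameArray(inputNames);
```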
M_TensorSpec *M_getModelInputSpecByName(const M_AsyncCompiledModel *model, const char *tensorName, M_Status *status)

Gets the specifications for an input tensor by the tensor's name.

- Parameters:
  - model – The compiled model.
  - tensorName – The name of the input tensor.
  - status – The status used to report errors in the case of failures. The status contains an error only if the given model or tensorName is invalid.
- Returns:
  A pointer to an M_TensorSpec, or a NULL pointer if the model or tensor name is invalid. If NULL, the status parameter contains an error message.
M_TensorSpec *M_getModelOutputSpecByName(const M_AsyncCompiledModel *model, const char *tensorName, M_Status *status)

Gets the specifications for an output tensor by the tensor's name.

- Parameters:
  - model – The compiled model.
  - tensorName – The name of the output tensor.
  - status – The status used to report errors in the case of failures. The status contains an error only if the given model or tensorName is invalid.
- Returns:
  A pointer to an M_TensorSpec, or a NULL pointer if the model or tensor name is invalid. If NULL, the status parameter contains an error message.
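A short sketch of looking up a spec by name, assuming compiledModel and status exist and that the model has an input tensor named "input" (a hypothetical name; use M_getInputNames() to discover the real ones):

```c
// Look up the spec for a (hypothetically named) input tensor.
// Assumes `compiledModel` and `status` already exist.
M_TensorSpec *inputSpec =
    M_getModelInputSpecByName(compiledModel, "input", status);
if (inputSpec == NULL) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}
// Use the spec to shape the input tensors you build for execution.
```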
void M_waitForModel(M_AsyncModel *model, M_Status *status)

Blocks execution until the model is initialized.

This waits for the model setup to finish in M_initModel().

- Parameters:
  - model – The model.
  - status – The status used to report errors in the case of failures.
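For example, a sketch that initializes a model and then explicitly blocks until setup has finished; recall that waiting is optional and you can call M_executeModelSync() right away. It assumes context, compiledModel, and status already exist:

```c
// Initialize the model, then explicitly wait for setup to finish.
// Assumes `context`, `compiledModel`, and `status` already exist.
M_AsyncModel *model = M_initModel(context, compiledModel, status);
if (M_isError(status)) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}
M_waitForModel(model, status);
if (M_isError(status)) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}
```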
M_AsyncTensorMap *M_executeModelSync(const M_RuntimeContext *context, M_AsyncModel *initializedModel, M_AsyncTensorMap *inputs, M_Status *status)

Executes a model synchronously.

The inputs and outputs are M_AsyncTensorMap objects to allow chaining of inference. This operation is blocking and waits until the output results are ready.

For a complete code example, see the guide to Get started in C.

- Parameters:
  - context – The runtime context.
  - initializedModel – The model to execute, from M_initModel(). Although that function is async, you can pass the M_AsyncModel here immediately.
  - inputs – The tensor inputs.
  - status – The status used to report errors in the case of failures. This includes failures encountered while running the model; there is no need for an explicit synchronization point.
- Returns:
  A pointer to an M_AsyncTensorMap that holds the output tensors. These tensors are in a resolved state. You are responsible for the memory associated with the pointer returned. You can deallocate the memory by calling M_freeAsyncTensorMap(). In the case that executing the model fails, the status parameter contains an error message.
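A minimal execution sketch, assuming context, model (from M_initModel()), status, and an inputs map already populated with the tensor API (not covered by this header) all exist:

```c
// Run inference synchronously and free the outputs when done.
// Assumes `context`, `model`, `status`, and a populated `inputs`
// M_AsyncTensorMap (built with the tensor API) already exist.
M_AsyncTensorMap *outputs =
    M_executeModelSync(context, model, inputs, status);
if (M_isError(status)) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}
// ...read results from `outputs` with the tensor API...
M_freeAsyncTensorMap(outputs);
```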
size_t M_getNumModelInputs(const M_AsyncCompiledModel *model, M_Status *status)

Gets the number of inputs for the model.

If the model is not yet resolved/ready, this function blocks execution. You should call M_compileModel() before calling this.

- Parameters:
  - model – The compiled model.
  - status – The status used to report errors in the case of failures.
- Returns:
  The number of inputs for the model, or 0 if there is an error in getting the model metadata. If 0, the status parameter contains an error message.
size_t M_getNumModelOutputs(const M_AsyncCompiledModel *model, M_Status *status)

Gets the number of outputs for the model.

If the model is not yet resolved/ready, this function blocks execution. You should call M_compileModel() before calling this.

- Parameters:
  - model – The compiled model.
  - status – The status used to report errors in the case of failures.
- Returns:
  The number of outputs for the model, or 0 if there is an error in getting the model metadata. If 0, the status parameter contains an error message.
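For example, a quick check of the model's input and output counts before building the input tensor map, assuming compiledModel and status already exist:

```c
// Query the model's input and output counts.
// Assumes `compiledModel` and `status` already exist.
size_t numInputs = M_getNumModelInputs(compiledModel, status);
size_t numOutputs = M_getNumModelOutputs(compiledModel, status);
if (M_isError(status)) {
  fprintf(stderr, "%s\n", M_getError(status));
  return EXIT_FAILURE;
}
printf("model has %zu inputs and %zu outputs\n", numInputs, numOutputs);
```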
void M_validateInputTensorSpec(const M_AsyncCompiledModel *model, M_AsyncTensorMap *tensors, M_Status *status)

Validates the input tensor specs for compatibility with the compiled model.

If the tensors have valid specs for the model, status contains no error after this call; otherwise, the status message shows which validation check failed for the input.

- Parameters:
  - model – The compiled model.
  - tensors – The tensors whose specs need to be validated.
  - status – The status used to report errors in the case of failures.
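A sketch of validating inputs before execution, assuming compiledModel, status, and a populated inputs map (built with the tensor API) already exist:

```c
// Check that the input tensors match the compiled model's expectations
// before running inference.
// Assumes `compiledModel`, `status`, and a populated `inputs` map exist.
M_validateInputTensorSpec(compiledModel, inputs, status);
if (M_isError(status)) {
  fprintf(stderr, "input validation failed: %s\n", M_getError(status));
  return EXIT_FAILURE;
}
```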
void M_freeModel(M_AsyncModel *model)

Deallocates the memory for the model. No-op if model is NULL.

- Parameters:
  - model – The model to deallocate.
void M_freeCompiledModel(M_AsyncCompiledModel *model)

Deallocates the memory for the compiled model. No-op if model is NULL.

- Parameters:
  - model – The compiled model to deallocate.
void M_freeCompileConfig(M_CompileConfig *config)

Deallocates the memory for the compile config. No-op if config is NULL.

- Parameters:
  - config – The compilation configuration to deallocate.
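Putting the free functions together, a typical teardown at the end of a program might look like the following sketch. It assumes model, compiledModel, and compileConfig are the objects created in the earlier examples; cleanup of the runtime context and status objects is not covered by this header:

```c
// Typical teardown at the end of a program; each call is a no-op on NULL.
// Assumes `model`, `compiledModel`, and `compileConfig` were created
// with the functions documented above.
M_freeModel(model);                  // from M_initModel()
M_freeCompiledModel(compiledModel);  // from M_compileModel() / M_compileModelSync()
M_freeCompileConfig(compileConfig);  // from M_newCompileConfig()
```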