Model Path, Parameters, & Metadata
For an experiment to output a model in Gradient, the resulting model files need to be written to the specified or default model path.
Using the `PS_MODEL_PATH` environment variable is the easiest way to ensure your model files are written to the correct location.
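For instance, a training script can read this variable and fall back to the single-node default when it is unset. A minimal Python sketch, where `model` stands in for whatever trained model object your framework provides:

```python
import os

# Gradient sets PS_MODEL_PATH to the experiment's model path;
# fall back to /artifacts, the single-node default, when it is unset.
model_dir = os.environ.get("PS_MODEL_PATH", "/artifacts")
os.makedirs(model_dir, exist_ok=True)

# Placeholder: replace with your framework's own save call
# (e.g., a tf.keras model's save method).
model.save(os.path.join(model_dir, "model.h5"))
```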
Single-node: `--modelPath /artifacts` is currently the default for single-node experiments. Output your model files there so that they appear in your Model Repository and can be deployed using Deployments.

Multi-node: The default model path for multi-node experiments is `/storage/models/<experiment_id>/`.
To store Models in the Models list, add the following Model-specific parameters to the command when running an Experiment.
`--modelType` defines the type of model that is generated by the experiment. For example, `--modelType Tensorflow` ensures that the generated model checkpoint files are recognized as TensorFlow model files.

`--modelPath` defines where, in the context of an Experiment, the model checkpoint files will be stored. This is the key argument that enables evaluation and persistence of the generated model files. One option is to set `--modelPath "/artifacts"` and keep the checkpoint files only within the context of the Experiment; another is to set `--modelPath "/storage/models"` to have permanent access to the generated model files in your Paperspace storage.
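Putting both parameters together, a single-node run might look like the following. This is a sketch based on the legacy `gradient experiments run singlenode` subcommand; the project ID, container, and training command are placeholders:

```bash
gradient experiments run singlenode \
  --projectId <your-project-id> \
  --name model-output-example \
  --machineType P4000 \
  --container tensorflow/tensorflow:1.14.0-py3 \
  --command "python train.py" \
  --modelType Tensorflow \
  --modelPath /storage/models
```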
Enabling Support for Models with GradientCI
You can also specify the model path and model type parameters when running experiments with GradientCI. See GradientCI Models for more info.
When `modelType` is not specified, custom model metadata can be associated with the model for later reference by creating a `gradient-model-metadata.json` file in the `modelPath` directory. Any valid JSON data can be stored in this file.
For models of type `Tensorflow`, metadata is automatically generated for your experiment, so any custom model metadata is ignored.
An example of custom model metadata JSON is shown below; any valid JSON works, and the keys here are purely illustrative:
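```json
{
  "name": "my-custom-model",
  "framework": "flask",
  "dataset": "sentiment-2019-06",
  "accuracy": 0.92
}
```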
| Model Type Values | Description |
| --- | --- |
| `"Tensorflow"` | TensorFlow-compatible model outputs |
| `"ONNX"` | ONNX model outputs |
| `"Custom"` | Custom model type (e.g., a simple Flask server) |