Arcus Documentation

Model Serving

Using Arcus Model Enrichment, you connected your model and first-party data to valuable external signals, enriching the ML model with additional context to help it make better predictions. To do so, you first configured the model and ran a trial to understand how it performs against different external data candidates. Based on the results of this trial, you selected the external data candidate that best suits your needs.

Once this selection is made, you're free to use the enriched model in your serving workflow. Using the Arcus Model SDK, you wrap the model, which keeps the original model and workflow intact while allowing it to consume external data from the platform.
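The wrapping pattern described above can be sketched in plain Python. Note that the classes below are illustrative stand-ins, not the Arcus SDK: they only show how a wrapper can leave the original model untouched while appending external signals to each input before delegating to it.

```python
# Illustrative stand-ins only -- not the Arcus SDK.

class MyModel:
    """Stand-in for the original first-party model."""
    def predict(self, features):
        # Toy logic: the score is just the sum of the feature values.
        return sum(features)

class EnrichedModel:
    """Stand-in for an Arcus-style wrapper around the original model."""
    def __init__(self, model, external_source):
        self.model = model                       # original model, unchanged
        self.external_source = external_source   # e.g. the selected data candidate

    def predict(self, key, features):
        # Join external signals onto the first-party features, then delegate.
        enriched = features + self.external_source.get(key, [])
        return self.model.predict(enriched)

external = {"user_1": [0.5, 0.25]}  # hypothetical external signals keyed by id
model = MyModel()
wrapped = EnrichedModel(model, external)

print(model.predict([1.0, 2.0]))              # original workflow still works: 3.0
print(wrapped.predict("user_1", [1.0, 2.0]))  # enriched prediction: 3.75
```

Because the wrapper only delegates, the original model remains fully usable on its own, which is what keeps the existing workflow intact.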

Using the PyTorch Lightning setup from the trial, selection, and training sections, you can now serve the model with your data selection.


As in the previous sections, you define the Arcus Config and wrap your original model with the Arcus Model wrapper.

import arcus

# Initialize the original PyTorch model
my_model = MyModel()

# Provide the configuration (same parameters as in the trial and training sections)
arcus_config = arcus.model.shared.Config(
    # ...
)

# Wrap the model with Arcus
arcus_model = arcus.model.torch.Model(my_model, arcus_config)


In your original PyTorch Lightning code, assuming you've initialized a Trainer object, you can use it to make predictions with the model via its predict method.


The same functionality is supported by the trained arcus_trainer object, but it also integrates with the Arcus Data Platform to enrich the model during serving.


Under the hood, this fetches the appropriate data from the platform using the data selection you made previously and joins your first-party data with the data from that selection to make predictions.
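The serving-time join can be illustrated with a small sketch. The field names below are hypothetical, and the real platform performs this step internally; the point is only that first-party rows are matched to the selected external data by a shared join key before prediction.

```python
# Hypothetical sketch of the serving-time join.

first_party = [
    {"user_id": "u1", "spend": 120.0},
    {"user_id": "u2", "spend": 45.0},
]

# Stand-in for the data selection served by the platform.
external_selection = {
    "u1": {"region_income_index": 1.8},
    "u2": {"region_income_index": 0.9},
}

def enrich(rows, selection, key="user_id"):
    # Left-join: keep every first-party row, attach external fields when present.
    return [{**row, **selection.get(row[key], {})} for row in rows]

enriched_rows = enrich(first_party, external_selection)
# Each enriched row now carries both first-party and external features,
# ready to be fed to the model's predict step.
```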

As you use this model for inference, the underlying data that is served is automatically updated to reflect the most up-to-date data available from the platform, in accordance with the data freshness specified during data selection.
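One way to picture this freshness behavior is a cache that transparently re-fetches the external selection once it is older than the chosen freshness window. This is an illustrative sketch, not the platform's actual mechanism:

```python
import time

# Illustrative only: refresh the external selection when it goes stale.

class FreshnessCache:
    def __init__(self, fetch, max_age_seconds):
        self.fetch = fetch                 # callable returning the latest data
        self.max_age = max_age_seconds     # freshness window from data selection
        self._data = None
        self._fetched_at = float("-inf")

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._fetched_at > self.max_age:
            self._data = self.fetch()      # transparently refresh stale data
            self._fetched_at = now
        return self._data

versions = iter([{"v": 1}, {"v": 2}])
cache = FreshnessCache(lambda: next(versions), max_age_seconds=60)

assert cache.get(now=0)["v"] == 1    # first access fetches
assert cache.get(now=30)["v"] == 1   # within the freshness window: cached
assert cache.get(now=100)["v"] == 2  # stale: refreshed automatically
```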