Joan Gerard

ML for everyone

Deploy an MLflow model to local environment and Docker

Introduction

Deploying predictive models to Azure's own servers can be very expensive. For that reason, I'll show you how to deploy a predictive model to a local environment and dockerize it, using the artifact created by Azure AutoML.

Prerequisites

You'll need to have run AutoML in Azure on some dataset and to have downloaded the best model into your compute instance environment.

Download the artifact locally

An "artifact" is the component that results from training a model.
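As a rough sketch of what this step looks like in code, and assuming the AutoML run logged an MLflow-format model under outputs/mlflow-model (an assumption on my part; your run's layout may differ), downloading the artifact and smoke-testing it locally could look something like this. The experiment name, run ID, and sample features below are placeholders, not values from my run.

```python
# Minimal sketch: download the best AutoML model's MLflow artifact and load it locally.
# The experiment name, run ID, artifact path, and sample payload are placeholders.
import pandas as pd
import mlflow.pyfunc
from azureml.core import Workspace, Experiment, Run

ws = Workspace.from_config()                          # reads config.json on the compute instance
experiment = Experiment(ws, "my-automl-experiment")   # hypothetical experiment name
run = Run(experiment, run_id="AutoML_xxxxxxxx")       # ID of the best child run

# Assumption: the MLflow model artifact lives under outputs/mlflow-model
run.download_files(prefix="outputs/mlflow-model",
                   output_directory="model")

# Load the artifact with MLflow's generic pyfunc flavor and score a sample row
model = mlflow.pyfunc.load_model("model/outputs/mlflow-model")
sample = pd.DataFrame([{"feature_1": 1.0, "feature_2": "A"}])  # placeholder features
print(model.predict(sample))
```

From there, `mlflow models serve -m model/outputs/mlflow-model -p 5000` exposes the model as a local REST endpoint, and `mlflow models build-docker -m model/outputs/mlflow-model -n my-model` is the MLflow CLI route to a Docker image; treat the exact paths and names as illustrative until you've downloaded the artifact yourself.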