ML model serving and monitoring with FastAPI
Room: Saphire A - Python
Time: 11:00 - 11:25
MLOps is a collection of practices that enables companies to build, train, deploy, scale, and operate models in production. Model serving is one of the main MLOps tasks. There are multiple ways of running models in production these days, from open-source solutions to enterprise offerings. However, a custom-built serving solution in Python is the most flexible option and can evolve together with your company's needs. The quality of an ML service is defined by its speed, accuracy, and ability to handle load. FastAPI is faster than its predecessors in the Python web-framework ecosystem. Also, being part of the Python ecosystem, it supports all the main ML frameworks. Moreover, it supports the OpenAPI standard out of the box and makes data validation much easier. All of this, together with its concurrency capabilities, makes it a great choice for running ML models in production.