Monitoring a wine quality prediction model: a case study

Throughout the rest of this blog post, we'll walk through the process of instrumenting and monitoring a scikit-learn model trained on the UCI Wine Quality dataset. This model is trained to predict a wine's quality on a scale of 0 (lowest) to 10 (highest) based on a number of chemical attributes. In particular, we'll:

- Create a containerized REST service to expose the model via a prediction endpoint.
- Instrument the server to collect metrics, which are exposed via a separate metrics endpoint.
- Deploy Prometheus to collect and store metrics.
- Deploy Grafana to visualize the collected metrics.

Finally, we'll simulate production traffic using Locust so that we have some data to see in our dashboards.

Feel free to clone this Github repository and follow along yourself. All of the instructions to deploy these components on your own cluster are provided in the README.md file.

If you look in the model/ directory of the repo linked previously, you'll see a couple of files:

- train.py contains a simple script to produce a serialized model artifact.
- app/api.py defines a few routes for our model service, including a model prediction endpoint and a health-check endpoint.
- app/schemas.py defines the expected schema for the request and response bodies of the model prediction endpoint.
- Dockerfile lists the instructions to package our REST server as a container.

We can deploy this server on our Kubernetes cluster using the manifest defined in kubernetes/models/. In order to monitor this service, we'll need to collect and expose metrics data.
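As a rough sketch of what app/schemas.py and app/api.py encode, here is a framework-agnostic version of the request/response schemas and the prediction route. The attribute subset, the Wine and Rating class names, and the constant prediction are illustrative assumptions; the repo's actual code wires these into a web framework and loads the serialized model produced by train.py.

```python
from dataclasses import dataclass


@dataclass
class Wine:
    """Request body: a few of the chemical attributes from the UCI
    Wine Quality dataset (illustrative subset, not the full schema)."""
    fixed_acidity: float
    volatile_acidity: float
    alcohol: float


@dataclass
class Rating:
    """Response body: predicted quality on the 0-10 scale."""
    quality: float


def predict(wine: Wine) -> Rating:
    # Stand-in for loading the serialized artifact from train.py and
    # calling its predict(); always returns a mid-scale rating here.
    return Rating(quality=5.0)


def healthcheck() -> dict:
    # Body of a trivial health-check route.
    return {"status": "ok"}
```

Keeping the schemas in their own module, as the repo does, lets both the endpoint and the tests validate request and response shapes against one definition.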
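The metrics endpoint serves plain text in the Prometheus exposition format, which Prometheus scrapes on an interval. Below is a minimal hand-rolled counter to show what that format looks like; a real service would use the prometheus_client library rather than this sketch, and the metric name here is an assumption.

```python
class Counter:
    """Minimal Prometheus-style counter: a monotonically increasing
    value rendered in the text exposition format Prometheus scrapes."""

    def __init__(self, name: str, help_text: str):
        self.name = name
        self.help_text = help_text
        self.value = 0.0

    def inc(self, amount: float = 1.0) -> None:
        # Counters only ever go up; rates are derived at query time.
        self.value += amount

    def expose(self) -> str:
        # This string is what a /metrics endpoint would return.
        return (f"# HELP {self.name} {self.help_text}\n"
                f"# TYPE {self.name} counter\n"
                f"{self.name} {self.value}")


# Example: count prediction requests served.
REQUEST_COUNT = Counter("model_requests_total", "Total prediction requests.")
REQUEST_COUNT.inc()
```

Exposing raw counters and computing rates in Prometheus (rather than pre-computing them in the app) is the usual design choice, since it keeps the instrumentation stateless and cheap.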
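To simulate production traffic, Locust repeatedly POSTs synthetic request bodies at the prediction endpoint. The sketch below shows only the payload-generation half in plain Python; the attribute ranges are rough guesses rather than the dataset's true bounds, and in a real load test this function would be called from a Locust user class.

```python
import random


def random_wine_payload(rng: random.Random) -> dict:
    """Build one synthetic request body for the prediction endpoint.
    Ranges are illustrative guesses, not the dataset's true bounds."""
    return {
        "fixed_acidity": round(rng.uniform(4.0, 16.0), 2),
        "volatile_acidity": round(rng.uniform(0.1, 1.6), 2),
        "alcohol": round(rng.uniform(8.0, 15.0), 1),
    }


# Generate a small batch of payloads, as a load generator would per tick.
rng = random.Random(42)
batch = [random_wine_payload(rng) for _ in range(5)]
```

Seeding the generator makes a load-test run reproducible, which is handy when comparing dashboards across runs.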