Model Monitoring

To understand why you need to monitor your model, let us start with an example:

You are investing in the NASDAQ-100 index and use all of your AI capabilities to do so.

  • You have built a time series model that samples the index value daily and forecasts the daily index value for the next 2 weeks (a minimal sketch of such a model follows this list).
  • Your model uses only the past index values and no other information sources.
  • Assume the current date is the 15th of February 2020 and that you started running your model today, trained on the data of the last 6 months.
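To make the setup concrete, here is a minimal sketch of such a model. It is not how the Engine builds models internally; it simply fits statsmodels' ARIMA to hypothetical daily closing values and forecasts the next 2 weeks:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical 6 months of daily closes: an upward trend plus noise
# (placeholder values, not real market data).
rng = np.random.default_rng(0)
dates = pd.date_range(end="2020-02-15", periods=180, freq="D")
nasdaq_closes = pd.Series(8000 + 1.5 * np.arange(180) + rng.normal(0, 20, 180),
                          index=dates)

# Fit on past values only, then forecast the next 2 weeks (14 days).
model = ARIMA(nasdaq_closes, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=14)
print(forecast)
```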

During the training period, the NASDAQ-100 had a fairly stable upward trend; hence, any reasonable model based on the same input and training period would assume the trend will continue. On the 19th of February 2020, the COVID-19 crisis began to change the trend quickly and sharply. This caused a large deviation between the predicted values and the actual values of the index, which by the end of February had fallen by almost 13%.

The fundamental reason is that the statistics of the underlying input data have changed relative to the training period, a phenomenon commonly known as data drift.

For this reason, even after deploying your model, you must keep monitoring the input data and retrain the model if a sufficiently large change in the input statistics is detected.
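One common way to detect such a change (not necessarily the method the Engine uses internally) is a two-sample Kolmogorov-Smirnov test comparing the training-period values against the most recent live inputs. The arrays below are illustrative placeholders:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_inputs = rng.normal(loc=8000, scale=50, size=180)  # stable training period
live_inputs = rng.normal(loc=7000, scale=300, size=14)   # post-shock live values

# The KS statistic measures the largest gap between the two distributions.
stat, p_value = ks_2samp(train_inputs, live_inputs)
if p_value < 0.01:  # distributions differ significantly -> likely drift
    print(f"Input drift detected (KS={stat:.3f}, p={p_value:.4f}); retrain.")
```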


This is where the AI & Analytics Engine's model monitoring capability enters the picture.

Models trained and deployed on the platform are automatically monitored, and a dashboard is produced. When the user submits data for predictions, the inputs to the model are monitored and compared with the training data, and diagnostic charts quantifying the comparison are produced in the dashboard.
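For intuition, here is a sketch of one diagnostic metric widely used for this kind of comparison, the Population Stability Index (PSI); the Engine's own dashboard charts may use different statistics:

```python
import numpy as np

def psi(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training and live feature values."""
    edges = np.histogram_bin_edges(train, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range
    train_frac = np.histogram(train, bins=edges)[0] / len(train)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty bins to avoid log(0) and division by zero.
    train_frac = np.clip(train_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - train_frac) * np.log(live_frac / train_frac)))
```

A common rule of thumb is that a PSI above roughly 0.25 signals a substantial shift between the training and live distributions.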

The model monitoring dashboard also gives the user cues about degradation in model quality, and prompts them to submit more labeled data or re-train the model with the newly received labeled data. The user can re-train the model automatically. When this happens, the user is again stepped through recommendations to see which model promises the largest improvement in prediction accuracy. This completes the model development life-cycle.