This talk will cover common challenges in deploying AI models and interpreting their predictions in the wild. Deep learning and ensemble models are often black boxes, and their opacity can cause real problems in practice.
We will cover two demos during this talk: explaining tabular data with SHAP's KernelExplainer and image data with SHAP's GradientExplainer. We will use the Python implementation of SHAP (SHapley Additive exPlanations) to explore how to quantify the importance of each feature for a single prediction, which helps identify the underlying drivers the model is relying on.
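As a flavor of what the tabular demo will look like, here is a minimal sketch of KernelExplainer in action. It assumes scikit-learn's iris dataset and a random-forest classifier as stand-ins for whatever data and model the actual demo uses:

```python
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Placeholder model and data for illustration only.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# KernelExplainer is model-agnostic: it needs only a prediction
# function and a background sample used to integrate out features.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))

# Explain a single prediction: one SHAP value per feature per class,
# showing how much each feature pushed the prediction up or down.
shap_values = explainer.shap_values(X[0, :])
```

The image demo follows the same pattern, except GradientExplainer uses the gradients of a deep model (e.g., a TensorFlow or PyTorch network) instead of a model-agnostic sampling scheme, making it much faster for neural networks.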
The speakers will be Ali Rizwan from Seha Consulting and Janu Subramanian from Yoda Labs.