The three main challenges in interpretable machine learning are fairness, accountability, and transparency.
The best way to assess risk is to view a machine learning model as a system of interacting factors. This prioritizes experimentation, not just inference or prediction, to determine how different aspects of the model affect each other and the outcome.
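One way to experiment on a model as a system is to perturb one factor at a time and measure how much the output shifts. Below is a minimal sketch of that idea using a toy model; the model, factor names, and data are hypothetical stand-ins, and the perturbation used (shuffling one input) is just one of many possible experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": the output depends on two factors that also interact.
def model(X):
    return X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 1]

X = rng.normal(size=(1000, 2))
baseline = model(X)

# Experiment: shuffle one factor at a time, breaking its link to the
# other inputs, and measure how far predictions move from the baseline.
effects = {}
for j, name in enumerate(["factor_a", "factor_b"]):
    X_perturbed = X.copy()
    X_perturbed[:, j] = rng.permutation(X_perturbed[:, j])
    effects[name] = np.mean((model(X_perturbed) - baseline) ** 2)

print(effects)
```

Here the experiment reveals that the second factor drives the outcome more strongly, something a single prediction would not show.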
As machine learning systems become more efficient and faster to develop, model interpretability and improvement will become more central to the data scientist's role.
The three challenges of interpretable machine learning are fairness, accountability, and transparency. First, fairness addresses discernible biases and discrimination in a model. Second, accountability addresses robustness, consistency, traceability, and privacy, ensuring that models can be relied on over time. Lastly, transparency addresses explainability: understanding how decisions were made and how the model connects inputs to outputs. You cannot prove a model is robust or fair without transparency, and an explainable model that lacks fairness or reliability is a failed model.
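A fairness check can be made concrete with a simple disparity measure. The sketch below computes the demographic parity difference, the gap in positive-prediction rates between two groups; it assumes binary predictions and a binary group attribute, and the function name and sample data are illustrative, not from any particular library.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Illustrative predictions: group 0 is approved 75% of the time,
# group 1 only 25% of the time, giving a disparity of 0.5.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))
```

A large gap like this does not prove discrimination on its own, but it flags a bias worth investigating, which is exactly what the fairness challenge demands.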
As AI is adopted in different ways, many are so worried about the potential dangers that they fail to identify the potential benefits. People treated the internet similarly, and didn't realize how data could be exploited, identities stolen, or misinformation spread. But little by little, everything became more robust, such as with the adoption of SSL. Early in the internet's adoption, people started working to safeguard it as a technology for those who did not want to use it for dangerous ends, and while people do still misuse it, it has also enabled amazing societal transformation at a rate faster than anything seen before it.