Deep learning and support vector machines are widely used because of their high predictive accuracy, even on complex data. However, they are black-box methods: we cannot see the basis of their reasoning, so there is no way to check whether a model is making a strange judgment.
Research on explainability and interpretability is progressing as a way to address this concern and to examine what is actually happening inside a model. AI equipped with such capability is called XAI (Explainable AI).
In multiple regression analysis and similar methods, there has long been the idea of building a model with only the valid variables through variable selection, rather than using every variable that happens to be available.
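As a minimal sketch of this idea, the snippet below uses scikit-learn's recursive feature elimination (RFE) on synthetic data; the dataset sizes and the choice of RFE are illustrative assumptions, not part of the original text.

```python
# Variable selection for multiple regression: keep only informative variables.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# Synthetic data: 10 candidate variables, only 3 actually informative.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=1.0, random_state=0)

# Recursively drop the weakest variable until 3 remain.
selector = RFE(LinearRegression(), n_features_to_select=3)
selector.fit(X, y)
print("selected variables:", np.where(selector.support_)[0])
```

The selected-variable indices themselves explain the model: a reader can see exactly which inputs the final regression depends on.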
The decision tree is likewise an algorithm that selects variables in order while splitting the data region.
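The split rules of a fitted tree can be printed directly, which is what makes the method explainable. A short illustration on the Iris dataset (the dataset and depth limit are assumptions for demonstration):

```python
# A decision tree "selects variables" by choosing one variable per split;
# exporting the rules makes that selection visible.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Each line is a threshold test on a single variable.
rules = export_text(tree, feature_names=list(iris.feature_names))
print(rules)
```

Unlike a black-box model, every prediction can be traced to a small chain of such threshold tests.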
Adopting these methods as highly explainable AI is one approach.
For complex non-linear methods such as deep learning and support vector machines, techniques have been devised to clarify the relationship between inputs and outputs by perturbing samples and observing the magnitude of the effect.
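One concrete form of this perturbation idea is permutation importance: shuffle one input variable at a time and measure how much the model's score degrades. A hedged sketch, assuming an SVM on synthetic classification data:

```python
# Perturbation-based explanation of a black-box model:
# permute each feature and watch the score drop.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           random_state=0)
model = SVC().fit(X, y)

# n_repeats shuffles per feature; a large mean drop means the feature mattered.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

This treats the model purely as an input-output mapping, so it applies equally to SVMs, neural networks, or any other black box.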
XAI can also look like AI capable of causal inference. In some respects this view fits, but in many cases it does not.
When an AI judges something "abnormal", later development is much easier if it can also show where and how it is abnormal. This is where XAI is useful for causal inference.
What variables and features represent is, in the end, just data. There is a gap between reality and a statistical model.
Nonetheless, XAI is a promising approach as a method of quantitative hypothesis exploration and as a source of hints for thinking about what reality actually is.