
1. WHAT DOES AI DO?
Artificial intelligence (AI) is being used in many ways, including to create new types of credit scores that go beyond traditional FICO scores. While these tools can predict outcomes reliably and accurately, their inner workings are often difficult to explain and interpret. As a result, there is growing demand for so-called explainable AI (XAI) in ethics and regulation, especially in high-risk areas.
2. HOW HAVE THE US AND THE EUROPEAN UNION REGULATED AUTOMATED SYSTEMS?
Recently, US and European Union legislators have attempted to pass laws regulating automated systems, including requirements for explainability. Some existing laws already impose statutory explainability requirements, especially for credit and lending, but these are often difficult to interpret when applied to AI.
3. WHY IS XAI A MUST?
Based on what he calls the evidence-of-fairness view, the author supports XAI methods that provide information about counterfactual changes in past conditions. From this perspective, individuals affected by model decisions (model patients) can and should care about explainability as a means to an end. Beyond informing legislative efforts and industry norms on explainability, these ideas can be applied elsewhere; for example, engineers designing AI models and the associated XAI methods can be evaluated through the evidence-of-fairness lens. A minimal counterfactual sketch follows below.
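To make the counterfactual style of explanation concrete, here is a minimal sketch, not taken from the source. It assumes a hypothetical scikit-learn logistic-regression credit model with two made-up features (annual_income, debt_ratio) and brute-forces the smallest income increase that would flip a denial into an approval, which is the kind of "counterfactual change in past conditions" an XAI method could report to a model patient.

```python
# Minimal sketch of a counterfactual explanation for a credit decision.
# Assumptions (not from the source): a toy scikit-learn logistic regression
# model and a brute-force search over a single feature (annual_income).

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [annual_income (k$), debt_ratio]
X = np.array([[30, 0.6], [45, 0.5], [60, 0.3], [80, 0.2], [25, 0.7], [90, 0.1]])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = approved, 0 = denied

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([40.0, 0.55])  # a hypothetical denied applicant

if model.predict([applicant])[0] == 1:
    print("Applicant already approved; no counterfactual needed.")
else:
    # Search for the smallest income increase that flips the decision,
    # holding the debt ratio fixed. This is the counterfactual the
    # evidence-of-fairness view asks an XAI method to surface.
    for extra_income in np.arange(0, 60, 1.0):
        candidate = applicant + np.array([extra_income, 0.0])
        if model.predict([candidate])[0] == 1:
            print(f"Approval would require roughly ${extra_income:.0f}k more income.")
            break
```

The point of the sketch is the shape of the answer, not the model: a model patient receives an actionable statement about what past condition would have had to differ, which is the information the evidence-of-fairness view treats as valuable.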