The explosion in artificial intelligence (AI) and machine learning applications spans almost every industry and part of life.
But its rise is not without irony. Although AI exists to simplify or speed up decision-making and work processes, the methodology for doing so is often extremely complex. Indeed, some black-box machine learning algorithms are so complex and multifaceted that they defy simple explanation, even by the computer scientists who created them.
This can be quite problematic in certain use cases, such as finance and medicine, that are governed by industry best practices or government regulations requiring transparent explanations of the inner workings of AI solutions. If these applications cannot explain themselves well enough to meet those requirements, they may become unusable regardless of their overall effectiveness.
To address this challenge, our team at the Fidelity Center for Applied Technology (FCAT), in collaboration with the Amazon Quantum Solutions Lab, proposed and implemented an interpretable machine learning model for explainable AI (XAI) based on expressive logic formulas. Such an approach can incorporate any operator that can be applied to one or more boolean variables, providing higher expressiveness than more rigid rule-based and tree-based approaches.
The full paper provides exhaustive details on this project.
Our hypothesis was that because models such as decision trees can be deep and difficult to interpret, finding an expressive rule with low complexity but high accuracy was an unsolved optimization problem worth tackling. In addition, simplifying the model with this advanced XAI approach can yield further benefits such as bias detection, which is important for the ethical and responsible use of ML; it also makes the model easier to maintain and improve.
We proposed an approach based on expressive logic formulas, which define rules of adjustable complexity (and hence interpretability) according to which the input data is classified. Such a formula can contain any operator that can be applied to one or more boolean variables (such as And or AtLeast), which provides higher expressiveness than more rigid rule-based and tree-based methodologies.
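To make the idea of an expressive formula concrete, here is a minimal sketch (not the paper's implementation) of how operators like And and AtLeast can be composed into a classification rule over binary features; the feature names and the rule itself are made up for illustration.

```python
# Illustrative sketch of an expressive boolean formula.
# The operators below mirror the And / AtLeast operators named in the text;
# the specific rule and feature names are hypothetical.

def and_op(*args):
    """True only when every input is true."""
    return all(args)

def at_least(k, *args):
    """True when at least k of the inputs are true."""
    return sum(bool(a) for a in args) >= k

def rule(fever, cough, fatigue, risk_flag):
    # Hypothetical rule: positive when at least 2 of 3 symptom
    # indicators are present AND a risk flag is set.
    return and_op(at_least(2, fever, cough, fatigue), risk_flag)

print(rule(True, True, False, True))   # -> True
print(rule(True, False, False, True))  # -> False
```

A rule like this remains readable even as operators are nested, which is the interpretability property the approach is after.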
This problem has two competing goals: maximizing the performance of the algorithm while minimizing its complexity. Instead of taking one of the two typical optimization approaches, combining multiple objectives into one or constraining one of the objectives, we decided to include both in our formulation. With this, and without loss of generality, we mainly use balanced accuracy as our overarching performance metric.
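For reference, balanced accuracy is the mean of sensitivity (true-positive rate) and specificity (true-negative rate), which makes it robust to class imbalance. A minimal sketch, with made-up labels:

```python
# Balanced accuracy: average of the true-positive rate and the
# true-negative rate over binary labels (0/1).

def balanced_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Illustrative labels: 3 positives, 5 negatives.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]
print(balanced_accuracy(y_true, y_pred))
```

On an imbalanced dataset, a classifier that predicts the majority class everywhere scores 0.5 here rather than a misleadingly high plain accuracy.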
By including operators like AtLeast, we were also motivated by the need for highly interpretable checklists, such as a list of medical symptoms that indicate a specific condition. It is conceivable that a diagnosis would be made using such a list of symptoms, of which a minimum number must be present for a positive result. Similarly, in finance, a bank may decide whether to extend credit to a customer based on the presence of a number of factors from a larger list.
We successfully implemented our XAI model and benchmarked it against some public datasets for credit, customer behavior, and medical conditions. We found our model to be generally competitive with other well-known alternatives. We also found that our XAI model could potentially be powered by special-purpose hardware or quantum devices to quickly solve integer linear programming (ILP) or quadratic unconstrained binary optimization (QUBO) problems. The addition of QUBO solvers reduces the number of iterations, delivering a speedup by offering fast non-local moves.
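As a sketch of what a QUBO solver does, the problem is to minimize x^T Q x over binary vectors x. The brute-force enumeration below stands in for the special-purpose hardware or quantum device, and the Q matrix is a made-up toy example, not one from the paper:

```python
# Toy QUBO: minimize x^T Q x over binary vectors x by brute force.
# Real QUBO solvers (annealers, special-purpose hardware) search this
# space far more efficiently; enumeration is only viable for tiny n.

from itertools import product

def solve_qubo(Q):
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product([0, 1], repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Hypothetical objective: diagonal terms reward selecting x0 and x2;
# the off-diagonal term penalizes selecting x0 and x1 together.
Q = [[-2, 2, 0],
     [0, -1, 0],
     [0, 0, -2]]
x, energy = solve_qubo(Q)
print(x, energy)  # -> (1, 0, 1) -4
```

In the paper's setting, the binary variables would encode choices in the rule being learned, so one solver call can move many variables at once, which is the "non-local move" mentioned above.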
As noted, explainable AI models using logic formulas can have many applications in healthcare and in Fidelity's financial business (such as credit scoring, or evaluating why some customers chose a product while others did not). By producing these interpretable rules, we can reach a deeper level of insight that can lead to future improvements in product development or refinement, as well as to the optimization of marketing campaigns.
Based on our findings, we determined that explainable AI using logic formulas is appropriate and desirable for use cases that demand further explanation. Additionally, as quantum computing continues to evolve, we envision the opportunity to obtain potential speedups from it and from other special-purpose hardware accelerators.
Future work may focus on applying these classifiers to other datasets, introducing new operators, or applying these concepts to other use cases.