Bücker, Michael
Background
Artificial intelligence (AI) can analyze vast amounts of data and has been applied in various healthcare sectors. However, its effectiveness in aiding pharmacotherapy decision-making remains uncertain due to the intricate, patient-specific, and dynamic nature of this field.
Objective
This study sought to investigate the potential of AI in guiding pharmacotherapy decisions using clinical data such as diagnoses, laboratory results, and vital signs obtained from routine patient care.
Methods
Data from a previous study on medication therapy optimization were updated and adapted for the purpose of this study. Analysis was conducted using R software along with the tidymodels extension packages. The dataset was split into 74% for training and 26% for testing. Decision trees were selected as the primary model due to their simplicity, transparency, and interpretability. To prevent overfitting, bootstrapping techniques were employed and hyperparameters were fine-tuned. Performance metrics such as areas under the curve and accuracies were computed.
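The workflow described above can be sketched as follows. This is a minimal illustration only: it uses scikit-learn in Python as a stand-in for the R/tidymodels pipeline named in the abstract, synthetic data in place of the clinical dataset, and an invented hyperparameter grid; cross-validation stands in for the bootstrap resamples used for tuning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in for routine clinical data (diagnoses, labs, vitals);
# 101 rows mirrors the cohort size reported in the abstract.
X, y = make_classification(n_samples=101, n_features=10, random_state=42)

# 74% training / 26% testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.26, random_state=42)

# Fine-tune tree depth and minimum leaf size to limit overfitting;
# the grid values here are illustrative assumptions.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=42),
    param_grid={"max_depth": [2, 3, 4, 5], "min_samples_leaf": [1, 5, 10]},
    scoring="roc_auc",
    cv=5,
)
grid.fit(X_train, y_train)

# Report the same metrics the study computed: accuracy and AUC.
best_tree = grid.best_estimator_
proba = best_tree.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, best_tree.predict(X_test)))
print("AUC:", roc_auc_score(y_test, proba))
```

The choice of a single decision tree (rather than an ensemble) reflects the study's stated priority of transparency and interpretability over raw predictive power.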
Results
The study cohort comprised 101 elderly patients with multiple diagnoses and complex medication regimens. The AI model demonstrated prediction accuracies ranging from 38% to 100% across the various cardiovascular drug classes. The model could not meaningfully interpret laboratory data and vital signs, as their effects on and dependence with the prescribing decision remained unclear to it. The study also showed that the lag with which AI responds to sudden changes can be addressed by manually adjusting decision trees, a task not feasible with neural networks.
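The point about manual adjustability can be made concrete with a toy rule. The function below is a hypothetical, hand-written decision branch of the kind a fitted tree encodes; the feature names and thresholds are invented for illustration and are not from the study. The key property is that a clinician can edit a cutoff directly, with no retraining, which is impossible with the opaque weights of a neural network.

```python
def recommend_beta_blocker(heart_rate, systolic_bp, hr_cutoff=80):
    """Toy, illustrative decision rule (not clinical guidance).

    hr_cutoff is exposed so the rule can be adjusted by hand,
    mirroring how a transparent decision tree can be edited directly.
    """
    if heart_rate > hr_cutoff and systolic_bp > 110:
        return True
    return False

# A sudden guideline change is applied instantly by editing the cutoff:
print(recommend_beta_blocker(85, 120))                # True  (cutoff 80)
print(recommend_beta_blocker(85, 120, hr_cutoff=90))  # False (cutoff raised)
```

This editability is the property the abstract credits to decision trees: the model's logic is inspectable and correctable step by step.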
Conclusion
The AI model exhibited promise in recommending appropriate medications for individual patients. While the study identified several obstacles during model development, most were successfully resolved. Future AI studies should include not only the drug but also its effect whenever laboratory data is part of the decision, as this would help the model interpret their relationship. Human oversight and intervention remain essential for an AI-driven pharmacotherapy decision support system to ensure safe and effective patient care.
A major requirement for credit scoring models is to provide a maximally accurate risk prediction. Additionally, regulators demand that these models be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used, and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. Significant potential is therefore missed, leading to higher reserves or more credit defaults. This paper works out the different dimensions that have to be considered for making credit scoring models understandable and presents a framework for making "black box" machine learning models transparent, auditable and explainable. Following this framework, we present an overview of techniques, demonstrate how they can be applied in credit scoring, and show how the results compare to the interpretability of scorecards. A real-world case study shows that a comparable degree of interpretability can be achieved while machine learning techniques keep their ability to improve predictive power.
A major requirement for credit scoring models is of course to provide a risk prediction that is as accurate as possible. In addition, regulators demand that these models be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used, and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. Significant potential is therefore missed, leading to higher reserves or more credit defaults. This talk presents an overview of techniques that are able to make "black box" machine learning models transparent and demonstrates how they can be applied in credit scoring. We use the DALEX set of tools to compare a traditional scoring approach with state-of-the-art machine learning models and assess both approaches in terms of interpretability and predictive power. Results show that a comparable degree of interpretability can be achieved while machine learning techniques keep their ability to improve predictive power.
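The comparison described in these abstracts can be sketched in miniature. The code below is an illustrative assumption, not the authors' actual pipeline: it uses scikit-learn in Python rather than the R-based DALEX tooling, synthetic data in place of real credit data, and permutation importance as one example of the model-agnostic explanation techniques the abstracts refer to. A scorecard-style logistic regression is compared with a "black box" gradient boosting model on both predictive power (AUC) and explainability.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for credit application data.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable baseline (scorecard-style) vs. "black box" model.
scorecard = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Predictive power: compare AUC on held-out data.
for name, model in [("logistic", scorecard), ("boosting", black_box)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")

# Model-agnostic explanation of the black box: permutation importance
# gives per-feature effects, comparable in spirit to scorecard points.
imp = permutation_importance(black_box, X_te, y_te,
                             n_repeats=10, random_state=0)
print("most important feature index:", int(np.argmax(imp.importances_mean)))
```

The pattern mirrors the abstracts' claim: the black box model's behavior can be summarized per feature, recovering much of the interpretability of a scorecard without giving up the more flexible model.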