TY - JOUR
A1 - Bücker, Michael
A1 - Hoti, Kreshnik
A1 - Rose, Olaf
T1 - Artificial intelligence to assist decision-making on pharmacotherapy: A feasibility study
JF - Exploratory Research in Clinical and Social Pharmacy
N2 - Background: Artificial intelligence (AI) can analyze vast amounts of data and has been applied in various healthcare sectors. However, its effectiveness in aiding pharmacotherapy decision-making remains uncertain due to the intricate, patient-specific, and dynamic nature of this field. Objective: This study investigated the potential of AI to guide pharmacotherapy decisions using clinical data such as diagnoses, laboratory results, and vital signs obtained from routine patient care. Methods: Data from a previous study on medication therapy optimization were updated and adapted for the purpose of this study. Analysis was conducted in R using the tidymodels extension packages. The dataset was split into 74% for training and 26% for testing. Decision trees were selected as the primary model for their simplicity, transparency, and interpretability. To prevent overfitting, bootstrapping was employed and hyperparameters were fine-tuned. Performance metrics such as areas under the curve and accuracies were computed. Results: The study cohort comprised 101 elderly patients with multiple diagnoses and complex medication regimens. The AI model achieved prediction accuracies ranging from 38% to 100% across cardiovascular drug classes. Laboratory data and vital signs could not be interpreted, as their effects and dependencies were unclear to the model. The study revealed that the lag with which AI responds to sudden changes can be addressed by manually adjusting decision trees, a task not feasible with neural networks. Conclusion: The AI model showed promise in recommending appropriate medications for individual patients.
While the study identified several obstacles during model development, most were successfully resolved. Future AI studies should include the drug effect, not only the drug itself, whenever laboratory data is part of the decision; this could help the model interpret their relationship. Human oversight and intervention remain essential in an AI-driven pharmacotherapy decision support system to ensure safe and effective patient care.
KW - Artificial intelligence
KW - Pharmacotherapy
KW - Medication review
KW - Cardiology
KW - Clinical decision support system
KW - Pharmacy practice
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:836-opus-181429
UR - https://www.sciencedirect.com/science/article/pii/S266727662400088X
SN - 2667-2766
VL - 15
SP - 100491
EP - 100491
ER -

TY - JOUR
A1 - Bücker, Michael
A1 - Szepannek, Gero
A1 - Gosiewska, Alicja
A1 - Biecek, Przemyslaw
T1 - Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring
JF - Journal of the Operational Research Society
Y1 - 2021
U6 - http://dx.doi.org/10.1080/01605682.2021.1922098
ER -

TY - JOUR
A1 - Bücker, Michael
A1 - Szepannek, Gero
A1 - Gosiewska, Alicja
A1 - Biecek, Przemyslaw
T1 - Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring
JF - arXiv
N2 - A major requirement for credit scoring models is to provide maximally accurate risk predictions. Additionally, regulators demand that these models be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used, and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. Significant potential is therefore missed, leading to higher reserves or more credit defaults.
This paper works out the different dimensions that have to be considered to make credit scoring models understandable and presents a framework for making "black box" machine learning models transparent, auditable and explainable. Following this framework, we give an overview of techniques, demonstrate how they can be applied in credit scoring, and compare the results to the interpretability of scorecards. A real-world case study shows that a comparable degree of interpretability can be achieved while machine learning techniques retain their ability to improve predictive power.
Y1 - 2020
UR - https://arxiv.org/abs/2009.13384
VL - 2009.13384
SP - 1
EP - 30
ER -

TY - JOUR
A1 - Hoops, Christian
A1 - Bücker, Michael
T1 - Determinants, Moderators and Consequences of Organizational Interaction Orientation
JF - Journal of Entrepreneurship Management and Innovation
Y1 - 2014
VL - 9
IS - 4
SP - 73
EP - 100
ER -

TY - JOUR
A1 - Bücker, Michael
A1 - van Kampen, Maarten
A1 - Krämer, Walter
T1 - Reject inference in consumer credit scoring with nonignorable missing data
JF - Journal of Banking & Finance
Y1 - 2013
U6 - http://dx.doi.org/10.1016/j.jbankfin.2012.11.002
VL - 37
IS - 3
SP - 1040
EP - 1045
ER -

TY - JOUR
A1 - Bücker, Michael
A1 - Krämer, Walter
A1 - Arnold, Matthias
T1 - A Hausman test for non-ignorability
JF - Economics Letters
Y1 - 2012
U6 - http://dx.doi.org/10.1016/j.econlet.2011.08.025
VL - 114
IS - 1
SP - 23
EP - 25
ER -

TY - JOUR
A1 - Krämer, Walter
A1 - Bücker, Michael
T1 - Probleme des Qualitätsvergleichs von Kreditausfallprognosen [Problems in comparing the quality of credit default predictions]
JF - AStA Wirtschafts- und Sozialstatistisches Archiv
Y1 - 2011
U6 - http://dx.doi.org/10.1007/s11943-011-0096-0
VL - 5
IS - 1
SP - 39
EP - 58
ER -