A major requirement for credit scoring models is to provide a maximally accurate risk prediction. Additionally, regulators demand that these models be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used, and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. Significant potential is therefore missed, leading to higher reserves or more credit defaults. This paper works out the different dimensions that have to be considered for making credit scoring models understandable and presents a framework for making "black box" machine learning models transparent, auditable and explainable. Following this framework, we present an overview of techniques, demonstrate how they can be applied in credit scoring, and show how the results compare to the interpretability of scorecards. A real-world case study shows that a comparable degree of interpretability can be achieved while machine learning techniques keep their ability to improve predictive power.
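For context on the scorecard baseline this abstract compares against, the following is a minimal illustrative sketch (not taken from the paper; all calibration values are assumptions) of how a logistic regression is commonly turned into a points-based scorecard using the PDO ("points to double the odds") scaling:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative PDO scaling; the calibration values below are assumptions,
# not from the paper: base score 600 at good:bad odds of 50:1, and 20
# points added per doubling of the odds.
PDO, BASE_SCORE, BASE_ODDS = 20.0, 600.0, 50.0
factor = PDO / np.log(2)                        # points per doubling of odds
offset = BASE_SCORE - factor * np.log(BASE_ODDS)

# Toy data standing in for credit application features; class 1 is
# treated as "default" purely for illustration.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
model = LogisticRegression().fit(X, y)

log_odds_default = model.decision_function(X)   # ln(p_default / p_good)
scores = offset - factor * log_odds_default     # higher score = lower risk
print(scores[:5].round(1))
```

Because the log-odds are linear in the features, each feature's contribution to the score is an additive, auditable number of points, which is exactly the property that makes scorecards easy to explain to regulators.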
A major requirement for credit scoring models is, of course, to provide a risk prediction that is as accurate as possible. In addition, regulators demand that these models be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used, and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. A lot of potential is therefore missed, leading to higher reserves or more credit defaults. This talk presents an overview of techniques that are able to make "black box" machine learning models transparent and demonstrates how they can be applied in credit scoring. We use the DALEX set of tools to compare a traditional scoring approach with state-of-the-art machine learning models and assess both approaches in terms of interpretability and predictive power. Results show that a comparable degree of interpretability can be achieved while machine learning techniques keep their ability to improve predictive power.
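DALEX makes this comparison possible by exposing the same model-agnostic diagnostics for any fitted model. A minimal sketch using the dalex Python port on toy data (the talk's actual data set and model configuration are not reproduced here; the data, feature names, and labels below are assumptions):

```python
import dalex as dx
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for a credit data set; the talk's real data is not public.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(8)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scorecard = LogisticRegression().fit(X_train, y_train)
black_box = GradientBoostingClassifier().fit(X_train, y_train)

# Wrap both models in Explainer objects to get identical,
# model-agnostic diagnostics for each.
exp_lr = dx.Explainer(scorecard, X_test, y_test, label="scorecard (LR)")
exp_gb = dx.Explainer(black_box, X_test, y_test, label="black box (GBM)")

print(exp_lr.model_performance().result)       # AUC etc. for the scorecard
print(exp_gb.model_performance().result)       # AUC etc. for the black box
exp_gb.model_parts().plot()                    # permutation variable importance
exp_gb.predict_parts(X_test.iloc[[0]]).plot()  # break-down of one prediction
```

Because both models sit behind the same Explainer interface, performance, permutation importance, and per-prediction break-downs can be read side by side, mirroring the interpretability-versus-predictive-power comparison described in the abstract.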
Open Source Intelligence (OSINT), the derivation of intelligence from publicly available data, has received increased scrutiny since the Russian invasion of Ukraine. Despite numerous attempts at a standard definition, research on technology-driven intelligence gathering and analysis remains ambiguous. This paper uses a Design Science Research (DSR) approach to categorize the construct of technology-driven intelligence. A structured literature review of sixty studies identified three domains: maturity, Intelligence Cycle phase, and use case. The resulting framework, developed into a trend radar, was evaluated in expert interviews, which revealed technological gaps in the planning/direction and dissemination/integration phases. While intelligent support technologies were noted, their practical implementation lags behind theory, and the human factor remains central to OSINT. The findings suggest that future research should develop applications for the underserved phases and examine why proven applications are not widely adopted, considering legal, ethical, political, and social factors. This study contributes to the technology-driven intelligence literature as a knowledge base, a research-gap identifier, and a guide for further research.