Economics (MSB)
Multi-sided platforms are becoming increasingly relevant in understanding industry changes. The literature has focused on the inception and growth of platforms, neglecting how entrants develop and grow disruptive platforms. To address this shortcoming, we study an entrant that was spun off from an established catalog retailer and is steering a multi-sided disruptive platform in the German fashion retail industry. We conduct a longitudinal study of how the entrant leveraged the relationships with its multiple platform sides during 2014–2019 by analyzing secondary data using topic modeling and qualitative content analysis. We propose three levers: (1) “guarded inception,” the collaboration with a knowledgeable partner unaffected by disruption to quickly overcome the chicken-and-egg problem; (2) “activating force multipliers,” the strategic orchestration of complementors that are contractually tied to the entrant and work to extend the entrant's value network. Enabled by these two levers, the entrant was (3) “building on others” to develop the platform along a disruptive path while circumventing internal limitations and external resistance. We contribute to the intersection of the literature strands on platform and disruptive innovation by showing how the entrant strategically leveraged its different platform sides over time to develop and grow a disruptive platform.
A major requirement for credit scoring models is to provide a maximally accurate risk prediction. Additionally, regulators demand that these models be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used, and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. Significant potential is therefore missed, leading to higher reserves or more credit defaults. This paper identifies the dimensions that must be considered to make credit scoring models understandable and presents a framework for making “black box” machine learning models transparent, auditable, and explainable. Following this framework, we present an overview of techniques, demonstrate how they can be applied in credit scoring, and show how the results compare to the interpretability of scorecards. A real-world case study shows that a comparable degree of interpretability can be achieved while machine learning techniques retain their ability to improve predictive power.
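To illustrate the scorecard interpretability that serves as the abstract's baseline, the following minimal sketch converts the output of a logistic regression into scorecard points using the standard points-to-double-the-odds (PDO) scaling. All coefficients, the applicant's WOE-encoded values, and the scaling constants (600 points at 50:1 odds, PDO of 20) are made-up illustrative figures, not results from the paper's case study:

```python
import math

# Hypothetical logistic-regression coefficients on WOE-encoded features
# (illustrative values only, not from the paper's case study).
intercept = -2.0
coefs = {"age_woe": -0.8, "income_woe": -1.1, "history_woe": -0.6}

# One hypothetical applicant, also WOE-encoded.
applicant = {"age_woe": 0.30, "income_woe": -0.20, "history_woe": 0.50}

# Log-odds of default and probability of default via the logistic link.
log_odds = intercept + sum(coefs[f] * applicant[f] for f in coefs)
pd_estimate = 1.0 / (1.0 + math.exp(-log_odds))

# Standard scorecard scaling: 600 points at odds of 50:1,
# and 20 additional points double the (good) odds.
pdo, base_score, base_odds = 20.0, 600.0, 50.0
factor = pdo / math.log(2)
offset = base_score - factor * math.log(base_odds)

# Higher score means lower risk, so we score the negated default log-odds.
score = offset + factor * (-log_odds)

# Per-feature point contributions: this additive decomposition is what
# makes a scorecard auditable line by line.
contributions = {f: -factor * coefs[f] * applicant[f] for f in coefs}
```

The additive point contributions are the interpretability benchmark the abstract refers to: explainability techniques for black-box models aim to recover a comparably transparent per-feature attribution.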