As AI technologies become increasingly embedded in critical sectors, the need for systems that are both explainable and compliant with regulatory standards is more urgent than ever. Traditional AI systems often operate as `black boxes', providing little to no insight into their decision-making processes, which poses significant risks, especially in high-stakes applications. In these sectors, AI systems have to process vast amounts of data while adhering to stringent regulations such as the EU AI Act, the Machinery Regulation, the GDPR, and the NIS 2 Directive.
In this talk, we will present the use of Tensor Networks (TNs) as a novel approach to explainable and trustworthy AI systems. TNs were developed specifically to offer direct access to key physical information in the simulation of complex quantum systems. Training machine learning (ML) models based on TNs therefore allows us to learn highly complex patterns from data while retaining access to key information learned by the ML model, such as learned feature relevance, learned correlations, or feature contributions to model decisions. As a result, we can assess the learned patterns when testing the AI system and, in addition, tune the model performance after learning, paving the way to enhanced assessment of AI components for robust and trustworthy AI systems.
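To make the kind of direct accessibility mentioned above concrete, here is a minimal, illustrative sketch (a toy example of our own, not the speakers' implementation): a small random matrix-product state (MPS), one of the simplest tensor networks, is evaluated on an input, and the entanglement entropy across each bipartition of the features is read off. This entropy is a standard TN quantity that indicates how strongly the model couples the feature groups on either side of the cut.

```python
import numpy as np

# Illustrative toy setup (hypothetical names and sizes): a random MPS
# "model" over four input features with physical dimension 2.
rng = np.random.default_rng(0)
n_sites, phys_dim, bond_dim = 4, 2, 3

# MPS tensors A[i] of shape (left_bond, physical, right_bond);
# the boundary bonds have dimension 1.
dims = [1] + [bond_dim] * (n_sites - 1) + [1]
mps = [rng.normal(size=(dims[i], phys_dim, dims[i + 1])) for i in range(n_sites)]

def feature_map(x):
    """Embed a scalar feature in [0, 1] into a 2-dimensional local vector."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def evaluate(mps, x):
    """Contract the MPS with the embedded input -> scalar model output."""
    v = np.ones(1)
    for A, xi in zip(mps, x):
        v = np.einsum('l,lpr,p->r', v, A, feature_map(xi))
    return float(v.item())

def full_state(mps):
    """Contract all MPS tensors into one normalized state vector."""
    psi = mps[0]
    for A in mps[1:]:
        psi = np.einsum('...r,rps->...ps', psi, A)
    psi = psi.reshape(-1)
    return psi / np.linalg.norm(psi)

def bond_entropy(psi, k):
    """Entanglement entropy between features [0, k) and [k, n_sites)."""
    M = psi.reshape(phys_dim ** k, -1)
    s = np.linalg.svd(M, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)
    p = p[p > 1e-12]          # drop numerically zero Schmidt weights
    return float(-np.sum(p * np.log(p)))

out = evaluate(mps, [0.1, 0.7, 0.3, 0.9])            # the model's prediction
psi = full_state(mps)
entropies = [bond_entropy(psi, k) for k in range(1, n_sites)]
```

A low entropy at a cut signals that the model treats the two feature groups as nearly independent. In practice, such quantities are obtained efficiently from the canonical form of a trained MPS; contracting the full state vector, as done here, is feasible only at toy sizes.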
Concretely, we will present advanced optimization techniques for tensor-network algorithms, the distinctive advantages of TN-based machine learning, and concrete application scenarios in high-risk domains.