Konrad Rieck

BIFOLD Machine Learning and Security Research Group, TU Berlin

Attacks on Machine Learning Backends

Thursday, December 18th 2025, time and place tba

The security of machine learning is typically discussed in terms of adversarial robustness and data privacy. Yet beneath every learning-based system lies a complex layer of infrastructure: the machine learning backends. These hardware and software components are critical to inference, ranging from GPU accelerators to mathematical libraries. In this talk, we examine the security of these backends and present two novel attacks. The first implants a backdoor in a hardware accelerator, enabling it to alter model predictions without visible changes. The second introduces Chimera examples, a new class of inputs that produce different outputs from the same model depending on the linear algebra backend. Both attacks reveal a natural connection between adversarial learning and systems security, broadening our notion of machine learning security.
 
Konrad Rieck
CV: Konrad Rieck is a professor at TU Berlin, where he leads the Chair of Machine Learning and Security as part of the Berlin Institute for the Foundations of Learning and Data. Previously, he held academic positions at TU Braunschweig, the University of Göttingen, and the Fraunhofer Institute FIRST. His research focuses on the intersection of computer security and machine learning. He has published over 100 papers in this area and serves on the program committees of the top security conferences. He has been awarded the CAST/GI Dissertation Award, a Google Faculty Award, and an ERC Consolidator Grant.