Konrad Rieck
BIFOLD Machine Learning and Security Research Group, TU Berlin
Attacks on Machine Learning Backends
Thursday, December 18, 2025, time and place TBA
The security of machine learning is typically discussed in terms of adversarial robustness and data privacy. Yet beneath every learning-based system lies a complex layer of infrastructure: the machine learning backends. These hardware and software components, ranging from GPU accelerators to mathematical libraries, are critical to inference. In this talk, we examine the security of these backends and present two novel attacks. The first implants a backdoor in a hardware accelerator, enabling it to alter model predictions without visible changes. The second introduces Chimera examples, a new class of inputs that cause the same model to produce different outputs depending on the linear algebra backend. Both attacks reveal a natural connection between adversarial learning and systems security, broadening our notion of machine learning security.
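To give a flavor of the backend-dependence that Chimera examples exploit, here is a minimal, hypothetical sketch (not the talk's actual construction): two backends that accumulate the same floating-point terms in different orders can produce different sums, and near a decision threshold this difference flips the predicted class. The values, threshold, and "backends" below are illustrative assumptions.

```python
import numpy as np

# Illustrative only: the same logit terms, summed in different orders,
# as two hypothetical linear algebra backends might accumulate them.
x = np.float32([1e8, 1.0, -1e8, 0.25])

# "Backend A": strict left-to-right accumulation.
# In float32, 1e8 + 1.0 rounds back to 1e8, so the 1.0 is lost.
logit_a = np.float32(0.0)
for v in x:
    logit_a = logit_a + v          # ends up as 0.25

# "Backend B": reordered (pairwise) accumulation.
# The large terms cancel first, so the 1.0 survives.
logit_b = (x[0] + x[2]) + (x[1] + x[3])   # ends up as 1.25

threshold = 0.5                    # hypothetical decision threshold
pred_a = int(logit_a > threshold)  # 0
pred_b = int(logit_b > threshold)  # 1
print(pred_a, pred_b)
```

The model and its input are identical in both cases; only the summation order differs, yet the two backends disagree on the prediction. Chimera examples, as described in the abstract, are inputs crafted to sit exactly in such disagreement regions.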
