Read the Research No. 3

Read-Papers-to-Learn-How-to-Write-Papers, Part 1

In this subseries of Read the Research, I'll be taking you on a tour of the other side of papers: the side that researchers rarely visit while they're busy extracting the content from a paper. The side I mean, to be sure, is the side from which a paper communicates that content.

Now, I hear you thinking, "But a paper is the communication of research content. There can't be a special communication side to papers. The communication is what the paper is."

I like the way you think. I think exactly the same. A scientific paper is indeed a complete and total unification of research object and research thinking and research work and research communication. Really, if someone put a computer-gun to my head and demanded, "What's beautiful about papers?", this is certainly what I'd blurt: "It's the unification!"

However, for someone like yourself who wants to put your own scientific reading to purpose, it will prove useful to split up this union that every paper is, so that you can notice just how that union is achieved. That is the motivation for this subseries on reading papers in order to learn how to write papers. Basically, I am going to pull apart the text of papers so that you can see the functions that the text performs for the research.

In today's Read-Papers-to-Learn-How-to-Write-Papers, I will be commenting on the Abstract of Latent Backdoor Attacks on Deep Neural Networks (Yao et al., CCS 2019). The content of the paper has not been changed, only enhanced in order to demonstrate just how that content is communicated.

Here is the Abstract in its entirety:

Recent work proposed the concept of backdoor attacks on deep neural networks (DNNs), where misclassification rules are hidden inside normal models, only to be triggered by very specific inputs. However, these "traditional" backdoors assume a context where users train their own models from scratch, which rarely occurs in practice. Instead, users typically customize "Teacher" models already pretrained by providers like Google, through a process called transfer learning. This customization process introduces significant changes to models and disrupts hidden backdoors, greatly reducing the actual impact of backdoors in practice.

In this paper, we describe latent backdoors, a more powerful and stealthy variant of backdoor attacks that functions under transfer learning. Latent backdoors are incomplete backdoors embedded into a "Teacher" model, and automatically inherited by multiple "Student" models through transfer learning. If any Student models include the label targeted by the backdoor, then its customization process completes the backdoor and makes it active. We show that latent backdoors can be quite effective in a variety of application contexts, and validate its practicality through real-world attacks against traffic sign recognition, iris identification of volunteers, and facial recognition of public figures (politicians). Finally, we evaluate 4 potential defenses, and find that only one is effective in disrupting latent backdoors, but might incur a cost in classification accuracy as tradeoff.
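Since the Abstract leans so heavily on the notion of transfer learning, here is a minimal, hypothetical sketch of what "customizing a Teacher model" typically looks like in code. The model choice, the class count, and the helper names below are my own illustrative assumptions, not anything taken from the paper; the point is only to make the Teacher-to-Student customization the authors describe a little more concrete.

```python
# Illustrative sketch of transfer learning (not the authors' code):
# a "Student" customizes a pretrained "Teacher" model by freezing its
# feature-extraction layers and retraining only a new final layer on
# the Student's own labels.
import torch
import torch.nn as nn
from torchvision import models

# The "Teacher": a model pretrained by a provider (here, an ImageNet ResNet).
teacher = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the Teacher's feature-extraction layers; the Student inherits
# these weights as-is during customization.
for param in teacher.parameters():
    param.requires_grad = False

# The Student swaps in a final classification layer matching its own task,
# e.g. 43 traffic-sign classes (a made-up number for illustration).
num_student_classes = 43
teacher.fc = nn.Linear(teacher.fc.in_features, num_student_classes)

# Only the new layer is trained during customization.
optimizer = torch.optim.Adam(teacher.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def customize(student_loader, epochs=3):
    """Fine-tune the Student's new head on the Student's labeled data."""
    teacher.train()
    for _ in range(epochs):
        for images, labels in student_loader:
            optimizer.zero_grad()
            loss = criterion(teacher(images), labels)
            loss.backward()
            optimizer.step()
```

In the setting the Abstract describes, it is precisely those inherited Teacher weights that carry the incomplete, latent backdoor into every Student built on top of them.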

Download commentary here.

Please email comments or questions to daniel.shea∂kit.edu

This blog is for you.