
Should We Be Perturbed About Deep Learning?

LANS Seminar

Abstract: Many commentators are asking whether current artificial intelligence solutions are sufficiently robust, resilient, and trustworthy, and how such issues should be quantified and addressed. I believe that numerical analysts can contribute to the debate.

In part 1 of this talk, I will look at the common practice of using low-precision floating-point formats to reduce computation time. I will focus on evaluating the softmax and log-sum-exp functions, which play an important role in many classification tools. Here, across widely used packages, we see mathematically equivalent but computationally different formulations; these variations have been designed in an effort to avoid overflow and underflow. I will show that classical rounding error analysis gives insights into their floating-point accuracy and suggests a method of choice.
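To make the overflow issue concrete, here is a minimal sketch in Python/NumPy of the widely used shift-by-the-maximum reformulation of these two functions; the function names and the example inputs are illustrative assumptions, and this is not necessarily the method of choice identified in the talk:

```python
import numpy as np

def logsumexp(x):
    """Shifted log-sum-exp: subtracting max(x) keeps exp() from overflowing."""
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + np.log(np.sum(np.exp(x - m)))

def softmax(x):
    """Shifted softmax: mathematically equal to exp(x)/sum(exp(x)),
    but the largest exponent is 0, so exp() cannot overflow."""
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())
    return e / e.sum()

x = np.array([1000.0, 1001.0, 1002.0])  # naive exp(x) overflows in float64
print(logsumexp(x))                     # ~1002.4076
print(softmax(x))                       # [0.0900, 0.2447, 0.6652]
```

Both variants agree with the naive formulas in exact arithmetic; the point of the talk is that their floating-point behavior differs, and rounding error analysis can discriminate between them.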

In part 2, I will look at a bigger-picture question concerning sensitivity to adversarial attacks in deep learning. Adversarial attacks are deliberate, targeted perturbations to input data that have a dramatic effect on the output; for example, a traffic "stop" sign on the roadside can be misinterpreted as a speed limit sign when minimal graffiti is added. The vulnerability of systems to such interventions raises questions around security, privacy, and ethics, and there has been a rapid escalation of attack and defense strategies. I will consider a related higher-level question: Under realistic assumptions, do adversarial examples always exist with high probability? I will also introduce and discuss the idea of a stealth attack: an undetectable, targeted perturbation to the trained network itself.
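As a rough illustration of how such an input perturbation can be constructed, the sketch below applies the fast gradient sign method of Goodfellow et al. to a toy logistic-regression "network". The model, its random weights, and the step size `eps` are assumptions chosen for demonstration; they are not the networks or attack constructions analyzed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy classifier: logistic regression with fixed random weights.
w = rng.standard_normal(10)

def predict(x):
    return 1.0 / (1.0 + np.exp(-w @ x))  # predicted probability of class 1

def fgsm(x, y, eps):
    """Fast gradient sign method: nudge each input component by +/- eps
    in the direction that increases the cross-entropy loss."""
    grad = (predict(x) - y) * w          # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad)

x = rng.standard_normal(10)
y = 1.0                                  # true label
print(predict(x))                        # confidence on the original input
print(predict(fgsm(x, y, eps=0.5)))      # confidence after the small, targeted nudge
```

Even this one-step attack on a linear model shows the mechanism: a perturbation that is small in every component can move the output decisively, which is the sensitivity question the talk takes up for deep networks.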