Academia Sinica, Institute of Information Science

Events

TIGP (SNHCC) -- Class-Aware Robust Adversarial Training for Object Detection


  • Speaker: Dr. Jun-Cheng Chen (Research Center for Information Technology Innovation, Academia Sinica)
    Host: TIGP (SNHCC)
  • Time: 2023-03-27 (Mon.) 14:00 – 16:00
  • Venue: Auditorium 106, New IIS Building
Abstract
Object detection is an important computer vision task with many real-world applications; enhancing its robustness against adversarial attacks has therefore emerged as a crucial issue. However, most previous defense methods focused on the classification task and offered little analysis in the context of object detection. In this work, we present a novel class-aware robust adversarial training paradigm for the object detection task. For a given image, the proposed approach generates a universal adversarial perturbation that simultaneously attacks all objects appearing in the image by jointly maximizing the loss of each object. Meanwhile, instead of normalizing the total loss by the number of objects, the proposed approach decomposes the total loss into class-wise losses and normalizes each class-wise loss by the number of objects of that class. Adversarial training based on this class-weighted loss not only balances the influence of each class but also improves the adversarial robustness of the trained models effectively and evenly across all object classes, compared with previous defense methods. Extensive experiments on the challenging PASCAL VOC and MS COCO datasets demonstrate that the proposed defense method effectively enhances the robustness of object detection models. If time permits, I will also cover our recent work on generating naturalistic physical adversarial patches for object detectors.
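The class-wise loss normalization described in the abstract can be illustrated with a minimal sketch. This is not the speaker's implementation; the function name and inputs (one scalar loss and one class label per ground-truth object in an image) are hypothetical, and real adversarial training would compute these losses from a detector's outputs:

```python
from collections import defaultdict

def class_aware_loss(per_object_losses, labels):
    """Sketch of class-aware loss normalization (hypothetical helper).

    Instead of dividing the summed per-object losses by the total number
    of objects, group the losses by class, average each class's losses
    over that class's object count, and sum the class-wise averages.
    This keeps a class with many instances in the image from dominating
    the training objective.
    """
    by_class = defaultdict(list)
    for loss, label in zip(per_object_losses, labels):
        by_class[label].append(loss)
    # Average within each class, then sum over classes.
    return sum(sum(v) / len(v) for v in by_class.values())

# Example: three "car" objects and one "person" object in one image.
losses = [0.9, 1.1, 1.0, 2.0]
labels = ["car", "car", "car", "person"]
print(class_aware_loss(losses, labels))  # (0.9+1.1+1.0)/3 + 2.0 = 3.0
```

With plain per-object normalization the lone "person" loss would be diluted to 2.0/4 of the total; the class-wise form weights each class's average equally regardless of instance count.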