
Institute of Information Science, Academia Sinica


Seminar


Multi-agent Patrolling: Approximation algorithms and their connection to Adversarial settings

  • Lecturer: Prof. Hao-Tsung Yang (Department of Information Engineering, National Central University)
    Host: Meng-Tsung Tsai
  • Time: 2023-08-08 (Tue.) 14:00 ~ 16:00
  • Location: Auditorium 106 at IIS New Building
Abstract
Multi-agent patrolling generally refers to scenarios in which single or multiple mobile agents (e.g., robots, drones, autonomous cars) persistently cover and scout a physical environment. Efficiently assigning resources to ensure security is crucial, and unlike typical resource-constrained problems, the challenges have the following characteristics. First, the agents are required to visit and re-visit targets, and feedback may not be obtained in the short run. Second, realistic models naturally involve adversarial settings, so the problems are considerably more complicated and vary substantially across different adversary settings.

In this talk, I will briefly introduce the recent trend of multi-agent patrolling and then present our recent theoretical contributions to one of the closely related objectives: latency. Later, I will show how this objective connects to realistic game-theoretic models and their applications.
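The abstract does not define the latency objective formally, but a common reading in the patrolling literature is the worst-case time between consecutive visits to a target. The sketch below is a minimal, hypothetical illustration under a simple assumed model (each agent repeatedly walks a fixed closed tour over its assigned targets at unit speed); the function names and the toy instance are invented for illustration and are not taken from the talk.

```python
# Minimal sketch of the "latency" objective in multi-agent patrolling.
# Assumption (not from the talk): each agent repeatedly walks a fixed closed
# tour over its assigned targets at unit speed, so a target's latency is the
# length of the tour it sits on; the quantity to minimize is the worst
# latency over all targets. Names and the toy instance are hypothetical.

from itertools import pairwise  # Python 3.10+


def tour_length(tour, dist):
    """Total length of a closed tour (list of targets) under a distance dict."""
    closed = list(tour) + [tour[0]]
    return sum(dist[a, b] for a, b in pairwise(closed))


def max_latency(tours, dist):
    """Worst-case revisit time over all targets, one agent per tour."""
    return max(tour_length(t, dist) for t in tours)


if __name__ == "__main__":
    # Toy symmetric distances between four targets a, b, c, d (hypothetical).
    dist = {}
    for (u, v), d in {("a", "b"): 2, ("a", "c"): 3, ("a", "d"): 4,
                      ("b", "c"): 2, ("b", "d"): 3, ("c", "d"): 2}.items():
        dist[u, v] = dist[v, u] = d

    single_agent = [["a", "b", "c", "d"]]   # one agent covers all targets
    two_agents = [["a", "b"], ["c", "d"]]   # targets split between two agents
    print("1 agent :", max_latency(single_agent, dist))  # 2+2+2+4 = 10
    print("2 agents:", max_latency(two_agents, dist))    # max(4, 4) = 4
```

The toy output only illustrates why adding agents and partitioning the targets can reduce the worst-case revisit time; the approximation algorithms discussed in the talk address the general optimization problem.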
 
BIO
Hao-Tsung Yang is an assistant professor at National Central University. Before that, he was a research associate at the School of Informatics, University of Edinburgh, U.K., supervised by Prof. Rik Sarkar. He received his Ph.D. in Computer Science from Stony Brook University in 2020, advised by Prof. Jie Gao and Prof. Shan Lin.

Hao-Tsung Yang's research lies at the intersection of autonomous systems, data privacy, algorithms, and machine learning. He focuses on new problems and challenges that arise when A.I. enters human life, including serving humans, interacting and cooperating with humans, and defending against human-like adversaries. For example, an autonomous system such as multi-robot path planning involves multiple lines of work: the control-feedback loop, algorithm design, privacy, and data misuse. The solutions to these problems influence one another, especially when the human factor in the environment is considered. One can use machine learning techniques to learn and generate good path-planning solutions, but doing so may also invade people's privacy, for example by revealing their routine schedules or misusing sensitive data. On the other hand, the solution may also be exposed to an adversary who wants to damage the system and take advantage of it. In a patrol mission, the adversary can predict the arrival times of patrolling robots and launch attacks in vulnerable time slots. These settings bring new challenges, and solutions can be found from algorithmic or machine learning perspectives, and sometimes by combining the two.