TIGP (SNHCC) -- Understanding Endogenous Data Drift in Adaptive Models with Recourse-Seeking Users
Lecturer: Prof. Hao-Tsung Yang (Dept. of Computer Science and Information Engineering, National Central University)
Host: TIGP (SNHCC)
Time: 2026-03-09 (Mon.) 14:00 ~ 16:00
Location: Auditorium 106 at IIS New Building
Abstract
Deep learning models are widely used in decision-making and recommendation systems, where they typically rely on the assumption of a static data distribution between training and deployment. However, real-world deployment environments often violate this assumption. Users who receive negative outcomes may adapt their features to meet the model's criteria, i.e., take a recourse action. These adaptive behaviors shift the data distribution, and when models are retrained on this shifted data, a feedback loop emerges: user behavior influences the model, and the updated model in turn reshapes future user behavior.
In this talk, I first introduce the core principles of recourse and its standard applications. I then explore the systemic consequences of models adapting to recourse-seeking users. We demonstrate, through both theoretical and empirical lenses, that this interaction pushes Logistic and MLP models toward higher decision standards. Over time, this leads to escalating recourse costs and diminished reliability of "optimal" actions. These findings draw critical parallels to economic theories of endogenous barriers to entry, highlighting how algorithmic retraining can unintentionally reinforce higher standards and gatekeep opportunities. Finally, I present algorithmic methods designed to mitigate these challenges and stabilize the long-term interaction between models and users.
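The retraining feedback loop described in the abstract can be illustrated with a toy simulation (a sketch of the general phenomenon, not the speaker's actual setup or results): a 1-D logistic classifier is repeatedly refit while rejected users take the cheapest recourse action, moving just past the current decision boundary. Their underlying qualification labels are assumed unchanged, so each retraining round pushes the boundary higher.

```python
import math
import random

random.seed(0)

def fit_boundary(xs, ys, lr=0.1, steps=2000):
    """Fit a 1-D logistic model p = sigmoid(w*x + b) by gradient
    descent and return its decision boundary -b/w."""
    w, b = 1.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            z = max(-30.0, min(30.0, w * x + b))  # clip to avoid overflow
            p = 1.0 / (1.0 + math.exp(-z))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return -b / w

# Initial population: qualified users near +1, unqualified near -1.
xs = [random.gauss(+1.0, 0.5) for _ in range(100)] + \
     [random.gauss(-1.0, 0.5) for _ in range(100)]
ys = [1] * 100 + [0] * 100

thresholds = []
for _ in range(5):
    t = fit_boundary(xs, ys)
    thresholds.append(t)
    # Recourse: every rejected user moves just past the boundary.
    xs = [x if x >= t else t + 0.1 for x in xs]
    # True qualification (the labels) is unchanged, so after retraining
    # the unqualified users sit just above the old boundary with label 0,
    # which pushes the new boundary higher.

print(["%.2f" % t for t in thresholds])  # decision standard escalates
```

Each round, the cheapest recourse action from the previous round lands below the new boundary, which is the escalating-cost, unreliable-"optimal"-action dynamic the talk analyzes.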
BIO
Hao-Tsung Yang is an assistant professor at National Central University. Before that, he was a research associate at the School of Informatics, University of Edinburgh, U.K., supervised by Prof. Rik Sarkar. He received his Ph.D. degree in Computer Science from Stony Brook University in 2020, advised by Prof. Jie Gao and Prof. Shan Lin.
Hao-Tsung Yang's research spans autonomous systems, data privacy, algorithms, and machine learning. He focuses on the new problems and challenges that arise when A.I. enters human life, including serving humans, interacting and cooperating with humans, and defending against human-like adversaries. For example, an autonomous system such as multi-robot path planning involves several interlocking strands: the control-feedback loop, algorithm design, privacy, and data misuse. Solutions to these problems influence one another, especially when human factors are present in the environment. One can use machine learning techniques to learn and generate good path-planning solutions, but doing so may also invade people's privacy, for example by revealing their routine schedules or misusing sensitive data. On the other hand, the solution may be exposed to an adversary who wants to damage the system and exploit it: in a patrol mission, the adversary can predict the arrival times of patrolling robots and launch attacks in vulnerable time slots. These settings bring new challenges, and solutions can be found from algorithmic or machine-learning perspectives, and sometimes by combining the two.