LSU

Sun Working to Address Discrimination in AI/ML Systems

March 21, 2022

BATON ROUGE, LA – Artificial intelligence (AI) and machine learning (ML) technologies play an increasing role in society today, including in high-stakes decision-making systems such as lending decisions, employment screenings, and criminal justice sentencing.

However, a growing challenge with AI and ML systems is avoiding the unfairness they can introduce, which may lead to discriminatory decisions. Finding a solution to that problem is the aim of a project by LSU Computer Science Associate Professor Mingxuan Sun, University of Iowa Computer Science Associate Professor Tianbao Yang, and University of Iowa Associate Professor of Business Analytics Qihang Lin.

The work is part of a grant from the National Science Foundation ($500,000) and Amazon ($300,000). Yang serves as principal investigator on the project, and Sun and Lin are co-principal investigators.

The researchers’ objectives are to design new fairness measures and to develop numerical algorithms for solving the resulting optimization problems with fairness guarantees. More specifically, they will develop scalable stochastic optimization algorithms for optimizing a broad family of rank-based, threshold-agnostic objectives.
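
To give a sense of what a rank-based, threshold-agnostic objective looks like, here is a minimal illustrative sketch, not the team's actual method: stochastic pairwise optimization of a squared-hinge surrogate of the AUC, a classic metric that depends only on how scores are ordered, not on any decision threshold. All data and parameters below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable-ish data: labels 1 (positive) and 0 (negative).
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.5 * rng.normal(size=200) > 0).astype(int)

w = np.zeros(5)
lr = 0.05

# Stochastic pairwise updates on a squared-hinge surrogate of the AUC:
# sample one positive and one negative example, and push the positive's
# score above the negative's by a margin. The objective is rank-based and
# threshold-agnostic because it involves only score orderings.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
for _ in range(2000):
    i, j = rng.choice(pos), rng.choice(neg)
    margin = X[i] @ w - X[j] @ w
    if margin < 1:  # surrogate loss (1 - margin)^2 is active
        w -= lr * (-2.0) * (1 - margin) * (X[i] - X[j])

# Empirical AUC: fraction of positive/negative pairs ranked correctly.
scores = X @ w
auc = np.mean([scores[i] > scores[j] for i in pos for j in neg])
```

The project's algorithms target a much broader family of such objectives at scale; this sketch only illustrates the pairwise, ordering-based flavor of the problem.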

Learning to rank selects the top-k answers or items with the highest ranking scores under a given scoring function. Ranking algorithms have many applications, such as selecting top-k job candidates, predicting top-k crime hotspots, and recommending top-k items.
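
The top-k selection described above can be sketched in a few lines; the candidate names and scores below are hypothetical, and a learned scoring function would replace the simple lambda.

```python
def top_k(items, score, k):
    """Return the k items with the highest scores under `score`."""
    return sorted(items, key=score, reverse=True)[:k]

# Hypothetical candidate pool, scored here by years of relevant experience.
candidates = [("ana", 7), ("bo", 3), ("chen", 9), ("dee", 5)]
picks = top_k(candidates, score=lambda c: c[1], k=2)
# picks == [("chen", 9), ("ana", 7)]
```

Fairness concerns arise precisely at this step: whoever falls outside the top k is never surfaced, so a biased scoring function silently excludes entire groups.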

“Most current machine-learning approaches are based on optimizing traditional objectives, such as accuracy on the training data, which are insufficient for addressing minority bias in the training data,” Sun said. “In many domains, the data is highly skewed across classes. For example, a historical data bias or stereotype exists that most software engineers are young males. An unfair ML system would recommend a software engineer position only to young males.”
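
One common way to quantify the kind of unfairness Sun describes is the demographic-parity gap: the difference in selection rates between groups. The sketch below uses made-up decision data and is only an illustration of the measurement, not of the project's fairness measures.

```python
from collections import Counter

# Hypothetical screening outcomes: (group, recommended?) pairs.
decisions = [
    ("young_male", True), ("young_male", True), ("young_male", False),
    ("other", True), ("other", False), ("other", False), ("other", False),
]

def selection_rates(decisions):
    """Per-group fraction of positive (recommended) decisions."""
    totals, positives = Counter(), Counter()
    for group, recommended in decisions:
        totals[group] += 1
        positives[group] += recommended
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
# Demographic-parity gap: largest difference in selection rates.
gap = max(rates.values()) - min(rates.values())
```

A perfectly parity-fair system would have a gap of zero; the research aims to keep such gaps small while still optimizing the rank-based utility of the system.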

Sun added that the project will also integrate the research team’s techniques into education analytics to address fairness and ethical concerns of predictive models, in particular the “perpetuating biases toward under-represented minority students, first-generation college students, and female students in STEM courses.”

“Our goal is to ensure more fairness between different demographic groups in applications such as recommendations, top-k hotspot predictions, and students’ performance predictions.”

Like us on Facebook (@lsuengineering) or follow us on Twitter and Instagram (@lsuengineering).

            ###

Contact: Joshua Duplechain
Director of Communications
225-578-5706 (o)
josh@lsu.edu