
Over the years, data has exploded and artificial intelligence has developed rapidly. Commercial use of AI is already commonplace in many fields, including education, medicine, and finance. AI is no longer a technical term used only by programmers and computer scientists; it is everywhere in our daily lives. In practice, however, we have realized that AI is not always good. It may fall short of our expectations in realizing business value, and even worse, it can bring an unexpected range of social problems. For example, food delivery platforms have built algorithm-based systems that recommend precise routes. These systems satisfy consumers by delivering food in a timely manner, but they also push delivery drivers to deliver as fast as possible in order to meet the strict requirements set by the algorithms. This creates many potential problems, including traffic safety risks.

So we began to ask: what is going wrong here, and what can we do about it? We realized that good performance from AI relies on human collaboration, or even human input. What extra value can humans generate on top of a well-developed AI algorithm, and how should we encourage humans to contribute? To answer these questions, my coauthor and I conducted a set of unique experiments on a microloan platform. In our context, both machine-based and human evaluators decide whether loan applications should be approved. In our experiments, machines and humans first make decisions independently; then, after observing the machines' suggestions, the humans make the final approval or rejection decisions. We tested multiple scenarios with different levels of information availability. In the end, we found that when machines and humans disagree, humans tend to follow the AI in most cases without making additional contributions.
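The sequential protocol described above can be summarized in a minimal sketch. Everything here is illustrative: the function names, scores, and thresholds are hypothetical stand-ins, not the actual models or rules used in the study.

```python
# Hypothetical sketch of the sequential decision protocol:
# machine and human first judge independently, then the human sees
# the machine's suggestion and makes the final call.

def machine_decision(application):
    # Stand-in scoring rule: approve if the score clears a threshold.
    return application["score"] >= 0.5

def human_decision(application, machine_suggestion, has_big_data, has_interpretation):
    # The human first forms an independent judgment (illustrative threshold).
    own_view = application["score"] >= 0.6
    if own_view == machine_suggestion:
        return own_view
    # On disagreement: with big data and machine interpretations available,
    # humans re-evaluate systematically rather than simply deferring;
    # otherwise they tend to follow the machine's suggestion.
    if has_big_data and has_interpretation:
        return own_view  # placeholder for the human's rethought judgment
    return machine_suggestion

app = {"score": 0.55}
suggestion = machine_decision(app)
final = human_decision(app, suggestion, has_big_data=False, has_interpretation=False)
```

In this sketch, the only behavior that changes across scenarios is what the human does on disagreement, which mirrors the information-availability conditions varied in the experiments.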

However, when big data is available and machine interpretations are presented, humans no longer simply follow the AI but engage in systematic rethinking. This differs both from their initial thinking and from the machines' decision-making process. It in turn improves overall performance, reducing the final default rate from 6% to only 3.8%. It also shrinks the gender gaps that arise as unintended outcomes of machine-learning algorithms.

In short, machines are not always right. We should keep humans involved in the process.

About the author: Zhang Yingjie (Assistant Professor of Marketing)