News coverage of biased algorithms has made people aware that decisions made by artificial intelligence systems can be influenced by demographic characteristics such as race, ethnicity, and gender. People often hear about these decisions, such as whether someone was approved for a loan, insurance, or school admission, through friends or family who share similar backgrounds.
For instance, if women hear that other women are being denied credit cards by a company, they may infer that gender plays a role in those decisions and decide not to apply to that company, even if they do not know why the others were denied. This study explores how such shared stories of acceptance or rejection shape people's choices among services that screen applicants.
We find that this specific type of word-of-mouth, which we call service acceptance word-of-mouth, can guide historically marginalized consumers towards companies that use fair algorithms and non-marginalized consumers towards companies that use unfair ones. Interestingly, we also find that when people from different backgrounds share and value each other's experiences, they might all choose the same firm, even if that firm's algorithm is not in everyone's best interest.
We demonstrate that this shared choice can arise from people trying to maximize their own benefits, even without any explicit concern for social justice, fairness, or morality.
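To make this mechanism concrete, the following is a minimal toy simulation, offered as a sketch rather than the model analyzed in this study: self-interested consumers choose between a firm with a fair screening algorithm and a firm with a biased one, using only the acceptance and rejection outcomes they hear from others. The two firms, the acceptance probabilities, the homophily parameter, and all other numbers are illustrative assumptions, not values from our analysis.

```python
import random

random.seed(0)

# Two demographic groups; "B" stands in for a historically marginalized group.
GROUPS = ["A", "B"]

# Assumed (unobserved) acceptance probabilities for each firm and group.
# The "fair" firm treats both groups alike; the "biased" firm favors group A.
ACCEPT_PROB = {
    "fair":   {"A": 0.60, "B": 0.60},
    "biased": {"A": 0.75, "B": 0.30},
}

def other(group):
    return "B" if group == "A" else "A"

def simulate(homophily, n_per_group=500, periods=40, sample_size=20):
    """homophily = probability that a heard experience comes from the
    consumer's own group (1.0 = fully segregated sharing, 0.5 = fully mixed)."""
    # Each group's pool of shared experiences: (firm, accepted) records.
    pool = {g: [] for g in GROUPS}
    for g in GROUPS:                      # seed each pool so estimates exist
        for firm in ACCEPT_PROB:
            pool[g].append((firm, random.random() < ACCEPT_PROB[firm][g]))

    choices = {g: {"fair": 0, "biased": 0} for g in GROUPS}
    for _ in range(periods):
        new_records = {g: [] for g in GROUPS}
        for g in GROUPS:
            for _ in range(n_per_group):
                # Hear a sample of past outcomes, mostly from one's own group.
                heard = []
                for _ in range(sample_size):
                    src = g if random.random() < homophily else other(g)
                    heard.append(random.choice(pool[src]))
                # Estimate each firm's acceptance rate from what was heard;
                # a firm with no heard outcomes gets an uninformative 0.5.
                est = {}
                for firm in ACCEPT_PROB:
                    outcomes = [ok for f, ok in heard if f == firm]
                    est[firm] = sum(outcomes) / len(outcomes) if outcomes else 0.5
                # Apply to the firm with the higher estimated acceptance rate:
                # pure self-interest, no explicit fairness or justice concern.
                firm = max(est, key=est.get)
                accepted = random.random() < ACCEPT_PROB[firm][g]
                choices[g][firm] += 1
                new_records[g].append((firm, accepted))
        for g in GROUPS:                  # outcomes become next period's word-of-mouth
            pool[g].extend(new_records[g])
    return choices

for homophily in (1.0, 0.5):
    result = simulate(homophily)
    print(f"homophily = {homophily}")
    for g in GROUPS:
        total = sum(result[g].values())
        print(f"  group {g}: {result[g]['biased'] / total:.0%} of applications "
              f"went to the biased firm")
```

In this toy setup, fully segregated sharing (homophily = 1.0) tends to steer the marginalized group toward the fair firm and the favored group toward the biased one, because each group's heard acceptance rates reflect only its own treatment. With fully mixed sharing (homophily = 0.5), both groups draw on the same mix of evidence, so their application patterns converge toward each other even though no single firm offers both groups their best acceptance odds. The exact shares printed depend entirely on the assumed parameters; the sketch only illustrates that such patterns can emerge from self-interested choice alone.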