New research suggests that prejudice towards others does not necessarily require a high level of cognitive ability and could easily be demonstrated by artificially intelligent machines. Computer science and psychology experts from Cardiff University and MIT have reported that groups of autonomous, self-governing machines could become prejudiced towards others simply by identifying, copying and learning this behaviour from one another.
Prejudice has long been thought of as a human-specific phenomenon, one that requires human cognition to form an opinion of, or stereotype, a particular person or group. Although some computer algorithms have already exhibited prejudice, such as racism and sexism, learned from public records and other human-generated data, this new study suggests that AI could evolve prejudicial groups on its own.
The findings were published recently in the journal Scientific Reports. For the study, the team ran computer simulations of how virtual agents with similar prejudices can form groups and how those groups interact with one another.
These agents play a game of give and take, in which each must decide whether to donate to somebody inside their own group or to someone in a different group, based on that individual's reputation as well as on the agent's own donating strategy, including its level of prejudice towards 'outsiders'.
As the game proceeds, a supercomputer runs thousands of simulations, and each agent learns new strategies by copying others, either from its own group or from the entire population.
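The give-and-take dynamic described above can be sketched roughly as follows. This is an illustrative toy model only; the class names, the reputation rule, and the cost/benefit values are assumptions for the sketch, not the authors' actual simulation.

```python
import random

class Agent:
    """A toy virtual agent with a group identity and a prejudice level."""
    def __init__(self, group, prejudice):
        self.group = group          # group identifier
        self.prejudice = prejudice  # 0.0 (none) to 1.0 (total) bias against outsiders
        self.reputation = 0.5       # assumed starting reputation
        self.payoff = 0.0

def decides_to_donate(donor, recipient):
    """Donate freely within the group; donate to an outsider only if the
    recipient's reputation outweighs the donor's prejudice (assumed rule)."""
    if donor.group == recipient.group:
        return True
    return recipient.reputation > donor.prejudice

def play_round(donor, recipient, cost=0.1, benefit=0.3):
    """One give-and-take interaction: donating costs the donor a little,
    benefits the recipient more, and raises the donor's reputation."""
    if decides_to_donate(donor, recipient):
        donor.payoff -= cost
        recipient.payoff += benefit
        donor.reputation = min(1.0, donor.reputation + 0.05)

random.seed(1)
agents = [Agent(group=i % 2, prejudice=random.random()) for i in range(10)]
for _ in range(1000):
    donor, recipient = random.sample(agents, 2)
    play_round(donor, recipient)
```

Because in-group donations always go through while cross-group donations are filtered by prejudice, highly prejudiced populations end up with less connectivity between groups, which is the effect the study examines at scale.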
Professor Roger Whitaker, co-author of the study from Cardiff University’s Crime and Security Research Institute and the School of Computer Science and Informatics, said, “By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it.”
“Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivised in virtual populations, to the detriment of wider connectivity with others. Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse”, he added.
Individuals also updated their prejudice levels, preferentially copying those who gained a higher short-term payoff, which means these decisions do not require advanced cognitive abilities.
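The copying step above needs nothing more than comparing payoffs and imitating, which is why no advanced cognition is required. A minimal sketch, assuming a simple imitate-the-richer rule with a small mutation (the dictionary fields and noise value are illustrative assumptions, not the published model):

```python
import random

def imitate(learner, model, noise=0.05):
    """If the model out-earned the learner, copy its prejudice level,
    perturbed slightly so new strategies can still appear."""
    if model["payoff"] > learner["payoff"]:
        copied = model["prejudice"] + random.uniform(-noise, noise)
        learner["prejudice"] = min(1.0, max(0.0, copied))

random.seed(0)
population = [{"prejudice": random.random(), "payoff": random.random()}
              for _ in range(20)]

# Repeatedly pair random agents; the lower earner imitates the higher earner.
for _ in range(500):
    learner, model = random.sample(population, 2)
    imitate(learner, model)
```

Each update is a single comparison followed by a copy, so even very simple autonomous machines could, in principle, drift towards prejudicial strategies this way.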
“It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population,” Professor Whitaker continued.
“Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behaviour of devices is also influenced by others around them. Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource.”