IN AI (CAN) WE TRUST?

Artificial intelligence (AI) may be the best thing to happen to our lives. It helps us read our emails, complete our sentences, get directions, shop online, and find restaurant and entertainment recommendations, and it even makes it easier to connect with old and new friends on social media. AI has not merely entrenched itself in many human functions; it also makes decisions for us.

The question is whether these decisions can be trusted. Did the AI-powered recruiting tool advance, shortlist, or reject the right candidate? Was that Tinder match love in the air, or just the algorithm? And who gets sent to prison: the actual offender, or an innocent person flagged by a biased AI?

As humans, we come from a wide variety of socio-political, racial, and cultural backgrounds. The idea of what is right, and the very question of morality itself, changes with context. How does AI determine what is good, and for whom? In an unavoidable crash, whom does the AI built into a smart car choose to save: the driver or the pedestrian? And how does it make that decision?

A matter of ethics

Before AI can think for humans, humans need to think for AI. In essence, the ethics of AI technology is the embodiment of the ethics of its creators. And this is where the “AI ethics puzzle” begins.

AI can be used for good or ill, but the underlying concern that shadows any invention or innovation is human prejudice. There is plenty of evidence pointing in that direction, the most recent and prominent example involving Apple. In 2019, the company’s new credit card was accused of offering some women a lower credit limit, even when they had better credit scores than their husbands. Steve Wozniak, Apple’s co-founder, noted that his wife received a lower credit limit than he did, even though they held no separate bank accounts, credit cards, or other assets.

AI is open to prejudice because it makes decisions based on data supplied by its human creators, and that data carries their prejudices. Many of those creators are men who grew up in the Western world, which limits their exposure to other communities and geographies. There has been much debate about COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by US courts to predict the likelihood of reoffending. The algorithm produced false positives for black defendants (45%) at twice the rate it did for white defendants (23%).
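To make the “twice as many false positives” figure concrete, here is a minimal sketch of how a per-group false positive rate is computed. The numbers are invented toy data, not the real COMPAS records:

```python
# Minimal sketch with invented toy data (not the real COMPAS records):
# how a false positive rate is computed separately for each group.
# A false positive = predicted to reoffend (1) but did not reoffend (0).

def false_positive_rate(predictions, outcomes):
    false_pos = sum(p == 1 and o == 0 for p, o in zip(predictions, outcomes))
    actual_neg = sum(o == 0 for o in outcomes)
    return false_pos / actual_neg if actual_neg else 0.0

# (group, predicted_reoffend, actually_reoffended)
records = [
    ("black", 1, 0), ("black", 1, 0), ("black", 0, 0), ("black", 1, 1),
    ("white", 1, 0), ("white", 0, 0), ("white", 0, 0), ("white", 1, 1),
]

for group in ("black", "white"):
    preds = [p for g, p, _ in records if g == group]
    outs = [o for g, _, o in records if g == group]
    print(group, round(false_positive_rate(preds, outs), 2))
# black 0.67, white 0.33: the kind of gap ProPublica reported for COMPAS.
```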

Garbage in, garbage out

TechTarget defines the concept of “garbage in, garbage out” as follows: “the quality of the input determines the quality of the output”. Prejudice, then, can permeate not just people but machine intelligence as well. After all, as B. Nalini has pointed out, it is people who frame the problem, train the model, and deploy the system. And even with unbiased data there is no guarantee of fairness, because the process by which machine learning models learn from that data can itself produce biased results.
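One way the pipeline itself can reintroduce bias, even after the sensitive column is dropped, is through proxy features that correlate with it. The sketch below uses synthetic data and hypothetical feature names such as zip_code; it illustrates the mechanism, not any particular production system:

```python
# Synthetic-data sketch: dropping the protected attribute does not remove
# bias when a proxy feature (here, a hypothetical zip_code) correlates with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
zip_code = group ^ (rng.random(n) < 0.1)   # proxy: matches group ~90% of the time
skill = rng.random(n)                      # legitimate signal
# Historically biased labels: group 1 was approved less often at equal skill.
label = (skill - 0.3 * group + 0.1 * rng.standard_normal(n) > 0.35).astype(int)

# Train WITHOUT the protected attribute: only skill and the proxy.
X = np.column_stack([skill, zip_code])
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
# The gap persists because the model recovers `group` from `zip_code`.
```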

Teaching morality to AI

In a 2001 article, futurist and inventor Raymond Kurzweil argued that our intuitive view of progress is linear, while the actual rate of change increases exponentially as we adapt to it. At that rate, the 21st century will deliver the equivalent of 20,000 years of progress. But while we recognize this exponential growth, we also have to accept that AI is a relatively new technology. The term itself was coined just over 60 years ago, which means we are closer to the beginning, or perhaps the middle, of the story than to its end.

Artificial intelligence is still a child: it is learning the difference between moral right and wrong, and it inherits the prejudices of its creators. So far it does little more than find statistical patterns in large data sets. Human understanding and intelligence go far beyond static notions of right and wrong, and the rules themselves shift with socio-cultural and historical context. If we as humans still struggle with morality, it is presumptuous to expect a machine of our own making to match us in this respect.

As Harvard Business Review has noted, there are two takeaways. The first is to recognize how AI can help improve human decision-making: by making predictions from the available data, it can generalize and surface the variables that human decision-makers rely on without even realizing it, exposing their inherent prejudices. The second points to the more complex need to define, and technically measure, the ever-elusive idea of “justice”.
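As an illustration of what “technically measuring” justice can mean, the sketch below computes two common group fairness metrics, demographic parity and equal opportunity, on invented data. Neither metric is the definition of justice, and the two can pull in opposite directions:

```python
# Sketch on invented data: two common (and often conflicting) ways to
# quantify fairness for a binary decision across two groups.

def rate(values):
    return sum(values) / len(values) if values else 0.0

# (group, decision, deserved), e.g. loan granted vs. loan later repaid
data = [
    ("a", 1, 1), ("a", 1, 0), ("a", 0, 0), ("a", 1, 1),
    ("b", 0, 1), ("b", 1, 1), ("b", 0, 0), ("b", 0, 0),
]
by_group = {g: [(d, y) for gg, d, y in data if gg == g] for g in ("a", "b")}

# 1. Demographic parity: positive decisions handed out at equal rates.
parity_gap = abs(rate([d for d, _ in by_group["a"]])
                 - rate([d for d, _ in by_group["b"]]))

# 2. Equal opportunity: equal approval rates among the deserving (y == 1).
def opportunity(pairs):
    return rate([d for d, y in pairs if y == 1])

opportunity_gap = abs(opportunity(by_group["a"]) - opportunity(by_group["b"]))

print(f"demographic parity gap: {parity_gap:.2f}")     # 0.50
print(f"equal opportunity gap:  {opportunity_gap:.2f}")  # 0.50
# Closing one gap can widen the other, which is part of why "justice"
# resists a single technical definition.
```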

Conclusion

Prejudice is as fundamental as the air we breathe or the environment we live in, and it governs all of us, both as individuals and as communities. At this point in human history, the world is preparing to industrialize AI technology and deploy it more widely. Dealing with AI’s “inherent” prejudices is therefore becoming extremely critical.

Artificial intelligence, in short, is still a child that learns the difference between moral right and wrong and inherits the prejudices of its creators. If we as humans are still struggling with morality, it is very presumptuous to expect a machine we created to outshine us in this regard.

Just as a pet blindly reflects its handler’s training and personality, AI reflects the views of its makers, partial or not. The root of the problem therefore runs much deeper than AI ethics: it becomes a matter of human morality, and of how the concept of “justice” can be defined and measured.
