Is Artificial Intelligence Really That Smart? The Dangers Hidden in AI

Artificial Intelligence (AI) exhibits biases that can be dangerous for society because machines learn from biased data. These biases can have significant social consequences, such as discrimination in hiring and incorrect labeling of images. However, AI is not inherently bad; rather, proper data selection and corrective measures are required to address these biases. It is crucial to have diverse teams in AI development and to work toward responsible AI by applying techniques like explainability and meta-learning.

More and more researchers are detecting biases and discrimination in AI systems that are being massively implemented. What are the key factors to address this problem?

AI has been with us for years; we use it, often without realizing it. Today, several researchers are raising the alarm: many AI programs, they warn, exhibit biases that can be dangerous for society.

First, it’s important to understand what we mean by Artificial Intelligence: systems or machines that mimic human intelligence to perform tasks and that can improve their effectiveness and responses based on the information they gather. And this is precisely where the “big issue” with these technological tools lies: machines learn from the data they are fed and use the patterns in it to improve their responses.

So, what happens if all the information these systems are fed and trained on is biased? The answer is simple: they will produce biased results. This is not a minor issue because it has already begun to have clear social consequences. Several experts are observing that this technology is very sensitive to bias and shows alarming flaws.
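
To see this concretely, here is a toy sketch (all data here is synthetic, not from any real system) of how a classifier trained on a heavily imbalanced dataset tends to overlook the underrepresented class:

```python
# Toy illustration: with 95% of training examples from one class,
# the model "learns" to mostly ignore the minority class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(42)
n_major, n_minor = 950, 50

# Two overlapping groups; the minority barely appears in training.
X = np.vstack([rng.normal(0.0, 1.0, (n_major, 2)),
               rng.normal(1.0, 1.0, (n_minor, 2))])
y = np.array([0] * n_major + [1] * n_minor)

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

# Recall per class: the minority class is frequently misclassified.
print(f"majority recall: {recall_score(y, preds, pos_label=0):.2f}")
print(f"minority recall: {recall_score(y, preds, pos_label=1):.2f}")
```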

Just think of the countless applications this technology has and will have. AI is already being used to approve bank loans, support healthcare systems, and even streamline talent recruitment.

A clear example is what happened at Amazon. In 2014, a team of software engineers at Jeff Bezos’ company developed a program to review resumes; by 2015, they realized the algorithm discriminated against women for technical positions. Amazon’s goal had been to make it easier to find the best talent, but the tool, trained on data from the previous 10 years, “learned” that men were “preferable” for these positions, since they were the ones who most frequently applied for such jobs. Once the tool was in use, the Human Resources team noticed they were receiving only male applicants, traced the issue to the tool, and ultimately abandoned it.

Racism

On the other hand, it’s important to note that machines learn to identify shapes, colors, and other visual patterns from the images they are trained on. What happens if a developer only feeds the system images of white faces? Errors will occur, like the one Google encountered in 2015, when its photo service mistakenly labeled African American individuals as gorillas.

So far, few deep learning systems truly learn beyond the datasets their programmers supply.

Is AI, then, a bad technology?

Technologies are neither inherently good nor bad; the issue often lies in inappropriate data selection. The blame rests not with the AI systems, but with those who failed to conduct the exploratory analysis needed to identify underlying problems such as irrelevant variables or data imbalance.

The key is to detect these defects in time and apply measures to correct them. How? By reviewing variable selection, generating artificial examples to balance the class distribution, or adjusting the training algorithm, among other actions.
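
Here is a minimal sketch of one of those corrective measures: generating artificial examples by resampling the minority class until the distribution is balanced. The function and names below are illustrative; libraries such as imbalanced-learn offer richer variants like SMOTE, which synthesizes new examples rather than duplicating existing ones.

```python
# Illustrative rebalancing: resample minority examples with replacement
# until they match the majority class in size, then retrain on the result.
import numpy as np

def oversample_minority(X, y, minority_label, random_state=0):
    """Return a rebalanced copy of (X, y)."""
    rng = np.random.default_rng(random_state)
    minority_idx = np.flatnonzero(y == minority_label)
    majority_idx = np.flatnonzero(y != minority_label)
    # Draw minority indices with replacement to match the majority count.
    resampled = rng.choice(minority_idx, size=len(majority_idx), replace=True)
    keep = np.concatenate([majority_idx, resampled])
    rng.shuffle(keep)
    return X[keep], y[keep]

# Example: a 950-to-50 split becomes 950-to-950 before training.
X = np.random.default_rng(1).normal(size=(1000, 2))
y = np.array([0] * 950 + [1] * 50)
X_bal, y_bal = oversample_minority(X, y, minority_label=1)
print(np.bincount(y_bal))  # -> [950 950]
```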

Moreover, it is crucial to have diverse, heterogeneous development teams that build representative programs and include varied data, ensuring AI tools that are more transparent, less biased, more accessible, and more inclusive.

Towards Responsible AI

Working towards responsible AI applicable to all types of models is key. Several research lines already pursue this goal, including defining new metrics, applying explainability techniques to advanced models such as deep learning, and combining both techniques and metrics.
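
As a sketch of what one such explainability technique can look like in practice, consider permutation feature importance: shuffle one input feature at a time and measure how much the model’s performance drops. The idea is model-agnostic, so it also applies to deep learning models; the model and data below are purely illustrative.

```python
# Permutation feature importance: a large accuracy drop when a feature
# is shuffled means the model relies heavily on that feature.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```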

At the same time, meta-learning, or “learning to learn,” has started to be discussed. What is it about? Unlike conventional AI approaches, where a specific task is solved from scratch using a fixed learning algorithm, meta-learning aims to improve the learning algorithm itself.

Thus, “learning to learn” means systematically observing how different machine learning approaches perform across a variety of learning tasks. Based on this experience, or metadata, the system should then learn new tasks much faster.
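
As a toy sketch of that idea (synthetic tasks and a deliberately simple notion of “metadata”; real meta-learning systems are far richer), one can record which learning algorithm wins across past tasks and use that experience to pick a starting point for a new one:

```python
# "Learning to learn" in miniature: observe several candidate algorithms
# across past tasks, keep the winners as metadata, and let that metadata
# guide the choice of algorithm for the next task.
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "k_nearest_neighbors": KNeighborsClassifier(),
}

winners = Counter()  # metadata: the best performer on each past task
for seed in range(5):  # five synthetic "past tasks"
    X, y = make_classification(n_samples=300, n_features=10, random_state=seed)
    scores = {name: cross_val_score(model, X, y, cv=3).mean()
              for name, model in candidates.items()}
    winners[max(scores, key=scores.get)] += 1

# For a new task, start from the algorithm that has won most often.
best_prior = winners.most_common(1)[0][0]
print(f"Experience across past tasks suggests starting with: {best_prior}")
```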

In this way, meta-learning not only accelerates and dramatically improves the design of neural architectures but also allows handcrafted algorithms to be replaced by new data-driven approaches. As technologies like meta-learning mature, AI is becoming less data-intensive.

Lastly, many assert that “real” intelligence is open: the brain forms synapses and connections to learn and solve problems. Given how new and transformative AI is, openness makes it possible to improve processes, understand the main issues, and accelerate its development. Indeed, several influential actors in the tech industry have already taken steps toward opening up their AI knowledge, an upward trend.

In conclusion, while today’s data-driven AI tools may carry certain biases, more advanced models are being developed to address these shortcomings.

By Manuel Allegue – September 19, 2022
