Do Not Blame Artificial Intelligence

Daniel Merege
Oct 24 2017
Technology

Although the idea is not new, Artificial Intelligence (AI) has gained prominence in recent years, both positive and negative. One example is the fear that, as machines learn to "think" like humans, they may soon take over jobs that today employ millions of people around the world. But is AI really to blame?

The concepts and techniques of AI are not so new, at least by the standards of the computer age. In the 1950s, there was great enthusiasm for scientific research that combined mathematics with computational techniques, aiming to teach machines to make decisions and to infer things from what they learned. At that time, however, there was neither the processing power to perform the heavy calculations involved nor a large amount of available data, so industry and companies had little reason to adopt these techniques.

In the last five years, this scenario has changed completely. We now have tremendous computing power, enabling the heavy calculations that machines require to "learn" patterns. We also produce, and have available, vast amounts of data, which serve as raw material for machines to learn from. These facts were the spark for technology companies such as Google and Facebook to turn their attention to the subject and develop techniques and products that made AI accessible and feasible. That is why we talk so much about it nowadays.

The truth is that AI is extremely useful for making computers our allies in analyzing and predicting problems and solutions, from the simplest to the most complex. With these techniques, a computer can, for example, identify cancer by analyzing images. To make this possible, a group of human doctors classifies thousands of images, indicating which show cancer and which do not; from these labeled examples, the computer builds a mathematical model that, given a new image, predicts the probability that it indicates cancer.
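The workflow described above is supervised learning: experts label examples, a model learns from them, and the model then estimates a probability for a new case. The sketch below illustrates the idea with a deliberately tiny, hypothetical setup (each "image" is reduced to two made-up numeric features, and the classifier is a simple nearest-neighbors rule, not the technique any real diagnostic system necessarily uses):

```python
# A minimal supervised-learning sketch. The "images" here are toy
# two-number feature vectors, and the labels stand in for the doctors'
# classifications: 1 = cancer, 0 = not cancer.
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_probability(train_images, train_labels, new_image, k=3):
    # k-nearest-neighbors: find the k labeled examples most similar to
    # the new image and report the fraction labeled as cancer.
    ranked = sorted(zip(train_images, train_labels),
                    key=lambda pair: distance(pair[0], new_image))
    nearest_labels = [label for _, label in ranked[:k]]
    return sum(nearest_labels) / k

# Toy training set, labeled by human experts.
images = [[0.1, 0.2], [0.0, 0.3], [0.9, 0.8],
          [1.0, 0.7], [0.2, 0.1], [0.8, 0.9]]
labels = [0, 0, 1, 1, 0, 1]

# Probability estimate for a new, unseen image.
print(predict_probability(images, labels, [0.85, 0.75]))  # → 1.0
```

Real systems replace the hand-picked features with millions of pixel values and the nearest-neighbors rule with far more powerful models, but the division of labor is the same: humans supply the labels, the machine supplies the pattern-matching.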

Other examples can be found in different sectors, such as urban services management, spam filters, and chatbots. The potential applications of AI are endless and can greatly improve the productivity and effectiveness of processes, products and services. The real issue is the ethical boundary of our relationship with machines.

Take the development of chemistry. With the same basic knowledge, we can produce medicines that save lives, but we can also produce chemical weapons that destroy them. Or take the computer itself, which brings us the marvels of the digital world along with cyber threats and the loss of privacy. Likewise, the main concern with AI lies in its application, not in the development of its techniques per se.

Computational ethics is therefore essential. We must invest in the development of the computer industry, which can bring us many advances in production and quality of life, but we must set limits on how this knowledge is applied. We want machines that help us detect urban problems quickly and proactively; we do not want machines that try to identify a person's sexual orientation from facial images, which could help malicious people act destructively and against human rights. We do not need that.

The point here is that we need to define, as a society, the value we want to extract from all these techniques. It is very important to establish global-level legislation defining ethical rules for AI applications, as we already have for medical practice, for example.

Regardless of the track AI takes from here, we cannot blame it. Decisions, after all, are always human, and it is those decisions we should be concerned with. And even if an application might, for example, lead to the loss of human jobs, we should focus on what action we take now to prepare the affected workers for the new jobs that will emerge.

Everything is a matter of evolution and improvement, and we can certainly live in a world where AI brings us comfort, well-being and quality of life. We should not blame it for the decisions made by people interested in bad applications of AI. Those decisions are exclusively human.