Artificial Intelligence Series II FLAVIA COSTA, CONICET RESEARCHER: THE MACHINIC INDIVIDUALS.

Artificial intelligence

May 16, 2023

The topic of Artificial Intelligence, given its vast scope and how little we actually know – in many cases we are still at the stage of intuition – has produced a veritable avalanche of studies, opinions, controversies, and heated debates, new ones practically every day.

Our Laboratory believes that one of the best services it can offer the people and organizations that follow our work is a carefully selected series of those opinions, positions, and debates, published almost as they occur, so that those attentive to these developments – and to our perspective on them – stay genuinely informed.

Of course, the Laboratory is working on its Artificial Intelligence Microlab and will share its conclusions and perceptions in due course, but the urgency of the topic does not allow for delay. That is why today we are launching a Series on Artificial Intelligence, which we hope will foster analysis, reflection, and conclusions on the implications of a topic of this magnitude. No one – not governments, international organizations, regional organizations, think tanks, or individuals – can remain indifferent to its evolution. As always, we hope our service proves useful to you.

FLAVIA COSTA, CONICET RESEARCHER: THE MACHINIC INDIVIDUALS.

Flavia Costa, a researcher at Argentina’s National Scientific and Technical Research Council (CONICET), presents in her latest works some extremely interesting ideas about “our interlocutor,” Artificial Intelligence. Her perspective is the result of a long career of research and reflective publications on our relationship, as humans, with what she calls “machinic individuals.” It is also very interesting how she links Alan Turing’s ideas to certain behaviors displayed by evolutionary artificial-intelligence systems. And she once again stresses the need for our region to begin studying the topic and to seek out scientific debate, so that we do not fall behind, as so often happens, and become a sort of tail-end follower of the developed countries.

The term “machinic individuals” is already being used in relation to artificial intelligence: the works developed by Flavia Costa seek to explain how these individuals relate to humans. Costa argues that machines today can do the same things as humans, but in a different way. The question that immediately arises is: should we be worried?

Shortly before the start of World War II, the British mathematician Alan Turing described an abstract device that would become known simply as the “Turing machine.” That theoretical construct was a precursor of modern computing, and during the war Turing’s work proved key to deciphering the encrypted communications of Nazi Germany.

Many decades later, researcher Flavia Costa referred to that development to bring it into the present and explain artificial intelligence: “When Turing invented his machine, he said, ‘this is not a machine, it is actually a universal machine, it is the machine that can be all machines.’ And something of artificial intelligence is like that: it’s a technology that, because it reproduces language, does all the things that a language can do. And I would say that humans do everything with language.”
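Turing’s idea of a “machine that can be all machines” can be made concrete with a toy simulator. The sketch below is our own illustration, not from the article: a single generic loop runs any machine described as a transition table – here, one hypothetical machine that flips every bit of a binary string.

```python
# Minimal Turing machine simulator: one universal loop that can run
# any machine supplied as a transition table, illustrating Turing's
# point that a single mechanism can imitate every machine.

def run(tape, rules, state="start", pos=0, blank="_"):
    tape = dict(enumerate(tape))              # sparse, unbounded tape
    while state != "halt":
        symbol = tape.get(pos, blank)
        # Each rule: (state, symbol) -> (symbol to write, move, next state)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example "program": invert every bit of a binary string, then halt.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011", flip))  # -> 0100
```

The point of the design is that `run` knows nothing about bit-flipping; all the behavior lives in the table, just as all of a universal machine’s behavior lives in its description of another machine.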

Dr. Flavia Costa is the author – among other works – of “Tecnoceno: algoritmos, biohackers y nuevas formas de vida” (Technocene: algorithms, biohackers, and new forms of life). The aim of the book is to understand this new technology and the challenges it implies: “Artificial Intelligence is like a big umbrella, but it covers all those technologies that automate, using data and information, the processes we humans carry out,” she explains. “In the last 10 to 15 years there have been two major innovations: on one hand, the ability to handle huge volumes of data; on the other, machine learning, which is the real novelty. Machines can learn by themselves.”

The combination of these two innovations means that machines, using backpropagation algorithms, feed the errors in their results back into the system and redo the calculation on the basis of what they have found. Through syntax, they produce meaning. They learn.

This does not mean that machines are becoming human: the point is not – and should not be – to imitate the human being or duplicate the individual, which is a thorny and dangerous path, but to reproduce the result. Put simply, machines can do the same things humans do, but not in the same way.

Among other things, this leads us to one of the strongest and most important debates we are witnessing. Let’s put it this way: if artificial intelligence uses everything that has been written over centuries, if it can combine it in an intelligent way, but it is not really the creator, to whom do we attribute authorship? This is by no means a minor issue because it touches on a central topic that seemed to have been legally and consensually settled for centuries. Indeed, this is one of the issues that the sweeping arrival of Artificial Intelligence brings to our consideration.

Some international scientific journals already accept co-authorship with ChatGPT, for example, provided the tool’s contribution to the article is declared. We still need to be more precise: is it an author, is it bibliography, where do we place it, or is it something else entirely? Indeed, the strong unity of the individual as author is starting to dissolve, and a new relationship is beginning to emerge between living human individuals and machinic individuals.

Consequently, the question we should ask is: what is a machinic individual, at least in Dr. Costa’s conception? It is more than a machinic element, more than a specific tool: it is a more sophisticated instrument, one capable of developing and performing tasks autonomously.

These new developments pose challenges in the labor field. Costa believes that “these individuals” should be seen as those performing tasks that replace “the previous machinery,” such as very tedious, physically demanding, and aggressive jobs. The incorporation of technology led to unemployment but also eased the physical impact of tasks like mining. The difference now is that new machines are here to perform tasks that are not necessarily tedious, even tasks that are enjoyed, like writing, translating, researching, and learning.

Another issue among the many challenges Dr. Costa raises, and one at the center of current public discussion, is the role of artificial intelligence in education. There are, indeed, “sides”: those who see it as a kind of threat and want to impose strict limits – which certainly underestimates the technology’s penetration and power – and those who see it as a tool to build on, one that could provide additional support to education. Nor should we forget that the uneven availability of technology, besides being a factor in development, also feeds gaps we need to close.

On this point, Costa notes that entire countries and cities have banned chat tools for educational tasks. Italy is the most extreme case; New York has banned the use of ChatGPT in its public schools, and Hong Kong has done likewise. It is similar to the impact of the calculator on our generation: first you had to learn to do the arithmetic, then you could use the calculator, and now calculators are introduced in third, fourth, and fifth grade. We need to work out how this technology will be incorporated into education, to what degree, and how we can manage it – always within the limits reality imposes.

In addition to labor and education, there is the challenge of regulation, which so far does not exist; it is not yet clear who will define the new legal framework for these technologies, or how.

“We have to think about everything,” Costa defined, adding that “we have to work as we always have, by comparison: see what they are doing in the European Union, the United States, or the East. In our region, the discussion must take place quickly. We need to be imaginative.”


[i] Flavia Costa holds a PhD in Social Sciences from the University of Buenos Aires, where she has taught the Seminar on Informatics and Society since 1995, currently as an Associate Professor. She holds a degree in Communication Sciences from the same faculty and is an Adjunct Researcher at the National Scientific and Technical Research Council (CONICET). Costa is a member of the editorial group of the journal Artefacto. Pensamientos sobre la técnica and of the collective Ludion – Argentine Exploratory of Technological Poetics/Politics. Over the past decade she has co-translated the works of Giorgio Agamben into Spanish. Her central research theme is modernity viewed as a dual process of technification and politicization of life. In this context she developed the notion of “technological life forms,” originally coined by the British sociologist Scott Lash, to analyze contemporary modes of existence at the intersection of biopolitics and biotechnology.

[ii] Alan Mathison Turing (Paddington, London; June 23, 1912 – Wilmslow, Cheshire; June 7, 1954) was a British mathematician, logician, theoretical computer scientist, cryptographer, philosopher, and theoretical biologist.

He is considered one of the fathers of computer science and a precursor to modern computing. He provided an influential formalization of the concepts of algorithms and computation with the Turing machine. He formulated his own version of what is now widely accepted as the Church-Turing thesis (1936).

During World War II, he worked on deciphering Nazi codes, particularly the Enigma machine, and for a time was the head of the Naval Enigma section at Bletchley Park. It is estimated that his work shortened the duration of the war by two to four years. After the war, he designed one of the first programmable digital electronic computers at the UK’s National Physical Laboratory, and shortly thereafter, he built another of the first machines at the University of Manchester.

In the field of artificial intelligence, Turing is primarily known for proposing the Turing Test (1950), a criterion by which a machine can be judged intelligent if its responses in the test are indistinguishable from those of a human.

Author: Artificial Intelligence Microlab Team of the Laboratory of the Future
