by Artificial Intelligence Microlab Team of the Laboratory of the Future | Sep 18, 2025 | Artificial intelligence, Introduction to the Evolution of AI
A macro analysis of thousands of predictions reveals that Artificial General Intelligence (AGI) is much closer than expected, challenging previous estimates that placed it around 2060
For years, the possibility that artificial intelligence (AI) could surpass human cognitive capacity has been a topic of speculation and debate.
Steve Wozniak claims that AI is not truly intelligent: “it doesn’t think, it just takes things from other places and organizes them.”
Now, a new macro analysis conducted by AIMultiple and reported by Esquire magazine, based on 8,590 predictions from scientists, business leaders, and AI experts, suggests that the Singularity, the point at which machine intelligence surpasses human intelligence, could be much closer than expected.
While a decade ago it was estimated that Artificial General Intelligence (AGI) would arrive around 2060, some voices in the field now claim we could reach it in just one year.
The acceleration in the development of large language models (LLMs), the exponential growth of computing power, and the potential emergence of quantum computing have radically changed expectations about the future of AI.
A Shift in Predictions: From 2060 to an Imminent Future
The AIMultiple study analyzes how predictions regarding artificial intelligence and its ability to achieve AGI have evolved.
Traditionally, scientists have been more conservative in their estimates, while industry entrepreneurs have shown greater optimism.
In 2010, most experts predicted the arrival of Artificial General Intelligence by 2060.
Following advances in Artificial Intelligence over the last decade, more recent predictions point to 2040.
Industry leaders, such as the CEO of Anthropic, estimate that the Singularity could occur within 12 months.
The key to this progress lies in Moore’s Law, the observation that transistor density (and with it computing power) doubles roughly every 18 to 24 months, accelerating the development of advanced algorithms.
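As a rough illustration of the arithmetic behind that claim, a fixed 18-month doubling period compounds very quickly. The function below is illustrative only, computing the growth factor implied by a given doubling period:

```python
# Illustrative only: compound growth implied by a fixed doubling period.
def capacity_multiplier(years: float, doubling_months: float = 18.0) -> float:
    """Return the growth factor after `years` given a fixed doubling period."""
    return 2.0 ** (years * 12.0 / doubling_months)

# A decade of 18-month doublings multiplies capacity roughly a hundredfold:
print(round(capacity_multiplier(10)))  # 2 ** (120/18) ≈ 102
```

This compounding is why shifting the doubling period even slightly (18 vs. 24 months) changes decade-scale forecasts dramatically.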
However, some experts warn that Moore’s Law is reaching its limit, and that quantum computing could be the key to the next major leap forward.
An Inevitable Future or an Exaggeration?
Not all experts agree that the Singularity is imminent—or even possible. Figures like Yann LeCun, a pioneer of deep learning, argue that human intelligence is too complex and specialized to be fully replicated.
Some key objections include:
- Current Artificial Intelligence is based on patterns and calculations, but human intelligence includes factors such as intuition, creativity, and emotionality.
- Intelligence is not limited to mathematical logic; there are also interpersonal, intrapersonal, and existential forms of intelligence.
- Artificial Intelligence is a powerful tool, but not necessarily capable of generating autonomous discoveries without human intervention.
An example is AIMultiple’s own argument, which notes that while AI may improve efficiency in scientific research, it still requires human judgment to direct the pursuit of knowledge.
“Even the best machine analyzing existing data might not be able to find a cure for cancer,” the report states.
The Impact of Artificial General Intelligence: Challenges and Opportunities
If AGI truly is near, the implications for society would be immense. From industry automation to rethinking the nature of work, education, and the economy, the arrival of an artificial intelligence capable of matching or surpassing human intelligence could represent the most significant technological change in history.
However, it also raises ethical, regulatory, and philosophical risks:
- Who will control an AI with superior capabilities to humans?
- Could AI develop its own goals, independent of human interests?
- Are we prepared for a world where machines make critical decisions in fields such as medicine, justice, or security?
Is the Future Already Here?
Although predictions about the Singularity vary, the central message is clear: AI is advancing at an unprecedented pace, and human society must prepare for its implications.
Whether Artificial General Intelligence develops in 50 years, 10 years, or just one year will depend on technological evolution and how humans choose to direct it.
But one thing is certain: the debate about the future of artificial intelligence is only just beginning…
Mirko Racovsky is an Argentine journalist and narrator specializing in health, science, and wellness topics. He is the author of various articles in Infobae, where he addresses issues related to nutrition, sleep, psychology, and healthy habits with a clear and informative style. In addition, he works as a host and sports commentator at FM Santa María 91.3 in Campana, combining his journalistic work with radio broadcasting. His work is characterized by making scientific and medical information accessible and up to date for the general public.
by Artificial Intelligence Microlab Team of the Laboratory of the Future | May 1, 2025 | News
“By directly challenging the power and influence of tech giants, Europeans can still create an alternative. Only then can technology continue to contribute to our shared prosperity, rather than becoming a tool of domination allowing a tiny elite to crush the rest of humanity.”
Fragments from the Doctrine: Powers of AI
As world leaders gather in Paris today and tomorrow for the AI Action Summit, we are at a critical turning point in the trajectory of artificial intelligence. The set of technologies commonly referred to as AI has already transformed industries and promises to reshape societies. But the crucial question remains: in whose interest is this transformation taking place, and what kind of future is it building?
The biologist Theodosius Dobzhansky famously said, “Nothing in biology makes sense except in the light of evolution.” In the era of AI, we might say, “Nothing makes sense except in the light of power struggles.” This deep rivalry determines who controls AI, whose interests it serves, and what values guide its development. Currently, most of this power is concentrated in the hands of a few tech giants.
History teaches us the dangers of excessive concentration of power. In medieval Europe, advances in agriculture increased productivity, but barely improved workers’ lives. The nobility and clergy, who owned the land and controlled the wealth, reaped all the benefits of technological and organizational improvements in farming, while workers continued to struggle in poverty. This is a question we face again today. The path AI takes will also determine how economic benefits are distributed among the population and will shape the fabric of the societies we live in.
Two main directions are quite clear.
The first is the relentless pursuit of Artificial General Intelligence (AGI), and eventually superintelligence, in which machines outperform humans in nearly every task. While this vision may provoke fears of a machine takeover, the main threat in this scenario actually stems from the unchecked power of those who design and control these systems. Such a future would drastically increase inequality. By stripping us of any capacity for agency, it would also diminish and dilute what it means to be human.
We might ask whether AGI is even realistically achievable in the near future. Even if it were, it is unlikely to bring the productivity gains it promises. A more probable scenario is that inferior AI systems will replace workers in tasks where those workers currently offer expertise and insight, undermining economic value rather than creating it.
The second path is what my colleagues and I call “pro-worker” or “pro-human” AI.
This vision sees AI as a tool to empower individuals and make workers more productive by providing reliable, contextual information that complements their expertise. The priority is to give individuals control over their own data and enable them to perform a wider range of tasks with greater confidence and autonomy.
Unlike the first, this second vision is not a fantasy.
AI can already create systems that truly assist workers and citizens. But this potential will remain underutilized if it is built on an architecture designed to mimic and surpass humans rather than support them. Instead of creating tools that enhance decision-making, many companies seem focused on developing models that produce entirely hollow pastiches—or other lifeless, superficial imitations. To preserve what makes us human—and leave creation where it belongs—AI must be freed from the shackles of mere imitation. It must provide clear and interpretable guidance to human decision-makers to help them make better-informed choices.
So far, the path followed by the high-tech industry reflects deliberate decisions rooted in both economic and ideological motives.
From an ideological standpoint, the industry is driven by dreams of artificial general intelligence and superintelligence—and of reshaping society itself through new hegemonic technologies.
From an economic standpoint, Big Tech has thrived on models that generate massive profits by automating tasks, reducing labor costs, and monopolizing digital advertising—with little interest in empowering workers or strengthening democracies. New business models, more beneficial to society, could replace this paradigm if startups were given a real chance.
Unfortunately, current market conditions enable the dominance of established firms, as they hold all the cash—to buy out or bury competitors—all the data, massive customer bases, and the complicity of lawmakers who seem to have abandoned competition policy.
If the world was under the illusion that the power of large tech companies would be restrained by U.S. government regulation, the images of techno-Caesarian oligarchs at Donald Trump’s inauguration have shattered that belief.
Protected and supported by the new U.S. administration, Big Tech companies have a clear direction in their relentless AI pursuit: they plan to use the technology as a tool to establish their dominance and reshape global markets to serve their own interests.
But this is not about surrender. History is not yet written.
At a time when relations between the United States and the European Union are increasingly tense, the Paris Summit offers Europeans the opportunity to regain control of their future—starting with AI.
Europe cannot become a passive consumer of these systems, designed without regard for economic sovereignty, innovative capacity, or democratic values. The recent emergence of DeepSeek’s LLM shows that innovation can still triumph over scale—if we create the conditions for it.
By directly challenging the power and influence of tech giants—for example, through the systematic and strategic application of antitrust legislation—and by embracing a vision of AI that centers on what makes us human, European governments can still create an alternative: a truly competitive environment.
Only then can technology continue to contribute to the prosperity of workers and citizens, instead of becoming a tool of domination that allows a tiny elite to overpower the rest of humanity.
- Daron Acemoğlu, “The Simple Macroeconomics of AI”, MIT, April 5, 2024.
- Daron Acemoğlu, David Autor, Simon Johnson, “Can we Have Pro-Worker AI? Choosing a path of machines in service of minds”, Policy Memo, Shaping the Future of Work, MIT, September 2023.
- Katharine Miller, “Privacy in an AI Era: How Do We Protect Our Personal Information?”, Stanford University, March 18, 2024.
- Richie Koch, “Big Tech has already made enough money in 2024 to pay all its 2023 fines”, Proton, January 8, 2024.
- Camilla Hodgson, “Tech companies axe 34,000 jobs since start of year in pivot to AI”, The Financial Times, February 11, 2024.
by Artificial Intelligence Microlab Team of the Laboratory of the Future | Feb 2, 2025 | Artificial intelligence, News
AI agent systems not only represent cutting-edge technology, but also an essential transformation in the way businesses operate and compete.
In an increasingly interconnected world, artificial intelligence (AI) continues to transform key sectors. Among the most disruptive innovations are AI agent systems, autonomous tools designed to analyze, plan, and learn in dynamic environments. According to an IDC report, business spending on artificial intelligence solutions is expected to reach $423 billion by 2027, with a compound annual growth rate (CAGR) of 26.9% between 2022 and 2027.
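As a quick sanity check of those IDC figures, the 2027 projection and the stated CAGR together imply a 2022 baseline of roughly $128.5 billion. The snippet below is illustrative arithmetic, not taken from the IDC report itself:

```python
# Sanity check (illustrative): what 2022 baseline do the IDC figures imply?
target_2027 = 423e9   # projected AI spending in 2027 (USD)
cagr = 0.269          # stated compound annual growth rate, 2022-2027
years = 5             # 2022 -> 2027

# Compound growth: target = baseline * (1 + cagr) ** years
implied_2022 = target_2027 / (1 + cagr) ** years
print(f"Implied 2022 spending: ${implied_2022 / 1e9:.1f}B")  # ≈ $128.5B
```

The figures are internally consistent: a ~$128B market compounding at 26.9% for five years lands near $423B.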
Unlike traditional artificial intelligences, which function as passive assistants, AI agents operate as autonomous units capable of collaborating and solving complex problems. These systems not only process data but also make informed decisions, continuously adapting to new scenarios.
Two main architectures are identified:
Individual agents, designed for specific tasks such as financial analysis or customer service.
Multi-agent systems, which function as specialized virtual teams, ideal for solving more complex problems in sectors such as manufacturing, logistics, and health.
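A minimal sketch can make the distinction between the two architectures concrete. The code below is a hypothetical illustration, not any vendor’s actual framework: an individual agent wraps one capability, while a multi-agent system chains specialists into a pipeline.

```python
# Minimal sketch (all names hypothetical): an individual agent vs. a
# multi-agent "virtual team" that routes a problem through specialists.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]   # the agent's specialized capability

    def run(self, task: str) -> str:
        return self.handle(task)

class MultiAgentSystem:
    """Chains specialized agents, each refining the previous agent's output."""
    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def solve(self, task: str) -> str:
        result = task
        for agent in self.agents:
            result = agent.run(result)
        return result

# An individual agent handles one well-defined task...
analyst = Agent("financial-analyst", lambda t: f"analysis({t})")
# ...while a multi-agent system composes specialists into a pipeline.
team = MultiAgentSystem([
    Agent("planner", lambda t: f"plan({t})"),
    Agent("executor", lambda t: f"execute({t})"),
])
print(team.solve("optimize supply chain"))  # execute(plan(optimize supply chain))
```

Real agent frameworks add memory, tool use, and feedback loops, but the structural contrast (one specialist vs. a coordinated team) is the same.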
Companies are already experiencing tangible benefits in:
Customer service: leading companies in technology use agents to resolve requests in seconds, reducing operational costs by up to 30%.
Personalized education: AI-based educational platforms create tailored learning paths, improving students’ academic outcomes by 25%.
Health: agents design personalized treatments and process medical analyses with 90% accuracy.
Logistics: companies that have adopted AI agents in their operations report a 20% improvement in supply chain efficiency.
Humanity is experiencing the beginning of a new technological era in which AI agent systems not only facilitate processes but completely transform the way companies operate. Snoop Consulting believes the key is to design agents that are not only effective but also adapt to the values and goals of each organization.
The true value of AI agents lies in their ability to be technological allies, not simple tools. This means designing solutions that not only address current problems but also anticipate future needs, helping companies remain competitive in a market that evolves rapidly. Their ability to adapt and evolve positions these systems as fundamental pillars in the next era of business automation. In a market that prioritizes efficiency and innovation, AI agents are not just a trend but the engine that will define the future of technology in the next decade.
by Artificial Intelligence Microlab Team of the Laboratory of the Future | May 16, 2023 | Artificial intelligence
The topic of Artificial Intelligence, given its vast scope and how little we actually know (in many cases we are still at the intuitive stage), has triggered a veritable avalanche of studies, opinions, controversies, and heated debates, occurring practically every day.
Our Laboratory believes that one of the best services it can offer the people and organizations that follow our work is a carefully selected series of those opinions, positions, and debates, published almost as they occur, so that those paying attention stay genuinely informed about developments and about our perspective.
Of course, the Laboratory is working on its Artificial Intelligence Microlab and will appropriately share its conclusions and perceptions, but the urgency of the topic does not allow for many delays. This is why today we are launching a Series on Artificial Intelligence, which we hope will foster analysis, reflection, and conclusions on the projection that a topic of this magnitude forces us to address. No one, neither governments, international organizations, regional organizations, think tanks, nor individuals can remain indifferent to its evolution. As always, we hope that our service can be useful to you.
FLAVIA COSTA, CONICET RESEARCHER: THE MACHINIC INDIVIDUALS.
The researcher from the National Scientific and Technical Research Council of the Argentine Republic, Flavia Costa, presents in her latest works some extremely interesting ideas about “our interlocutor,” Artificial Intelligence. This is a perspective resulting from a long career of research and reflective publications on our relationship as humans with what she calls “machinic individuals.” Additionally, it is very interesting how the author links Alan Turing’s ideas with certain behaviors developed by evolutionary formulas of artificial intelligence. She also once again emphasizes the need for our region to start studying the topic and seeking scientific debate, so we do not fall behind, as is often the case, and become a sort of tail-end follower of developed countries.
The term “machinic individuals” is already being used in relation to artificial intelligence: the works developed by Flavia Costa seek to explain how these individuals relate to humans. Costa argues that machines today can do the same things as humans, but in a different way. The question that immediately arises is: should we be worried?
Shortly before the start of World War II, the British genius Alan Turing described a theoretical device that would become known simply as the “Turing machine.” The concept was a precursor to modern computing, and Turing’s related wartime machines, such as the Bombe, became the key to deciphering the encrypted communications of Nazi Germany.
Many decades later, researcher Flavia Costa referred to that development to bring it into the present and explain artificial intelligence: “When Turing invented his machine, he said, ‘this is not a machine, it is actually a universal machine, it is the machine that can be all machines.’ And something of artificial intelligence is like that: it’s a technology that, because it reproduces language, does all the things that a language can do. And I would say that humans do everything with language.”
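Turing’s point, that one machine can be all machines, can be illustrated with a toy simulator: the simulation loop stays fixed, and the transition table it consumes is the “program.” The sketch below is a deliberately simplified illustration, not a full universal machine:

```python
# Illustrative sketch: a tiny Turing-machine simulator. The same loop runs ANY
# machine described as a transition table (state, symbol) -> (symbol, move, state).
def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example "program": invert every bit, halt at the first blank cell.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("1011", invert))  # 0100
```

Swapping in a different transition table runs a different machine on the identical simulator, which is the sense in which Turing’s device is “the machine that can be all machines.”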
Dr. Flavia Costa is the author – among other works – of “Tecnoceno: algoritmos, biohackers y nuevas formas de vida” (Technocene: algorithms, biohackers, and new forms of life). The goal of the work is to try to understand this new technology and the challenges it implies: “Artificial Intelligence is like a big umbrella, but it involves all those technologies that automate the processes we humans do using data, information,” she explains. “In the last 10 to 15 years, there have been two major innovations: the ability to handle huge volumes of data on one hand. And machine learning or machine-based learning, which is the real novelty, on the other. Machines can learn by themselves.”
The combination of these two innovations means that machines, using backpropagation algorithms to correct their errors, recompute their results in light of each new finding. Through syntax, they produce meaning. They learn.
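The learning-by-error-correction idea Costa describes can be shown in miniature. The sketch below is a one-weight illustration of the principle behind backpropagation (a deliberately minimal example, not a real network):

```python
# Minimal sketch of learning by error correction, the core idea behind
# backpropagation: adjust a weight in proportion to the error it produced.
def train(pairs, lr=0.1, epochs=200):
    w = 0.0                      # single weight; model: y_hat = w * x
    for _ in range(epochs):
        for x, y in pairs:
            y_hat = w * x
            error = y_hat - y    # how wrong the current guess is
            w -= lr * error * x  # gradient step: propagate the error back
    return w

# Data generated by y = 3x; with each corrected error, w converges toward 3.
w = train([(1, 3), (2, 6), (3, 9)])
print(round(w, 2))  # 3.0
```

In a deep network the same correction is propagated backward through many layers of weights, but the loop is conceptually identical: predict, measure the error, adjust, repeat.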
This doesn’t mean that machines become human, because the point is not (and should not be) to imitate the human being or duplicate the individual, a thorny and dangerous path, but to reproduce the result obtained. In simpler terms, machines can do the same things that humans do, but not in the same way.
Among other things, this leads us to one of the strongest and most important debates we are witnessing. Let’s put it this way: if artificial intelligence uses everything that has been written over centuries, if it can combine it in an intelligent way, but it is not really the creator, to whom do we attribute authorship? This is by no means a minor issue because it touches on a central topic that seemed to have been legally and consensually settled for centuries. Indeed, this is one of the issues that the sweeping arrival of Artificial Intelligence brings to our consideration.
In some international scientific journals, co-authorship with ChatGPT, for example, is already being accepted: as long as the source of the article’s formation is attributed, that method, that procedure, counts as an author. We still need to be more precise and decide whether it is an author, a bibliographic source, or something else entirely, and where we place it. Indeed, the strong unity of the individual as author is starting to dissolve. A new relationship is emerging between living human individuals and machinic individuals, which are something else entirely.
Consequently, the question we should ask ourselves is: what is a machinic individual, at least in Dr. Costa’s conception? Such an individual is more than a machine component, more than a specific tool: it is a more sophisticated artifact, capable of developing and performing tasks autonomously.
These new developments pose challenges in the labor field. Costa believes that “these individuals” should be seen as those performing tasks that replace “the previous machinery,” such as very tedious, physically demanding, and aggressive jobs. The incorporation of technology led to unemployment but also eased the physical impact of tasks like mining. The difference now is that new machines are here to perform tasks that are not necessarily tedious, even tasks that are enjoyed, like writing, translating, researching, and learning.
Another issue, among the many challenges Dr. Costa presents, and which is at the center of current general discussions, is the role of artificial intelligence in education. Indeed, there are “sides,” those who see it as a kind of threat and want to impose strict limits – which certainly ignores the penetration and power of the technology – and those who see it as a tool to rely on, potentially providing additional support to education. In this commentary, we must not forget that the availability of technology for some and not for others, besides being a development element, is also a factor that feeds gaps we need to solve.
Regarding this point, Costa notes that entire countries or cities have banned chat tools for educational tasks. Italy is the most extreme case; New York City has banned the use of ChatGPT in all its educational institutions, and so has Hong Kong. It’s similar to the impact of the calculator on our generation: first you had to learn to do the operations by hand, then you could use the calculator. Now calculators are introduced in third, fourth, and fifth grade. We need to figure out how this technology will be incorporated into education, to what degree, and how we can manage it, always within the limits reality imposes.
In addition to the labor and educational sectors, there is the challenge of regulation, which, so far, doesn’t exist, and it is unclear who or how the new legal framework for these technologies will be determined.
“We have to think about everything,” Costa defined, adding that “we have to work as we always have, by comparison: see what they are doing in the European Union, the United States, or the East. In our region, the discussion must take place quickly. We need to be imaginative.”
[i] Flavia Costa holds a PhD in Social Sciences from the University of Buenos Aires, where she has been teaching the Seminar on Informatics and Society since 1995, currently as an Associate Professor. She has a degree in Communication Sciences from the same faculty. She is an Adjunct Researcher at the National Council for Scientific and Technical Research (CONICET). Costa is a member of the editorial group of the journal Artefacto. Pensamientos sobre la técnica and the collective Ludion – Argentine Exploratory of Technological Poetics/Politics. In the past decade, she has co-translated the works of Giorgio Agamben into Spanish. Her central research theme is the perspective of modernity as a dual process of technification and politicization of life. In this context, she developed the notion of “technological life forms,” originally coined by British sociologist Scott Lash, to analyze contemporary modes of existence at the intersection of biopolitics and biotechnology.
[ii] Alan Mathison Turing (Paddington, London; June 23, 1912 – Wilmslow, Cheshire; June 7, 1954) was a British mathematician, logician, theoretical computer scientist, cryptographer, philosopher, and theoretical biologist.
He is considered one of the fathers of computer science and a precursor to modern computing. He provided an influential formalization of the concepts of algorithms and computation with the Turing machine. He formulated his own version of what is now widely accepted as the Church-Turing thesis (1936).
During World War II, he worked on deciphering Nazi codes, particularly the Enigma machine, and for a time was the head of the Naval Enigma section at Bletchley Park. It is estimated that his work shortened the duration of the war by two to four years. After the war, he designed one of the first programmable digital electronic computers at the UK’s National Physical Laboratory, and shortly thereafter, he built another of the first machines at the University of Manchester.
In the field of artificial intelligence, Turing is primarily known for the development of the Turing Test (1950), a criterion by which a machine’s intelligence can be judged if its responses in the test are indistinguishable from those of a human.