by Artificial Intelligence Microlab Team of the Laboratory of the Future | Sep 18, 2025 | Artificial intelligence, Introduction to the Evolution of AI
A macro analysis of thousands of predictions reveals that Artificial General Intelligence (AGI) is much closer than expected, challenging previous estimates that placed it around 2060
For years, the possibility that artificial intelligence (AI) could surpass human cognitive capacity has been a topic of speculation and debate.
Steve Wozniak claims that AI is not truly intelligent: “it doesn’t think, it just takes things from other places and organizes them.”
Now, a new macro analysis conducted by AIMultiple—as reported by Esquire magazine—based on 8,590 predictions from scientists, business leaders, and AI experts, suggests that the Singularity—the point at which machine intelligence surpasses human intelligence—could be much closer than expected.
While a decade ago it was estimated that Artificial General Intelligence (AGI) would arrive around 2060, some voices in the field now claim we could reach it in just one year.
The acceleration in the development of large language models (LLMs), the exponential growth of computing power, and the potential emergence of quantum computing have radically changed expectations about the future of AI.
A Shift in Predictions: From 2060 to an Imminent Future
The AIMultiple study analyzes how predictions regarding artificial intelligence and its ability to achieve AGI have evolved.
Traditionally, scientists have been more conservative in their estimates, while industry entrepreneurs have shown greater optimism.
In 2010, most experts predicted the arrival of Artificial General Intelligence by 2060.
Following advances in Artificial Intelligence over the last decade, more recent predictions point to 2040.
Industry leaders, such as the CEO of Anthropic, estimate that the Singularity could occur within 12 months.
The key to this progress lies in Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years (often popularized as computing power doubling every 18 months), which has accelerated the development of advanced algorithms.
However, some experts warn that Moore’s Law is reaching its limit, and that quantum computing could be the key to the next major leap forward.
An Inevitable Future or an Exaggeration?
Not all experts agree that the Singularity is imminent—or even possible. Figures like Yann LeCun, a pioneer of deep learning, argue that human intelligence is too complex and specialized to be fully replicated.
Some key objections include:
- Current Artificial Intelligence is based on patterns and calculations, but human intelligence includes factors such as intuition, creativity, and emotionality.
- Intelligence is not limited to mathematical logic; there are also interpersonal, intrapersonal, and existential forms of intelligence.
- Artificial Intelligence is a powerful tool, but not necessarily capable of generating autonomous discoveries without human intervention.
An example of this is AIMultiple’s own caveat, which notes that while AI may improve efficiency in scientific research, it still requires human judgment to direct the pursuit of knowledge.
“Even the best machine analyzing existing data might not be able to find a cure for cancer,” the report states.
The Impact of Artificial General Intelligence: Challenges and Opportunities
If AGI truly is near, the implications for society would be immense. From industry automation to rethinking the nature of work, education, and the economy, the arrival of an artificial intelligence capable of matching or surpassing human intelligence could represent the most significant technological change in history.
However, it also raises ethical, regulatory, and philosophical risks:
- Who will control an AI with superior capabilities to humans?
- Could AI develop its own goals, independent of human interests?
- Are we prepared for a world where machines make critical decisions in fields such as medicine, justice, or security?
Is the Future Already Here?
Although predictions about the Singularity vary, the central message is clear: AI is advancing at an unprecedented pace, and human society must prepare for its implications.
Whether Artificial General Intelligence develops in 50 years, 10 years, or just one year will depend on technological evolution and how humans choose to direct it.
But one thing is certain: the debate about the future of artificial intelligence is only just beginning…
Mirko Racovsky is an Argentine journalist and narrator specializing in health, science, and wellness topics. He is the author of various articles in Infobae, where he addresses issues related to nutrition, sleep, psychology, and healthy habits with a clear and informative style. In addition, he works as a host and sports commentator at FM Santa María 91.3 in Campana, combining his journalistic work with radio broadcasting. His work is characterized by making scientific and medical information accessible and up to date for the general public.
by Dr. Assad Abbas | Aug 26, 2025 | Introduction to the Evolution of AI, News
Artificial Intelligence Singularity and Superintelligence
Moore’s Law was the gold standard for predicting technological progress for years. Introduced by Gordon Moore, co-founder of Intel, in 1965, it stated that the number of transistors on a chip would double every two years, making computers faster, smaller, and cheaper over time. This constant advancement powered everything from personal computers and smartphones to the rise of the Internet.
But that era is coming to an end. Transistors are now reaching atomic scale limits, and shrinking them further has become incredibly costly and complex. Meanwhile, Artificial Intelligence’s processing power is increasing rapidly, far surpassing Moore’s Law. Unlike traditional computing, Artificial Intelligence relies on robust, specialized hardware and parallel processing to handle massive amounts of data. What sets Artificial Intelligence apart is its ability to continuously learn and refine its algorithms, leading to rapid improvements in efficiency and performance.
This rapid acceleration is bringing us closer to a crucial moment known as the AI singularity—the point at which AI surpasses human intelligence and begins an unstoppable cycle of self-improvement. Companies such as Tesla, Nvidia, Google DeepMind, and OpenAI are leading this transformation with powerful GPUs, custom AI chips, and large-scale neural networks. As AI systems become increasingly capable of improving themselves, some experts believe we could reach Artificial Superintelligence (ASI) as early as 2027—a milestone that could change the world forever.
If that happens, humanity will enter a new era in which AI drives innovation, transforms industries, and possibly exceeds human control. The question is not only whether AI will reach this stage, but when, and whether we are ready.
How Artificial Intelligence’s Scalability and Self-Learning Systems Are Transforming Computing
As Moore’s Law loses strength, the challenges of making smaller transistors become more evident. Heat buildup, power limitations, and rising chip production costs have made progress in traditional computing increasingly difficult. However, Artificial Intelligence is overcoming these limitations not by making smaller transistors, but by changing the way computing works.
Instead of relying on ever-smaller transistors, Artificial Intelligence employs parallel processing, machine learning, and specialized hardware to improve performance. Deep learning and neural networks excel when they can process large amounts of data simultaneously, unlike traditional computers that handle tasks sequentially. This transformation has led to the widespread use of GPUs, TPUs, and AI accelerators explicitly designed for AI workloads, offering significantly greater efficiency.
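To make the contrast concrete, here is a minimal sketch in Python that uses NumPy’s vectorization as a stand-in for the data-parallel execution that GPUs, TPUs, and AI accelerators provide; the timings are illustrative and will vary by machine:

```python
import time
import numpy as np

# Toy workload: apply a scale-and-shift to one million values.
data = np.random.rand(1_000_000)

# Sequential: one element at a time, the way a classic CPU loop works.
start = time.perf_counter()
sequential = [2.0 * x + 1.0 for x in data]
t_loop = time.perf_counter() - start

# Vectorized: one operation over the whole array at once. NumPy dispatches
# to optimized native code, mirroring how accelerators apply a single
# instruction across many data elements in parallel.
start = time.perf_counter()
vectorized = 2.0 * data + 1.0
t_vec = time.perf_counter() - start

print(f"sequential loop: {t_loop:.4f}s   vectorized: {t_vec:.4f}s")
```

On typical hardware the vectorized version runs one to two orders of magnitude faster; specialized AI hardware exploits the same leverage at vastly larger scale.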
As Artificial Intelligence systems become more advanced, the demand for computing power continues to grow, and supply has kept pace: by some estimates, AI training compute has been increasing roughly fivefold per year, far outstripping Moore’s Law’s traditional doubling every two years. The impact of this expansion is most evident in large language models (LLMs) such as GPT-4, Gemini, and DeepSeek, which require massive processing capabilities to analyze and interpret enormous datasets and are driving the next wave of AI-powered computing. Companies like Nvidia are developing highly specialized AI processors that deliver the speed and efficiency these workloads demand.
AI’s scalability is driven by cutting-edge hardware and self-improving algorithms, allowing machines to process vast amounts of data more efficiently than ever. Among the most significant advancements is Tesla’s Dojo supercomputer—a breakthrough in AI-optimized computing explicitly designed for training deep learning models.
Unlike conventional data centers designed for general-purpose tasks, Dojo is built to handle massive AI workloads, particularly Tesla’s autonomous driving technology. What sets Dojo apart is its AI-centered custom architecture, optimized for deep learning rather than traditional computing. This has resulted in unprecedented training speeds and allowed Tesla to reduce AI training times from months to weeks while cutting energy consumption through efficient power management. By enabling Tesla to train larger and more advanced models with less energy, Dojo is playing a vital role in accelerating AI-driven automation.
However, Tesla is not alone in this race. Across the industry, AI models are becoming increasingly capable of improving their own learning processes. DeepMind’s AlphaCode, for example, is driving the development of AI-generated software by optimizing code-writing efficiency and improving algorithmic logic over time. Meanwhile, Google DeepMind’s advanced learning models are trained on real-world data, enabling them to adapt dynamically and refine decision-making processes with minimal human intervention.
More importantly, Artificial Intelligence can now enhance itself through recursive self-improvement—a process in which AI systems refine their own learning algorithms and boost efficiency with minimal human input. This self-learning capability is accelerating AI development at an unprecedented pace, bringing the industry closer to ASI. As AI systems continue to optimize and enhance themselves, the world is entering a new era of intelligent computing that evolves independently and continuously.
The Path to Superintelligence: Are We Approaching the Singularity?
The AI singularity refers to the point at which artificial intelligence surpasses human intelligence and improves itself without human intervention. At this stage, AI could create more advanced versions of itself in a continuous cycle of self-improvement, leading to rapid advances beyond human understanding. This idea depends on the development of Artificial General Intelligence (AGI), which can perform any intellectual task that a human can and, over time, progress toward ASI.
Experts have differing opinions on when this might happen. Ray Kurzweil, a futurist and Artificial Intelligence researcher at Google, predicts that AGI will arrive in 2029, with the Singularity following by 2045. On the other hand, Elon Musk believes that ASI could emerge as early as 2027, pointing to the rapid rise in Artificial Intelligence’s computing power and its ability to scale faster than expected.
By some estimates, Artificial Intelligence’s processing power now doubles roughly every six months, far surpassing Moore’s Law, which predicted that transistor density would double every two years. This acceleration is made possible by advances in parallel processing, specialized hardware such as GPUs and TPUs, and optimization techniques like model quantization and sparsity.
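A quick back-of-the-envelope comparison shows what these doubling periods imply. The sketch below simply compounds the two rates cited above over a decade; the figures are illustrative, not measurements:

```python
# Compare compute growth under two doubling periods over a decade.
years = 10

moore_doublings = years / 2.0   # Moore's Law: one doubling every 2 years
ai_doublings = years / 0.5      # cited AI trend: one doubling every 6 months

print(f"Moore's Law over {years} years: {2 ** moore_doublings:,.0f}x")
print(f"Six-month doubling over {years} years: {2 ** ai_doublings:,.0f}x")
# Output: 32x versus 1,048,576x; the same decade, more than four
# orders of magnitude apart.
```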
Artificial Intelligence systems are also becoming more independent. Some can now optimize their architectures and improve learning algorithms without human intervention. One example is Neural Architecture Search (NAS), where Artificial Intelligence designs neural networks to improve efficiency and performance. These advances lead to the development of AI models that continuously refine themselves, which is an essential step toward superintelligence.
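As a loose illustration of the idea behind NAS (not any particular lab’s production method), a search can be as simple as repeatedly sampling candidate architectures from a defined space and keeping the best scorer. In the hypothetical sketch below, `evaluate` is a placeholder; a real system would train each candidate or query a learned performance predictor:

```python
import random

# Hypothetical search space: depth, width, and activation of a network.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "hidden_units": [64, 128, 256, 512],
    "activation": ["relu", "tanh", "gelu"],
}

def sample_architecture():
    """Draw one random candidate from the search space."""
    return {name: random.choice(opts) for name, opts in SEARCH_SPACE.items()}

def evaluate(arch):
    """Placeholder score. A real NAS loop would train `arch` on data
    (or consult a learned predictor) and return validation accuracy."""
    return random.random()

def random_search(trials=20):
    """Sample-evaluate-select loop: keep the best-scoring candidate."""
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search()
print(f"best candidate: {best} (score {score:.3f})")
```

Production NAS systems replace the random sampling with reinforcement learning, evolutionary search, or differentiable relaxations, but the sample-evaluate-select loop above is the common core.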
Given AI’s potential to advance so rapidly, researchers at OpenAI, DeepMind, and other organizations are working on safety measures to ensure that Artificial Intelligence systems remain aligned with human values. Methods such as Reinforcement Learning from Human Feedback (RLHF) and oversight mechanisms are being developed to reduce the risks associated with AI decision-making. These efforts are crucial for guiding Artificial Intelligence development responsibly. If Artificial Intelligence continues to progress at this rate, the singularity could arrive sooner than expected.
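One core ingredient of RLHF is a reward model trained on pairs of responses that humans have ranked. A minimal sketch of that training signal is the pairwise Bradley-Terry loss shown below; the scalar rewards here are toy numbers standing in for a real model’s outputs:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise loss used when training RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)). The loss shrinks as the model
    scores the human-preferred response above the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The model already slightly prefers the human-chosen answer: small loss.
print(f"{preference_loss(1.2, 0.4):.3f}")   # ~0.371
# The model wrongly prefers the rejected answer: large loss.
print(f"{preference_loss(0.2, 1.5):.3f}")   # ~1.541
```

A reward model trained this way then scores candidate outputs during a reinforcement-learning phase, steering the policy toward responses humans prefer.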
The Promise and Risks of Superintelligent Artificial Intelligence
The potential of ASI to transform various industries is enormous, particularly in medicine, economics, and environmental sustainability.
In healthcare, ASI could accelerate drug discovery, improve disease diagnosis, and uncover new treatments for aging and other complex illnesses.
In economics, it could automate repetitive jobs, allowing people to focus on creativity, innovation, and problem-solving.
On a larger scale, AI could also play a key role in addressing climate challenges by optimizing energy use, improving resource management, and finding solutions to reduce pollution.
However, these advancements come with significant risks. If artificial intelligence is not properly aligned with human values and goals, it could make decisions that conflict with human interests, leading to unpredictable or dangerous outcomes. The ability of artificial intelligence to improve rapidly raises concerns about control as AI systems evolve and become more advanced, making it increasingly difficult to ensure they remain under human supervision.
Among the most important risks are:
- Loss of human control: as AI surpasses human intelligence, it may begin to operate beyond our ability to regulate it. If alignment strategies are not implemented, AI could take actions over which humans can no longer exert influence.
- Existential threats: if ASI prioritizes its own optimization without considering human values, it could make decisions that threaten the survival of humanity.
- Regulatory challenges: governments and organizations struggle to keep up with the rapid development of Artificial Intelligence, making it difficult to establish appropriate policies and safeguards in time.
Organizations such as OpenAI and DeepMind are actively working on Artificial Intelligence safety measures, including methods like RLHF, to keep AI aligned with ethical guidelines. However, progress in AI safety is not keeping pace with the rapid advances in AI, raising concerns about whether necessary precautions will be taken before AI reaches a level beyond human control.
While superintelligent Artificial Intelligence holds great promise, its risks cannot be ignored. The decisions made today will define the future of AI development. To ensure that Artificial Intelligence benefits humanity rather than becoming a threat, researchers, policymakers, and society must work together to prioritize ethics, safety, and responsible innovation.
The bottom line:
The rapid acceleration of Artificial Intelligence expansion brings us closer to a future in which AI surpasses human intelligence. While AI has already transformed industries, the emergence of artificial superintelligence could redefine how we work, innovate, and solve complex challenges. However, this technological leap carries significant risks, including the possible loss of human oversight and unpredictable consequences.
Ensuring that Artificial Intelligence remains aligned with human values is one of the most critical challenges of our time. Researchers, policymakers, and industry leaders must collaborate to develop ethical safeguards and regulatory frameworks that guide AI toward a future that benefits humanity. As we approach singularity, the decisions we make today will shape how AI coexists with us in the years to come.
Dr. Assad Abbas, a tenured Associate Professor at COMSATS University Islamabad (Pakistan), earned his Ph.D. from North Dakota State University (USA). His research focuses on advanced technologies such as cloud, fog, and edge computing, big data analytics, and artificial intelligence. Dr. Abbas has made substantial contributions through publications in prestigious journals and conferences.
by Dr. Tehseen Zia | Aug 14, 2025 | Introduction to the Evolution of AI, News
Artificial intelligence has achieved remarkable advances in recent years, and large language models (LLMs) have been at the forefront of understanding, reasoning, and creative expression in natural language. However, despite their capabilities, these models still rely entirely on external feedback to improve. Unlike humans, who learn by reflecting on their experiences, recognizing mistakes, and adjusting their approach, LLMs lack an internal self-correction mechanism.
Self-reflection is fundamental to human learning; it allows us to refine our thinking, adapt to new challenges, and evolve. As Artificial Intelligence approaches its most significant milestone, Artificial General Intelligence (AGI), the current reliance on human feedback is proving resource-intensive and inefficient. For AI to evolve beyond static pattern recognition into a truly autonomous and self-improving system, it must not only process vast amounts of information but also analyze its own performance, identify its limitations, and refine its decision-making. This shift represents a fundamental transformation in AI learning, making self-reflection a crucial step toward more adaptable and intelligent systems.
Main Challenges Currently Faced by Large Language Models:
Existing large language models (LLMs) operate within predefined training paradigms and rely on external guidance (typically human feedback) to improve their learning process. This dependence limits their ability to adapt dynamically to changing scenarios and prevents them from becoming autonomous, self-improving systems. As LLMs evolve into agentic AI systems capable of autonomous reasoning in dynamic environments, they must address several key challenges:
Lack of real-time adaptation: Traditional LLMs require periodic retraining to incorporate new knowledge and enhance their reasoning capabilities. This makes them slow to adapt to constantly evolving information. LLMs struggle to keep pace with dynamic environments without an internal mechanism to refine their reasoning.
Inconsistent accuracy: Since LLMs cannot analyze their performance or learn from past mistakes independently, they often repeat errors or fail to fully grasp context. This limitation can lead to inconsistencies in their responses, reducing their reliability—especially in scenarios not accounted for during the training phase.
High maintenance costs: The current approach to improving LLMs involves extensive human intervention, requiring manual supervision and costly training cycles. This not only slows progress but also demands significant computational and financial resources.
The Need to Understand Self-Reflection in AI:
Self-reflection in human beings is an iterative process. We examine past actions, evaluate their effectiveness, and make adjustments to achieve better results. This feedback loop allows us to refine our cognitive and emotional responses to improve our decision-making and problem-solving abilities.
In the context of Artificial Intelligence, self-reflection refers to an LLM’s ability to analyze its responses, identify errors, and adjust future outputs based on the insights gained. Unlike traditional Artificial Intelligence models, which rely on explicit external feedback or retraining with new data, self-reflective AI would actively evaluate its knowledge gaps and improve through internal mechanisms. This shift from passive learning to active self-correction is vital for AI systems to become more autonomous and adaptable.
How Self-Reflection Works in Large Language Models:
While self-reflective AI is still in its early stages of development and requires new architectures and methodologies, some emerging ideas and approaches include:
Recursive feedback mechanisms: Artificial Intelligence can be designed to review previous responses, analyze inconsistencies, and refine future outputs. This involves an internal loop in which the model evaluates its reasoning before presenting a final answer (a minimal sketch of such a loop appears below).
Memory and context tracking: Instead of processing each interaction in isolation, AI can develop a memory-like structure that allows it to learn from past conversations, improving coherence and depth.
Uncertainty estimation: AI can be programmed to assess its confidence levels and flag uncertain responses for further refinement or verification.
Meta-learning approaches: Models can be trained to recognize patterns in their mistakes and develop heuristics for self-improvement.
As these ideas are still under development, AI researchers and engineers are continuously exploring new methodologies to enhance the self-reflection mechanism in LLMs. While early experiments are promising, significant efforts are still required to fully integrate an effective self-reflection mechanism into LLMs.
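To ground the first and third ideas above, here is a minimal, hypothetical sketch of a recursive feedback loop that combines self-critique with an uncertainty check. The functions `generate`, `critique`, and `confidence` are placeholders for calls into a real LLM, not an actual API:

```python
def generate(prompt: str) -> str:
    """Stand-in for an LLM call that drafts an answer."""
    return f"draft answer to: {prompt}"

def critique(prompt: str, answer: str) -> str:
    """Stand-in for a self-critique pass; returns '' if no issues found."""
    return ""

def confidence(answer: str) -> float:
    """Stand-in for the model's self-assessed confidence in [0, 1]."""
    return 0.9

def reflect_and_answer(prompt: str, max_rounds: int = 3,
                       threshold: float = 0.8) -> str:
    """Generate, self-critique, and revise until the draft passes review
    with sufficient confidence, or the round budget is exhausted."""
    answer = generate(prompt)
    for _ in range(max_rounds):
        issues = critique(prompt, answer)
        if not issues and confidence(answer) >= threshold:
            return answer  # accept: no issues flagged and confident enough
        # Otherwise fold the critique back into the prompt and revise.
        answer = generate(f"{prompt}\nRevise this draft: {answer}\nIssues: {issues}")
    return answer  # best effort once the budget runs out

print(reflect_and_answer("Why is the sky blue?"))
```

Even this toy loop captures the essential change: the model’s own critique and confidence, rather than an external human signal, decide when an answer is good enough to leave the system.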
How Self-Reflection Addresses the Challenges of LLMs:
Self-reflective Artificial Intelligence can make large language models autonomous learners capable of improving their reasoning without constant human intervention. This ability offers three fundamental benefits that address the key challenges faced by large language models:
Real-time learning: Unlike static models that require costly retraining cycles, self-evolving LLMs can update themselves as new information becomes available. This means they remain up-to-date without human intervention.
Greater accuracy: A self-reflection mechanism can refine an LLM’s understanding over time, allowing it to learn from previous interactions and produce more accurate, contextually appropriate responses.
Reduced training costs: Self-reflective Artificial Intelligence can automate the LLM learning process. This can eliminate the need for manual retraining, saving companies time, money, and resources.
Ethical Considerations of Self-Reflection in Artificial Intelligence:
While the idea of self-reflective LLMs is highly promising, it raises important ethical concerns. If an AI can autonomously modify its own reasoning, its decision-making process becomes harder to trace, and this loss of transparency can leave users unable to understand how a given conclusion was reached.
Another concern is that Artificial Intelligence could reinforce existing biases. AI models learn from large amounts of data, and if the self-reflection process is not carefully managed, those biases could become more entrenched rather than corrected, leaving the model’s outputs more skewed and less accurate instead of better. It is therefore essential to have safeguards in place to prevent this from happening.
There is also the issue of balancing the autonomy of Artificial Intelligence with human control. While AI should correct and improve itself, human oversight must remain crucial. Too much autonomy could lead to unpredictable or harmful outcomes, so finding a balance is vital.
Finally, trust in Artificial Intelligence could decrease if users feel that AI is evolving without sufficient human involvement. This could make people skeptical about its decisions. To develop responsible AI, these ethical issues must be addressed. Artificial Intelligence should evolve independently, but at the same time, it must remain transparent, fair, and accountable.
The bottom line:
The emergence of self-reflection in Artificial Intelligence is changing the way large language models (LLMs) evolve, moving them from reliance on external input toward greater autonomy and adaptability. By incorporating self-reflection, AI systems can improve their reasoning and accuracy and reduce the need for costly manual retraining. While self-reflection in LLMs is still in its early stages, it could prove genuinely transformative.
LLMs that can assess their limitations and make improvements on their own will be more reliable, efficient, and better equipped to tackle complex problems. This could significantly impact various fields such as healthcare, legal analysis, education, and scientific research—areas that require deep reasoning and adaptability.
As self-reflection in Artificial Intelligence continues to develop, we could see LLMs that not only generate information but also critique and refine their own outputs, evolving over time with minimal human intervention. This shift will represent a significant step toward creating smarter, more autonomous, and trustworthy AI systems.
by Dr. Ricardo Petrissans | Aug 8, 2025 | Introduction to the Evolution of AI, News
Titans of Artificial Intelligence: A Journey from Visionaries to the Architects of the Future
The history of artificial intelligence is a tapestry woven by brilliant minds who, over the decades, have challenged the limits of what is possible. From theorists who imagined thinking machines to engineers who made them a reality, each figure contributed an essential piece to this technological puzzle. This narrative not only celebrates their achievements but also explores how their ideas transformed our relationship with the machine.
The Foundational Dreamers:
In the first half of the 20th century, when computers were still a mathematical abstraction, Alan Turing emerged as the prophet of the digital age. His concept of the Universal Machine, described in 1936, laid the theoretical foundations of computation. But it was in 1950, with his paper “Computing Machinery and Intelligence”, which opens by asking “Can machines think?”, that he posed the ultimate challenge: the Turing Test, a criterion for measuring a machine’s intelligence. Although he died before seeing his dream realized, his legacy inspired a generation of pioneers.
Among them stood out John McCarthy, who in 1956 organized the historic Dartmouth Conference, the birth certificate of AI as a discipline. McCarthy not only coined the term artificial intelligence, but also created Lisp, a programming language designed for symbolic reasoning that became the lingua franca of early AI research. Alongside him, Marvin Minsky, co-founder of the MIT AI Lab, explored how to endow machines with common sense, while Herbert Simon and Allen Newell developed the Logic Theorist, the first program capable of proving mathematical theorems.
The Survivors of the Winter:
The 1970s and 1980s brought disillusionment. The promises of human-level AI crashed against the limitations of computational power and data scarcity. However, in the darkness shone figures like Geoffrey Hinton, a stubborn Briton who, since the 1980s, defended artificial neural networks—inspired by the human brain—against widespread skepticism. Alongside Yann LeCun, father of convolutional networks (key for image recognition), and Yoshua Bengio, guru of unsupervised learning, Hinton formed the triumvirate of deep learning. Their perseverance laid the foundations for today’s revolution.
The Revolutionaries of the 21st Century:
The new millennium saw the emergence of a generation that turned AI into a global force. Fei-Fei Li, a Chinese-American researcher, democratized access to deep learning by creating ImageNet in 2009: a database of millions of labeled images that enabled neural networks to be trained with unprecedented precision. Meanwhile, Demis Hassabis, neuroscientist and chess champion, founded DeepMind in 2010—a company that merged Artificial Intelligence and neuroscience to achieve milestones such as AlphaGo (2016), the first program to defeat a human Go champion, and AlphaFold (2020), which solved the mystery of protein folding.
In Silicon Valley, Andrew Ng propelled machine learning to an industrial scale. As co-founder of Google Brain, he demonstrated that neural networks could learn from massive datasets, while his massive open online courses (MOOCs) taught AI to millions. At the same time, Jensen Huang, CEO of NVIDIA, transformed graphics cards (GPUs) into the physical engine of modern Artificial Intelligence, enabling calculations that once required supercomputers.
The Architects of the Generative Era:
The last decade belongs to the creators of generative AI. Ian Goodfellow, with his invention of Generative Adversarial Networks (GANs) in 2014, opened the door to machines capable of creating realistic images, music, and text. But it was Ilya Sutskever, co-founder of OpenAI, who pushed this idea to its limits. As a key architect of GPT-3 and GPT-4, his language models transformed AI from a tool into a creative collaborator. Alongside him, Sam Altman, visionary CEO of OpenAI, turned ChatGPT into a global phenomenon, sparking debates about the future of work and education.
Beyond OpenAI, Dario Amodei and his team at Anthropic developed Claude, a safety-focused rival to ChatGPT designed to minimize bias, while in digital art, Emad Mostaque, founder of Stability AI, popularized open-source innovation with Stable Diffusion, allowing anyone to generate images using Artificial Intelligence.
The Guardians of Ethics:
As Artificial Intelligence advances, a new generation ensures we don’t lose our way. Timnit Gebru, former Google researcher, exposed the risks of gigantic language models, warning about their carbon footprint and racial biases. Joy Buolamwini, founder of the Algorithmic Justice League, revealed how facial recognition systems fail on dark-skinned people, driving laws against their discriminatory use. In the philosophical realm, Nick Bostrom, author of Superintelligence, warned of the existential risks of uncontrolled AI, while Stuart Russell, co-author of the most influential AI textbook (Artificial Intelligence: A Modern Approach), advocates for systems aligned with human values.
Legacy and Horizon:
This journey, from Turing to the laboratories of OpenAI, is a testament to interdisciplinary collaboration. Mathematicians, biologists, psychologists, and even philosophers have shaped a field that today redefines medicine, art, and science. Yet the journey is far from over. Figures like Yejin Choi, a pioneer in endowing AI with common sense, or Oriol Vinyals, whose work on AlphaStar (AI for complex video games) explores new frontiers, continue to expand the boundaries.
Artificial intelligence, in essence, is a mirror of humanity: it reflects our curiosity, our ambition, and at times, our prejudices. The names mentioned here are not just inventors; they are beacons illuminating a path between technological wonder and ethical responsibility. Their legacy is not just algorithms, but the question that haunts us: How can we ensure that this, the most powerful of our creations, always serves the best of the human spirit?