Artificial Intelligence VI BBC Note

Author: BBC News Mundo

Artificial intelligence

May 28, 2023

The topic of Artificial Intelligence, given its vast reach and how little we actually know about it (in many cases we are still at the intuitive stage), has produced a veritable flood of studies, opinions, controversies, and heated debates, appearing practically daily.

Our Laboratory believes that one of the best services it can provide to the people and organizations following our work is a Selected Series of those opinions, positions, and debates, current practically to the day they occur, to keep genuinely informed those who are attentive both to what is happening and to our vision.

Meanwhile, the Laboratory is working on its Artificial Intelligence Microlab and will share its conclusions and perceptions in due course, but the urgency of the topic allows no long delay. That is why today we are launching a Series on Artificial Intelligence, which we hope will serve as a basis for analysis, reflection, and conclusions about the implications that a topic of this magnitude forces us to address. No one, neither governments, international organizations, regional organizations, think tanks, nor individuals, can remain indifferent to its evolution.

As always, we hope our service proves useful.

BBC: The 3 Stages of Artificial Intelligence (AI), which stage we are in, and why many think the third could be fatal.

Since its launch in late November 2022, ChatGPT, the chatbot that uses artificial intelligence (AI) to answer questions or generate texts on request from users, has become the fastest-growing internet application in history.

The mass adoption of ChatGPT set a record among internet tools, and its consequences are palpable:

In just two months, it reached 100 million active users. The popular app TikTok took nine months to reach that milestone, and Instagram took two and a half years, according to data from the technology monitoring company Sensor Tower.

“In the 20 years we have been following the internet, we cannot recall a faster ramp-up in a consumer internet application,” said analysts at UBS (Union Bank of Switzerland), who reported the record in February 2023.

The massive popularity of ChatGPT, developed by the company OpenAI with financial backing from Microsoft, has sparked all kinds of discussions and speculation about the impact that generative artificial intelligence is already having, and will have, on our near future.

This branch of AI is dedicated to generating original content from existing data (usually extracted from the internet) in response to a user’s instructions.
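
To make this concrete, here is a minimal sketch of what instructing a generative model looks like in code, assuming the OpenAI Python client; the model name and prompt are illustrative choices, not details from the article:

```python
# A minimal sketch, assuming the OpenAI Python client ("pip install openai")
# and an API key in the OPENAI_API_KEY environment variable; the model name
# and prompt below are illustrative, not taken from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a four-line poem about the sea."}],
)

# The reply is original text generated from patterns learned in training data.
print(response.choices[0].message.content)
```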

The texts (from essays, poems, and jokes to computer code) and images (diagrams, photos, artworks in any style, and much more) produced by generative AIs such as ChatGPT, DALL-E, Bard, and AlphaCode (to name just a few of the better known) are, in some cases, so indistinguishable from human work that thousands of people have already used them to replace parts of their usual labor.

Students use them to do their homework; politicians commission speeches from them (Democratic Representative Jake Auchincloss used the tool in the U.S. Congress); and photographers invent snapshots of things that never happened, even winning awards for them, as German artist Boris Eldagsen did when he took first place in the latest Sony World Photography Awards with an AI-generated image.

This very article could have been written by a machine, and you probably wouldn’t notice.

The phenomenon has led to a revolution in human resources, with companies like the tech giant IBM announcing that it will stop hiring people to fill nearly 8,000 jobs that could be managed by AI.

A report from the investment bank Goldman Sachs estimated in late March 2023 that AI could replace a quarter of all jobs currently performed by humans, though it will also create more productivity and new jobs.

As AI advances, its ability to replace our work increases.

If all these changes overwhelm you, get ready for a fact that might be even more unsettling.

With all its impacts, what we are experiencing now is only the first stage in AI development.

According to experts, what could come next – the second stage – will be much more revolutionary.

The third and final stage, which could arrive very shortly after the second, is one so advanced that it would completely alter the world, potentially at the cost of human existence.

The three stages:

Artificial Intelligence technologies are classified by their ability to imitate human characteristics.

1. Narrow Artificial Intelligence (ANI)

The most basic category of AI is better known by its English acronym: ANI, for Artificial Narrow Intelligence.

It is called this because it focuses narrowly on a single task, performing repetitive work within a range predefined by its creators.

ANI systems are typically trained using a large dataset (e.g., from the internet) and can make decisions or take actions based on that training.
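
As an illustration of that training-then-deciding pattern, here is a minimal sketch using the scikit-learn library (the article names no specific tools, so this is an assumption): a model is fit to one dataset and can then make predictions, but only for that single narrow task.

```python
# A minimal sketch of a narrow AI, assuming scikit-learn: a classifier
# trained on one dataset for one task (recognizing handwritten digits).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)  # the "training on a large dataset" step

# The model may classify digits well, but it can do nothing outside this task.
print("accuracy on its one narrow task:", model.score(X_test, y_test))
```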

An ANI can match or surpass human intelligence and efficiency, but only in that specific area in which it operates.

An example is chess programs that use AI. They can beat the world champion in that discipline, but they cannot perform other tasks.

ANIs can outperform humans, but only in a specific area.

That’s why it is also known as “weak AI.”

All programs and tools that use AI today, even the most advanced and complex ones, are forms of ANI. And these systems are everywhere.

Smartphones are filled with applications that use this technology: GPS maps that can locate you anywhere in the world, apps that check the weather, and music and video services that learn your preferences and make recommendations.

Virtual assistants like Siri and Alexa are forms of ANI. So are the Google search engine and the robot that cleans your house.

The business world also makes extensive use of this technology: it runs in cars’ onboard computers, in the manufacturing of thousands of products, in finance, and even in hospitals for diagnostics.

Even more sophisticated systems like self-driving (autonomous) cars and the popular ChatGPT are forms of ANI, since they cannot operate outside the range predefined by their programmers and thus cannot make decisions on their own.

They also lack self-awareness, another trait of human intelligence.

However, some experts believe that systems programmed to learn automatically (machine learning) like ChatGPT or AutoGPT (an “autonomous agent” or “intelligent agent” that uses information from ChatGPT to perform certain subtasks autonomously) could move on to the next stage of development.
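
The article does not describe AutoGPT’s internals, but such an “autonomous agent” can be pictured as a chat model wrapped in a plan-act-observe loop. A conceptual sketch follows, with a hypothetical ask_model helper standing in for calls to a model like ChatGPT:

```python
# A conceptual sketch of an "autonomous agent" loop, not AutoGPT's actual
# code; ask_model is a hypothetical stand-in for a chat-model API call.
def ask_model(prompt: str) -> str:
    """Hypothetical call to a chat model such as ChatGPT."""
    return "done"  # stubbed out so the sketch runs without an API key

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        # 1. Plan: ask the model to choose the next subtask toward the goal.
        subtask = ask_model(f"Goal: {goal}\nDone so far: {history}\nNext subtask?")
        if subtask.strip().lower() == "done":
            break
        # 2. Act: here the model itself carries out the subtask; a real agent
        #    might instead call tools such as a browser or a file system.
        result = ask_model(f"Carry out this subtask: {subtask}")
        # 3. Observe: feed the result back into the next planning step.
        history.append((subtask, result))
    return history

run_agent("research a topic and summarize it")
```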

2. Artificial General Intelligence (AGI)

This category – Artificial General Intelligence – is achieved when a machine acquires cognitive abilities at the human level.

That is, when it can perform any intellectual task that a person does.

AGI has the same intellectual capacity as a human.

It is also known as “strong AI.”

The belief that we are on the verge of reaching this level of development is so strong that last March, over 1,000 technology experts asked AI companies to stop training programs that are more powerful than GPT-4, the most recent version of ChatGPT, for at least six months.

“AI systems with intelligence competing with human intelligence could pose profound risks to society and humanity,” they warned in an open letter, signed by, among others, Apple co-founder Steve Wozniak and Tesla, SpaceX, Neuralink, and Twitter owner Elon Musk (who was one of the co-founders of OpenAI before resigning from the board due to disagreements with the company’s leadership).

In the letter, published by the non-profit Future of Life Institute, the experts said that if companies do not quickly agree to halt their projects, “governments should intervene and impose a moratorium” so that solid safety measures can be designed and implemented.

Although this has not yet occurred, the U.S. government did summon the heads of major AI companies (Alphabet, Anthropic, Microsoft, and OpenAI) to agree on “new actions to promote responsible AI innovation.”

“AI is one of the most powerful technologies of our time, but to take advantage of the opportunities it presents, we must first mitigate its risks,” the White House said in a statement on May 4.

The U.S. Congress, meanwhile, called OpenAI’s CEO, Sam Altman, this Tuesday to answer questions about ChatGPT.

During the Senate hearing, Altman said it is “crucial” that his industry be regulated by the government as AI becomes “increasingly powerful.”

Carlos Ignacio Gutiérrez, a public policy researcher at the Future of Life Institute, explained to BBC Mundo that one of the major challenges posed by AI is that “there is no collegial body of experts who decide how to regulate it, as happens, for example, with the Intergovernmental Panel on Climate Change (IPCC).”

In their letter, the experts outlined their main concerns.

“Should we develop non-human minds that could eventually outnumber us, be more intelligent, make us obsolete, and replace us?” they questioned.

“Should we risk losing control of our civilization?”

Which brings us to the third and final stage of AI.

3. Artificial Superintelligence (ASI)

The concern of these computer scientists is related to a well-established theory that suggests that once we reach AGI, shortly afterward we will arrive at the final stage in the development of this technology: Artificial Superintelligence, which occurs when synthetic intelligence surpasses human intelligence.

The University of Oxford philosopher and AI expert Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.”[1]


[1] Nick Bostrom (in Swedish, Niklas Boström) is a Swedish philosopher at the University of Oxford, born in 1973. He is known for his work on the anthropic principle, existential risk, the ethics of human enhancement, the risks of superintelligence, and consequentialism. He obtained a PhD from the London School of Economics and Political Science in 2000. In 1998, Bostrom co-founded the World Transhumanist Association with David Pearce.

In 2004, he co-founded the Institute for Ethics and Emerging Technologies with James Hughes. In 2006, he became the founding director of the Future of Humanity Institute (FHI) at the University of Oxford, a multidisciplinary research center where prominent researchers from mathematics, computer science, philosophy, and other disciplines work together on large-scale questions for humanity.

In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology.

He is the author of more than 200 publications, including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller, and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002). In 2009 and 2015, Foreign Policy magazine included him in its list of the 100 most influential global thinkers. Bostrom’s work on superintelligence and his concern about the existential risk it poses to humanity in the new century have influenced the thinking of Elon Musk and Bill Gates.

Bostrom was born on March 10, 1973, in Helsingborg, Sweden. From a young age, he disliked school and completed his final years of high school through homeschooling, choosing to educate himself in a wide variety of disciplines such as anthropology, art, literature, and science.

He obtained a B.A. in philosophy, mathematics, mathematical logic, and artificial intelligence from the University of Gothenburg, followed by a Master’s degree in philosophy and physics from Stockholm University and an MSc in computational neuroscience from King’s College London. While at Stockholm University, he researched the relationship between language and reality through the work of the analytic philosopher Willard Van Orman Quine. In 2000, he earned a PhD in philosophy from the London School of Economics. He then spent two years (2000-2002) teaching at Yale University.

Bostrom has provided policy advice and consulting for a wide range of governments and organizations. He was a consultant for the UK Government Office for Science and an expert member of the World Economic Forum’s Agenda Council for Catastrophic Risks. He is an advisory board member of the Machine Intelligence Research Institute, the Future of Life Institute, and the Foundational Questions Institute, and an external advisor to the Centre for the Study of Existential Risk at Cambridge.

An important strand of Bostrom’s research concerns the future of humanity and long-term outcomes. He introduced the concept of “existential risk,” defined as one in which “an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” From a transhumanist standpoint, Bostrom warns about these dangers, and he lists several existential risks that arise from human action:

  • Misuse of technologies.
  • Nuclear wars.
  • Pandemics.
  • Posthuman aristocracy.
  • Poor programming of a superintelligence.
  • Autonomous superintelligence that adopts the values of power.
  • Cryonics and overpopulation.
  • Control of the state, institutions, NGOs, religious movements, etc., that prevent certain applications for human enhancement (transhumanism).
  • Technological difficulties. The inability to implement transhumanism.
  • Exhaustion of natural resources before they can be artificially created.

He considers one of the greatest risks to be the misuse of technology for hegemonic ends, consumerism, and militarism, among other factors, which has already produced consequences such as pollution, ecosystem degradation, and resource depletion. Misuse of technology also covers human error, such as a virus escaping from a laboratory.

In the book Global Catastrophic Risks (2008), editors Nick Bostrom and Milan Ćirković characterized the relationship between existential risk and the broader class of global-scale catastrophic risks, linking existential risk to observer selection effects and the Fermi paradox. In a 2013 article in the journal Global Policy, Bostrom offered a taxonomy of existential risk and proposed a reconceptualization of sustainability in dynamic terms: as a developmental trajectory that minimizes existential risk.

In his book Superintelligence: Paths, Dangers, Strategies (2014), Bostrom argues that with “cognitive performance vastly exceeding that of humans in virtually all areas of interest,” superintelligent agents could promise substantial benefits for society but also pose a significant existential risk. He therefore asserts that it is crucial to approach the field of artificial intelligence with caution and to take active measures to mitigate the risk we face.

He also argues in the book that the real challenge lies not so much in the intelligence machines may reach as in the moral development of our species. In the end, as Jean-Paul Sartre postulated, we are condemned to be free. That can be dangerous, but it is also an excellent opportunity to take another evolutionary leap.

Technological acceleration will continue to increase until it reaches a point that exceeds human capabilities (the technological singularity). One route to artificial intelligence is brute force: given enough speed, a machine can exhaustively analyze all possible solutions. This is the case in chess, where the machine’s intelligence rests on its speed in calculating variations, allowing it to foresee everything that could happen on the board, as sketched below.
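
As a minimal sketch of that brute-force idea (using a toy take-away game rather than chess, to keep it short), exhaustive minimax search scores a position by analyzing every possible line of play:

```python
# A minimal sketch of "intelligence through brute force": exhaustive
# minimax search over a game tree, the core idea behind classic chess
# engines (a toy game stands in for chess here).
def minimax(pile: int, maximizing: bool) -> int:
    """Score a take-away position by analyzing every line of play.

    Players alternately take 1 or 2 stones; whoever takes the last stone
    wins. Returns +1 if the maximizing player wins with perfect play from
    this position, -1 otherwise."""
    if pile == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing) for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

# The machine "foresees" the outcome of every variation before moving:
print(minimax(3, True))  # -1: a 3-stone pile is lost for the side to move
print(minimax(4, True))  # +1: 4 stones is a win (take 1, leaving 3)
```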

In January 2015, Nick Bostrom, Stephen Hawking, Max Tegmark, Elon Musk, Martin Rees, and Jaan Tallinn, among others, signed an open letter from the Future of Life Institute warning about the potential dangers of artificial intelligence, acknowledging that “it is important and timely to research how to develop AI systems that are robust and beneficial, and that there are specific research directions that can be pursued today.” Rather than warning of an existential disaster, the letter calls for more research to reap the benefits of AI “while avoiding possible setbacks.”

The letter was signed not only by figures from outside AI research, such as Hawking, Musk, and Bostrom, but also by prominent computer scientists (including Demis Hassabis, one of the leading AI researchers); after all, if they develop an artificial intelligence that does not share the best human values, it will mean they were not intelligent enough to control their own creations.

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy, in which he critiques previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolutionary theory, game theory, and quantum physics) and argues that an anthropic theory is needed to deal with it. He introduced the “self-sampling assumption” and the “self-indication assumption” and showed how they lead to different conclusions in a number of cases, noting that, in certain thought experiments, each is affected by paradoxes or counterintuitive implications. He suggested that progress requires extending the self-sampling assumption to a “strong self-sampling assumption,” which replaces “observers” with “observer moments,” allowing the reference class to be relativized (he formalized this in the “observation equation”).
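
A standard toy case, the “incubator” thought experiment (a common illustration in this literature, not something quoted here), shows how the two assumptions diverge:

```latex
% A fair coin is tossed: heads creates one observer, tails creates two.
% Each observer learns only that they exist.
%
% Under the Self-Sampling Assumption (SSA) one reasons as a random sample
% from the observers within each possible world, so the prior is unchanged:
\[
P_{\mathrm{SSA}}(\text{heads} \mid \text{I exist}) = \frac{1}{2}.
\]
% Under the Self-Indication Assumption (SIA) worlds are additionally
% weighted by how many observers they contain:
\[
P_{\mathrm{SIA}}(\text{heads} \mid \text{I exist})
  = \frac{\frac{1}{2} \cdot 1}{\frac{1}{2} \cdot 1 + \frac{1}{2} \cdot 2}
  = \frac{1}{3}.
\]
```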

In later works, he described the phenomenon of the “anthropic shadow,” an effect of observation selection that prevents observers from observing certain types of catastrophes in their recent geological and evolutionary past. The types of catastrophes that fall under the anthropic shadow tend to be underestimated unless statistical corrections are made.

Bostrom advocates for achieving “human enhancement,” or “self-improvement and human perfectibility through the ethical application of science,” while criticizing bio-conservative positions.

Together with philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish valid criticism of a proposed change to a human trait from criticism motivated merely by resistance to change? The reversal test attempts to resolve this by asking whether it would also be considered good to alter the trait in the opposite direction: if a change in either direction is judged bad, the critic must explain why the current state happens to be optimal.

In 1998, Bostrom co-founded the World Transhumanist Association, now called Humanity+, with philosopher David Pearce. In 2004, he co-founded the Institute for Ethics and Emerging Technologies with sociologist James Hughes, although he is no longer involved with either of these organizations.

In 2009, Foreign Policy magazine mentioned Bostrom in its list of global thinkers for “not accepting limits on human potential.”

Humanity Plus, formerly known as the World Transhumanist Association, was originally founded by Nick Bostrom and David Pearce. It represents an international cultural and intellectual movement whose ultimate goal is to transform the human condition through the development and broad availability of technologies that enhance human capabilities, whether physical, psychological, or intellectual, drawing on nanotechnology, genetic engineering, and cybernetics.

Humanity Plus is a nonprofit organization that works to promote the discussion of the possibilities for radical enhancement of human capabilities through technology. Many theorists and supporters of transhumanism seek to apply reason, science, and technology to reduce poverty, diseases, disabilities, and malnutrition worldwide. Transhumanism is distinguished by its particular focus on the application of technologies for the individual enhancement of human bodies.

Humanity Plus has several objectives:

  1. Support the discussion and public awareness of emerging technologies.
  2. Advocate for the right of individuals in free and democratic societies to adopt technologies that expand human capabilities.
  3. Anticipate and propose solutions for the potential consequences of new technologies.
  4. Actively encourage and support the development of emerging technologies that are believed to have a sufficiently probable positive benefit.

Bostrom has suggested that a technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are achieved, proposing the principle of differential technological development. This principle holds that humans should delay the development of dangerous technologies, particularly those that increase existential risk, and accelerate the development of beneficial technologies, especially those that protect us from existential risks posed by nature or other technologies.
