by Laboratory of the Future analysis team | Jun 19, 2023 | Artificial intelligence, News
THE RISE OF ARTIFICIAL INTELLIGENCE. CAN AUTOMATION REDEFINE THE LABOR MARKET WITHOUT LEAVING ANYONE BEHIND?
Experts believe that AI will partially automate the affected jobs while encouraging the emergence of new professional profiles. At the same time, challenges such as professional reskilling or ethical issues will need to be addressed.
ALMOST ALL PROFESSIONAL ACTIVITIES HAVE PROCESSES THAT CAN BE AUTOMATED TO ACHIEVE MORE EFFICIENCY. While the use of technology is nothing new in any sector, there is an increasing generalization of the use of artificial intelligence (AI) to save time and costs, while increasing productivity.
Recently, with the rise of generative AI — such as ChatGPT — this trend is affecting both intellectual jobs and those that require more physical effort. In light of this new reality in the work environment, voices for and against automation are already emerging due to its impact on the professional market. At the same time, reflections on how to train workers or the ethical implications of this coexistence between humans and intelligent machines are gaining ground in the public debate.
The relevance of this historical moment demands a continuous exchange of credible opinions, which is why, with this premise in mind, El Confidencial organized a roundtable titled “The Future of Work: Automation for More and Better Jobs.” The expert panel included representatives from companies affected by automation and the use of AI, technology companies, and academics specializing in the subject. The participants were Iñaki Ugarte, General Director of First Mile Operations at Amazon Spain; Belén Martín, Vice President of Hybrid Cloud at IBM Consulting; Manuel Espiñeira, Director of Digital Business Technologies Solutions at Minsait, an Indra company; and Ignacio López Sánchez, Professor of Business Organization at the Complutense University of Madrid (UCM).
To contextualize and understand the origins of the automation boom, Iñaki Ugarte listed “three factors” that have accelerated its penetration: “New technologies alongside the digital era and big data; Millennial and Generation Z workers as digital natives or digital immigrants; and the international context, which, after the pandemic and the complex geopolitical scenario, is forcing industry to relocate.” Regarding whether automation will eliminate jobs, he had a clear answer: “Far from there being less work for people, new jobs are actually being created.”
The logical question is what types of jobs are being created with the advent of AI in the workplace. In this regard, Belén Martín provided examples: “Just in the last three months, two profiles have emerged that are revolutionizing everything. One of them is the prompt engineer, people specialized in asking questions to artificial intelligences — they function as a sort of instructor — and whose Spanish translation could be ‘ingeniero de peticiones’. The other is the ethical algorithm trainer, and their role is to prevent social biases.” The Vice President of Hybrid Cloud at IBM Consulting clarified that “although these profiles seem related only to STEM disciplines — science, technology, engineering, and mathematics — there are also profiles in the humanities such as linguists or philosophers, which opens up an unlimited range of possibilities.”
“In the last three months, profiles like ‘prompt engineer’ and ethical algorithm trainer have appeared,” Belén Martín (IBM)
Minsait, an Indra subsidiary, considers that “automation will be partial in most jobs,” as its spokesperson explained during the roundtable. “Approximately 60% of jobs have the potential for partial automation of their tasks, but only 7% of them can actually be automated in more than 50% of their processes,” clarified Manuel Espiñeira, who then recalled that “in the 1950s, there was a catalog in the U.S. of jobs that were expected to disappear with the automation of the time. The list included 270 jobs, and yet only the elevator operator profession disappeared.” To elaborate further on his forecast for the future, he specified that “the key is the quality of the information analysis a professional can do with AI tools, as it allows them to make high-level decisions, such as a better diagnosis in the case of a doctor. Which shows, in any case, that doctors will continue to exist as a profession,” he assured.
This view was shared by Ignacio López Sánchez. For the Professor of Business Organization at the Complutense University of Madrid, “there are certain jobs with a relatively high percentage of repetitive tasks, and therefore automatable, but others not so much. This will force companies to reorganize and define new profiles that, in some cases, will have AI as copilots,” he emphasized. Furthermore, the professor raised a challenge picked up by the rest of the participants in the discussion: “Will we be able to give workers the proper training to perform these new tasks that are coming? And more importantly: What will we do with the people whose jobs will disappear? Will they be able to be retrained for new profiles?” he asked.
Reskilling, educational flexibility, and ethical oversight:
From Iñaki Ugarte’s perspective, “every time a new disruptive technology appears, the same questions arise, and the answer must be clear: no one should be left behind,” he stressed. “But the issue of reskilling has a handicap — the educational system, as it lacks the agility to adapt to new needs,” he continued. Belén Martín agreed and confirmed that “retraining workers is indeed the only way to maintain those affected jobs, and moreover, investing in them through training generates a sense of belonging, which is really useful within the company,” she pointed out.
“60% of jobs will automate part of their tasks, but only 7% will do so in more than 50% of their processes,” Manuel Espiñeira (Minsait)
To highlight the consensus on this topic, Manuel Espiñeira also pointed out that “the academic curriculum requires more flexibility, especially when it comes to new technologies.” “Until now, the university system teaches people to think, but it is the companies that teach how to apply what has been learned,” he specified. His discussion partner, López Sánchez, further expanded on this when he emphasized that “adaptations should be quick, as well as identifying which positions are in demand. We cannot do this from the academic environment; it is the companies, as generators of wealth and employment, that need to put their needs on the table and communicate them to universities. Even so,” insisted the professor from UCM, “there will still be the problem, at least for now, that formal education is especially difficult to modify in Spain and Europe.”
In the final stretch of the debate, a classic issue in discussions about automation and AI came up: the ethical implications. “It has been shown that when artificial intelligences are trained, they carry over the social biases that we humans have. There are traces of the developer in the technology itself. There are many examples of this in recent decades. One of the biggest challenges is precisely ensuring that this doesn’t happen,” admitted Belén Martín. To address this problem, Iñaki Ugarte advised “using people as tools, that is, forming diverse and representative groups in which social diversity is guaranteed to avoid the transfer of biases to AI.”
Another complementary solution, this time proposed by Ignacio López Sánchez, is “to create supervisory bodies, as already happens in other areas with entities like the National Securities Market Commission or the European Central Bank, for example. However, this would be for supervision, not regulation.” Manuel Espiñeira agreed with his words and added that “excessive regulation could limit the development of AI and other associated technologies.” To conclude, the Minsait expert explained that “the real challenge is to reach a balance point in regulation.”
Laboratory of the Future Analysis Team. Meeting: roundtable organized by the newspaper El Confidencial (Spain) on “The Rise of AI: Can Automation Redefine the Labor Market Without Leaving Anyone Behind?”
by Laboratory of the Future analysis team | Jun 17, 2023 | Artificial intelligence
A PHILOSOPHICAL VISION OF ARTIFICIAL INTELLIGENCE. “WEAK DEMOCRACIES, CAPITALISM, AND ARTIFICIAL INTELLIGENCE ARE A DANGEROUS COMBINATION”
Mark Coeckelbergh: “Weak democracies, capitalism, and artificial intelligence are a dangerous combination.” The philosopher points out that institutions need to rely on experts to regulate technology, but without forgetting the citizens.
Mark Coeckelbergh has captured the attention of an audience unaccustomed to philosophical debates: engineering students filled a room to listen to this expert in technology ethics, invited by the Robotics and Industrial Informatics Institute of the Universitat Politècnica de Catalunya. Coeckelbergh, a prolific author — two of his books are published in Spanish by Cátedra, Ethics of Artificial Intelligence (2021) and Political Philosophy of Artificial Intelligence (2023) — knows how important it is to build bridges between those who develop technologies and those who must think about how to use them.
Question: Do you think that students, engineers, and major tech companies take the ethical aspects of artificial intelligence (AI) into account?
Answer: People are aware that this technology will affect our lives because it’s everywhere, but at the same time, we are confused because the changes are very fast and complex. That’s why I think it’s important that education and research try to find an interdisciplinary path between philosophy, programming, and robotics to address these ethical issues.
Question: And what about politics?
Answer: Yes, we need to create more links between experts and politicians, but technical opinions should not be the only ones that matter. We need to figure out how we can organize our democracy to consider the vision of experts, yet still make decisions ourselves. Tech companies are gaining more and more power, and this is a problem because the sovereignty of nations and cities is diminishing. How much of our technological future should be left in the hands of private initiatives, and how much should be public and controlled by democracies?
Question: Is artificial intelligence a threat to democracy, or are democracies already weakened?
Answer: Democracy is already vulnerable because we don’t really have complete democracies. It’s like when Gandhi was asked what he thought of Western civilization, and he said it was a good idea. The same goes for democracy: it’s a good idea, but we don’t have it fully. For me, it’s not enough to vote and have majorities; that is too vulnerable to populism, not sufficiently participatory, and it doesn’t take citizens seriously. There’s a lack of education and knowledge to achieve real democracy, and the same is true for technology. People have to understand that technology is also political, and we need to ask ourselves whether it’s good for democracy that communication infrastructures like Twitter are in private hands.
We use technology uncritically, and while a few benefit, the rest of us are exploited for our data.
Question: In what way does artificial intelligence threaten democracy?
Answer: We deal with technology without thinking; we use it uncritically, but it shapes us and uses us as instruments for power, control, and exploitation of our data. And while a few benefit, the rest of us are exploited for our data. This affects democracies because, since they are not very resilient, technology polarizes political trends even further. This combination of weak democracies, capitalism, and artificial intelligence is dangerous. But I do believe it can be used in a more constructive way, to improve life for everyone and not just a few.
Question: Some see artificial intelligence as a way to work less and have more freedom, while others see it as a threat to their jobs.
Answer: I think AI right now empowers those who already have a privileged position or good education: for example, they can use it to start a company. But there will be changes in employment, and there will be some transformation of the economy, so we need to be prepared. On the other hand, the argument that technology makes things easier… Until now, it has led to precarious jobs, like Uber drivers, and to jobs that may be good but stressful. For example, we are all slaves to email, and it came as a solution.
Question: So, the problem is not so much the technology but the system.
Answer: It’s a combination of both things, but indeed, these new technological possibilities force us to question the system more than ever. Today, the political conflict is played out in the realm of technology.
Question: What impact does it have on the media?
Answer: In this environment, the problem isn’t that people believe a lie, but that they don’t know what is a lie and what is truth. Quality journalism is very important to provide context and to try to understand the world. I think it can help people gain more knowledge, even if Artificial Intelligence is used for some tasks in the profession. Philosophers, journalists, educators, we have to provide the tools to interpret the world, because when knowledge is lacking and confusion reigns, it’s easier for a leader to come with a simple, populist solution, as has already happened in some countries in Europe.
Question: Can technology make governments more technocratic?
Answer: Politicians are confused; they feel the pressure from lobbies and create regulatory frameworks, but at no point have citizens had a say. States are becoming more and more bureaucratic because they give power to those who control artificial intelligence. So, who is responsible? This kind of system, as Hannah Arendt said, leads to horrors. We must fight against it, with regulations that allow us to see why algorithms make the decisions they do and that allow us to know who is responsible.
Laboratory of the Future Analysis Team. Article/Report by Josep Cata Figuls.
by Laboratory of the Future analysis team | Jun 13, 2023 | Artificial intelligence, geopolitics, News, States and technology
THE SWIFT REGULATION OF ARTIFICIAL INTELLIGENCE IN THE PEOPLE’S REPUBLIC OF CHINA
The draft regulation is a mix of sensible restrictions on the risks of AI and a continuation of the strong tradition of Chinese government intervention in the tech industry.
In April 2023, there was a significant development in the Artificial Intelligence space in China. The Cyberspace Administration of China, the country’s internet regulator, published a draft regulation on generative Artificial Intelligence. Named “Measures for the Management of Generative Artificial Intelligence Services,” the document does not call out any specific company, but the way it is written makes it clear that it was inspired by the relentless launch of large language model chatbots in China and the United States.
Last week, I participated in the CBC News podcast “Nothing Is Foreign” to discuss the draft regulation and what it means for the Chinese government to take such rapid action on a technology that is still very new.
As I said on the podcast, I see the draft regulation as a mix of sensible restrictions on the risks of AI and a continuation of the strong tradition of Chinese government intervention in the tech industry.
Many of the clauses in the draft regulation are principles that AI critics in the West advocate for: the data used to train generative AI models must not infringe on intellectual property or privacy; algorithms must not discriminate against users based on race, ethnicity, age, gender, and other attributes; AI companies must be transparent about how they obtained the training data and how they hired humans to label the data.
At the same time, there are rules that other countries would likely reject. The government requires people using these generative AI tools to register with their real identity, just as they would on any social platform in China. The content generated by AI software must also “reflect core socialist values.”
None of these requirements are surprising. The Chinese government has tightly regulated tech companies in recent years, punishing platforms for lax moderation and incorporating new products into the established censorship regime.
The document makes that regulatory tradition easy to see: there are frequent mentions of other rules that have been passed in China regarding personal data, algorithms, deepfakes, cybersecurity, etc. In a way, it feels as if these discrete documents are slowly forming a network of rules that help the government address new challenges in the technological era.
The fact that the Chinese government can react so quickly to a new technological phenomenon is a double-edged sword. The strength of this approach, which examines each new technological trend separately, “is its precision, creating specific remedies for specific problems,” wrote Matt Sheehan, a fellow at the Carnegie Endowment for International Peace. “The weakness is its fragmented nature, with regulators forced to draft new regulations for new applications or problems.” If the government is busy playing whack-a-mole with new rules, it could miss the opportunity to think strategically about a long-term vision for AI. We can contrast this approach with that of the EU, which has been working on a “hugely ambitious” AI Act for years, as my colleague Melissa recently explained. (A recent review of the AI Act draft included regulations on generative AI).
There’s one point I didn’t mention on the podcast but find fascinating. Despite the restrictive nature of the document, it is also a tacit encouragement for companies to continue working on AI. The proposed maximum fine for violating the rules is 100,000 RMB, around 15,000 dollars, a tiny amount for any company capable of building large language models.
Of course, if a company is fined every time its AI model violates the rules, the amounts could add up. But the size of the fine suggests that the rules are not meant to scare companies into not investing in AI. As Angela Zhang, a law professor at the University of Hong Kong, recently wrote, the government is playing multiple roles: “The Chinese government should not only be seen as a regulator but also as an advocate, sponsor, and investor in AI. The ministries that advocate for the development of AI, along with state sponsors and investors, are prepared to become a powerful counterbalance to strict AI regulation.”
It may take a few months before regulators finalize the draft, and months more before it comes into effect. But I know that many people, including myself, will be watching for any changes.
Who knows? By the time the regulation comes into effect, there might be another new viral AI product that forces the government to come up with even more rules.
Laboratory of the Future Analysis Team / MIT Technology Review, Massachusetts Institute of Technology (United States). Article by Zeyi Yang.
Translation from English: Translation and Interpretation Team of the Future Laboratory.
by Laboratory of the Future analysis team | Jun 11, 2023 | Artificial intelligence
PAUSING AI DEVELOPMENTS IS NOT ENOUGH. WE NEED TO SHUT IT ALL DOWN.
Yudkowsky is a U.S. decision theorist and leads research at the Machine Intelligence Research Institute. He has been working on Artificial General Intelligence alignment since 2001 and is widely regarded as one of the founders of the field.
An open letter published in late March 2023 calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped forward and signed it. It is a marginal improvement.
I refrained from signing because I believe the letter underestimates the severity of the situation and asks for too little to resolve it.
The key issue is not “human-level intelligence” (as the open letter says); it’s what happens after AI reaches an intelligence greater than human. The key thresholds there may not be obvious, we certainly cannot calculate in advance what will happen and when, and it currently seems imaginable that a research lab could cross critical lines unknowingly.
Many researchers immersed in these topics, including myself, expect that the most likely outcome of building a superhumanly intelligent AI, under any circumstances remotely similar to the current ones, is that literally everyone on Earth will die. Not like “maybe possibly some remote chance,” but like “that’s the obvious thing that would happen.” It’s not that you can’t, in principle, survive by creating something much smarter than you; it’s that it would require precision and preparation and new scientific knowledge, and probably not having AI systems made up of giant, inscrutable arrays of fractional numbers.
Without that precision and preparation, the most likely outcome is an AI that doesn’t do what we want and doesn’t care about us or sentient life in general. That kind of care is something that, in principle, could be imbued into an AI, but we are not ready and currently do not know how to do it.
In the absence of that care, we get “AI doesn’t love you, nor does it hate you, and you are made of atoms it can use for something else.”
The likely outcome of humanity facing an opposing superhuman intelligence is total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15,” “the 11th century trying to fight against the 21st century,” and “Australopithecus trying to fight against Homo sapiens.”
To visualize a hostile superhuman AI, don’t imagine a lifeless, intelligent thinker living inside the Internet and sending malicious emails. Imagine an entire alien civilization, thinking millions of times faster than humans, initially confined to computers, in a world of creatures that are, from its perspective, very stupid and slow. A sufficiently intelligent AI will not stay confined to computers for long. In today’s world, it can send DNA strands through email to laboratories that will produce proteins on demand, enabling an AI initially confined to the Internet to build artificial life forms or directly start post-biological molecular manufacturing.
If someone builds an AI that is too powerful, under the current conditions, I expect that every member of the human species and all biological life on Earth will die shortly thereafter.
There is no proposed plan for how we could do such a thing and survive. OpenAI’s openly declared intent is to have some future AI do our AI alignment task. Just hearing that this is the plan should be enough to make any sensible person panic. The other leading AI lab, DeepMind, has no plan.
A side note: none of this danger depends on whether AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize and calculate results that meet sufficiently complicated outcome criteria. That said, it would be negligent in my moral duties as a human if I didn’t also mention that we have no idea how to determine if AI systems are self-aware, as we have no idea how to decode anything that happens in the giant inscrutable matrices, and therefore, at some point, unknowingly, we may create digital minds that are truly self-aware and should have rights and should not be property.
The rule that most people aware of these problems would have backed 50 years ago was that if an AI system can speak fluently and says it is self-aware and demands human rights, that should be a barrier to people simply owning that AI and using it beyond that point. We’ve already passed that old line in the sand. And that was probably correct. I agree that current AIs are probably just imitating conversation about self-awareness from their training data. But I point out that, with the little understanding we have of the internal parts of these systems, we actually don’t know.
If that’s our state of ignorance for GPT-4, and GPT-5 is the same giant leap in capability as GPT-3 to GPT-4, I think we will no longer be able to justifiably say “probably not self-aware” if we allow people to build GPT-5. It will just be “I don’t know; no one knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because not being sure means you have no idea what you’re doing. And that’s dangerous, and you should stop.
On February 7, Satya Nadella, CEO of Microsoft, publicly boasted that the new Bing would make Google “come out and show it can dance.” “I want people to know we made them dance,” he said.
This is not how a Microsoft CEO speaks in a sane world. It shows an overwhelming gap between the seriousness with which we take the problem and the seriousness with which we needed to take it 30 years ago.
We’re not going to close that gap in six months.
More than 60 years passed from when the notion of artificial intelligence was first proposed and studied until we reached current capabilities. Solving the safety of superhuman intelligence, not perfect safety, safety in the sense of “not literally killing everyone,” could reasonably take at least half of that time. And what’s tricky about trying this with superhuman intelligence is that if you mess up the first attempt, you can’t learn from your mistakes, because you’re dead. Humanity does not get to learn from the mistake, dust itself off, and try again, as with other challenges we’ve overcome in our history, because we’re all gone.
Trying to do something right on the first truly critical attempt is an extraordinary task, both in science and engineering. We’re not coming in with anything like the approach that would be required to do it successfully. If we applied the lesser engineering rigor standards that apply to a bridge designed to carry a couple thousand cars to the emerging field of Artificial General Intelligence, the whole field would be shut down tomorrow.
We are not prepared. We are not on track to be prepared in a reasonable time window. There is no plan. Progress in AI capabilities is massive, far ahead of the progress in AI alignment or even understanding what the hell is going on inside those systems. If we really do this, we are all going to die.
Many researchers working on these systems think we are rushing toward a catastrophe, and more of them dare to say it privately than publicly; but they think they can’t unilaterally stop the forward fall, that others will continue even if they personally quit their jobs. And so everyone thinks they might as well keep going too. This is a stupid state of affairs and an undignified way for Earth to die, and the rest of humanity should intervene at this point and help the industry solve its collective action problem.
Some of my friends have recently informed me that when people outside the AI industry first hear about the extinction risk of Artificial General Intelligence, their reaction is “maybe we shouldn’t build AGI then.”
Hearing this gave me a small flicker of hope, because it is a simpler, more sensible, and frankly more reasonable reaction than what I’ve heard in the last 20 years of trying to get someone in the industry to take things seriously. Anyone who speaks like this deserves to hear how serious the situation actually is, and not be told that a six-month moratorium will solve it.
On March 16, my partner sent me this email. (Later they gave me permission to share it here).
“Nina lost a tooth! The usual way kids do it, not by carelessness! Seeing GPT-4 pass those standardized tests on the same day Nina reached a childhood milestone triggered an emotional wave that made me lose my head for a minute. Everything is moving too fast. I’m worried that sharing this might increase your own pain, but I’d rather you know than each of us suffer alone.”
When the internal conversation is about the pain of seeing your child lose their first tooth and thinking they won’t get the chance to grow up, I think we are past the point of playing political chess over a six-month moratorium.
If there were a plan for Earth to survive, if we just approved a six-month moratorium, I would support that plan. There is no such plan.
This is what would actually need to be done:
The moratorium on new large-scale training must be indefinite and global. There can be no exceptions, even for governments or the military. If the policy starts with the US, then China must see that the US is not seeking an advantage but is trying to avoid a terribly dangerous technology that cannot have a true owner and will kill everyone in the US, China, and Earth. If I had infinite freedom to write laws, I might make a single exception for AI to be trained solely to solve problems in biology and biotechnology, not trained on Internet text, and not at the level where they begin to talk or plan; but if that were remotely complicating the problem, I would immediately discard that proposal and say to simply shut everything down.
Turn off all large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all large training runs. Put a limit on the amount of computing power anyone can use to train an AI system and move it downward over the coming years to offset the more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent prohibited activities from shifting to other places. Track all GPUs sold. If intelligence says a country outside the agreement is building a GPU cluster, be less afraid of a shooting conflict between nations than of violating the moratorium; be willing to destroy a rogue data center through an airstrike.
Do not frame anything as a conflict between national interests; make it clear that anyone talking about an arms race is a fool. That we all live or die as one, in this, is not policy but a fact of nature. Make explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a total nuclear exchange, and that allied nuclear countries are willing to take some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
That is the kind of policy shift that would make my partner and me hug and tell each other that a miracle happened, and now there’s a chance Nina will live. Sane people who hear about this for the first time and sensibly say “maybe we shouldn’t” deserve to hear, honestly, what would be needed for that to happen. And when their policy request is that big, the only way it will pass is if lawmakers realize that if they do business as usual and do what’s politically easy, that means their own children are going to die too.
We are not ready. We are not on track to be significantly more prepared in the foreseeable future. If we proceed with this, everyone will die, including the children who didn’t choose this and did nothing wrong.
Laboratory of the Future Analysis Team / Time Magazine (United States). Article by Eliezer Yudkowsky.
Translation from English: Translation and Interpretation Team of the Future Laboratory.
by Laboratory of the Future analysis team | Jun 7, 2023 | States and technology
Bremmer is a foreign affairs columnist and editor-at-large of TIME. He is the president of Eurasia Group, a political risk consultancy, and of GZERO Media, a company focused on providing smart and engaging coverage of international affairs. He teaches applied geopolitics at the School of International and Public Affairs at Columbia University, and his most recent book is The Power of Crisis.
The growing development of artificial intelligence will produce medical breakthroughs that will save and improve billions of lives. It will become the most powerful engine of prosperity in history. It will give an incalculable number of people, including generations yet to be born, powerful tools that their ancestors never imagined. But the risks and challenges posed by Artificial Intelligence are also becoming clear, and now is the time to understand and address them. Here are the biggest ones.
The health of democracy and free markets depends on access to accurate and verifiable information. In recent years, social media has made it harder to distinguish fact from fiction, but advances in AI will unleash legions of bots that seem much more human than those we’ve encountered to date. Much more sophisticated deepfakes of audio and video will undermine our (already diminished) trust in those who serve in government and those who report the news. In China, and later in its client states, AI will take facial recognition and other tools that can be used for state surveillance to exponentially higher levels of sophistication.
This problem extends beyond our institutions because the production of “generative AI,” artificial intelligence that generates sophisticated written, visual, and other types of content in response to user prompts, is not limited to large tech companies. Anyone with a laptop and basic programming skills already has access to much more powerful AI models than existed just a few months ago and can produce unprecedented volumes of content. This proliferation challenge is about to grow exponentially as millions of people will have their own GPT running on real-time data available on the Internet. The AI revolution will enable criminals, terrorists, and other wrongdoers to code malware, create biological weapons, manipulate financial markets, and distort public opinion with astonishing ease.
Artificial intelligence can also exacerbate inequality, both within societies among small groups with wealth, access, or special skills, and between wealthier and poorer nations.
Artificial Intelligence will create disruption in the workforce. While technological advances in the past have mainly created more jobs than they have eliminated and increased productivity and prosperity in general, there are crucial caveats. Jobs created by major technological changes in the workplace require skill sets different from those they have destroyed, and the transition is never easy. Workers need to be retrained. Those who cannot retrain must be protected by a social safety net that varies in strength from place to place. Both issues are costly, and it will never be easy for governments and private companies to agree on how to share this burden.
More fundamentally, the displacement caused by Artificial Intelligence will happen on a much broader and much faster scale than past transitions. The disruption of the transition will generate economic and therefore political upheaval worldwide.
Finally, the AI revolution will also impose an emotional and spiritual cost. Humans are social animals. We thrive on interaction with others and wither in isolation. Too often, bots will replace humans as companions for many people, and by the time scientists and doctors understand the long-term impact of this trend, our growing dependence on artificial intelligence, even for companionship, may be irreversible. This may be the most important challenge of AI.
The response:
Challenges like these will require a global response. Today, artificial intelligence is not regulated by government officials but by tech companies. The reason is simple: you cannot create rules for a game you do not understand. But relying on tech companies to regulate their products is not a sustainable plan. They primarily exist to make a profit, not to protect consumers, nations, or the planet. It’s a bit like letting energy companies lead strategies to combat climate change, except that warming and its dangers are already understood in ways that the risks of AI are not, leaving us without lobbying groups that can help push for the adoption of smart and healthy policies.
So, where are the solutions? We will need national action, global cooperation, and some common-sense cooperation from the governments of the United States and China, in particular.
It will always be easier to achieve well-coordinated policy within national governments than at the international level, but political leaders have their own priorities. In Washington, policymakers have focused mainly on winning a race with China to develop the technological products that best support 21st-century security and prosperity, and this has encouraged them to give tech companies serving the national interest something close to a free pass. Chinese lawmakers, fearful that AI tools could undermine their political authority, have regulated much more aggressively. European lawmakers have focused less on security or profits and more on the social impact of AI advances.
But all will have to establish rules in the coming years that limit AI bots’ ability to undermine political institutions, financial markets, and national security. This means identifying and tracking bad actors, as well as helping people separate real information from false. Unfortunately, these are big, costly, and complicated steps that lawmakers are unlikely to take until they face AI-generated (but real) crises, and such action cannot happen until the discussion and debate on these issues begin.
Unlike with climate change, the world’s governments have yet to agree that the AI revolution poses an existential cross-border challenge. Here, the United Nations has a role to play as the only institution with the convening power to develop a global consensus. A UN-led AI approach will never be the most efficient answer, but it will help achieve a consensus on the nature of the problem and marshal international resources.
By forging an agreement on which risks are most likely, most impactful, and emerging most rapidly, an AI-focused equivalent of the Intergovernmental Panel on Climate Change could convene regular meetings and produce “State of AI” assessments that delve ever deeper into the heart of AI-related threats. Just like with climate change, this process will also need to include the participation of public policy officials, scientists, technologists, private sector delegates, and individual activists representing the majority of member states to create a COP (Conference of the Parties) process to address threats to biosafety, information freedom, workforce health, etc. There could also be an artificial intelligence agency inspired by the International Atomic Energy Agency to help monitor AI proliferation.
That said, there is no way to address the rapidly metastasizing risks created by the AI revolution without a much-needed infusion of common sense into U.S.-China relations. After all, it is the technological competition between the two countries and their major tech companies that creates the greatest risk of war, especially as AI plays an increasingly larger role in weapons and military planning.
Beijing and Washington must develop and maintain high-level discussions about the emerging threats to both countries (and the world) and the best ways to contain them. And they cannot wait for an AI version of the Cuban Missile Crisis to force them into genuine transparency in handling their competition. To create an “AI arms control agreement” with mutual monitoring and verification, each government must listen not only to each other but also to the technologists on both sides who understand the risks that need to be contained.
Crazy? Absolutely. The timing is terrible because these advancements are coming at a time of intense competition between two powerful countries that truly do not trust each other.
But if the Americans and Soviets were able to build a functioning arms control infrastructure in the 1970s and 1980s, the U.S. and China can build an equivalent for the 21st century. Let’s hope they realize they have no other option before a catastrophe makes it inevitably obvious.
Laboratory of the Future Analysis Team / Time Magazine (United States). Article by Ian Bremmer. Translation from English: Translation and Interpretation Team of the Future Laboratory.