Bremmer is a foreign affairs columnist and editor-at-large of TIME. He is the president of Eurasia Group, a political risk consultancy, and of GZERO Media, a company focused on providing smart and engaging coverage of international affairs. He teaches applied geopolitics at the School of International and Public Affairs at Columbia University, and his most recent book is The Power of Crisis.
The rapid development of artificial intelligence will produce medical breakthroughs that save and improve billions of lives. It will become the most powerful engine of prosperity in history. It will give an incalculable number of people, including generations yet to be born, powerful tools their ancestors never imagined. But the risks and challenges posed by AI are also becoming clear, and now is the time to understand and address them. Here are the biggest ones.
The health of democracy and free markets depends on access to accurate and verifiable information. In recent years, social media has made it harder to distinguish fact from fiction, but advances in AI will unleash legions of bots that seem much more human than those we’ve encountered to date. Much more sophisticated deepfakes of audio and video will undermine our (already diminished) trust in those who serve in government and those who report the news. In China, and later in its client states, AI will take facial recognition and other tools that can be used for state surveillance to exponentially higher levels of sophistication.
This problem extends beyond our institutions, because generative AI, artificial intelligence that produces sophisticated written, visual, and other content in response to user prompts, is no longer the preserve of large tech companies. Anyone with a laptop and basic programming skills now has access to far more powerful AI models than existed just a few months ago and can produce unprecedented volumes of content. This proliferation challenge is about to grow exponentially as millions of people gain their own GPT-style tools running on real-time data from the Internet. The AI revolution will enable criminals, terrorists, and other wrongdoers to write malware, create biological weapons, manipulate financial markets, and distort public opinion with astonishing ease.
Artificial intelligence can also exacerbate inequality, both within societies, by concentrating gains among small groups with wealth, access, or special skills, and between wealthier and poorer nations.
Artificial intelligence will also disrupt the workforce. While past technological advances have generally created more jobs than they destroyed and have increased productivity and prosperity overall, there are crucial caveats. The jobs created by major technological change demand skill sets different from those required by the jobs that were destroyed, and the transition is never easy. Workers need to be retrained, and those who cannot retrain must be protected by a social safety net that varies in strength from place to place. Both are costly, and it will never be easy for governments and private companies to agree on how to share the burden.
More fundamentally, the displacement caused by artificial intelligence will happen on a much broader scale and at a much faster pace than past transitions, and the disruption will generate economic, and therefore political, upheaval worldwide.
Finally, the AI revolution will also impose an emotional and spiritual cost. Humans are social animals; we thrive on interaction with others and wither in isolation. For many people, bots will all too often replace humans as companions, and by the time scientists and doctors understand the long-term effects of this trend, our growing dependence on artificial intelligence, even for companionship, may be irreversible. This may prove to be AI's most important challenge.
The response:
Challenges like these will require a global response. Today, artificial intelligence is regulated not by government officials but by tech companies. The reason is simple: you cannot make rules for a game you do not understand. But relying on tech companies to regulate their own products is not a sustainable plan. They exist primarily to make a profit, not to protect consumers, nations, or the planet. It is a bit like letting energy companies lead the strategy for fighting climate change, except that warming and its dangers are already understood in ways the risks of AI are not, leaving us without the advocacy groups that could push for the adoption of smart and healthy policies.
So where do solutions lie? We will need national action, global cooperation, and, in particular, some common sense from the governments of the United States and China.
It will always be easier to coordinate policy within national governments than at the international level, but political leaders have their own priorities. In Washington, policymakers have focused mainly on winning a race with China to develop the technologies that best support 21st-century security and prosperity, which has encouraged them to give tech companies serving the national interest something close to a free hand. Chinese officials, fearful that AI tools could undermine their political authority, have regulated far more aggressively. European lawmakers have focused less on security or profits and more on the social impact of AI advances.
But all of them will have to establish rules in the coming years that limit the ability of AI bots to undermine political institutions, financial markets, and national security. That means identifying and tracking bad actors, as well as helping people separate real information from fake. Unfortunately, these are big, costly, and complicated steps that lawmakers are unlikely to take until they face AI-generated (but real) crises, and those steps cannot be taken until serious discussion and debate on these issues begin.
Unlike with climate change, the world's governments have yet to agree that the AI revolution poses an existential cross-border challenge. Here the United Nations has a role to play, as the only institution with the convening power to build a global consensus. A UN-led approach to AI will never be the most efficient one, but it can help forge consensus on the nature of the problem and marshal international resources.
By forging agreement on which risks are most likely, most consequential, and emerging fastest, an AI-focused equivalent of the Intergovernmental Panel on Climate Change could institutionalize regular meetings and "State of AI" reports that dig ever deeper into AI-related threats. As with climate change, this process would also need to include public policy officials, scientists, technologists, private-sector delegates, and individual activists from a majority of member states, creating a COP (Conference of the Parties) process to address threats to biosafety, information freedom, workforce health, and more. There could also be an artificial intelligence agency, inspired by the International Atomic Energy Agency, to help monitor AI proliferation.
That said, there is no way to address the rapidly metastasizing risks of the AI revolution without a much-needed infusion of common sense into U.S.-China relations. After all, it is the technological competition between the two countries and their major tech companies that creates the greatest risk of war, especially as AI plays an ever-larger role in weapons and military planning.
Beijing and Washington must develop and sustain high-level talks about the emerging threats to both countries (and the world) and the best ways to contain them, and they cannot wait for an AI version of the Cuban Missile Crisis to force them into genuine transparency about their competition. To create an "AI arms control agreement" with mutual monitoring and verification, the two governments must listen not only to each other but also to the technologists on both sides who understand the risks that need to be contained.
Crazy? Absolutely. The timing is terrible: these advances are arriving amid intense competition between two powerful countries that genuinely distrust each other.
But if the Americans and the Soviets could build a functioning arms control infrastructure in the 1970s and 1980s, the U.S. and China can build an equivalent for the 21st century. Let's hope they realize they have no other option before a catastrophe makes it painfully obvious.
Future Lab Analysis Team / TIME Magazine. Article by Ian Bremmer. Translated from the English by the Future Lab Translation and Interpretation Team.