Artificial Intelligence Series II Senate Committee

Author: Dr. David Tanz

Artificial intelligence | geopolitics

May 17, 2023


The topic of Artificial Intelligence, with its enormous scope and how little we actually know – in many cases we are still in the intuitive stage – has led to an authentic flood of studies, opinions, controversies, and heated debates that practically occur daily.

Our Laboratory understands that one of the best services it can provide to all the people and organizations following our work is to offer a carefully selected series of those opinions, positions, and debates, brought practically to the day they occur, in order to genuinely keep informed those who are attentive to what is happening and to our vision.

By the way, the Laboratory is working on its Artificial Intelligence Microlab and will eventually share its conclusions and perceptions, but the urgency of the topic does not allow too many delays. That is the reason why we are launching a Series today, the Artificial Intelligence series, which we hope will be the catalyst for analysis, reflection, and conclusions on the projection that such a significant topic forces us to address. No one, neither governments, nor international organizations, nor regional organizations, think tanks, nor individuals can remain indifferent to its evolution. As always, we hope our service can be useful to you.

In what has already become a classic situation in the United States, the meeting between OpenAI CEO Sam Altman and the Senate Subcommittee, which we mentioned in previous posts, ended as a cordial session in which no resolutions were adopted, loud statements were made, and the effectiveness of the age-old technique of lobbying was demonstrated once more in these times of technological disruption. But it also demonstrated something much more worrying: politicians’ poor grasp of the content and scope of the technology, which is as dangerous as the potential risks of applying that technology in certain areas. Unfortunately, the saying that technology goes up in the elevator while politics takes the stairs down seems to be confirmed once again. Incidentally, the United States is among the developed countries furthest behind in regulating many technological matters that have already been resolved by, for example, the European Union or Japan.

Carlos Ares, an economist from the City Council of Barcelona and one of the main references in technology, has said, with his usual sense of humor but also with his sharp perception, one of the most accurate diagnoses made about Altman: “The CEO of OpenAI calls for more regulation for the artificial intelligence industry. ‘My worst fear is that this technology goes wrong. And if it goes wrong, it can go very wrong.’ (Let’s remember that Sam Altman is one of those preppers we are used to seeing in movies who live in the mountains and have a whole arsenal of weapons, cans of food, and all sorts of gadgets to survive a zombie attack, an alien invasion, or their own artificial intelligence).” Knowing the tech leaders, the perception seems more than accurate.

Altman and the legislators agreed that new artificial intelligence systems should be regulated, but it is still unclear how that would happen.

The tone of Congressional hearings with tech industry executives in recent years is best described as antagonistic. Mark Zuckerberg, Jeff Bezos, and other prominent tech company leaders have been berated on Capitol Hill by legislators frustrated with their companies. This meeting seems to have changed that confrontational tone, largely because of the proactive and “concerned” attitude of the witness, which set him apart from previous appearances, usually handled defensively and with a certain tone of denial that led to unfriendly exchanges.

Altman made his public debut on Capitol Hill as interest in AI skyrocketed. The tech giants have poured effort and billions of dollars into what they say is a transformative technology, even amid growing concerns about AI’s role in spreading misinformation, destroying jobs, and eventually matching or surpassing human intelligence.

But Altman, CEO of OpenAI, a San Francisco-based startup, testified before members of a Senate Subcommittee and largely agreed with them on the need to regulate the increasingly powerful AI technology created within his company and at others such as Google and Microsoft (in passing, shining the spotlight on his toughest competitors; undoubtedly, Altman and his advisors form an extraordinary team of strategists).

In his first testimony before Congress, Altman urged legislators to regulate artificial intelligence while Committee members showed only a nascent understanding of the technology (we will develop this point later in this article). The hearing underscored the deep concern that both technologists and the government have about the potential harms of AI. But that concern did not extend to Altman himself, who received a friendly hearing from the Subcommittee members. The session lasted three hours and was genuinely educational for a group of politicians with limited knowledge of the depth of the problem. This does not mean that legislators are not worried, but theirs is a concern shaped by the surrounding environment rather than by a real understanding of the phenomenon.

As part of his lobbying activities, Altman also spoke about his company’s technology at a dinner with dozens of House members the night before the legislative hearing, and he met privately with several senators beforehand. He offered a flexible framework to manage what happens next with these rapidly developing systems, which some believe could fundamentally change the economy. It was clear that the members of the Senate Subcommittee on Privacy, Technology, and the Law were not planning a tough interrogation for Altman, as they thanked him for his private meetings with them and for agreeing to appear at the hearing. Cory Booker, a Democrat from New Jersey, repeatedly referred to Altman by his first name. Given the context, this does not seem surprising.

“If something can go wrong, it can go very wrong”:

“I think that if this technology goes wrong, it can go very wrong. And we want to speak out about it,” he said. “We want to work with the government to prevent that from happening.” This statement from Altman is particularly worrying. Read carefully, it tells us that we do not actually know whether something can go wrong, which is serious in itself. It is therefore hard to understand why OpenAI is announcing that it will soon connect ChatGPT to the Internet: users who pay for ChatGPT Plus, which uses the GPT-4 model, will gain access to a web browsing feature that provides up-to-date information. We don’t know what could go wrong, but let’s move forward! Wow!

The widespread access to OpenAI and Google’s new AI tools indicates that the AI war is only intensifying as large tech companies compete to create the most powerful and user-friendly AI.

The concern has reached President Joseph Biden and his advisers, which seems to suggest that this issue will not be left to just this brief appearance. This has placed the technology at the center of attention in Washington. Biden said this month in a meeting with a group of AI company executives that “what you’re doing has enormous potential and danger.”

Suggestions presented to the Senate and responses:

Altman was joined at the hearing by Christina Montgomery, IBM’s Chief Privacy and Trust Officer, and Gary Marcus, a well-known professor and frequent critic of Artificial Intelligence technology. The most interesting thing, as we will see later, is that the main problem for OpenAI’s head was not any of the legislators, but rather the relentless logic of Dr. Marcus.

Altman said that his company’s technology could destroy some jobs, but also create new ones, and that it would be important for “the government to figure out how we want to mitigate that.” Echoing an idea suggested by Dr. Marcus, he proposed the creation of an agency that would issue licenses for the development of large-scale AI models, safety standards, and tests that AI models must pass before being released to the public.

“We believe that the benefits of the tools we’ve implemented so far far outweigh the risks, but ensuring their safety is vital to our work,” said Altman.

But it was not clear how legislators would respond to the call for AI regulation. Congress’s track record on technology regulation is discouraging: dozens of bills on privacy, speech, and safety have failed over the last decade due to partisan disputes and fierce opposition from the tech giants.

The United States lags behind much of the world in privacy, speech, and child-protection regulation, and it is likewise behind on Artificial Intelligence regulation. European Union legislators are set to introduce rules for the technology later this year, and China has created AI laws aligned with its censorship laws, as one would expect from the Kingdom of Big Brother.

Senator Richard Blumenthal, a Democrat from Connecticut and Chairman of the Senate Panel, said that the hearing was the first in a series to learn more about the potential benefits and harms of AI to eventually “write the rules.”

He also acknowledged Congress’s failure to keep up with the introduction of new technologies in the past. “Our goal is to demystify and hold those new technologies accountable to avoid some of the mistakes of the past,” Blumenthal said. “Congress did not rise to the moment with social media.” And, by the way, it still hasn’t.

Members of the Subcommittee suggested an independent agency to oversee AI; rules requiring companies to disclose how their models work and the datasets they use; and antitrust regulations to prevent companies like Microsoft and Google from monopolizing the emerging market.

“The devil will be in the details,” said Sarah Myers West, Executive Director of the AI Now Institute, a research center for technology-related policies and Artificial Intelligence. She said that Altman’s suggestions for regulations don’t go far enough and should include limits on how AI is used in surveillance and the use of biometric data. She pointed out that Altman showed no signs of slowing down the development of OpenAI’s ChatGPT tool. “It’s a great irony to see concern about the harms coming from people who are quickly pushing the commercial use of the system responsible for those very harms,” said Ms. West.

The gap between political understanding and technological progress:

Some legislators at the hearing still displayed the persistent gap in technological knowledge between Washington and Silicon Valley. Lindsey Graham, a Republican from South Carolina, repeatedly asked the witnesses whether the liability shield that protects online platforms like Facebook and Google over user speech also applies to Artificial Intelligence.

Altman, calm and serene, tried several times to draw a distinction between AI and social media: “We need to work together to find a completely new approach.”

Some Subcommittee members also seemed reluctant to take drastic action against an industry that holds significant economic promise for the U.S. and competes directly with adversaries like China. Clearly, U.S. politicians are struggling to establish both what the dangers of the technology are and what their country’s technological geopolitics should be.

The Chinese are creating AI that “reinforces the core values of the Chinese Communist Party and the Chinese system,” said Chris Coons, a Democrat from Delaware. “And I am concerned about how we promote AI that reinforces and strengthens open markets, open societies, and democracy.”

Some of the toughest questions and comments to Altman came from Dr. Marcus, who pointed out that OpenAI has not been transparent about the data it uses to develop its systems. He expressed doubts about Altman’s prediction that new jobs would replace those lost to Artificial Intelligence.

“We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of proper regulation, and little inherent reliability,” said Dr. Marcus.

Tech companies have argued that Congress should be cautious with broad rules that group different types of AI. In Tuesday’s hearing, Ms. Montgomery from IBM called for an AI law similar to the regulations proposed by Europe, which outlines various risk levels. She advocated for rules that focus on specific uses, rather than regulating the technology itself.

“Essentially, AI is just a tool, and tools can serve different purposes,” she said, adding that Congress should adopt a “precision regulation approach for AI.”

Meanwhile, the competition is intensifying:

By the way, OpenAI is not the only company to have launched the kind of system now under discussion. The most comprehensive overview at this time, covering the projects considered most important, shows us:

ChatGPT. ChatGPT, the AI language model from the research lab OpenAI, has been in the headlines since November for its ability to answer complex questions, write poetry, generate code, plan vacations, and translate languages. GPT-4, the latest version released in mid-March, can even respond to images (and pass the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s main investor and partner, added a similar chatbot, capable of open-ended text conversations on practically any topic, to its Bing internet search engine. However, it was the bot’s occasionally inaccurate, misleading, and strange answers that garnered much of the attention after its launch.

Bard. Google’s chatbot, named Bard, was launched in March for a limited number of users in the United States and the United Kingdom. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts, and answer questions with facts or opinions.

Ernie. Baidu’s chatbot Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a failure after it was revealed that a promised “live” demonstration of the bot had been pre-recorded.
