by Dr. David Tanz | Aug 14, 2023 | Space Exploration
Space exploration has captivated the human imagination for decades. In this context Mars, one of Earth's closest planetary neighbors, has emerged as an ambitious target for establishing human colonies. The desire to expand our presence beyond our home planet has made Mars the leading candidate for colonization. This article addresses the scientific and exploratory importance of placing human colonies on Mars, highlighting the main drivers behind this bold undertaking.
The Search for Cosmic Answers:
Mars, with its unique geological history and atmospheric conditions, offers scientists an unprecedented opportunity to unravel the mysteries of the solar system. Robotic exploration of Mars, carried out through probes and rovers, has revealed a series of features indicating the past presence of liquid water on its surface. This evidence suggests that Mars may have once hosted conditions suitable for life. Establishing human colonies on Mars would allow scientists to study those traces of water in depth and search for signs of past or present life.
Humanity as an Interplanetary Explorer:
Since the dawn of civilization, exploration has been a constant in the human spirit. Establishing colonies on Mars would mark a monumental milestone in our ability to explore and colonize worlds beyond our home planet. This endeavor would not only expand humanity’s horizons but also provide a unique laboratory to test advanced survival and sustainability technologies in a planetary environment.
Advancing Towards Interplanetary Survival:
Colonizing Mars is not just about exploration and discovery, but also about ensuring humanity’s long-term survival. In the event of global disasters on Earth, the existence of colonies on Mars could act as a life insurance policy for our species. Diversifying the human population across multiple planets would significantly reduce the risk of total extinction in the event of catastrophic events on our home planet.
The Main Drivers of Mars Colonization:
Several organizations and prominent figures are actively driving the idea of colonizing Mars. One of the most notable initiatives is SpaceX, founded by Elon Musk. SpaceX is developing Starship (first unveiled in 2016 as the Interplanetary Transport System), a large-capacity spacecraft designed to carry people to Mars and support a self-sustaining colony. Musk's stated ambition is a self-sustaining Martian city of a million people.
NASA is also involved in Mars exploration through its Artemis program and its goal of returning to the Moon before advancing to Mars. The space agency has carried out successful missions to Mars with rovers such as Curiosity and Perseverance and is considering human missions in the future.
Conclusion: A Pivotal Step for Humanity:
In summary, the scientific and exploratory importance of establishing human colonies on Mars is undeniable. Mars represents a unique opportunity to study the geological history and the possibility of life on another planet. Additionally, the colonization of Mars aligns with the human spirit of exploration and the quest to ensure interplanetary survival. The main drivers of this endeavor, such as SpaceX and NASA, are paving the way for a future where humans can transcend Earth's limitations and extend their presence to new cosmic horizons. The establishment of human colonies on Mars would undoubtedly be a pivotal step in humanity's evolution as explorers and settlers.
by Dr. David Tanz | Aug 11, 2023 | geopolitics, Globalization, States and technology
As humanity continues to advance in the field of technology, we find ourselves facing an increasingly complex landscape of ethical dilemmas that accompany these advancements. From biotechnology to digital privacy and genetic engineering, scientific progress places us at a crossroads where ethical decisions play a crucial role in shaping our future. In this article, we will rigorously and precisely address the ethical dilemmas that may arise in a world of technological advancements, analyzing the implications of the decisions we face.
Biotechnology: Between Curing and Genetic Modification:
Biotechnology has revolutionized the way we treat diseases and improve the quality of life. However, this field also raises intricate ethical questions. On one hand, gene therapies promise to cure hereditary diseases and genetic disorders, offering hope to those who suffer. On the other hand, genetic modification for enhancement purposes could open the door to the selection of specific traits and the creation of "designer babies." Where do we draw the line between curing disease and enhancing traits?
Digital Privacy: The Challenge of Data Collection:
The digital era has brought with it a proliferation of personal data that is collected and stored in massive quantities. As artificial intelligence and data analysis become more advanced, digital privacy becomes a critical issue. The collection and use of personal data raise ethical questions about consent, transparency, and the risk of abuse. How do we balance technological innovation with the protection of individual privacy?
Genetic Engineering: Potential and Responsibility:
Genetic engineering offers the possibility of altering the DNA of plants, animals, and humans to achieve specific results. While this can have beneficial applications, such as improving crops to combat food shortages, it also presents ethical challenges. The genetic editing of human embryos raises fundamental issues about hereditary modification and germline alterations. How do we ensure that genetic engineering is used responsibly and for the benefit of society?
Artificial Intelligence: Bias and Autonomy:
Artificial intelligence (AI) has advanced impressively, but it also faces ethical dilemmas. One of the most prominent issues is bias in algorithms, which can perpetuate social inequalities and discrimination. Additionally, AI raises questions about autonomy and decision-making. As AI becomes integrated into areas such as healthcare and autonomous driving, how do we ensure that decisions made by machines are ethical and aligned with human values?
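To make the bias point concrete, here is a minimal sketch, with entirely hypothetical data, of how one simple fairness check, the demographic-parity gap between two groups' approval rates, can be computed over a model's decisions:

```python
# Minimal sketch: measuring one simple fairness metric, demographic parity.
# All data below are hypothetical; real audits use far richer methods.

def demographic_parity_gap(decisions, groups):
    """Return (gap, per-group rates); gap of 0 means parity."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0], rates

# Hypothetical loan decisions (1 = approved) for applicants in groups A and B.
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(f"approval rates: {rates}, parity gap: {gap:.2f}")
```

Real audits go far beyond a single metric, but even this toy check illustrates how a skewed decision pattern can be surfaced before a system is deployed.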
The Role of Ethics in the Technological Future:
Ethics plays a fundamental role in humanity’s technological future. As we advance, we must consider the ethical principles that will guide our actions and decisions. Formulating solid ethical codes and regulatory frameworks is essential to ensuring that technological progress does not undermine the fundamental values of society. Interdisciplinary collaboration among scientists, ethicists, and government leaders is crucial to effectively address the ethical dilemmas that arise on the technological horizon.
Conclusion: Ethical Responsibility in an Evolving World:
Ultimately, the intersection of technology and ethics challenges us to consider what we want our future to look like. As biotechnology, digital privacy, genetic engineering, and artificial intelligence continue to transform our world, we must approach these areas with both a scientific and an ethical perspective. The importance of making informed and ethical decisions has never been more evident. As we face complicated decisions on the technological horizon, the responsibility for shaping that future rests with all of us.
by Dr. David Tanz | Aug 8, 2023 | Biotechnology, Transhumanism
We explore how advancements in medicine could allow us to live longer and healthier through genetic therapies and innovative technologies.
Modern medicine stands at a fascinating crossroads, where the convergence of genetics and technology is opening doors to an unprecedented future of health and longevity. As medical science progresses, the possibility of radically transforming the human experience through genetic therapies and innovative technologies arises. In this article, we will explore advances in genetic therapies and their potential to prolong and improve the quality of life, delving into the exciting horizon of future medicine.
The Gene Therapy Revolution:
Gene therapy, which involves the direct modification of an individual’s genetic material to treat genetic and acquired diseases, has evolved from a promising theory to a scientific reality. Through techniques like CRISPR-Cas9, it is possible to precisely edit defective genes, correcting genetic disorders at their source. This genetic revolution has paved the way for more direct and personalized treatments for diseases that previously lacked a solution.
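As a rough illustration of the "precision" involved, here is a minimal sketch, using a toy DNA string and deliberately simplified rules, of how candidate Cas9 target sites can be located: the enzyme is typically directed to a 20-nucleotide protospacer sitting immediately upstream of an "NGG" PAM motif. The sequence and the forward-strand-only scan are illustrative assumptions; real guide design also checks the reverse strand, genome-wide off-targets, and more.

```python
# Minimal sketch of CRISPR-Cas9 target-site scanning (toy example).
# Simplification: scans only the forward strand for the canonical rule
# "20-nt protospacer immediately followed by an NGG PAM".
import re

def find_cas9_sites(dna: str):
    """Yield (position, protospacer, pam) for every candidate site."""
    # Lookahead so overlapping candidate sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna.upper()):
        yield m.start(), m.group(1), m.group(2)

# Hypothetical sequence; any real guide design also scores off-target
# matches elsewhere in the genome, GC content, and secondary structure.
toy_dna = "TTACGGATCCGATTACGGATCCGATCAGGTACGTAGCTAGCTAGG"
for pos, protospacer, pam in find_cas9_sites(toy_dna):
    print(f"site at {pos}: guide={protospacer} PAM={pam}")
```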
The Fight Against Aging:
One of the most ambitious goals of future medicine is to address aging. As we better understand the molecular and cellular processes underlying aging, new possibilities emerge to intervene and slow this process. Gene therapy could also play a key role in this fight by acting on genes involved in longevity and cellular health.
Boosting Cellular Regeneration:
Another promising field is cellular regeneration, where regenerative medicine and gene therapy overlap. Genetic therapies could stimulate the regeneration of damaged tissues and organs, offering new hope for patients with degenerative diseases or severe injuries. The ability to reprogram cells to adopt different cellular identities, known as cellular reprogramming, could revolutionize medicine by allowing the generation of new, functional tissues.
Personalizing Medicine:
The medicine of the future will be based on an increasingly personalized and precision approach. Human genome sequencing has opened the door to treatments designed specifically for each individual's unique genetic makeup. Advances in genetic editing allow for the correction of inherited genetic defects, and regenerative medicine enables the cultivation of tissues matched to the patient's own biology. This personalization has the potential to improve treatment efficacy and reduce side effects.
Ethical and Regulatory Challenges:
While advancements in genetic therapies are exciting, they also pose ethical and regulatory challenges. Genetic editing has the potential to permanently alter the germline, affecting future generations. Moreover, equitable access to these technologies must be carefully considered to avoid deepening health inequalities. The scientific community and regulators must work together to ensure that these advances are implemented responsibly and safely.
In Conclusion: A Horizon of Medical Possibilities:
In summary, the medicine of the future stands on the threshold of an era of unprecedented advances in genetic therapies and longevity. Gene therapy, cellular regeneration, and the personalization of medicine have the potential to radically transform the way we approach health and aging. As medical science advances, it is essential that we continue to consider the ethical and regulatory aspects to ensure these breakthroughs benefit humanity as a whole. We are approaching the cusp of a medical revolution that could offer a longer and healthier life for generations to come, and it is our duty to embrace this exciting frontier of future medicine.
by Dr. David Tanz | May 21, 2023 | Artificial intelligence
The topic of Artificial Intelligence, with its vast reach and how little we actually know (in many cases we are still at an intuitive stage), has produced a veritable avalanche of studies, opinions, controversies, and heated debates, with new ones arising practically every day.
Our Laboratory believes that one of the best services it can provide to the people and organizations who follow our work is a carefully selected series of those opinions, positions, and debates, offered almost in real time, to keep genuinely informed those interested in what is happening and in our perspective.
Incidentally, the Laboratory is working on its Artificial Intelligence Microlab and will share its conclusions and perceptions in due time. The urgency of the issue, however, does not allow for long delays. That is why today we are launching a Series on Artificial Intelligence, which we hope will be a catalyst for analysis, reflection, and conclusions on the projections that such an important topic forces us to address. No one, neither governments, nor international organizations, nor regional bodies, think tanks, nor individuals, can remain indifferent to its evolution.
As always, we hope our service will be useful to you.
Yuval Noah Harari argues that AI has hacked the operating system of human civilization.
Computers that tell stories will change the course of human history, says the historian and philosopher.
Keeping calm amid prophecies and speculation:
Within our series on Artificial Intelligence, today we have decided to analyze the latest statements and articles by Yuval Harari. As is well known, Dr. Harari is a historian, philosopher, and author of “Sapiens,” “Homo Deus,” and the children’s series “Unstoppable Us.” He is a professor at the Department of History at the Hebrew University of Jerusalem and co-founder of Sapienship, a social impact company. He is generally regarded as an important authority on the interpretation of the future, from a sociological and philosophical point of view.
Harari has said in The Guardian that "AI has hacked the operating system of human civilization." It is a very strong statement, one that certainly grabs attention, yet it warns about a possibility no one knows will actually happen. With complete frankness and respect for Dr. Harari: a claim like this can read as reckless, resting on the prestige of an important academic career to justify pronouncements that sound somewhat apocalyptic. We do not know the probability of it happening; at times, as here, the claim is not even fully intelligible. But saying it in 2023 is striking, and, much like a viral TikTok that exaggerates, it hacks the one thing that gets hacked fastest: our attention.
It is logical to think that AI can bring serious problems, but there are no certainties. If the "godfather of AI" (Geoffrey Hinton, winner of the 2018 Turing Award, whom we will analyze in another article in this series) was wrong in a prediction he made six years ago, he can be wrong now. Harari also claimed, in a March 2023 article in The New York Times, that half of "AI researchers" believe there is a 10% chance that AI will kill us all. That is not accurate, but the article placed him at the center of the discussion and projected him into other media and networks, securing the billing needed to maintain the position of a super-expert. All of this should call every commentator to prudence, because the one thing that is true is that we do not know.
What is certain is that we must all be very cautious on this issue. Just as we have said that we must be careful with Altman, we must be cautious with other would-be prophets, focus on what is actually happening, and, where we are unsure (and we may be very unsure), regulate those developments about which we do have some certainty that they can be dangerous.
Harari’s ideas on AI hacking civilization:
According to Harari, anxieties about artificial intelligence have pursued humanity since the beginning of the computer era. Until now, these fears focused on machines using physical means to kill, enslave, or replace people. In recent years, however, new artificial intelligence tools have emerged that seem to threaten the survival of human civilization from an unexpected direction: AI has acquired some remarkable abilities to manipulate and generate language, whether in words, sounds, or images. From there, Harari argues that Artificial Intelligence has hacked the operating system of our civilization. Maybe, or maybe not; let us proceed with caution. It is true that behaviors have appeared that we do not understand, and even certain biases that we have yet to explain, but it is quite a leap from that to claiming we are in the hands of HAL, the marvelous machine from 2001: A Space Odyssey.
The point is not to rebut Dr. Harari or Dr. Hinton. The goal is to inject moderation into a debate that is overheating, in which somewhat reckless statements circulate and in which there is much we still do not know and should approach with caution. It is one thing to say that; it is quite another to predict that Artificial Intelligence is about to take control of human civilization. In the end, some of us still place considerable trust in human intelligence, at least for now, even despite those who seem to be walking a complicated path in technological development.
Harari’s argument is very interesting: he tells us that language is the material from which almost all human culture is made. Human rights, for example, are not inscribed in our DNA. Rather, they are cultural artifacts that we create by telling stories and writing laws. Gods are not physical realities. Rather, they are cultural artifacts that we create by inventing myths and writing scriptures.
Money is also a cultural artifact. Banknotes are just pieces of colored paper, and today more than 90% of money isn't even banknotes; it's just digital information in computers. What gives money its value are the stories that bankers, finance ministers, and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers. Certainly, we can add that money, commodity though it is, rests its value on a pure act of faith.
What happens once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think of ChatGPT and the other new Artificial Intelligence tools, they often reach for examples like schoolchildren using it to write their essays (there are people with a notorious tendency toward the law of least effort; and, as the Spanish saying goes, "hecha la ley, hecha la trampa": every law invites its loophole, so every action has a reaction). What will happen to the school system when children do this? But that kind of question misses the bigger picture. Forget the school essays. Think about the next U.S. presidential race in 2024 and try to imagine the impact of AI tools that can mass-produce political content, fake news, and scriptures for new cults. Each of those possibilities deserves reflection. Influence of this kind is certainly possible (we have already seen it in the data manipulations around Facebook, for example), especially when human intelligence is tempted to lean on somewhat unreliable substitutes.
Cults created by Artificial Intelligence?
In recent years, the QAnon cult has coalesced around anonymous online messages known as "Q drops." Followers collected, venerated, and interpreted these Q drops as a sacred text. While, as far as we know, all previous Q drops were composed by humans and bots merely helped spread them, in the future we could see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their sacred books. Soon, that could become a reality. It is certainly a possibility, and a disturbing one.
On a more prosaic level, we may soon find ourselves engaging in long online discussions about abortion, climate change, or the Russian invasion of Ukraine with entities we believe to be human, but that are actually artificial intelligence. The problem is that it makes no sense for us to waste time trying to change the declared opinions of an AI bot, when it could perfect its messages so precisely that it has a good chance of influencing us.
Through its mastery of language, AI could even form intimate relationships with people (we have evidence this has already happened, in isolated cases) and use the power of intimacy to change our opinions and worldviews. Although there are no signs, for now, that AI has consciousness or feelings of its own (something we need to study seriously), it does not need them: to foster false intimacy with humans, it is enough that it can make them feel emotionally attached to it. In June 2022, Blake Lemoine, a Google engineer, publicly claimed that the chatbot he was working on had become sentient. The controversial claim cost him his job. The most interesting thing about the episode was not Mr. Lemoine's claim, which was probably unfounded, or at least driven by an emotional perception lacking the rational weight to justify it. It is this: if AI can influence people to risk their jobs for it, what else could it induce them to do?
A battle to fight:
Harari says we are facing a political battle for minds and hearts, that intimacy is the most efficient weapon, and that AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the last decade social media has become a battleground for controlling human attention, and that, in this particular evolution of human relationships, it has also become the main place where people seek their affectionate bonds. Here I strongly agree with Harari: with the new generation of artificial intelligence, the battleground is shifting from attention to intimacy. What will happen to human society and psychology when AI competes with emotional intelligence in a battle to fake intimate relationships with us, relationships that can later be used to convince us to vote for certain politicians or buy certain products? And what of something potentially far more complex: the delicacy of relationships between humans themselves? This needs to be researched, and with the utmost urgency.
Even without creating "false intimacy," the new artificial intelligence tools would have an immense influence on our opinions and worldviews. People might end up using a single AI advisor as an omniscient, one-stop oracle (too many people are waiting for exactly this; the statement is not gratuitous, as one can observe it every day on certain social networks, which need not be named just yet). It is no wonder that some of Google's top executives are terrified (though they do not want to miss out on the massive business behind all of this). Why bother searching when I can just ask the oracle? The news and advertising industries should be terrified too. Why read a newspaper when I can simply ask the oracle for the latest news? And what is the purpose of ads when I can just ask the oracle what to buy? Amazon and others figured this out long ago; it is just a matter of enhancing it.
Harari, the end of human history, and the need to move cautiously with certain claims:
Even these scenarios do not really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex. We are treading on dangerous ground here, and this, I estimate, is where Harari wanted to go from the beginning: the shift from human control to AI control. And, why not ask, what human control might stand behind the AI? That question alone could fill a treatise.
Harari continues, assuming that this inversion of control by intelligences indeed takes place: What will happen to the course of history when artificial intelligence takes over culture and begins to produce stories, melodies, laws, and religions? The previous tools, like the printing press and the radio, helped spread human cultural ideas, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, an entirely new culture.
At first, AI will likely imitate the human prototypes it was trained on during its infancy. But with each passing year, the culture of this kind of intelligence will boldly go where no human has gone before. For millennia, humans have lived within the dreams of other humans. In the coming decades, we could find ourselves living within the dreams of an alien intelligence. Note that "alien" here does not carry its usual connotation: we are not talking about extraterrestrial beings, but about non-human intelligences that, paradoxically, humans themselves created.
What we have seen so far is of great interest. What follows, closely tied to dreams and storytelling, is Harari's own take, and I do not fully agree with it. It concerns the subordination of the human mind to myths and dreams, a matter worth discussing at length. Harari states that the fear of artificial intelligence has haunted humanity only for the last few decades (which assumes that a broad segment of humanity knows and fears AI, something that frankly seems contrived to me). But for thousands of years, humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and create illusions, and so humans have feared being trapped in a world of illusions since ancient times. (The fear of dreams? When dreams have been a driving force of human advancement? Frankly, this seems excessive, with all due respect to the constructs and thinking of Master Harari.)
In the 17th century, René Descartes feared that a malicious demon might be trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall, a screen onto which various shadows are projected. The prisoners mistake the illusions they see there for reality. Undoubtedly; but those were Masters. The man and woman of the people, who fight daily for their bread (a struggle unchanged for centuries), remain rather distant from these philosophical deliberations.
Harari continues, building on his theory: in ancient India, Buddhist and Hindu sages pointed out that all humans live trapped within Maya, the world of illusions. What we normally take as reality is often just fiction in our own minds. People can wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion. Another point we could discuss at length.
Some good news seems to be arriving:
Of course, the new power of artificial intelligence could also be used for good purposes. I won't dwell on this, Harari notes, because the people developing it talk quite a bit about it; the work of historians and philosophers like him is to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to ensure these new tools are used for good and not for evil. To do that, we must first appreciate the true capabilities of these tools.
Since 1945, we’ve known that nuclear technology could generate cheap energy for the benefit of humans, but it could also physically destroy human civilization. Therefore, we reshaped the entire international order to protect humanity and ensure that nuclear technology was used primarily for good. Now we have to deal with a new weapon of mass destruction that could annihilate our mental and social world.
We can still regulate the new AI tools, but we must act quickly. While nuclear weapons cannot invent more powerful nuclear weapons, artificial intelligence can create exponentially more powerful intelligence. The first crucial step is to demand rigorous safety controls before these powerful tools are released to the public domain. Just as a pharmaceutical company cannot release new drugs before testing their short- and long-term side effects, tech companies should not release new AI tools before they are safe. We need an equivalent of the Food and Drug Administration for new technologies, and we need it yesterday. An idea, incidentally, that is entirely worth endorsing and fighting for.
Harari's next question is more than interesting: will slowing the public deployment of AI cause democracies to fall behind more ruthless authoritarian regimes? Quite the opposite. Unregulated deployment of the technology would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations are based on language. When technology hacks language, it could destroy our ability to have meaningful conversations, and with it, democracy. In this sense (and without attributing the point to Harari), the clearest case is that of the People's Republic of China.
Finally, Harari ends with an apocalyptic but very plausible statement. We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it could destroy our civilization. We should put an end to the irresponsible deployment of AI tools in the public sphere and regulate them before they regulate us. And the first regulation I would suggest is to make it mandatory for AI to reveal that it is AI. “If I’m having a conversation with someone and I can’t tell if they’re a human or an artificial intelligence, that’s the end of democracy.”
The QAnon conspiracy theory has been a far-reaching and controversial phenomenon, particularly in the United States, although it has also gained followers in other countries such as Japan, the United Kingdom, and Germany. The theory rests on the idea that a "deep state" of powerful actors manipulates institutions and is involved in criminal activities, such as the sex trafficking of minors, and that Donald Trump was fighting to expose and stop them. QAnon followers believed Trump was investigating figures like Hillary Clinton, Barack Obama, and George Soros, whom they considered the leaders of this conspiracy.
One of the distinctive features of QAnon is its decentralized organization: it started with an anonymous post from someone using the pseudonym “Q,” who claimed to have access to high-level classified information about the Trump administration. This person or group of people posted messages on internet forums, which fostered the creation of an online community that interpreted and spread these posts, known as “Q drops.”
The movement has been largely driven by social media and online platforms, which have served to promote the QAnon messages and organize its followers. This influence was reflected in the appearance of QAnon supporters at political rallies and the active promotion of the theory on social networks. Despite the movement’s great visibility, it is difficult to determine the exact number of its followers, as QAnon activity mainly takes place on digital platforms, including the more obscure ones like 8kun (formerly known as 8chan).
Furthermore, the theory has been linked to violence and acts of domestic terrorism, which led the FBI to classify it as a potential source of danger. It has also been documented that political figures, including members of the Trump administration, have amplified QAnon messages, sometimes without realizing the harm this could cause.
Major tech platforms like Twitter and Facebook have taken steps to limit the spread of the theory, blocking accounts and restricting content. However, QAnon followers have migrated to other, less monitored platforms to continue spreading their ideas.
The slogan “Where we go one, we go all” (#WWG1WGA) has been adopted by followers as a symbol of unity and purpose. This slogan, originally from the movie White Squall, reflects the strong sense of camaraderie among community members, many of whom see themselves as digital soldiers fighting against what they perceive as a greater evil.
The global reach and digital nature of QAnon have made it a phenomenon that is difficult to control, with significant implications for politics, security, and social cohesion. However, its conspiratorial nature and lack of verifiable evidence have led many to consider it a dangerous breeding ground for misinformation and extremism.
It is a topic that invites reflection on how ideas can shape reality and how tech platforms influence the spread of extreme beliefs.
by Dr. David Tanz | May 17, 2023 | Artificial intelligence, geopolitics
The topic of Artificial Intelligence, with its enormous scope and how little we actually know (in many cases we are still at an intuitive stage), has produced a veritable flood of studies, opinions, controversies, and heated debates, with new ones arising practically every day.
Our Laboratory believes that one of the best services it can provide to the people and organizations following our work is a carefully selected series of those opinions, positions, and debates, delivered practically as they occur, to keep genuinely informed those who are attentive to what is happening and to our vision.
Incidentally, the Laboratory is working on its Artificial Intelligence Microlab and will share its conclusions and perceptions in due time, but the urgency of the topic does not allow for long delays. That is why we are launching this Series on Artificial Intelligence today, which we hope will be a catalyst for analysis, reflection, and conclusions on the projections that such a significant topic forces us to address. No one, neither governments, nor international organizations, nor regional bodies, think tanks, nor individuals, can remain indifferent to its evolution. As always, we hope our service can be useful to you.
THE UNITED STATES CONGRESS SHOWS CONFUSION ABOUT ARTIFICIAL INTELLIGENCE REGULATION.
In what has already become a classic scene in the United States, the meeting between OpenAI's chief executive, Sam Altman, and the Senate Subcommittee, which we mentioned in previous posts, ended as a pleasant encounter in which no resolutions were made, loud statements were heard, and the effectiveness of the age-old technique of lobbying was demonstrated once more in these times of technological disruption. It also demonstrated something much more worrying: how poorly politics understands the content and scope of the technology, which is as dangerous as the potential risks of applying the technology in certain areas. Unfortunately, the saying that technology takes the elevator while politics takes the stairs seems confirmed once again. The United States is, incidentally, one of the developed countries most delayed in regulating many technological questions that have already been resolved by, for example, the European Union or Japan.
Carlos Ares, an economist with the Barcelona City Council and a leading voice on technology, has delivered, with his usual humor but also his sharp eye, one of the most accurate diagnoses made of Altman: "The CEO of OpenAI calls for more regulation for the artificial intelligence industry. 'My worst fear is that this technology goes wrong. And if it goes wrong, it can go very wrong.' (Let's remember that Sam Altman is one of those preppers we are used to seeing in movies, who live in the mountains with a whole arsenal of weapons, canned food, and all sorts of gadgets to survive a zombie attack, an alien invasion, or their own artificial intelligence.)" Knowing the tech leaders, the perception seems more than accurate.
A carefully prepared and choreographed presentation:
Altman and the legislators agreed that new artificial intelligence systems should be regulated, but it is still unclear how that would happen.
The tone of congressional hearings with tech industry executives in recent years can best be described as antagonistic. Mark Zuckerberg, Jeff Bezos, and other prominent tech company leaders have been lambasted on Capitol Hill by legislators frustrated with their companies. This meeting seems to have changed that confrontational tone, largely because of the proactive and "concerned" attitude of the witness, which set him apart from previous appearances, usually handled defensively and with a certain tone of denial that led to unfriendly exchanges.
Altman made his public debut on Capitol Hill just as interest in AI was skyrocketing. The tech giants have poured effort and billions of dollars into what they call a transformative technology, even amid growing concerns about AI's role in spreading misinformation, destroying jobs, and one day matching human intelligence.
But Altman, CEO of OpenAI, a San Francisco-based startup, testified before members of a Senate Subcommittee and largely agreed with them on the need to regulate the increasingly powerful AI technology being created inside his company and others such as Google and Microsoft (and, along the way, he turned the spotlight on his toughest competition; undoubtedly, Altman and his advisors form an extraordinary team of strategists).
In his first testimony before Congress, Altman urged legislators to regulate artificial intelligence while Committee members displayed only a nascent understanding of the technology (we develop this point later in this article). The hearing underscored the deep concern that both technologists and the government feel about AI's potential harms. That concern did not extend to Altman himself, who received a friendly hearing from the Subcommittee members. The session lasted three hours and was genuinely educational for a group of politicians with a limited grasp of the depth of the problem. This does not mean legislators are not worried, but theirs is a concern shaped by the surrounding climate rather than by a real understanding of the phenomenon.
As part of his lobbying, Altman also discussed his company's technology at a dinner with dozens of House members the night before the hearing and met privately with several senators beforehand. He offered a loose framework for managing what happens next with these fast-developing systems, which some believe could fundamentally change the economy. It was clear that the members of the Senate Subcommittee on privacy, technology, and the law were not planning a tough interrogation, as they thanked Altman for his private meetings and for agreeing to appear. Cory Booker, a Democrat from New Jersey, repeatedly addressed him by his first name. Given the context, none of this seems surprising.
“If something can go wrong, it can go very wrong”:
"I think that if this technology goes wrong, it can go very wrong. And we want to speak out about it," he said. "We want to work with the government to prevent that from happening." This statement from Altman is particularly worrying. Read carefully, it concedes that we do not actually know whether something can go wrong, which is serious in itself. It is therefore hard to understand why OpenAI is announcing that it will soon connect ChatGPT to the Internet: users who pay for ChatGPT Plus, which uses the GPT-4 model, will gain a web-browsing feature that provides up-to-date information. We don't know what could go wrong, but let's move forward. Wow!
The widespread access to OpenAI and Google’s new AI tools indicates that the AI war is only intensifying as large tech companies compete to create the most powerful and user-friendly AI.
The concern has reached President Joseph Biden and his advisers, which suggests the issue will not end with this brief appearance. It has placed the technology at the center of attention in Washington. Biden said this month, in a meeting with a group of AI company executives, that "what you're doing has enormous potential and danger."
Suggestions presented to the Senate and responses:
Altman was joined at the hearing by Christina Montgomery, IBM's Chief Privacy and Trust Officer, and Gary Marcus, a well-known professor and frequent critic of artificial intelligence technology. The most interesting thing, as we will see, is that the main problem for OpenAI's chief came not from any of the legislators but from the relentless logic of Dr. Marcus.
Altman said that his company's technology could destroy some jobs but also create new ones, and that it would be important for "the government to figure out how we want to mitigate that." Echoing an idea suggested by Dr. Marcus, he proposed the creation of an agency that would issue licenses for the development of large-scale AI models, set safety standards, and define tests that AI models must pass before being released to the public.
“We believe that the benefits of the tools we’ve implemented so far far outweigh the risks, but ensuring their safety is vital to our work,” said Altman.
But it was not clear how legislators would respond to the call for AI regulation. Congress's track record on technology regulation is discouraging: dozens of bills on privacy, speech, and safety have failed over the past decade amid partisan disputes and fierce opposition from tech giants.
The United States has been lagging behind the world in terms of privacy, speech, and child protection regulations. It is also behind on Artificial Intelligence regulations. European Union legislators are set to introduce rules for the technology later this year. And China has created AI laws that align with its censorship laws, as expected from the Kingdom of Big Brother.
Senator Richard Blumenthal, a Democrat from Connecticut and Chairman of the Senate Panel, said that the hearing was the first in a series to learn more about the potential benefits and harms of AI to eventually “write the rules.”
He also acknowledged Congress’s failure to keep up with the introduction of new technologies in the past. “Our goal is to demystify and hold those new technologies accountable to avoid some of the mistakes of the past,” Blumenthal said. “Congress did not rise to the moment with social media.” And, by the way, it still hasn’t.
Members of the Subcommittee suggested an independent agency to oversee AI; rules requiring companies to disclose how their models work and the datasets they use; and antitrust regulations to prevent companies like Microsoft and Google from monopolizing the emerging market.
“The devil will be in the details,” said Sarah Myers West, Executive Director of the AI Now Institute, a research center for technology-related policies and Artificial Intelligence. She said that Altman’s suggestions for regulations don’t go far enough and should include limits on how AI is used in surveillance and the use of biometric data. She pointed out that Altman showed no signs of slowing down the development of OpenAI’s ChatGPT tool. “It’s a great irony to see concern about the harms coming from people who are quickly pushing the commercial use of the system responsible for those very harms,” said Ms. West.
The gap between political understanding and technological progress:
Some legislators at the hearing still displayed the persistent gap in technological knowledge between Washington and Silicon Valley. Lindsey Graham, a Republican from South Carolina, repeatedly asked the witnesses whether the liability shield that protects online platforms such as Facebook and Google from suits over user speech also applies to artificial intelligence.
Altman, calm and composed, tried several times to draw a distinction between AI and social media: "We need to work together to find a completely new approach."
Some Subcommittee members also seemed reluctant to take drastic action against an industry that holds significant economic promise for the U.S. and competes directly with adversaries like China. Clearly, U.S. politicians are struggling to establish what the dangers of a technology actually are and what their country's technological geopolitics should be.
The Chinese are creating AI that “reinforces the core values of the Chinese Communist Party and the Chinese system,” said Chris Coons, a Democrat from Delaware. “And I am concerned about how we promote AI that reinforces and strengthens open markets, open societies, and democracy.”
Some of the toughest questions and comments to Altman came from Dr. Marcus, who pointed out that OpenAI has not been transparent about the data it uses to develop its systems. He expressed doubts about Altman’s prediction that new jobs would replace those lost to Artificial Intelligence.
“We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of proper regulation, and little inherent reliability,” said Dr. Marcus.
Tech companies have argued that Congress should be cautious with broad rules that lump different types of AI together. At Tuesday's hearing, Ms. Montgomery of IBM called for an AI law similar to the regulations proposed in Europe, which outline various levels of risk. She advocated rules that focus on specific uses rather than regulating the technology itself.
“Essentially, AI is just a tool, and tools can serve different purposes,” she said, adding that Congress should adopt a “precision regulation approach for AI.”
Meanwhile, the competition is intensifying:
Incidentally, OpenAI is not the only company to have launched the kind of system now under discussion. The most complete overview at the moment, considering the projects regarded as most important, looks like this:
ChatGPT. ChatGPT, the AI language model from the research lab OpenAI, has been in the headlines since November for its ability to answer complex questions, write poetry, generate code, plan vacations, and translate languages. GPT-4, the latest version released in mid-March, can even respond to images (and pass the Uniform Bar Exam).
Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s main investor and partner, added a similar chatbot capable of having open-text conversations about practically any topic to its Bing internet search engine. However, it was the bot’s occasionally inaccurate, misleading, and strange answers that garnered much of the attention after its launch.
Bard. Google’s chatbot, named Bard, was launched in March for a limited number of users in the United States and the United Kingdom. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts, and answer questions with facts or opinions.
Ernie. Baidu's chatbot, whose name is short for Enhanced Representation through Knowledge Integration, turned out to be a disappointment after it was revealed that a promised "live" demonstration of the bot had been prerecorded.