Neuralink is making rapid progress in brain engineering. In the short term, hope; in the medium term, a concern.

In 2022, the company began the approval process with the U.S. pharmaceutical regulator to conduct clinical trials of its chip. In the short term, the technology offers hope for the relief of a range of medical problems; in the medium term, it is fuel for transhumanism, a prospect that has sparked deep debate because it could radically alter human life and the mind.

Neuralink, Elon Musk’s brain chip company, announced it has received the green light from the pharmaceutical regulator to conduct its first human trial. The controversial entrepreneur predicted in December that approval from the Food and Drug Administration (FDA), the agency overseeing food, medicines, and medical devices in the U.S., would arrive during the first half of this year. He was right, although approval did not come easily: the company’s initial application was rejected last year. The company, founded in 2016, said this is the first step that will allow its technology to “help many people.”

“Recruitment is not yet open for our clinical trial,” the company posted on Twitter, promising more information in the coming days. Neuralink has been raising expectations about its advancements for several years. In 2020, Musk stated in a presentation that the chips manufactured by the company could cure some types of paralysis and certain cases of insomnia. The controversial magnate, who has often been careless with his words, even claimed that the device could give users “superhuman” vision. At that time, they showcased one of their first implants, in a pig.

A year later, in 2021, Neuralink made one of its most viral presentations. A monkey, Pager, sat in front of a TV and attentively watched a game of Pong on the screen. The primate controlled the game with its mind alone, thanks to a pair of semiconductors the size of a 25-cent coin implanted in both hemispheres of its brain.

Musk said a few months ago that they had started “extremely careful” paperwork with the FDA and were working with the agency. “I think probably in six months we will be able to put our first Neuralink in a human,” said the controversial billionaire, who recently helped Florida Governor Ron DeSantis launch his 2024 U.S. presidential campaign on Twitter.

Before this, Musk had claimed at least three times since 2019 that he was seeking FDA approval for clinical trials in humans. But it wasn’t until 2022 that the company started the legal process with the regulator. According to Reuters, this first request was rejected by the FDA shortly after being submitted. The regulator was concerned about the safety of the semiconductor’s lithium battery. There were also worries that the small wires extending from the chip could migrate to other areas of the brain. Finally, regulators raised questions about the implications of removing the chip and whether that process could damage brain tissue.

The Reuters report cited experts who doubted whether Neuralink could quickly address the concerns raised by the agency, which had the final say in 85% of human trials conducted over the previous three years. “Neuralink doesn’t seem to have the necessary experience or mindset to launch this on the market soon,” said a neural engineer quoted in the piece, published in March.

Neuralink is not the only company preparing to conduct the first human trials of its technology. One of its main competitors, Paradromics, is also seeking approval. Founded in 2015, the Austin-based company has made giant strides with its implants and has grown its team of around fifty researchers into a rising player. Its product, called Connexus Direct Data, promises to help patients with paralysis regain some communication abilities.

The promising profile of its technology led the FDA to include it in its select program for cutting-edge devices, where 32 initiatives receive a faster review process, as they could benefit patients in their treatments and diagnoses. Another company competing in the emerging brain implant industry is Synchron. The companies differ in the size, weight, and functioning of their semiconductors and in the surgical methods for implantation. However, all see the future and the benefits they could bring to millions of people with optimism.

Elon Musk’s transhumanist company receives the green light to test brain implants on humans.

Neuralink, Elon Musk’s company, announced that it has received approval from the U.S. Food and Drug Administration (FDA) to conduct human studies of its brain implants, which have so far been tested only on animals.

The company revealed the FDA’s green light for the first human trials on its Twitter account. “This represents an important first step that will someday allow our technology to help many people,” the company wrote.

In early December, Musk had assured that Neuralink, a company that has not been without controversy due to its animal experiments, was ready to conduct brain implants in humans within six months.

At that time, Musk noted that the FDA had expressed concerns about potential overheating of the implant (which includes microcables threaded into brain tissue), since overheating could cause chemicals to leak from the implant into the brain.

The implant’s function will be to “read” brain activity and transmit commands that could help restore brain functions severely damaged by a stroke or by amyotrophic lateral sclerosis, conditions that cause serious communication impairments.

So far, brain implants have been developed in one direction: from the brain to the outside (usually a computer that processes the signals), but Neuralink’s project aims to also transfer information in the other direction, toward the brain.

Neuralink is developing two types of implants in parallel: one to restore vision “even in those who have never had it” and another to restore basic bodily functions in people with paralysis due to spinal cord damage.

Now, let’s take a closer look at Neuralink, the company pioneering all of this, whose evolution, and particularly its developments, will be worth watching closely in the years ahead.

Neuralink Corporation is an American neurotechnology company founded by Elon Musk that specializes in developing implantable brain-computer interfaces, also known as Brain-Machine Interfaces (BMIs). It is currently developing a device intended to treat patients with disabilities caused by neurological disorders through direct brain stimulation. According to Musk, the technology aims, in the long term, to achieve total symbiosis with artificial intelligence. The company currently conducts animal experiments in collaboration with the University of California, Davis.

Neuralink was founded in 2016 by Elon Musk, Ben Rapoport, Dongjin Seo, Max Hodak, Paul Merolla, Philip Sabes, Tim Gardner, Tim Hanson, and Vanessa Tolosa.

In April 2017, the blog Wait But Why reported that the company aimed to manufacture devices to treat serious brain diseases in the short term, with the ultimate goal of human enhancement, sometimes called transhumanism. Musk said his interest in the idea partly stemmed from the science fiction concept of the “neural lace” in the fictional universe of The Culture, a series of 10 novels by Iain M. Banks.

Musk defined the neural lace as a “digital layer above the cortex” that would not necessarily require extensive surgical insertion, but ideally an implant through a vein or artery. Musk explained that the long-term goal is to achieve “symbiosis with artificial intelligence,” which he perceives as an existential threat to humanity if not controlled. As of 2017, some neural prosthetics can interpret brain signals and allow disabled individuals to control their prosthetic arms and legs. Musk spoke of aiming to link that technology with implants that, instead of activating movement, could interact at broadband speed with other types of software and external devices.

As of 2020, Neuralink is headquartered in the Mission District of San Francisco, sharing the old Pioneer factory building with OpenAI, another company co-founded by Musk. Musk was the majority owner of Neuralink in September 2018, but did not hold an executive position. The role of CEO was held by Jared Birchall, who has also been described as the financial director and president of Neuralink, as well as an executive for several other companies that Musk founded or co-founded. The trademark “Neuralink” was purchased from its previous owners in January 2017.

By August 2020, only two of the eight founding scientists remained with the company, according to an article in Stat News, which reported that Neuralink had faced “years of internal conflict in which rushed timelines clashed with the slow, incremental pace of science.” With Musk in the picture, this should not have been surprising.

Since its founding, the Neuralink team has been notably secretive: the company’s existence was not announced to the public until 2017, and information about the technology it was developing was not revealed until 2019.

The company has received $158 million in funding, of which $100 million has been invested by Musk himself, and it currently has 90 employees.

The company brings together experts from fields such as neuroscience, biochemistry, robotics, applied mathematics, and mechanical engineering, among others, and is actively recruiting specialists in various scientific areas to expand its team.

Its founding members are:

  • Elon Musk.
  • Max Hodak, President of the company. Previously worked on brain-computer interfaces at Duke University.
  • Matthew McDougall, Head of Neurosurgery at Neuralink and neurosurgeon at the California Pacific Medical Center. He was previously employed at Stanford, where he worked in labs that implanted and designed brain-computer interfaces.
  • Vanessa Tolosa, Director of Neural Interfaces. She previously led a neurotechnology team at the Lawrence Livermore National Laboratory, working with a wide range of technological prosthesis technology used in both clinical and academic settings.
  • DJ Seo, Director of the Implantation System. He was the co-inventor of “neural dust,” a technology he developed while studying at UC Berkeley.
  • Philip Sabes, Senior Scientist. He was previously a Professor of Physiology at UC San Francisco and led a lab studying how the brain processes sensory and motor signals.
  • Tim Gardner, Professor of Biology at Boston University, who worked on implanting brain-computer interfaces in birds.
  • Ben Rapoport, Neurosurgeon with a PhD in Electrical Engineering and Computer Science from MIT.
  • Tim Hanson, Researcher at the Berkeley Sensor and Actuator Center.

Neuralink’s short-term aim is to create brain-computer interfaces that can treat conditions caused by neurological disorders. These interfaces have the potential to help people with a wide range of clinical disorders: researchers have shown that patients can use them to control computer cursors, robotic prosthetics, and speech synthesizers, which points to their potential in treating patients with disabilities due to neurological disorders. To date, all studies experimenting with brain-computer interfaces have used systems with no more than 256 electrodes.

Neuralink is building a fully integrated Brain-Computer Interface (BCI) system, also known as BMI (Brain-Machine Interface). BCIs can be used to treat neurological disorders and reveal information about brain functions. Karageorgos et al. have introduced HALO (Hardware Architecture for Low-Power BCIs), an architecture for implantable BCIs, which enables the treatment of disorders such as epilepsy. HALO also records and processes data that can be used for a better understanding of the brain.

Epilepsy is characterized by seizures, defined by uncontrolled and excessive electrical activity of neurons. Neural signals are processed to predict seizures. When brain excitation rises, inhibitory synapses are needed to attenuate and regulate the activity of other cells. BCIs then electrically stimulate neurons to mitigate the severity of seizures. However, the time between seizure onset and stimulation must be within tens of milliseconds, and low-power hardware is needed for safe, chronic implantation.
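Purely as an illustration of the closed loop described above (this is not HALO's or Neuralink's actual algorithm, and the threshold and units are invented), a detector can watch the power of each window of neural samples, trigger stimulation when it crosses a calibrated threshold, and check that the detection-to-stimulation delay stays within the tens-of-milliseconds budget:

```python
import time

SEIZURE_POWER_THRESHOLD = 5.0  # illustrative units; a real device is calibrated per patient
LATENCY_BUDGET_S = 0.050       # "tens of milliseconds" between detection and stimulation

def window_power(samples):
    """Mean squared amplitude of one window of neural samples."""
    return sum(s * s for s in samples) / len(samples)

def closed_loop_step(samples, stimulate):
    """Detect excessive activity in one window and, if found, stimulate in time.

    Returns None when no seizure activity is seen, otherwise True/False
    depending on whether stimulation met the latency budget.
    """
    detected_at = time.monotonic()
    if window_power(samples) > SEIZURE_POWER_THRESHOLD:
        stimulate()
        latency = time.monotonic() - detected_at
        return latency <= LATENCY_BUDGET_S
    return None

# A quiet window produces no response; a high-amplitude window triggers stimulation.
print(closed_loop_step([0.1] * 100, lambda: None))  # → None (below threshold)
print(closed_loop_step([4.0] * 100, lambda: None))  # → True (stimulated within budget)
```

A real implant would run this loop continuously in dedicated low-power hardware rather than software, which is precisely the constraint HALO is designed around.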

Although such studies have demonstrated that information transfer between machines and the brain is possible, the development of brain-computer interfaces has been limited by their inability to collect information from a greater number of neurons. For this reason, Neuralink’s team seeks to develop a device capable of increasing the order of magnitude of neurons from which information can be extracted and stimulated safely and durably through a simple and automated procedure. In other words, collecting information and selectively stimulating as many neurons as possible across various areas of the brain.

The long-term goal is for brain-computer interfaces to be available to the general public and integrated into daily life as essential technology, much as mobile phones and laptops are today.

Musk has repeatedly stated his belief that artificial intelligence poses a risk to humans due to the possibility that it may surpass human abilities. For him, the best solution to the problem would be, instead of continuing to develop AI systems external to humans, to achieve total symbiosis with artificial intelligence so that it can be controlled. This could be achieved by creating a layer of artificial intelligence over the cerebral cortex, a system that is being developed with Neuralink.

Musk’s interest in brain-computer interfaces began, in part, due to the influence of a science fiction concept called “Neural Lace,” which is part of the fictional universe described in The Culture, a series of novels written by Iain Banks.

People could become telepathic, communicating without words by accessing each other’s thoughts. Beyond thoughts, sensory experiences could be transmitted from human to human, so that what one person hears, sees, or tastes could be shared with another. Alternatively, life experiences such as enjoying a meal or skydiving could be lived virtually, with sensations as vivid as the real thing. More modestly, it is plausible that within the next 20 years it will be possible to create images of what people are thinking.

BCIs might also offer opportunities to enhance the brain itself, whether invasive or non-invasive. BCIs could help us remember more and better, learn faster, make better decisions, and solve problems without bias, in exchange for having to go through hard training.

Currently, artificial intelligence (AI) “is an important technological tool that enables the operation of many neural interfaces.” BMIs use AI to convert neural signals into digital data, for example, to interpret instructions from the brain to move a prosthetic arm. In the future, a more complex relationship between BCIs and AI could emerge. Computers and brains are different, but they could be seen as complementary. Humans have decision-making ability and emotional intelligence, while computers have the capacity to process a considerable amount of data quickly. This is why several technology experts believe that beneficial impacts for people could arise by linking human and artificial intelligence through BMIs.
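A minimal sketch of that conversion step, assuming a pre-trained linear readout that maps per-channel firing rates to a two-dimensional prosthetic velocity command (real BMI decoders are fitted to recorded neural data; the channel count and weights below are invented purely for illustration):

```python
# Hypothetical linear decoder: velocity = W @ rates + b.
# Real systems learn W and b from recorded neural data; these values are illustrative.

RATES = [12.0, 3.5, 8.0]        # firing rates (spikes/s) on three recording channels
W = [[0.10, -0.05, 0.02],       # weights producing the x-velocity
     [0.00,  0.08, -0.03]]      # weights producing the y-velocity
B = [0.0, 0.0]                  # bias terms

def decode(rates, weights, bias):
    """Map firing rates to a (vx, vy) command with a linear readout."""
    return [sum(w * r for w, r in zip(row, rates)) + b
            for row, b in zip(weights, bias)]

vx, vy = decode(RATES, W, B)
print(vx, vy)  # a small 2-D velocity command for the prosthetic
```

Production decoders are usually more elaborate (Kalman filters or neural networks), but the principle is the same: a learned mapping from neural activity to device commands.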

In 2019, during a live presentation at the California Academy of Sciences, Neuralink’s team revealed to the public the technology behind the first prototype they had been working on. This system involves ultrathin probes that will be inserted into the brain, a neurosurgical robot that will perform the operations, and a high-density electronic system capable of processing information from neurons.

According to Neuralink’s team, the system under development will use biocompatible probes inserted into the brain through an automated process performed by a surgical robot. The goal of these probes is to pick up electrical signals in the brain using a series of electrodes connected to them. This has already been demonstrated with a monkey, which was able to play Pong using only its brain signals. Elon Musk hopes the invention will one day allow humans to communicate “telepathically.”

The probes developed by Neuralink are designed to be biocompatible, minimizing the possibility of the body rejecting them. They are made mainly of polyimide, a flexible and durable polymer, and coated with a thin layer of gold, making them compatible with the brain’s biological environment. This combination of materials reduces the likelihood of the brain treating them as foreign objects and rejecting them, a common concern with long-term brain implants.

Each probe consists of a set of thin threads containing electrodes capable of detecting the brain’s electrical signals. These threads interact with an electronic system that amplifies and acquires brain signals, enabling the collection of highly precise data. Each probe may have 48 or 96 threads, and each of these threads contains 32 independent electrodes, allowing a configuration with up to 3072 electrodes. This large number of electrodes provides much more detailed and comprehensive signal capture across several areas of the brain, which is crucial for Neuralink’s goals in restoring or enhancing brain functions.
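The electrode arithmetic above is easy to verify; a short check of the two configurations described (48 or 96 threads at 32 electrodes per thread) confirms the 3072-channel maximum:

```python
# Channel counts for the two probe configurations described above.
ELECTRODES_PER_THREAD = 32

for threads in (48, 96):
    channels = threads * ELECTRODES_PER_THREAD
    print(f"{threads} threads x {ELECTRODES_PER_THREAD} electrodes/thread = {channels} channels")
# → 48 threads x 32 electrodes/thread = 1536 channels
# → 96 threads x 32 electrodes/thread = 3072 channels
```

Both figures comfortably exceed the 256-electrode ceiling of earlier research systems mentioned above, which is the order-of-magnitude jump Neuralink is pursuing.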

One of the main challenges of this type of technology is the rigidity of the probes. When inserted into the brain, rigid materials can be recognized as foreign bodies, triggering an immune response from the body, creating scar tissue around the implant. This can lead to the probes becoming ineffective over time. To mitigate this problem, Neuralink has developed a surgical robot capable of inserting multiple flexible probes quickly and precisely, reducing brain trauma and the possibility of an immune response.

The robot features an extremely thin insertion needle, with a diameter of only 40 micrometers, made of tungsten-rhenium, a highly durable and resistant material. This needle is designed to hook onto the loops of the probes and place them with great precision in the areas of the brain necessary for signal recording. This process is automated, increasing accuracy and reducing the risks associated with traditional surgical interventions.

In summary, Neuralink is developing a technology that, thanks to the flexibility of its probes and the precision of the surgical robot, has the potential to transform the way we treat neurological disorders, and could even open the door to more advanced future applications like telepathic communication or cognitive enhancement through integration with artificial intelligence.

Artificial Intelligence XV Roundtable of El Confidencial – Work.

Experts believe that AI will partially automate the affected jobs while encouraging the emergence of new professional profiles. At the same time, challenges such as professional reskilling or ethical issues will need to be addressed.

ALMOST ALL PROFESSIONAL ACTIVITIES HAVE PROCESSES THAT CAN BE AUTOMATED TO ACHIEVE MORE EFFICIENCY. While the use of technology is nothing new in any sector, there is an increasing generalization of the use of artificial intelligence (AI) to save time and costs, while increasing productivity.

Recently, with the rise of generative AI — such as ChatGPT — this trend is affecting both intellectual jobs and those that require more physical effort. In light of this new reality in the work environment, voices for and against automation are already emerging due to its impact on the professional market. At the same time, reflections on how to train workers or the ethical implications of this coexistence between humans and intelligent machines are gaining ground in the public debate.

The relevance of this historical moment demands continuous exchange of credible opinions, which is why, with this premise in mind, El Confidencial organized a roundtable titled “The Future of Work: Automation for More and Better Jobs.” The expert panel included representatives from companies affected by automation and the use of AI, technology companies, and academics specializing in the subject. The participants were Iñaki Ugarte, General Director of Operations at Primera Milla, Amazon Spain; Belén Martín, Vice President of Hybrid Cloud at IBM Consulting; Manuel Espiñeira, Director of Digital Business Technologies Solutions at Minsait, an Indra company; and Ignacio López Sánchez, Professor of Business Organization at the Complutense University of Madrid (UCM).

To contextualize and understand the origins of the automation boom, Iñaki Ugarte listed “three factors” that have accelerated its penetration: “New technologies alongside the digital era and big data, the Millennial and Generation Z workers as digital natives or migrants, and the international context, which, after the pandemic and the complex geopolitical scenario, is forcing industry relocation.” Regarding whether automation will eliminate jobs, he had a clear answer: “Far from there being less work for people, new jobs are actually being created.”

The logical question is what types of jobs are being created with the advent of AI in the workplace. In this regard, Belén Martín provided examples: “Just in the last three months, two profiles have emerged that are revolutionizing everything. One of them is the prompt engineer, people specialized in asking questions to artificial intelligences — they function as a sort of instructor — and whose Spanish translation could be ‘ingeniero de peticiones’. The other is the ethical algorithm trainer, and their role is to prevent social biases.” The Vice President of Hybrid Cloud at IBM Consulting clarified that “although these profiles seem related only to STEM disciplines — science, technology, engineering, and mathematics — there are also profiles in the humanities such as linguists or philosophers, which opens up an unlimited range of possibilities.”

“In the last three months, profiles like ‘prompt engineer’ and ethical algorithm trainer have appeared,” Belén Martín (IBM)

From Minsait, an Indra subsidiary, they consider that “automation will be partial in most jobs,” as explained by their spokesperson during the roundtable. “Approximately 60% of jobs have the potential for partial automation of their tasks, but only 7% of them can actually be automated in more than 50% of their processes,” clarified Manuel Espiñeira, who then recalled that “in the 1950s, there was a catalog in the U.S. of jobs that were expected to disappear with the automation of the time. The list included 270 jobs, and yet only the elevator operator profession disappeared.” To elaborate further on his future forecast, he specified that “the key is the quality of the information analysis a professional can do with AI tools, as it allows them to make high-level decisions, such as a better diagnosis in the case of a doctor. However, this shows that doctors will continue to exist as a profession,” he assured.

This view was shared by Ignacio López Sánchez. For the Professor of Business Organization at the Complutense University of Madrid, “there are certain jobs with a relatively high percentage of repetitive tasks, and therefore automatable, but others not so much. This will force companies to reorganize and define new profiles that, in some cases, will have AI as copilots,” he emphasized. Furthermore, the professor raised a challenge picked up by the rest of the participants in the discussion: “Will we be able to give workers the proper training to perform these new tasks that are coming? And more importantly: What will we do with the people whose jobs will disappear? Will they be able to be retrained for new profiles?” he asked.

Reskilling, educational flexibility, and ethical oversight:

From Iñaki Ugarte’s perspective, “every time a new disruptive technology appears, the same questions arise, and the answer must be clear: no one should be left behind,” he stressed. “But the issue of reskilling has a handicap — the educational system, as it lacks the agility to adapt to new needs,” he continued. Belén Martín agreed and confirmed that “retraining workers is indeed the only way to maintain those affected jobs, and moreover, investing in them through training generates a sense of belonging, which is really useful within the company,” she pointed out.

“60% of jobs will automate part of their tasks, but only 7% will do so in more than 50% of their processes,” Manuel Espiñeira (Minsait)

To highlight the consensus on this topic, Manuel Espiñeira also pointed out that “the academic curriculum requires more flexibility, especially when it comes to new technologies.” “Until now, the university system teaches people to think, but it is the companies that teach how to apply what has been learned,” he specified. His discussion partner, López Sánchez, further expanded on this when he emphasized that “adaptations should be quick, as well as identifying which positions are in demand. We cannot do this from the academic environment; it is the companies, as generators of wealth and employment, that need to put their needs on the table and communicate them to universities. Even so,” insisted the professor from UCM, “there will still be the problem, at least for now, that formal education is especially difficult to modify in Spain and Europe.”

In the final stretch of the debate, a classic issue in discussions about automation and AI came up: the ethical implications. “It has been shown that when artificial intelligences are trained, they carry over the social biases that we humans have. There are traces of the developer in the technology itself. There are many examples of this in recent decades. One of the biggest challenges is precisely ensuring that this doesn’t happen,” admitted Belén Martín. To address this problem, Iñaki Ugarte advised “using people as tools, that is, forming diverse and representative groups in which social diversity is guaranteed to avoid the transfer of biases to AI.”

Another complementary solution, this time proposed by Ignacio López Sánchez, is “to create supervisory bodies, as already happens in other areas with entities like the National Securities Market Commission or the European Central Bank, for example. However, this would be for supervision, not regulation.” Manuel Espiñeira agreed with his words and added that “excessive regulation could limit the development of AI and other associated technologies.” To conclude, the Minsait expert explained that “the real challenge is to reach a balance point in regulation.”


Future Laboratory Analysis Team. Meeting – Roundtable of El Confidencial Newspaper, Spain – Topic: The Rise of AI: Can Automation Redefine the Labor Market Without Leaving Anyone Behind?

Artificial Intelligence XIV: A Philosophical Vision

A PHILOSOPHICAL VISION OF ARTIFICIAL INTELLIGENCE. “WEAK DEMOCRACIES, CAPITALISM, AND ARTIFICIAL INTELLIGENCE ARE A DANGEROUS COMBINATION”

Mark Coeckelbergh: “Weak democracies, capitalism, and artificial intelligence are a dangerous combination.” The philosopher points out that institutions need to rely on experts to regulate technology, but without forgetting the citizens.

Mark Coeckelbergh has focused the attention of an audience unaccustomed to philosophical debates: engineering students filled a room to listen to this expert in technology ethics, invited by the Robotics and Industrial Informatics Institute of the Universitat Politècnica de Catalunya. Coeckelbergh, a prolific author — two of his books are published in Spanish by Cátedra, Ethics of Artificial Intelligence (2021) and Political Philosophy of Artificial Intelligence (2023) — knows how important it is to build bridges between those who develop technologies and those who must think about how to use them.

Question: Do you think that students, engineers, and major tech companies take the ethical aspects of artificial intelligence (AI) into account?

Answer: People are aware that this technology will affect our lives because it’s everywhere, but at the same time, we are confused because the changes are very fast and complex. That’s why I think it’s important that education and research try to find an interdisciplinary path between philosophy, programming, and robotics to address these ethical issues.

Question: And what about politics?

Answer: Yes, we need to create more links between experts and politicians, but not just technical opinions should matter. We need to figure out how we can organize our democracy to consider the vision of experts, yet still make decisions ourselves. Tech companies are gaining more and more power, and this is a problem because the sovereignty of nations and cities is diminishing. How much of our technological future should be left in the hands of private initiatives, and how much should be public and controlled by democracies?

Question: Is artificial intelligence a threat to democracy, or are democracies already weakened?

Answer: Democracy is already vulnerable because we don’t really have complete democracies. It’s like when Gandhi was asked what he thought of Western civilization, and he said it was a good idea. The same goes for democracy: it’s a good idea, but we don’t have it fully. For me, it’s not enough to vote and have majorities, it’s too vulnerable to populism, not sufficiently participatory, and it doesn’t take citizens seriously. There’s a lack of education and knowledge to achieve real democracy, and the same is true for technology. People have to understand that technology is also political, and we need to ask ourselves whether it’s good for democracy that communication infrastructures like Twitter are in private hands.

We use technology uncritically, and while a few benefit, the rest of us are exploited for our data.

Question: In what way does artificial intelligence threaten democracy?

Answer: We deal with technology without thinking; we use it uncritically, but it shapes us and uses us as instruments for power, control, and exploitation of our data. And while a few benefit, the rest of us are exploited for our data. This affects democracies because, not being very resilient, political trends are even more polarized by technology. This combination of weak democracies, capitalism, and artificial intelligence is dangerous. But I do believe it can be used in a more constructive way, to improve life for everyone and not just a few.

Question: Some see artificial intelligence as a way to work less and have more freedom, while others see it as a threat to their jobs.

Answer: I think AI right now empowers those who already have a privileged position or good education: for example, they can use it to start a company. But there will be changes in employment, and there will be some transformation of the economy, so we need to be prepared. On the other hand, the argument that technology makes things easier… Until now, it has led to precarious jobs, like Uber drivers, and to jobs that may be good but stressful. For example, we are all slaves to email, and it came as a solution.

Question: So, the problem is not so much the technology but the system.

Answer: It’s a combination of both things, but indeed, these new technological possibilities force us to question the system more than ever. Today, the political conflict is played out in the realm of technology.

Question: What impact does it have on the media?

Answer: In this environment, the problem isn’t that people believe a lie, but that they don’t know what is a lie and what is truth. Quality journalism is very important to provide context and to try to understand the world. I think it can help people gain more knowledge, even if artificial intelligence is used for some tasks in the profession. Philosophers, journalists, educators, we have to provide the tools to interpret the world, because when knowledge is lacking and confusion reigns, it’s easier for a leader to come along with a simple, populist solution, as has already happened in some countries in Europe.

Question: Can technology make governments more technocratic?

Answer: Politicians are confused, they feel the pressure from lobbies and create regulatory frameworks, but at no point have citizens had a say. States are becoming more and more bureaucratic because they give power to those who control artificial intelligence. So, who is responsible? This kind of system, as Hannah Arendt said, leads to horrors. We must fight against it, with regulations that allow us to see why algorithms make the decisions they do and that allow us to know who is responsible.


Future Laboratory Analysis Team. Article/Report by Josep Cata Figuls.

Artificial Intelligence XIII China Regulation Laboratory

THE SWIFT REGULATION OF ARTIFICIAL INTELLIGENCE IN THE PEOPLE’S REPUBLIC OF CHINA

The draft regulation as a mix of sensible restrictions on the risks of AI and a continuation of the strong tradition of Chinese government intervention in the tech industry.

In April 2023, there was a significant development in the artificial intelligence space in China. The Chinese internet regulator published a draft regulation on generative artificial intelligence. Titled “Measures for the Management of Generative Artificial Intelligence Services,” the document does not name any specific company, but the way it is written makes it clear that it was inspired by the relentless launch of large language model chatbots in China and the United States.

Last week, I participated in the CBC News podcast “Nothing Is Foreign” to discuss the draft regulation and what it means for the Chinese government to take such rapid action on a technology that is still very new.

As I said on the podcast, I see the draft regulation as a mix of sensible restrictions on the risks of AI and a continuation of the strong tradition of Chinese government intervention in the tech industry.

Many of the clauses in the draft regulation are principles that AI critics in the West advocate for: the data used to train generative AI models must not infringe on intellectual property or privacy; algorithms must not discriminate against users based on race, ethnicity, age, gender, and other attributes; AI companies must be transparent about how they obtained the training data and how they hired humans to label the data.

At the same time, there are rules that other countries would likely reject. The government requires people using these generative AI tools to register with their real identity, just as they would on any social platform in China. The content generated by AI software must also “reflect the fundamental values of socialism.”

None of these requirements are surprising. The Chinese government has tightly regulated tech companies in recent years, punishing platforms for lax moderation and incorporating new products into the established censorship regime.

The document makes that regulatory tradition easy to see: there are frequent mentions of other rules that have been passed in China regarding personal data, algorithms, deepfakes, cybersecurity, etc. In a way, it feels as if these discrete documents are slowly forming a network of rules that help the government address new challenges in the technological era.

The fact that the Chinese government can react so quickly to a new technological phenomenon is a double-edged sword. The strength of this approach, which examines each new technological trend separately, “is its precision, creating specific remedies for specific problems,” wrote Matt Sheehan, a fellow at the Carnegie Endowment for International Peace. “The weakness is its fragmented nature, with regulators forced to draft new regulations for new applications or problems.” If the government is busy playing whack-a-mole with new rules, it could miss the opportunity to think strategically about a long-term vision for AI. We can contrast this approach with that of the EU, which has been working on a “hugely ambitious” AI Act for years, as my colleague Melissa recently explained. (A recent review of the AI Act draft included regulations on generative AI).

There’s one point I didn’t mention on the podcast but find fascinating. Despite the restrictive nature of the document, it is also a tacit encouragement for companies to continue working on AI. The proposed maximum fine for violating the rules is 100,000 RMB, around 15,000 dollars, a tiny amount for any company capable of building large language models.

Of course, if a company is fined every time its AI model violates the rules, the amounts could add up. But the size of the fine suggests that the rules are not meant to scare companies into not investing in AI. As Angela Zhang, a law professor at the University of Hong Kong, recently wrote, the government is playing multiple roles: “The Chinese government should not only be seen as a regulator but also as an advocate, sponsor, and investor in AI. The ministries that advocate for the development of AI, along with state sponsors and investors, are prepared to become a powerful counterbalance to strict AI regulation.”

It may take a few months before regulators finalize the draft, and months more before it comes into effect. But I know that many people, including myself, will be watching for any changes.

Who knows? By the time the regulation comes into effect, there might be another new viral AI product that forces the government to come up with even more rules.


Analysis Team of the Future Laboratory / MIT Publication – Massachusetts Institute of Technology (United States). Article by Zeyi Yang.

Translation from English: Translation and Interpretation Team of the Future Laboratory.
