Artificial Intelligence VII: Note by OpenAI

States and technology

June 02, 2023


Should artificial intelligence be regulated?

The creators and developers of ChatGPT believe that a new superintelligence could surpass human experts in most disciplines within the next 10 years, and they are calling for supranational regulation. However, the company's chief executive, Sam Altman, does not find the regulation proposed by the European Union acceptable.

The creators of ChatGPT have published a note (reproduced in full below) in which they warn that within ten years AI systems could surpass human experts in most areas. This superintelligence, they say, will be more powerful than other technologies humanity has faced in the past and poses an existential risk to our civilization. They therefore urge authorities to think about how to manage it. It is material worth examining and, above all, worth asking why the creators of the technology are, at least in theory, so alarmed…

Sam Altman, Greg Brockman, and Ilya Sutskever, three of the co-founders of OpenAI, the company behind ChatGPT, believe that a future superintelligence will be even more capable than general artificial intelligence (a form of synthetic intelligence comparable to human intelligence) and will be as productive as today's largest companies.

But it is important to look at the content of that note:

The note from the three founders is from May 22, 2023, and it reads as follows:
Responsible Artificial Intelligence, Safety, and Alignment. Given the current state of affairs, it is conceivable that within the next ten years, artificial intelligence systems could surpass the skill level of experts in most domains and perform as much productive activity as one of today’s largest corporations. In terms of potential advantages and disadvantages, superintelligence will be more powerful than other technologies humanity has had to deal with in the past. We could have a dramatically more prosperous future, but we must manage the risk to get there. Given the possibility of existential risk, we cannot simply be reactive. Nuclear energy is a historical example of a commonly used technology with this property; synthetic biology is another example. We must also mitigate the risks of current AI technology, but superintelligence will require special treatment and coordination.
A starting point: There are many ideas that matter if we are to have a good chance of successfully navigating this development; here we present our initial thoughts on three of them. First, we need some degree of coordination among the main development efforts to ensure that the development of superintelligence happens in a way that allows us to maintain safety and helps these systems integrate smoothly with society. There are many ways this could be implemented: major governments worldwide could establish a project that involves many of the current efforts, or we could collectively agree (with the backing power of a new organization as suggested below) that the growth rate of AI capabilities at the frontier is limited to a certain rate per year. And, of course, individual companies must be held to an extremely high standard of responsible conduct.
Second, we will likely eventually need something like the IAEA (International Atomic Energy Agency) for superintelligence efforts; any effort above a certain threshold of capability (or of resources, such as computing) should be subject to an international authority that can inspect systems, require audits, test compliance with safety standards, place restrictions on degrees of deployment and levels of security, and so on. Monitoring the use of computing and energy could help a great deal and gives us some hope that this idea could actually be implemented. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second step, individual countries could implement it. It would be important for such an agency to focus on reducing existential risk and not on issues that should be left to individual countries, such as defining what an AI should be allowed to say.
Third, we need the technical ability to make superintelligence safe. This is an open research question that we and others are putting a lot of effort into.
What is not within scope: We believe it is important to allow companies and open-source projects to develop models below a significant capability threshold without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits). Today's systems will create enormous value in the world and, while they carry risks, the level of those risks feels in line with other internet technologies, and society's likely approaches seem appropriate. By contrast, the systems we are concerned about will have power beyond anything created so far, and we must be careful not to dilute the focus on them by applying similar standards to technology far below this bar.
Public Opinion and Potential: However, governance of the most powerful systems, as well as decisions related to their deployment, must have strong public oversight. We believe people around the world should democratically decide the limits and default values of AI systems. We still don’t know how to design such a mechanism, but we plan to experiment with its development. We still think that, within these broad limits, individual users should have a lot of control over how the AI they use behaves.
Given the risks and difficulties, it’s worth considering why we are building this technology. At OpenAI, we have two fundamental reasons. First, we believe it will lead to a much better world than we can imagine today (we’re already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces many problems that we will need much more help to solve; this technology can improve our societies, and the creative capacity of everyone to use these new tools will surely astonish us. Economic growth and improved quality of life will be astounding.
Second, we believe it would be unintuitive, risky, and difficult to stop the creation of superintelligence. Because the benefits are so enormous, the cost of building it decreases each year, the number of actors building it is rapidly increasing, and it is inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that is not guaranteed to work. So we have to get it right.

OpenAI's vision, backed by Microsoft's multi-million-dollar support, is optimistic about the future of artificial intelligence, holding that future generations of AI could bring about a much more prosperous world. However, they also acknowledge the great challenge of managing the existential risk these technologies entail, above all at the superintelligence stage, which could surpass human capabilities in many areas.

The creators of ChatGPT and other artificial intelligence systems agree that, although superintelligence is an attractive goal, halting development at this point would be risky and difficult, given that the potential advantages are enormous and the actors involved in building it are multiplying rapidly. Rather than stopping its progress, they prefer to address the problem with a proactive, controlled approach that minimizes risks and ensures a safe transition into this new era.

As for the measures proposed to mitigate existential risk, they put forward three key points:

  1. Coordinated development of the technology: Development efforts must be well coordinated among governments and companies so that AI is integrated into society responsibly and minimum safety standards are respected. OpenAI suggests two ways of doing this: one led by the world's major governments with the participation of the key developers, and another via a collective agreement, backed by a new organization, that limits the rate at which AI capabilities grow.
  2. A global regulatory body similar to the IAEA: The second step would be to create an international organization to oversee the development of superintelligence, analogous to the IAEA for nuclear energy. This body would oversee any effort above a significant capability threshold, ensuring that advanced technologies are safe and do not pose an existential risk. As a preliminary step, OpenAI suggests that companies could already begin implementing these measures voluntarily.
  3. Technical capability to make superintelligence safe: They stress that it is essential to advance the research needed to make superintelligence safe, an effort to which OpenAI and others are devoting substantial resources.

Despite these suggestions, OpenAI is unhappy with the regulation proposed by the European Union, which has taken a more restrictive approach to artificial intelligence. Sam Altman, CEO of OpenAI, has said that the company could even stop operating in Europe if the European AI law is applied as it currently stands, since he considers some of the restrictions too strict. The EU law establishes three risk levels for AI technologies: one that prohibits certain uses (such as social scoring systems), another that imposes specific legal requirements, and a third for AI systems that are not considered high risk and would remain largely unregulated.

This conflict highlights the tension between the need for regulation to guarantee safety and the flexibility that technology companies demand in order to innovate quickly. In this context, OpenAI argues that, while AI development must be responsible and safe, there should be no restrictions that could hold back the progress and potential benefits of artificial intelligence.

In short, OpenAI faces a dilemma: advancing cautiously toward a future of superintelligence without slowing its development, while at the same time dealing with the regulatory pressures emerging around the world.

Author: Laboratory of the Future analysis team

