• The Chinese startup DeepSeek offers advanced AI models that match (and even surpass) those from OpenAI, Google, or Meta, but at a fraction of the cost.
• These are the companies, people, and concepts you need to know to understand what artificial intelligence is and not get lost in its dizzying development.
• Chatbots like ChatGPT, from OpenAI, are changing how information is searched on the internet, how images are generated, and much more.
It is becoming increasingly difficult to ignore artificial intelligence (AI).
Since OpenAI launched ChatGPT at the end of 2022, people have gotten used to using this chatbot and its many competitors for everything from automating work tasks to planning vacations.
And with countries like China advancing their own AI capabilities, the United States is fighting for dominance in this industry as the AI arms race intensifies.
Even if you don’t use AI in your daily life, this rapidly evolving technology is increasingly shaping the world around you, creating a growing need to understand what it is and how it might affect you, now and in the future.
To help you understand its impact, here’s a brief guide to the people, companies, and terms you need to know to talk about artificial intelligence with confidence.
The Main Leaders and Companies in AI:
Sam Altman: Co-founder and CEO of OpenAI, the company behind ChatGPT. In 2023, Altman was dismissed by OpenAI’s board of directors, only to return to the company as CEO days later.
Dario Amodei: CEO and co-founder of Anthropic, a major rival to OpenAI, where he previously worked. The AI startup is behind the chatbot Claude. Google and Amazon are investors in Anthropic.
Demis Hassabis: Co-founder of DeepMind and now CEO of Google DeepMind, Hassabis leads AI efforts at Alphabet.
Jensen Huang: CEO and co-founder of Nvidia, the tech giant behind the specialized microchips companies use to power their AI ventures.
Elon Musk: CEO of Tesla and SpaceX, founded the AI startup xAI in 2023. The valuation of this new company had dramatically increased by the end of last year, reaching about $50 billion. Musk also co-founded OpenAI and, after leaving the company in 2018, has maintained a bitter dispute with Altman.
Satya Nadella: CEO of Microsoft, the software giant behind Bing, the AI-powered search engine, and Copilot, a suite of generative AI tools. Microsoft also invests in OpenAI.
Mustafa Suleyman: Co-founder of DeepMind, the AI lab that is now part of Google; he left Google in 2022 and co-founded Inflection AI before joining Microsoft as CEO of Microsoft AI in March 2024.
Liang Wenfeng: Hedge fund manager who founded the Chinese AI startup DeepSeek in 2023. A few weeks ago, the startup made waves in the AI sector with its flagship model, R1, which rivals major competitors in capability while operating at a fraction of the cost.
Mark Zuckerberg: Founder of Facebook and CEO of Meta, he has been investing heavily to improve Meta’s AI capabilities, including training its own models and integrating the technology into its platforms.
Key AI Terms:
Agentic AI: A type of artificial intelligence that can make proactive, autonomous decisions with limited human intervention. Unlike generative AI models like ChatGPT, agentic AI does not need a human prompt for every action; it can carry out complex, multi-step tasks and adapt when its goals change. Google’s Gemini 2.0, for example, focuses on agentic AI that can solve multi-step problems on its own.
AGI: Stands for Artificial General Intelligence, meaning the ability of AI to perform complex cognitive tasks—such as demonstrating self-awareness and critical thinking—similar to how humans do.
Alignment: A field of research focused on AI safety that aims to ensure the goals, decisions, and behaviors of AI systems are consistent with human values and intentions. In July 2023, OpenAI announced the creation of a superalignment team dedicated to making its AI safe. That team was dissolved in May 2024, when the company instead created a Safety and Security Committee to advise its board on “critical safety and security decisions.”
Bias: Because AI models are trained on data created by humans, they can also inherit the same biases present in that data. There are several different types of biases AI models can fall prey to, including prejudice bias, measurement bias, cognitive bias, and exclusion bias, all of which can distort their outputs.
Biden’s Executive Order on Artificial Intelligence: Former U.S. President Joe Biden signed this landmark executive order in 2023. It established a series of measures to try to regulate AI development, including requiring more transparency from tech companies developing it, setting new safety and security standards, and adopting policies to ensure the U.S. remains competitive in AI research and development.
As promised during his election campaign, new U.S. President Donald Trump rescinded Biden’s AI order during his first week in office and signed his own executive order calling for the country to “maintain and enhance America’s global dominance in artificial intelligence.”
Compute: AI computing resources needed to train models and perform tasks, including data processing. This can include GPUs, servers, and cloud services.
Deepfake: An image, video, or voice generated by AI designed to appear real and often used to deceive viewers or listeners. Deepfakes have been used to create non-consensual pornography and to extort people for money.
Distillation: The process of transferring the reasoning and learned knowledge of a larger, pre-existing AI model into a new, smaller one—in other words, copying an existing AI model to create your own.
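For readers who want to see the idea in code, here is a minimal sketch of distillation, assuming Python with the PyTorch library installed; the “teacher” and “student” below are tiny placeholder networks, not any real commercial model.

```python
# Toy illustration of distillation: a small "student" model is trained
# to imitate the output distribution of a larger "teacher" model.
# Assumes PyTorch is installed; both models here are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(10, 5)   # stands in for the large, pre-trained model
student = nn.Linear(10, 5)   # the smaller model being trained

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0            # softens the probability distributions

x = torch.randn(32, 10)      # a random batch standing in for real data

with torch.no_grad():
    teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)

student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)

# The student is pushed to match the teacher's outputs (KL divergence).
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, the student is also trained on real data and this step is repeated many times.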
Effective Altruists: Broadly, a social movement based on the idea that all lives are equally valuable, and those with resources should use them to help as many people as possible. In the context of AI, effective altruists are interested in how the technology can be safely used to reduce suffering caused by social ills like climate change or poverty. Business leaders such as Elon Musk, Sam Bankman-Fried, and Peter Thiel identify as effective altruists.
Frontier Models: The most advanced examples of artificial intelligence. The Frontier Model Forum, a nonprofit industry organization created by Microsoft, Google, OpenAI, and Anthropic in 2023, defines frontier models as “large-scale machine learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks.”
GPU: Computer chip, short for graphics processing unit, used by companies to train and deploy their AI models. Microsoft and Meta use Nvidia GPUs to run their artificial intelligence models.
Hallucination: A phenomenon in which a large language model (see below) generates inaccurate information and presents it as fact. For example, during one of its early demonstrations, Google’s AI chatbot Bard hallucinated by generating a factual error about the James Webb Space Telescope.
Large Language Model (LLM): A complex computer program designed to understand and generate human-like text. The model is trained on massive amounts of text, much of it gathered from across the internet, and produces responses by predicting, word by word, what is most likely to come next. Examples of LLMs include OpenAI’s GPT-4, Meta’s Llama 3, and Google’s Gemini.
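As a rough illustration of what “predicting the next words” means in practice, here is a minimal sketch, assuming Python with the Hugging Face transformers library installed; the small open model gpt2 is used only as a stand-in for far larger systems like GPT-4 or Llama 3.

```python
# Minimal next-word prediction demo with a small open model.
# Assumes the "transformers" library is installed; gpt2 is a small
# stand-in for much larger LLMs and will download on first use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])  # the prompt plus the model's continuation
```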
Machine Learning: A branch of AI in which systems learn from data and improve at tasks on their own, without following explicit, step-by-step instructions from humans. Deep learning, which uses multi-layered neural networks, is a prominent subset of machine learning.
Multimodal: The ability of AI models to process text, images, and audio to generate a result. ChatGPT users, for example, can write, speak, and upload images to the AI chatbot.
Natural Language Processing (NLP): A broad term that covers a variety of methods for interpreting and understanding human language. LLMs are a tool for interpreting language within the field of NLP.
Neural Network: A machine learning program, loosely modeled on the human brain, made up of layers of interconnected nodes that learn from examples; deep learning refers to neural networks with many such layers. Facial recognition systems, for example, are built using neural networks to identify people by analyzing their facial features.
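For the curious, here is a minimal sketch of a single neural network layer, assuming Python with NumPy installed; real systems such as facial recognition stack many such layers with millions of learned weights.

```python
# One toy neural network layer: each "neuron" sums its weighted inputs
# and applies a nonlinear activation. Assumes NumPy is installed.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                  # 4 input features (e.g., pixel values)
weights = rng.random((4, 3))       # connections from 4 inputs to 3 neurons
bias = rng.random(3)

activation = np.maximum(0, x @ weights + bias)   # ReLU activation
print(activation)                  # the layer's 3 outputs
```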
Open Source: A trait used to describe a computer program that anyone can access, use, and modify freely without needing permission. Some AI experts have called for the models behind AIs like ChatGPT to be open source so the public can know exactly how they’re trained.
Optical Character Recognition (OCR): A technology that can recognize text within images, such as scanned documents, text in photos, or read-only PDFs, and extract it into machine-readable text format.
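As a small example of OCR in practice, here is a minimal sketch, assuming Python with the pytesseract package and the underlying Tesseract engine installed; “scan.png” is a hypothetical input file.

```python
# Minimal OCR sketch: extract the text contained in an image.
# Assumes pytesseract and the Tesseract engine are installed;
# "scan.png" is a hypothetical scanned document.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scan.png"))
print(text)  # the recognized text as a plain string
```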
Prompt Engineering: The practice of crafting and refining the questions or instructions (prompts) given to AI chatbots to get the desired responses. As a profession, prompt engineers specialize in designing prompts that draw better results out of AI models.
Rationalists: People who believe the most effective way to understand the world is through logic, reason, and scientific evidence. They draw conclusions through gathering evidence and critical thinking instead of relying on personal feelings.
In the context of artificial intelligence, rationalists aim to answer questions like how AI can become more intelligent, how it can solve complex problems, and how it can better process information about risks. This contrasts with empiricists, who in the AI context may prefer advances backed by observational data.
Responsible Scaling Policies: Frameworks adopted by AI developers to mitigate safety risks as their systems grow more powerful, covering the responsible development of AI systems, their societal impact, and the resources they consume, such as energy and data. These policies help ensure the technology remains ethical, beneficial, and sustainable as systems become more capable.
Singularity: A hypothetical moment when artificial intelligence advances so far that the technology surpasses human intelligence. Think of a science fiction scenario where an AI robot develops its own personality and takes over the world.