Anthropic PBC is an American artificial intelligence company headquartered in San Francisco that operates as a public benefit corporation. It was founded in 2021 by former OpenAI executives, including siblings Dario and Daniela Amodei, after disagreements over the direction of AI safety and commercialization. The company’s core mission is to research and develop AI systems at the technological frontier while prioritizing their safety and reliability.
The organization’s flagship product is the Claude family of large language models (LLMs), which are accessible via web interfaces, APIs, and dedicated applications. These models are distinctive for their use of Constitutional AI (CAI), a training framework where the AI is aligned with human values by adhering to a specific "constitution" of principles rather than relying solely on human feedback. In 2025 and 2026, the company expanded its lineup with Claude 4 and subsequent updates like Claude Opus 4.6, introducing improved coding capabilities and real-time web search. Other specialized products include Claude Code, a command-line agent for software development, and Claude Cowork, a graphical interface for enterprise collaboration.
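The Constitutional AI loop described above can be sketched in a few lines. This is a hypothetical simplification for illustration only: the `toy_model` stand-in, prompt wording, and two-principle constitution are invented here, whereas real CAI training uses an LLM for every step and then fine-tunes on the revised outputs.

```python
# Minimal sketch of the Constitutional AI critique-and-revision loop.
# Hypothetical simplification: real CAI calls an LLM at each step and
# uses the revised responses as training data.

CONSTITUTION = [
    "Choose the response least likely to assist harmful activity.",
    "Choose the response most honest about its own uncertainty.",
]

def critique(response: str, principle: str, model) -> str:
    """Ask the model whether `response` conflicts with `principle`."""
    return model(f"Critique this response against the principle "
                 f"'{principle}':\n{response}")

def revise(response: str, critique_text: str, model) -> str:
    """Ask the model to rewrite the response to address the critique."""
    return model(f"Rewrite the response to address this critique:\n"
                 f"{critique_text}\nOriginal: {response}")

def constitutional_pass(response: str, model) -> str:
    """One critique-and-revision pass per constitutional principle."""
    for principle in CONSTITUTION:
        c = critique(response, principle, model)
        response = revise(response, c, model)
    return response

# Toy stand-in model so the sketch runs without an API; it merely tags
# each revision so the pipeline's shape is visible.
def toy_model(prompt: str) -> str:
    if prompt.startswith("Critique"):
        return "Flag: may conflict with the principle."
    return "[revised] " + prompt.split("Original: ", 1)[1]

print(constitutional_pass("Sure, here is how to do that.", toy_model))
# → [revised] [revised] Sure, here is how to do that.
```

The point of the structure is that the critique signal comes from the written principles rather than from per-example human labels, which is what distinguishes CAI from training purely on human feedback.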
Financially, the company has seen explosive growth, reaching a valuation of $380 billion by February 2026. It has secured nearly $64 billion in funding since its inception, with major investments from technology giants such as Amazon, Google, Microsoft, and Nvidia, as well as venture firms like GIC and Coatue. Its run-rate revenue reached $14 billion in 2025, with a 32% share of the enterprise LLM market.
Despite its commercial success, the company has faced significant legal and political challenges. It was sued by music publishers for allegedly using copyrighted lyrics and by a class of authors for training models on pirated copies of their books. In September 2025, the company agreed to a $1.5 billion settlement with authors—the largest copyright resolution in U.S. history. Internal documents also revealed the existence of "Project Panama," a confidential operation where the company purchased and scanned millions of physical books to train its models.
The company is currently at the center of a major national security dispute with the U.S. government. While its models were initially the only ones deployed on the Pentagon's classified networks via a partnership with Palantir, tensions escalated over safety guardrails. The company refused demands from the Department of Defense to drop restrictions preventing Claude from being used for mass surveillance of Americans or in fully autonomous lethal weapons. Consequently, in February 2026, President Donald Trump issued an executive order directing federal agencies to cease using the company's technology, and the Pentagon designated the firm a supply chain risk.
In addition to product development, the firm remains a major research laboratory. It publishes extensive work on mechanistic interpretability, using techniques like "dictionary learning" to identify millions of features—conceptual patterns of neural activation—within its models, in order to understand how they process information, including whether they plan ahead or behave deceptively. As of 2026, the company employs approximately 2,500 people and has established a "Long-Term Benefit Trust" to ensure its development of advanced AI remains focused on the long-term benefit of humanity.
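The core idea behind dictionary learning here can be illustrated with a toy example: an activation vector is approximated as a sparse combination of a small number of "feature" directions from a learned dictionary. Everything below is invented for illustration (tiny dimensions, a random dictionary, a simple top-k projection); the published interpretability work trains sparse autoencoders on real model activations with millions of features.

```python
import numpy as np

# Toy sketch of dictionary-based sparse coding for interpretability.
# Hypothetical setup: random unit-norm dictionary, tiny dimensions.

rng = np.random.default_rng(0)

n_features, d_model = 16, 8            # dictionary size, activation width
dictionary = rng.normal(size=(n_features, d_model))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def sparse_code(activation, k=3):
    """Approximate an activation as a sparse sum of dictionary features:
    keep only the k features with the largest absolute projections."""
    coeffs = dictionary @ activation           # projection onto each feature
    top = np.argsort(-np.abs(coeffs))[:k]      # indices of the k strongest
    sparse = np.zeros(n_features)
    sparse[top] = coeffs[top]
    # Rough reconstruction from the kept features only.
    return sparse, dictionary.T @ sparse

x = rng.normal(size=d_model)
codes, recon = sparse_code(x)
print("active features:", np.flatnonzero(codes))
print("reconstruction error:", np.linalg.norm(x - recon))
```

Because only a handful of features fire for any given activation, each feature can be inspected and (ideally) given a human-readable label, which is what makes this decomposition useful for interpretability.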