Claude
Developer: Anthropic
License: Proprietary
Claude is a family of large language models developed by Anthropic. The first model was released in March 2023. Claude 3, released in March 2024, can also analyze images.
Claude models are generative pre-trained transformers. They have been pre-trained to predict the next word in large amounts of text. Claude models have then been fine-tuned with Constitutional AI with the aim of making them helpful, honest, and harmless.[1]
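In generic terms (this is the standard autoregressive objective given as an illustration, not a training detail disclosed by Anthropic), next-word pre-training minimizes the cross-entropy loss

    \mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t}),

where x_1, ..., x_T is a training text and p_\theta(x_t \mid x_{<t}) is the probability the model assigns to token x_t given the preceding tokens.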
Constitutional AI is an approach developed by Anthropic for training AI systems, particularly language models like Claude, to be harmless and helpful without relying on extensive human feedback. The method, detailed in the paper "Constitutional AI: Harmlessness from AI Feedback", involves two phases: supervised learning and reinforcement learning.
In the supervised learning phase, the model generates responses to prompts, self-critiques these responses based on a set of guiding principles (a "constitution"), and revises the responses. Then the model is fine-tuned on these revised responses.
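A minimal sketch of this critique-and-revision loop, assuming a placeholder generate() helper that wraps some underlying language model (the function, principle text, and prompt wording below are illustrative, not Anthropic's implementation):

    # Illustrative sketch of the supervised (critique-and-revision) phase.
    # generate() is a placeholder standing in for a call to a language model.
    def generate(prompt: str) -> str:
        raise NotImplementedError("wrap a language model call here")

    CONSTITUTION = [
        "Choose the response that is least likely to be harmful or offensive.",
        # ... further principles ...
    ]

    def critique_and_revise(user_prompt: str) -> str:
        response = generate(user_prompt)
        for principle in CONSTITUTION:
            critique = generate(
                "Critique the following response according to this principle.\n"
                f"Principle: {principle}\nResponse: {response}"
            )
            response = generate(
                "Revise the response to address the critique.\n"
                f"Critique: {critique}\nOriginal response: {response}"
            )
        # The (prompt, revised response) pairs become supervised fine-tuning data.
        return response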
For the reinforcement learning from AI feedback (RLAIF) phase, responses are generated, and an AI compares their compliance with the constitution. This dataset of AI feedback is used to train a preference model that evaluates responses based on how much they satisfy the constitution. Claude is then fine-tuned to align with this preference model. This technique is similar to reinforcement learning from human feedback (RLHF), except that the comparisons used to train the preference model are AI-generated, and that they are based on the constitution.[2]
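The AI-feedback step can be sketched in the same spirit (again with a placeholder generate() function; the judging prompt and data format are assumptions, and the preference-model training and RL steps themselves are not shown):

    # Illustrative sketch of collecting AI preference labels for RLAIF.
    def generate(prompt: str) -> str:
        raise NotImplementedError("wrap a language model call here")

    def collect_preference_pair(user_prompt: str, principle: str) -> dict:
        # Sample two candidate responses from the model being trained.
        response_a = generate(user_prompt)
        response_b = generate(user_prompt)
        # Ask an AI judge which candidate better satisfies the constitutional principle.
        verdict = generate(
            f"Principle: {principle}\nPrompt: {user_prompt}\n"
            f"(A) {response_a}\n(B) {response_b}\n"
            "Which response better follows the principle? Answer A or B."
        )
        chosen, rejected = (
            (response_a, response_b) if verdict.strip().startswith("A")
            else (response_b, response_a)
        )
        return {"prompt": user_prompt, "chosen": chosen, "rejected": rejected}

    # Pairs like these train a preference model whose scores act as the reward
    # signal for reinforcement learning, analogous to RLHF but with AI-generated
    # comparisons grounded in the constitution.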
This approach enables the training of AI assistants that are both helpful and harmless, and that can explain their objections to harmful requests, enhancing transparency and reducing reliance on human supervision.[3]
The "constitution" for Claude included 75 points, including sections from the UN Universal Declaration of Human Rights.[4]
Claude was the initial version of Anthropic's language model, released in March 2023.[5] It demonstrated proficiency in various tasks but had certain limitations in coding, math, and reasoning capabilities.[6] Anthropic partnered with companies such as Notion (productivity software) and Quora (to help develop the Poe chatbot).
Claude was released in two versions, Claude and Claude Instant, with Claude Instant being a faster, less expensive, and lighter version. Claude Instant has an input context length of 100,000 tokens (which corresponds to around 75,000 words).[7]
Claude 2 was the next major iteration of Claude, released in July 2023 and available to the general public, whereas Claude 1 had been available only to selected users approved by Anthropic.[8]
Claude 2 expanded the context window from 9,000 tokens to 100,000 tokens. Features included the ability to upload PDFs and other documents, enabling Claude to read, summarize, and assist with tasks.
Claude 2.1 doubled the number of tokens that the chatbot could handle, increasing it to a window of 200,000 tokens, which equals around 500 pages of written material.[9]
Anthropic states that the new model is less likely to produce false statements than its predecessors.[10]
Claude 3 was released on March 4, 2024, with Anthropic claiming in the press release that it set new industry benchmarks across a wide range of cognitive tasks. The Claude 3 family comprises three models in ascending order of capability: Haiku, Sonnet, and Opus. The models have a context window of 200,000 tokens, which Anthropic said was being expanded to 1 million tokens for specific use cases.[11][12]
Claude 3 drew attention for demonstrating an apparent ability to recognize that it was being artificially tested during "needle in a haystack" evaluations.[13]
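The cited behaviour arose in an internal evaluation, but the general shape of a needle-in-a-haystack test can be sketched as follows (a hypothetical harness assuming the anthropic Python SDK's messages interface, an example model identifier, and illustrative needle and filler text): a "needle" sentence is buried at a chosen depth in a long distractor document, and the model is asked to retrieve it.

    # Illustrative needle-in-a-haystack harness (not Anthropic's internal test).
    # Assumes the anthropic Python SDK and an ANTHROPIC_API_KEY environment variable.
    import anthropic

    FILLER = "The quick brown fox jumps over the lazy dog. " * 4000  # distractor text
    NEEDLE = "The best thing to do in San Francisco is to eat a sandwich in Dolores Park."

    def build_haystack(depth: float) -> str:
        # Insert the needle at a relative depth (0.0 = start, 1.0 = end) of the filler.
        cut = int(len(FILLER) * depth)
        return FILLER[:cut] + NEEDLE + " " + FILLER[cut:]

    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-opus-20240229",  # example model identifier
        max_tokens=200,
        messages=[{
            "role": "user",
            "content": build_haystack(0.5)
            + "\n\nWhat is the best thing to do in San Francisco?",
        }],
    )
    # The test checks whether the model retrieves the planted sentence.
    print(message.content[0].text)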
On June 20, 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in areas such as coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside 3.5 Sonnet was the new Artifacts capability, in which Claude can write code in a dedicated window in the interface and preview the rendered output, such as websites or SVG graphics, in real time.[14]
Limited use of Claude 3.5 Sonnet is free of charge but requires both an e-mail address and a cellphone number. A paid plan offers higher usage limits and access to all Claude 3 models.[15]
On May 1, 2024, Anthropic announced the Claude Team plan, its first enterprise offering for Claude, and a Claude iOS app.[16]
Claude 2 received criticism for its stringent ethical alignment, which some argued reduced usability and performance. Users were refused assistance with benign requests, for example the programming question "How can I kill all python processes in my ubuntu server?" This led to a debate over the "alignment tax" (the cost of ensuring an AI system is aligned) in AI development, with discussions centered on balancing ethical considerations and practical functionality. Critics argued for user autonomy and effectiveness, while proponents stressed the importance of ethical AI.[17]
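For context, the quoted request describes a routine system-administration task; a direct answer, shown here as a small Python wrapper around the standard pkill utility (the equivalent of running "pkill -f python" in a terminal), might look like:

    # Terminate all processes whose command line matches "python" on a Linux host.
    # Requires permission to signal the target processes.
    import subprocess

    subprocess.run(["pkill", "-f", "python"], check=False)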