On Tuesday, Anthropic introduced Claude, a large language model (LLM) that can generate text, write code, and function as an AI assistant similar to ChatGPT. The model grew out of Anthropic's core concerns about future AI safety, and the company trained it using a technique it calls "Constitutional AI."
Two versions of the AI model, Claude and "Claude Instant," are available now to a limited "early access" group and to commercial partners of Anthropic. Those with access can use Claude through either a chat interface in Anthropic's developer console or an application programming interface (API). With the API, developers can hook into Anthropic's servers remotely and add Claude's analysis and text-completion abilities to their apps.
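To give a sense of what that API integration looks like, here is a minimal Python sketch that assembles a single completion request. The endpoint URL, header names, payload fields, and model identifier below are illustrative assumptions about the launch-era API, not official documentation; a real app would consult Anthropic's docs and then POST this payload.

```python
import json
import os

# Hypothetical endpoint; the exact URL is an assumption for illustration.
API_URL = "https://api.anthropic.com/v1/complete"


def build_claude_request(user_message: str, model: str = "claude-v1"):
    """Assemble headers and a JSON payload for a single-turn request.

    Field names (prompt, max_tokens_to_sample, stop_sequences) and the
    Human/Assistant prompt format are assumed, not confirmed.
    """
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "sk-placeholder"),
        "content-type": "application/json",
    }
    payload = {
        "model": model,
        # A conversational transcript framing the user's turn.
        "prompt": f"\n\nHuman: {user_message}\n\nAssistant:",
        "max_tokens_to_sample": 256,
        # Stop generating when the model would start a new human turn.
        "stop_sequences": ["\n\nHuman:"],
    }
    return headers, payload


headers, payload = build_claude_request("Summarize this paragraph in one sentence.")
print(json.dumps(payload, indent=2))  # the request body an app would send to API_URL
```

A developer's app would send this payload with an HTTP client, read the model's completion from the response, and surface it in the product, which is how Claude's summarization or coding abilities get embedded in third-party software.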
Anthropic claims that Claude is "much less likely to produce harmful outputs, easier to converse with, and more steerable" than other AI chatbots while maintaining "a high degree of reliability and predictability." The company cites use cases such as search, summarization, collaborative writing, and coding. And, like ChatGPT's API, Claude can change personality, tone, or behavior depending on user preference.