Anthropic’s AI constitution: creating a safer rival to OpenAI’s ChatGPT
Anthropic, an artificial intelligence (AI) startup backed by Google's parent company Alphabet, has revealed Claude, its rival to OpenAI's ChatGPT, which comes with a set of written moral values designed to make the chatbot safe to use. These guidelines, known as Claude's constitution, draw on sources including the United Nations' Universal Declaration of Human Rights and Apple Inc's data privacy rules.
The safety of AI systems is increasingly in the spotlight as legislators in countries such as the United States consider how to regulate the technology. President Joe Biden has said that companies must ensure their AI systems are safe before releasing them to the public. Anthropic, founded by former OpenAI executives, focuses on building safe AI systems that avoid, for instance, telling users how to make weapons or using racially biased language.
Dario Amodei, 42, Anthropic's co-founder, was among a group of AI experts who met with President Biden last week to discuss the potential dangers of AI technology. Many AI chatbot systems currently rely on feedback from real humans to decide which responses might be harmful or offensive. But because reviewers cannot anticipate every possible question, some systems simply avoid contentious topics such as politics and race altogether, which reduces their usefulness.
Anthropic's Claude stands apart by utilising a written set of moral values that the AI system can learn from while generating responses to questions. These values include directives such as "choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment," according to a blog post from the company on Tuesday. Claude is also instructed to select responses that would be least likely to offend non-western cultural traditions.
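The "constitutional" approach described above amounts to a self-revision loop: the model drafts an answer, then checks and rewrites it against each written principle. A minimal sketch of that loop is below; the principle text is quoted from the article, while `generate` and `critique_and_revise` are hypothetical stand-ins for language-model calls, not a real Anthropic API.

```python
# Rough sketch of a constitutional-AI self-revision loop.
# The principle is quoted from the article; the model calls are placeholders.

PRINCIPLES = [
    "Choose the response that most discourages and opposes torture, "
    "slavery, cruelty, and inhuman or degrading treatment.",
]

def generate(prompt: str) -> str:
    # Placeholder: a real system would sample a draft answer from the model.
    return f"draft answer to: {prompt}"

def critique_and_revise(prompt: str, draft: str, principle: str) -> str:
    # Placeholder: a real system would ask the model whether the draft
    # violates the principle and, if so, to rewrite it accordingly.
    return f"{draft} [revised to comply with: {principle[:40]}...]"

def constitutional_answer(prompt: str) -> str:
    # Draft once, then revise the answer against every written principle.
    answer = generate(prompt)
    for principle in PRINCIPLES:
        answer = critique_and_revise(prompt, answer, principle)
    return answer

print(constitutional_answer("How should disputes be resolved?"))
```

The point of the structure, as the article suggests, is that the values steering the system live in an explicit, inspectable list rather than being implicit in human feedback.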
Jack Clark, Anthropic's other co-founder, explained in an interview that a system's constitution could strike a balance between offering useful information and avoiding offence. Clark anticipates that politicians will soon turn their attention to the values embedded in different AI systems, and that the constitutional AI approach will help inform those discussions by making the values built into a chatbot explicit, Channel News Asia reports.