



How does Claude work? Anthropic reveals its secrets


Have you ever wondered what shapes an artificial intelligence (AI) chatbot's responses when it talks to a human? Anthropic, the company behind Claude, has now revealed part of that recipe.

In new release notes published Monday, the company revealed the system prompts, or commands, that instruct and encourage certain behaviors from its chatbot. Anthropic detailed the prompts used to instruct each of its three AI models: Claude 3.5 Sonnet, Claude 3 Opus and Claude 3 Haiku.
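For readers who have not worked with these models directly: a system prompt is supplied separately from the user's message and acts as standing instructions for the whole conversation. Here is a minimal sketch of how that works with Anthropic's Messages API in Python; the prompt text and the example model ID are illustrative assumptions, not Anthropic's published wording.

import anthropic

# A minimal sketch of steering Claude with a system prompt via the
# Messages API. The prompt text below is illustrative, not Anthropic's
# actual published wording.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model ID from mid-2024
    max_tokens=512,
    # Standing rules for the conversation, kept separate from the
    # user's actual message below.
    system=(
        "You cannot open URLs or links. If the user shares one, ask them to "
        "paste the relevant text directly into the conversation."
    ),
    messages=[
        {"role": "user", "content": "Can you summarize https://example.com?"}
    ],
)
print(response.content[0].text)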

The prompts, dated July 12, show that the three models work in similar ways, though the number of instructions varies from model to model.


Sonnet, accessible for free through Claude's website, is considered the smartest of the three models and has the largest number of system prompts. Opus, suited to complex writing and editing tasks, has the second-largest number of prompts and is available to Claude Pro subscribers. Haiku, rated the fastest of the three and also available to subscribers, has the fewest prompts.


What do the system prompts actually say? Here are examples for each model.

Claude 3.5 Sonnet

In a system prompt, Anthropic tells Sonnet that it cannot open URLs, links, or videos. If you try to include them when querying Sonnet, the chatbot will clarify this limitation and instruct you to paste the text or image directly into the conversation. Another prompt says that when a user asks a question about a controversial topic, Sonnet should try to respond with considered reasoning and clear information, without saying the topic is sensitive or claiming to provide objective facts.


If Sonnet cannot or will not perform a task, it is instructed to tell you so without apologizing (and, more generally, to avoid beginning answers with "I'm sorry" or "I apologize"). If you ask about an obscure topic, Sonnet warns you that, while it tries to be accurate, it may hallucinate in its answer.

Anthropic even asks Claude to explicitly use the word "hallucinate," because users are likely to know what it means.

Claude Sonnet is also programmed to be cautious with images, especially ones containing recognizable faces. Even when describing an image, Sonnet acts as if it is "face blind," meaning it will not tell you the name of a person in the image. If you know the name and share that detail with Claude, the AI can discuss that person, but it will do so without confirming that the person actually appears in the image.

Next, Sonnet is instructed to give thorough and sometimes lengthy answers to complex, open-ended questions, but shorter, more concise answers to simple questions and tasks. Overall, the AI should aim to give a concise answer first, then offer to elaborate if you ask for more details.


"Claude is happy to help with analysis, questions, math, coding, creative writing, teaching, role-playing, general discussions, and all sorts of other tasks," Anthropic adds in another system prompt. However, the chatbot is instructed to avoid unnecessary affirmations and filler phrases such as "Certainly," "Of course," "Absolutely," "Great," and "Sure."

Claude 3 Opus

Opus contains several of the same system prompts as Sonnet, including the same explanation of its inability to open URLs, links, or videos, and the same disclaimer about hallucinations.

Beyond that, Opus is told to help with questions that involve views held by a large number of people, even if it disagrees with those views. On controversial topics, Opus is told to provide careful consideration and objective information, without downplaying harmful content.


The bot is also instructed to avoid stereotypes, including any “negative stereotyping of majority groups.”

Claude 3 Haiku

Finally, Haiku is programmed to give concise answers to very simple questions but more thorough answers to complex, open-ended questions. Haiku's prompt is slightly narrower in scope than Sonnet's, gearing the model toward "writing, analysis, question answering, math, coding, and all sorts of other tasks," the release notes say. In addition, this model avoids mentioning information contained in the system prompt unless that information is directly relevant to your question.

Overall, the prompts read as if a novelist were writing a character study, an outline of what the character should and shouldn't do. Certain prompts were particularly insightful, especially those instructing Claude not to apologize in conversations but to be honest when an answer might be a hallucination (a term Anthropic believes everyone understands).


Anthropic's transparency here is unusual, as generative AI developers typically keep their system prompts confidential. The company, however, plans to publish such revelations on a regular basis.

In a post on X, Alex Albert, head of developer relations at Anthropic, said the company will log changes to the default system prompts on Claude.ai and in its mobile apps.
