
refactor: migrate Anthropic provider to @ai-sdk/anthropic#11287

Open
daniel-lxs wants to merge 1 commit into main from migrate-anthropic-to-ai-sdk

Conversation


@daniel-lxs daniel-lxs commented Feb 7, 2026

Summary

Migrates the Anthropic provider from the raw @anthropic-ai/sdk to @ai-sdk/anthropic (Vercel AI SDK) for consistency with other providers (Bedrock, DeepSeek, Mistral, etc.).

Changes

src/api/providers/anthropic.ts

  • Provider: Replace new Anthropic() client with createAnthropic() from @ai-sdk/anthropic
  • Streaming: Replace manual stream event parsing (message_start, content_block_start, content_block_delta, etc.) with streamText() + processAiSdkStreamPart()
  • Completion: Replace client.messages.create({stream: false}) with generateText()
  • Messages: Use convertToAiSdkMessages() for format conversion
  • Tools: Use convertToolsForAiSdk() and mapToolChoice() (replaces convertOpenAIToolsToAnthropic / convertOpenAIToolChoiceToAnthropic)
  • Caching: System prompt cached via providerOptions.anthropic.cacheControl on a system message; last 2 user messages cached via applyCacheControlToAiSdkMessages() (same walk-in-parallel approach as Bedrock)
  • Thinking/Reasoning: Extended thinking via providerOptions.anthropic.thinking with budgetTokens
  • Thinking Signatures: Added getThoughtSignature() and getRedactedThinkingBlocks() — improves on the original which had a TODO noting signatures required restructuring
  • Betas: Model-specific beta headers (output-128k, context-1m) set at provider creation
  • isAiSdkProvider(): Returns true
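
The model-specific beta selection mentioned above can be sketched as a small pure function. This is an illustrative standalone version: the helper name and the model-to-beta mapping are assumptions, and the full beta identifiers follow Anthropic's published names for the output-128k and context-1m betas the PR refers to; the actual file may organize this differently.

```typescript
// Hypothetical sketch of beta selection at provider creation time.
// selectAnthropicBetas is an illustrative name, not the PR's identifier.
function selectAnthropicBetas(modelId: string): string[] {
  const betas: string[] = []
  // 128k-output beta for Sonnet 3.7-class models (assumed mapping).
  if (modelId.startsWith("claude-3-7-sonnet")) {
    betas.push("output-128k-2025-02-19")
  }
  // 1M-context beta for Sonnet 4-class models (assumed mapping).
  if (modelId.startsWith("claude-sonnet-4")) {
    betas.push("context-1m-2025-08-07")
  }
  return betas
}

// The betas would then be passed as a default header at provider creation,
// e.g. createAnthropic({ headers: { "anthropic-beta": betas.join(",") } }).
```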

src/api/providers/__tests__/anthropic.spec.ts

  • Rewritten to mock @ai-sdk/anthropic and ai (streamText/generateText) instead of raw SDK
  • 29 tests covering: constructor options, streaming text/tools, prompt caching, thinking signature capture, redacted thinking blocks, model configuration, and isAiSdkProvider()
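
The shape of such a stream mock can be sketched in isolation. This is not the repo's actual vitest setup; the part names follow the AI SDK's `fullStream` event types, and the two helpers are standalone stand-ins for the mocked `streamText()` and the provider's consumption loop.

```typescript
// Minimal fake of a streamText() result plus a consumer, for illustration.
type StreamPart =
  | { type: "text-delta"; text: string }
  | { type: "finish"; usage: { inputTokens: number; outputTokens: number } }

function fakeStreamText(parts: StreamPart[]) {
  return {
    // fullStream yields the scripted parts in order, like the real SDK.
    fullStream: (async function* () {
      for (const part of parts) yield part
    })(),
  }
}

async function collectText(stream: { fullStream: AsyncIterable<StreamPart> }): Promise<string> {
  let text = ""
  for await (const part of stream.fullStream) {
    if (part.type === "text-delta") text += part.text
  }
  return text
}
```

A test can then script exact parts and assert on what the handler emitted, without touching the network.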

src/package.json

  • Added @ai-sdk/anthropic dependency (^3.0.38)

Test Results

  • All 29 Anthropic-specific tests pass
  • All 5436 tests in the full suite pass (369 test files, 4 skipped)

Important

Refactor Anthropic provider to use @ai-sdk/anthropic for consistency with other providers, updating streaming, completion, and caching logic.

  • Behavior:
    • Migrates Anthropic provider from @anthropic-ai/sdk to @ai-sdk/anthropic.
    • Replaces new Anthropic() with createAnthropic() in anthropic.ts.
    • Uses streamText() and generateText() for streaming and completion.
    • Implements convertToAiSdkMessages() and convertToolsForAiSdk() for message and tool conversion.
    • Caches system prompt and last 2 user messages using applyCacheControlToAiSdkMessages().
    • Adds getThoughtSignature() and getRedactedThinkingBlocks() for thinking features.
    • Sets model-specific beta headers during provider creation.
    • isAiSdkProvider() returns true.
  • Tests:
    • Updates anthropic.spec.ts to mock @ai-sdk/anthropic and ai.
    • Adds tests for constructor options, streaming, caching, thinking signatures, and model configuration.
  • Dependencies:
    • Adds @ai-sdk/anthropic to package.json.
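
The "last 2 user messages" cache walk listed above can be illustrated with a standalone sketch. `applyCacheControlToAiSdkMessages` is the PR's actual helper; this version only demonstrates the index selection it is described as performing, with a simplified message type.

```typescript
// Hedged sketch: tag the last two user messages with an ephemeral
// cache breakpoint via providerOptions, as the PR describes.
type Msg = { role: "user" | "assistant"; content: string; providerOptions?: object }

function applyEphemeralCache(messages: Msg[]): Msg[] {
  // Collect indices of user messages, then keep the last two.
  const userIndices = messages
    .map((m, i) => (m.role === "user" ? i : -1))
    .filter((i) => i !== -1)
  const targets = new Set(userIndices.slice(-2))
  return messages.map((m, i) =>
    targets.has(i)
      ? { ...m, providerOptions: { anthropic: { cacheControl: { type: "ephemeral" } } } }
      : m,
  )
}
```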

This description was created by Ellipsis for 73ef718.

Replace the raw @anthropic-ai/sdk implementation with @ai-sdk/anthropic
(Vercel AI SDK) for consistency with other providers (Bedrock, DeepSeek,
Mistral, etc.).

Changes:
- Replace Anthropic() client with createAnthropic() from @ai-sdk/anthropic
- Replace manual stream parsing with streamText() + processAiSdkStreamPart()
- Replace client.messages.create() with generateText() for completePrompt()
- Use convertToAiSdkMessages() and convertToolsForAiSdk() for format conversion
- Handle prompt caching via AI SDK providerOptions (cacheControl on messages)
- Handle extended thinking via providerOptions.anthropic.thinking
- Add getThoughtSignature() and getRedactedThinkingBlocks() for thinking
  signature round-tripping (matching Bedrock pattern, improves on original
  which had a TODO for this)
- Add isAiSdkProvider() returning true
- Update tests to mock @ai-sdk/anthropic and ai instead of raw SDK
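
The extended-thinking pass-through mentioned above amounts to building a small provider-options object. A hedged sketch, where the builder name and the budget value are illustrative (the `thinking` option shape matches `@ai-sdk/anthropic`'s documented provider options):

```typescript
// Illustrative builder for providerOptions.anthropic.thinking.
type ThinkingOptions = { thinking?: { type: "enabled"; budgetTokens: number } }

function buildThinkingOptions(reasoningBudget?: number): ThinkingOptions {
  // No budget configured: omit the thinking option entirely.
  if (!reasoningBudget) return {}
  return { thinking: { type: "enabled", budgetTokens: reasoningBudget } }
}

// The result would be merged into the request, e.g.:
// providerOptions: { anthropic: { ...buildThinkingOptions(4096) } }
```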
@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label Feb 7, 2026

roomote bot commented Feb 7, 2026


Clean migration that follows the established AI SDK patterns from the Bedrock provider. Two items to address:

  • apiKey: "not-provided" fallback prevents env var lookup when no key is configured
  • System prompt cache control uses message-based approach instead of system + systemProviderOptions (the pattern used by Bedrock)


tool_choice: convertOpenAIToolChoiceToAnthropic(metadata?.tool_choice, metadata?.parallelToolCalls),
this.provider = createAnthropic({
  baseURL: options.anthropicBaseUrl || undefined,
  ...(useAuthToken ? { authToken: options.apiKey } : { apiKey: options.apiKey ?? "not-provided" }),

The ?? "not-provided" fallback changes behavior when options.apiKey is undefined. The old raw SDK code passed apiKey: undefined, which let the @anthropic-ai/sdk fall back to the ANTHROPIC_API_KEY environment variable. Here the literal string "not-provided" will be used as the actual API key, causing a 401 from the Anthropic API instead of a graceful env-var fallback. This affects anyone who configures the API key via environment variable rather than extension settings.

Suggested change
...(useAuthToken ? { authToken: options.apiKey } : { apiKey: options.apiKey ?? "not-provided" }),
...(useAuthToken ? { authToken: options.apiKey } : { apiKey: options.apiKey }),


Comment on lines +127 to +146
// Prepend system prompt as a system message with cache control
const systemMessage = {
  role: "system" as const,
  content: systemPrompt,
  providerOptions: {
    anthropic: { cacheControl: { type: "ephemeral" } },
  },
}

const lastUserMsgIndex = userMsgIndices[userMsgIndices.length - 1] ?? -1
const secondLastMsgUserIndex = userMsgIndices[userMsgIndices.length - 2] ?? -1

try {
  stream = await this.client.messages.create(
    {
      model: modelId,
      max_tokens: maxTokens ?? ANTHROPIC_DEFAULT_MAX_TOKENS,
      temperature,
      thinking,
      // Setting cache breakpoint for system prompt so new tasks can reuse it.
      system: [{ text: systemPrompt, type: "text", cache_control: cacheControl }],
      messages: sanitizedMessages.map((message, index) => {
        if (index === lastUserMsgIndex || index === secondLastMsgUserIndex) {
          return {
            ...message,
            content:
              typeof message.content === "string"
                ? [{ type: "text", text: message.content, cache_control: cacheControl }]
                : message.content.map((content, contentIndex) =>
                    contentIndex === message.content.length - 1
                      ? { ...content, cache_control: cacheControl }
                      : content,
                  ),
          }
        }
        return message
      }),
      stream: true,
      ...nativeToolParams,
    },
    (() => {
      // prompt caching: https://x.com/alexalbert__/status/1823751995901272068
      // https://github.com/anthropics/anthropic-sdk-typescript?tab=readme-ov-file#default-headers
      // https://github.com/anthropics/anthropic-sdk-typescript/commit/c920b77fc67bd839bfeb6716ceab9d7c9bbe7393

      // Then check for models that support prompt caching
      switch (modelId) {
        case "claude-sonnet-4-5":
        case "claude-sonnet-4-20250514":
        case "claude-opus-4-6":
        case "claude-opus-4-5-20251101":
        case "claude-opus-4-1-20250805":
        case "claude-opus-4-20250514":
        case "claude-3-7-sonnet-20250219":
        case "claude-3-5-sonnet-20241022":
        case "claude-3-5-haiku-20241022":
        case "claude-3-opus-20240229":
        case "claude-haiku-4-5-20251001":
        case "claude-3-haiku-20240307":
          betas.push("prompt-caching-2024-07-31")
          return { headers: { "anthropic-beta": betas.join(",") } }
        default:
          return undefined
      }
    })(),
  )
} catch (error) {
  TelemetryService.instance.captureException(
    new ApiProviderError(
      error instanceof Error ? error.message : String(error),
      this.providerName,
      modelId,
      "createMessage",
    ),
  )
  throw error
}

// Build streamText request
const requestOptions: Parameters<typeof streamText>[0] = {
  model: this.provider(modelConfig.id),
  messages: [systemMessage, ...aiSdkMessages],
  temperature: modelConfig.temperature,
  maxOutputTokens: modelConfig.maxTokens ?? ANTHROPIC_DEFAULT_MAX_TOKENS,
  tools: aiSdkTools,
  toolChoice: mapToolChoice(metadata?.tool_choice),
  ...(Object.keys(anthropicProviderOptions).length > 0 && {
    providerOptions: { anthropic: anthropicProviderOptions } as any,
  }),

The system prompt is passed as a { role: "system" } message in the messages array with providerOptions for cache control. The established AI SDK pattern in this codebase (see the Bedrock handler) uses the system parameter with systemProviderOptions instead, which is the documented streamText() interface for system-level provider options. With the message-based approach, @ai-sdk/anthropic may not forward providerOptions from the system message to the underlying Anthropic API's cache_control on the system prompt, silently disabling prompt caching and increasing costs.

Consider aligning with the Bedrock pattern:

const requestOptions = {
  model: this.provider(modelConfig.id),
  system: systemPrompt,
  systemProviderOptions: {
    anthropic: { cacheControl: { type: "ephemeral" } },
  },
  messages: aiSdkMessages,
  ...
};

