# Anthropic Adapter
The Anthropic adapter provides integration with Anthropic’s Claude API, supporting Claude 3.5 Sonnet, Claude 3 Opus, and other models.
## Features
- ✅ Prompt Engineering for JSON: Uses prompt-based structured output (Anthropic doesn’t have a native `response_format`)
- ✅ Streaming Support: Stream responses token-by-token for real-time applications
- ✅ Logging: Comprehensive logging with request context and performance metrics
- ✅ Health Checks: Built-in health check to verify API connectivity
- ✅ Token Tracking: Automatic token usage tracking for cost monitoring
## Installation
```bash
pip install anthropic
```

## Basic Usage
```python
from parsec.models.adapters import AnthropicAdapter

adapter = AnthropicAdapter(
    api_key="your-anthropic-api-key",
    model="claude-3-5-sonnet-20241022"
)

# Generate a response
result = await adapter.generate("What is the capital of France?")
print(result.output)       # "Paris"
print(result.tokens_used)  # e.g., 25
print(result.latency_ms)   # e.g., 342.5
```

## Structured Output with Schema
The Anthropic adapter uses prompt engineering to enforce JSON schemas:
```python
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "email": {"type": "string"}
    },
    "required": ["name", "age"]
}

result = await adapter.generate(
    "Extract: John Doe is 30 years old, john@example.com",
    schema=schema,
    temperature=0.7,
    max_tokens=1024
)

print(result.output)
# '{"name": "John Doe", "age": 30, "email": "john@example.com"}'
```

## Streaming
Stream responses for real-time applications:
```python
async for chunk in adapter.generate_stream(
    "Write a short story about a robot",
    temperature=0.8,
    max_tokens=2048
):
    print(chunk, end="", flush=True)
```

## Configuration Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | Required | Your Anthropic API key |
| `model` | `str` | Required | Model name (e.g., `"claude-3-5-sonnet-20241022"`) |
| `temperature` | `float` | `0.7` | Sampling temperature (0.0 to 1.0) |
| `max_tokens` | `int` | `4096` | Maximum tokens to generate (required by Anthropic) |
| `schema` | `dict` | `None` | JSON schema for structured output (via prompt) |
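To make the defaults above concrete, here is a minimal options holder whose values mirror the configuration table. The `GenerationOptions` class itself is purely illustrative and not part of the parsec API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenerationOptions:
    # Defaults mirror the configuration table; this class is an
    # illustrative sketch, not part of the adapter itself.
    temperature: float = 0.7
    max_tokens: int = 4096
    schema: Optional[dict] = None

opts = GenerationOptions(temperature=0.2)
print(opts.max_tokens)  # 4096 (the adapter-level default)
```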
## Logging
The adapter includes comprehensive logging:
```python
import logging

logging.basicConfig(level=logging.INFO)

# Logs will show:
# INFO - Generating response from Anthropic model claude-3-5-sonnet-20241022
# DEBUG - Success: 25 tokens
```

## Health Check
Verify API connectivity:
```python
is_healthy = await adapter.health_check()
if is_healthy:
    print("Anthropic API is accessible")
```

## Supported Models
- `claude-3-5-sonnet-20241022` - Latest Claude 3.5 Sonnet (recommended)
- `claude-3-opus-20240229` - Most capable model
- `claude-3-sonnet-20240229` - Balanced performance and speed
- `claude-3-haiku-20240307` - Fastest and most compact
## Error Handling
```python
try:
    result = await adapter.generate("Hello", max_tokens=100)
except Exception as e:
    # Logs automatically include the full stack trace
    print(f"Generation failed: {e}")
```

## Important Notes
### Schema Handling
Unlike OpenAI, Anthropic doesn’t have a native `response_format` parameter. Instead, this adapter:
- Appends schema instructions to the prompt
- Formats the schema as readable JSON
- Instructs the model to return ONLY the JSON object
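The three steps above can be sketched as a simple prompt-building helper. This is a simplified illustration; `build_schema_prompt` is a hypothetical function, not the adapter’s actual internals:

```python
import json

def build_schema_prompt(prompt: str, schema: dict) -> str:
    """Append JSON-schema instructions to a user prompt (illustrative sketch)."""
    # Format the schema as readable JSON
    schema_text = json.dumps(schema, indent=2)
    # Instruct the model to return ONLY the JSON object
    return (
        f"{prompt}\n\n"
        "Return ONLY a JSON object that conforms to this JSON schema, "
        "with no surrounding prose:\n"
        f"{schema_text}"
    )

augmented = build_schema_prompt(
    "Extract: John Doe is 30 years old",
    {"type": "object", "properties": {"name": {"type": "string"}}},
)
```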
### Max Tokens Requirement
Anthropic requires `max_tokens` to be specified. The adapter defaults to 4096 if it is not provided.
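A minimal sketch of that fallback, assuming a module-level default of 4096 (the helper name is illustrative, not the adapter’s actual code):

```python
DEFAULT_MAX_TOKENS = 4096  # adapter default, per the note above

def resolve_max_tokens(max_tokens=None):
    # Anthropic's Messages API rejects requests that omit max_tokens,
    # so an adapter must always send a concrete value.
    return max_tokens if max_tokens is not None else DEFAULT_MAX_TOKENS
```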
### Token Counting
Token usage includes:
- `input_tokens` - Prompt tokens
- `output_tokens` - Generated tokens
- The total is reported in `tokens_used`
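For illustration, assuming the usage fields mirror Anthropic’s `input_tokens`/`output_tokens`, the reported total is simply their sum. The `Usage` class below is an illustrative stand-in, not part of the adapter:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    # Field names follow the Anthropic usage object; the class itself
    # is a sketch for this doc, not the adapter's real type.
    input_tokens: int   # prompt tokens
    output_tokens: int  # generated tokens

    @property
    def tokens_used(self) -> int:
        # Total reported to callers for cost monitoring
        return self.input_tokens + self.output_tokens

usage = Usage(input_tokens=18, output_tokens=7)
print(usage.tokens_used)  # 25
```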
## Example with Enforcement Engine
```python
from parsec.enforcement import EnforcementEngine
from parsec.validators import JSONValidator

validator = JSONValidator()
engine = EnforcementEngine(adapter, validator, max_retries=3)

schema = {
    "type": "object",
    "properties": {
        "sentiment": {"enum": ["positive", "negative", "neutral"]},
        "score": {"type": "number", "minimum": 0, "maximum": 1}
    },
    "required": ["sentiment", "score"]
}

result = await engine.enforce(
    "This product is amazing! I love it.",
    schema
)

print(result.parsed_output)
# {"sentiment": "positive", "score": 0.95}
```