Getting Started
Get structured output from any LLM in three steps.
Install
pip install parsec-llm

Choose Your Provider
Pick an adapter for your preferred LLM provider:
# OpenAI
from parsec.models.adapters import OpenAIAdapter
adapter = OpenAIAdapter(api_key="your-key", model="gpt-4o-mini")
# Anthropic
from parsec.models.adapters import AnthropicAdapter
adapter = AnthropicAdapter(api_key="your-key", model="claude-3-5-sonnet-20241022")
# Gemini
from parsec.models.adapters import GeminiAdapter
adapter = GeminiAdapter(api_key="your-key", model="gemini-pro")
# Ollama
from parsec.models.adapters import OllamaAdapter
adapter = OllamaAdapter(model="llama3", base_url="http://localhost:11434")
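Hardcoding keys is fine for a quick test, but in practice you'll want to read them from the environment. A minimal sketch using the OpenAI adapter from above (the api_key and model parameters are the ones shown in these examples):

import os

from parsec.models.adapters import OpenAIAdapter

# Pull the key from the environment instead of committing it to source control
adapter = OpenAIAdapter(
    api_key=os.environ["OPENAI_API_KEY"],
    model="gpt-4o-mini"
)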
Enforce Structure

Define your schema and get validated output:
from parsec.validators import JSONValidator
from parsec import EnforcementEngine
# Set up enforcement
validator = JSONValidator()
engine = EnforcementEngine(adapter, validator, max_retries=3)
# Define what you want
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "email": {"type": "string"}
    },
    "required": ["name", "email"]
}
# Get structured output
result = await engine.enforce(
    "Extract: John Doe is 30, contact at john@example.com",
    schema
)
print(result.parsed_output)
# {"name": "John Doe", "age": 30, "email": "john@example.com"}What Just Happened?
- The enforcement engine sent your prompt to the LLM
- The validator checked the response against your schema
- If the response was invalid, the engine retried automatically, feeding the validation errors back to the model (see the sketch below)
- You got back validated output matching your schema, or an error once max_retries was exhausted
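Conceptually, the retry loop looks something like the sketch below. This is illustrative only, not parsec's actual internals; names like adapter.generate, result.is_valid, and result.errors are assumptions:

# Illustrative sketch of an enforce-style loop (not parsec's real implementation)
async def enforce_sketch(adapter, validator, prompt, schema, max_retries=3):
    attempt_prompt = prompt
    for _ in range(max_retries + 1):
        raw = await adapter.generate(attempt_prompt)   # hypothetical adapter method
        result = validator.validate(raw, schema)       # hypothetical validator method
        if result.is_valid:
            return result
        # Feed the validation errors back so the model can self-correct
        attempt_prompt = (
            f"{prompt}\n\nYour previous answer failed validation: "
            f"{result.errors}\nPlease return JSON matching the schema."
        )
    raise RuntimeError("No schema-valid output after all retries")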
Add Caching (Optional)
Reduce API costs by caching responses:
from parsec.cache import InMemoryCache
# Create cache
cache = InMemoryCache(max_size=100, default_ttl=3600)
# Add to engine
engine = EnforcementEngine(adapter, validator, cache=cache)
# First call hits API
result1 = await engine.enforce(prompt, schema)
# Second identical call uses cache (no API call!)
result2 = await engine.enforce(prompt, schema)
# Check stats
print(cache.get_stats())
# {'hits': 1, 'misses': 1, 'hit_rate': '50.00%'}
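Keep in mind that a hit requires an identical call. Assuming the cache key is derived from the prompt and schema (an assumption about parsec's keying, not documented behavior), even a small wording change is a miss:

# A slightly different prompt -> cache miss (assuming prompt+schema keying)
result3 = await engine.enforce(prompt + " Please be concise.", schema)
print(cache.get_stats())
# e.g. {'hits': 1, 'misses': 2, 'hit_rate': '33.33%'}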
Use Templates (Optional)

Create reusable, versioned prompts:
from parsec.prompts import PromptTemplate, TemplateRegistry, TemplateManager
# Create template
template = PromptTemplate(
    name="extract_person",
    template="Extract person info from: {text}\n\nReturn as JSON.",
    variables={"text": str},
    required=["text"]
)
# Register with version
registry = TemplateRegistry()
registry.register(template, "1.0.0")
# Use with enforcement
manager = TemplateManager(registry, engine)
result = await manager.enforce_with_template(
    template_name="extract_person",
    variables={"text": "John Doe, age 30"},
    schema=schema
)
# Save templates to file
registry.save_to_disk("templates.yaml")
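Versioning lets you iterate on prompt wording without breaking existing callers. A sketch, assuming registry.register accepts a new version string for an existing template name (mirroring the 1.0.0 call above):

# Register an improved wording as a new version of the same template
template_v2 = PromptTemplate(
    name="extract_person",
    template="Extract the person's name, age, and email from: {text}\n\nReturn strictly as JSON.",
    variables={"text": str},
    required=["text"]
)
registry.register(template_v2, "1.1.0")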
Next Steps

Production Features
- Prompt Templates - Version-controlled, reusable prompts
- Analytics - Track performance metrics and optimize templates
- A/B Testing - Test and compare prompt variations
- Resilience - Circuit breakers, retry policies, and failover
- Caching - Reduce API costs and improve performance
- Dataset Collection - Collect training data automatically
- Logging - Monitor performance and debug issues
Core Concepts
- Model Adapters - OpenAI, Anthropic, Gemini, and Ollama integration
- Validators - JSON schema and Pydantic validation options
- Engines - Enforcement and streaming capabilities