Getting Started
Get structured output from any LLM in three steps.
Install
pip install parsec-llm
Choose Your Provider
Pick an adapter for your preferred LLM provider:
# OpenAI
from parsec.models.adapters import OpenAIAdapter
adapter = OpenAIAdapter(api_key="your-key", model="gpt-4o-mini")
# Anthropic
from parsec.models.adapters import AnthropicAdapter
adapter = AnthropicAdapter(api_key="your-key", model="claude-3-5-sonnet-20241022")
# Gemini
from parsec.models.adapters import GeminiAdapter
adapter = GeminiAdapter(api_key="your-key", model="gemini-pro")
Enforce Structure
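All three snippets construct an adapter with the same `api_key`/`model` pair, so the rest of your code can stay provider-agnostic. A minimal sketch of what that shared surface might look like (the `ModelAdapter` protocol and `complete` method below are illustrative, not parsec's actual classes):

```python
import asyncio
from typing import Protocol, runtime_checkable

@runtime_checkable
class ModelAdapter(Protocol):
    # Hypothetical shared interface; parsec's real adapter API may differ.
    async def complete(self, prompt: str) -> str: ...

class EchoAdapter:
    """Toy adapter for local testing: returns the prompt unchanged."""
    async def complete(self, prompt: str) -> str:
        return prompt

# Any object with a matching `complete` method satisfies the protocol.
assert isinstance(EchoAdapter(), ModelAdapter)
print(asyncio.run(EchoAdapter().complete("ping")))  # ping
```

Because the engine only needs this call surface, swapping providers is a one-line change to the adapter constructor.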
Define your schema and get validated output:
from parsec.validators import JSONValidator
from parsec.enforcement import EnforcementEngine
# Set up enforcement
validator = JSONValidator()
engine = EnforcementEngine(adapter, validator, max_retries=3)
# Define what you want
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "email": {"type": "string"}
    },
    "required": ["name", "email"]
}
# Get structured output
result = await engine.enforce(
    "Extract: John Doe is 30, contact at john@example.com",
    schema
)
print(result.parsed_output)
# {"name": "John Doe", "age": 30, "email": "john@example.com"}
What Just Happened?
- Enforcement Engine sent your prompt to the LLM
- Validator checked the response against your schema
- If invalid, the engine automatically retried with feedback
- You got back output guaranteed to match your schema (or an error once max_retries is exhausted)
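The steps above can be sketched as a plain Python loop. The names here (`enforce_with_retries`, the toy `validate`) are illustrative stand-ins for parsec's adapter and validator, not its real internals:

```python
import json

def enforce_with_retries(call_model, validate, prompt, max_retries=3):
    """Sketch of the loop above: call the model, validate, feed errors back."""
    feedback = ""
    for _ in range(max_retries + 1):
        raw = call_model(prompt + feedback)
        ok, error = validate(raw)
        if ok:
            return json.loads(raw)
        # Retry with the validation error appended so the model can self-correct.
        feedback = f"\n\nYour last reply was invalid ({error}). Reply with valid JSON only."
    raise ValueError("no valid output within max_retries")

def validate(raw):
    """Toy validator: parses JSON and checks the required keys from the schema above."""
    try:
        data = json.loads(raw)
    except ValueError as exc:
        return False, str(exc)
    missing = [k for k in ("name", "email") if k not in data]
    return not missing, f"missing {missing}" if missing else ""

# Fake model that fails once, then returns valid JSON, to exercise the retry path.
replies = iter(["oops, not JSON", '{"name": "John Doe", "email": "john@example.com"}'])
result = enforce_with_retries(lambda prompt: next(replies), validate, "Extract: ...")
print(result["name"])  # John Doe
```

The first reply fails JSON parsing, so the loop appends the parse error to the prompt and tries again; the second reply validates and is returned parsed.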
Next Steps
- Model Adapters - Learn about OpenAI, Anthropic, and Gemini adapters
- Validators - JSON schema and Pydantic validation options
- Testing - Write tests for your structured outputs
- Logging - Monitor performance and debug issues