Build production LLM applications with guaranteed structured output, intelligent failover, and data-driven optimization.
```python
import asyncio

from parsec import EnforcementEngine
from parsec.models.adapters import OpenAIAdapter
from parsec.validators import PydanticValidator
from pydantic import BaseModel


class User(BaseModel):
    name: str
    email: str
    age: int


# Setup with automatic validation
adapter = OpenAIAdapter(api_key="...", model="gpt-4o-mini")
validator = PydanticValidator()
engine = EnforcementEngine(adapter, validator)


async def main() -> None:
    # Get guaranteed valid output
    result = await engine.enforce(
        "Extract: John Doe, john@example.com, 30 years old",
        User
    )
    print(result.data)
    # User(name='John Doe', email='john@example.com', age=30)


asyncio.run(main())
```

Everything you need to build reliable LLM applications
Automatic validation and repair with JSON Schema and Pydantic. Never parse malformed JSON again.
Circuit breakers, retry policies, and automatic failover keep your apps running when LLMs fail.
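parsec is described as handling retries, circuit breaking, and failover for you. Purely to illustrate the idea, here is a hand-rolled fallback between a cheaper and a stronger model, reusing the imports and `User` model from the quickstart and assuming (not confirmed above) that `enforce` raises an exception when a call ultimately fails:

```python
# Illustration only: parsec's built-in retry/failover is the real mechanism.
# This sketch assumes enforce() raises an exception on final failure.
primary = EnforcementEngine(
    OpenAIAdapter(api_key="...", model="gpt-4o-mini"), PydanticValidator()
)
fallback = EnforcementEngine(
    OpenAIAdapter(api_key="...", model="gpt-4o"), PydanticValidator()
)


async def extract_user_with_fallback(text: str) -> User:
    try:
        # Try the cheaper model first
        result = await primary.enforce(f"Extract: {text}", User)
    except Exception:
        # Fall back to a stronger model if the first call fails outright
        result = await fallback.enforce(f"Extract: {text}", User)
    return result.data
```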
Track performance metrics and A/B test prompts with statistical significance testing built-in.
One API for OpenAI, Anthropic, Gemini, and Ollama. Switch providers or run A/B tests without code changes.
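Only `OpenAIAdapter` is confirmed by the quickstart; the `AnthropicAdapter` name below is an assumption following the same pattern, so treat this as a sketch of the idea rather than the exact API:

```python
# Sketch: switching providers by swapping the adapter; the engine construction
# and enforce() calls stay the same.
# NOTE: AnthropicAdapter is an assumed class name, not confirmed above.
from parsec import EnforcementEngine
from parsec.models.adapters import OpenAIAdapter
from parsec.validators import PydanticValidator


def build_engine(provider: str) -> EnforcementEngine:
    if provider == "openai":
        adapter = OpenAIAdapter(api_key="...", model="gpt-4o-mini")
    else:
        from parsec.models.adapters import AnthropicAdapter  # assumed name
        adapter = AnthropicAdapter(api_key="...", model="claude-3-5-haiku-latest")
    return EnforcementEngine(adapter, PydanticValidator())
```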
Manage prompts as code with semantic versioning, YAML persistence, and type-safe variables.
LRU caching with TTL reduces API costs and improves response times. Monitor with built-in analytics.
Extract structured data from documents, emails, and unstructured text with guaranteed schema compliance.
Ensure LLM-powered endpoints always return valid JSON. Automatic validation prevents malformed responses.
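A sketch of what that might look like behind a FastAPI endpoint (FastAPI is not part of parsec; `engine` and `User` come from the quickstart above):

```python
# Sketch: the endpoint can only return a schema-valid User, because
# enforce() hands back validated data or fails loudly.
from fastapi import FastAPI

app = FastAPI()


@app.post("/extract-user", response_model=User)
async def extract_user(text: str) -> User:
    result = await engine.enforce(f"Extract: {text}", User)
    return result.data
```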
Chain LLM calls with confidence. Validated outputs become reliable inputs, enabling complex agent workflows.
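For instance, a two-step pipeline can feed one validated result into the next prompt (a sketch reusing the quickstart's `engine` and `User`; the `Summary` model is made up for illustration):

```python
# Two-step pipeline sketch: the validated output of step 1 becomes the
# input of step 2, so step 2 never sees malformed data.
from pydantic import BaseModel


class Summary(BaseModel):
    headline: str
    key_points: list[str]


async def profile_pipeline(raw_text: str) -> Summary:
    # Step 1: structured extraction
    user = (await engine.enforce(f"Extract: {raw_text}", User)).data

    # Step 2: build the next prompt from the validated User
    prompt = f"Summarize what we know about {user.name} ({user.email}, age {user.age})."
    return (await engine.enforce(prompt, Summary)).data
```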
Get predictable categories and labels for routing logic. A/B test prompts to maximize accuracy.
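A classifier can be as small as one constrained field (a sketch reusing the quickstart's `engine`; the `Ticket` model and label set are illustrative):

```python
# Classification sketch: constraining the output to a fixed label set.
from typing import Literal

from pydantic import BaseModel


class Ticket(BaseModel):
    category: Literal["billing", "bug", "feature_request", "other"]


async def route(message: str) -> str:
    result = await engine.enforce(
        f"Classify this support message: {message}", Ticket
    )
    return result.data.category  # guaranteed to be one of the four labels
```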
Install with pip and start building reliable LLM applications in minutes.
```bash
pip install parsec-llm
```

MIT License • Created by Oliver Kwun-Morfitt