
Welcome to Parsec

Parsec makes it easy to get structured, reliable output from large language models.

Whether you’re working with OpenAI, Anthropic, or Google’s Gemini, Parsec gives you a simple way to ensure the AI returns exactly the data format you need, every time.

Created by Oliver Kwun-Morfitt 

What is Parsec?

Parsec is a lightweight toolkit that helps you:

  • Get predictable, structured responses from any major LLM provider
  • Validate responses against your requirements automatically
  • Fix common formatting issues without manual intervention
  • Switch between AI providers without rewriting code

Think of it as a safety net for AI outputs. Instead of hoping the model returns valid JSON or the right data structure, Parsec makes sure of it.
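To make the safety-net idea concrete, here is a hand-rolled version of the check a library like this automates. This is a minimal stdlib-only sketch, not Parsec's actual implementation: parse the model's text, verify the required fields, and raise at exactly the point where an automatic repair step would kick in.

```python
import json

def check_output(raw: str, required: dict) -> dict:
    """Parse LLM text and verify each required field has the expected type.

    `required` maps field names to Python types, e.g. {"name": str, "age": int}.
    Raises ValueError when the output doesn't match -- the point where a
    library like Parsec would instead attempt an automatic repair.
    """
    data = json.loads(raw)  # raises on invalid JSON
    for field, expected_type in required.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"field {field!r} is not {expected_type.__name__}")
    return data

# A well-formed response passes; a malformed one is caught up front instead
# of silently propagating through your application.
ok = check_output('{"name": "John", "age": 30}', {"name": str, "age": int})
```

Writing this by hand for every call, prompt, and provider is exactly the boilerplate the library is meant to absorb.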

Why Use Parsec?

For developers building with AI:

  • 🎯 Stop fighting with inconsistent LLM outputs
  • 🔄 Easily switch between OpenAI, Claude, and Gemini
  • ⚡ Get started in minutes with a simple, intuitive API
  • 🛡️ Validate and repair responses automatically

For production applications:

  • 📊 Monitor token usage and performance
  • 🔍 Debug issues with comprehensive logging
  • ✅ Trust your AI responses are valid
  • 📈 Scale confidently with async support

Quick Peek

```python
# Define what you want
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "number"},
    },
}

# Get it from any LLM
result = await engine.enforce(
    "Extract: John is 30 years old",
    schema,
)

# Use reliable, structured data
print(result.parsed_output)  # {"name": "John", "age": 30}
```

That’s it. No parsing, no error handling, no hoping it works.

What Can You Build?

Parsec is perfect for:

  • Data extraction - Pull structured info from unstructured text
  • API responses - Ensure LLMs return valid JSON for your endpoints
  • Form filling - Extract user input into structured formats
  • Content generation - Get blog posts, summaries, or reports in consistent formats
  • Classification tasks - Get predictable categories and labels
  • Multi-step workflows - Chain reliable outputs together
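The multi-step case is where structured output pays off most. The toy pipeline below uses stubbed functions in place of real LLM calls (the names are illustrative, not Parsec's API); the point is that when step one is guaranteed to return the agreed structure, step two can consume it directly, with no defensive parsing in between.

```python
# Toy two-step pipeline with stubbed "model" calls. The function names are
# illustrative, not Parsec's API.

def extract_person(text: str) -> dict:
    # Stand-in for a structured LLM call guaranteed to return {"name", "age"}.
    return {"name": "John", "age": 30}

def classify_age(person: dict) -> dict:
    # The second step consumes the first step's structured output directly,
    # trusting the keys and types are present.
    label = "adult" if person["age"] >= 18 else "minor"
    return {"name": person["name"], "category": label}

result = classify_age(extract_person("John is 30 years old"))
```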

Choose Your Path

New to Parsec?

Start here to get up and running:

  • Get Started - Install and create your first structured output

Working with AI Providers?

Learn about our adapters:

Building for Production?

Essential guides for deployment:

  • Logging - Monitor performance and debug issues
  • Testing - Write reliable tests with mocking examples

Deep Dives

Explore the details:

  • Validators - JSON schema and Pydantic validation
  • Engines - Enforcement and streaming
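As a preview of what the Validators guide covers: JSON Schema validation, at its core, walks the schema and checks types recursively. The sketch below implements only the tiny subset used in the Quick Peek, and treats every property as required for brevity (real validators, and Pydantic, handle far more); it is an illustration, not Parsec code.

```python
import json

# Minimal JSON-Schema-subset checker ("object", "string", "number" only),
# treating all declared properties as required for brevity. A sketch of what
# a full validator library does, not Parsec's implementation.
def validates(instance, schema: dict) -> bool:
    t = schema.get("type")
    if t == "object":
        if not isinstance(instance, dict):
            return False
        return all(
            key in instance and validates(instance[key], sub)
            for key, sub in schema.get("properties", {}).items()
        )
    if t == "string":
        return isinstance(instance, str)
    if t == "number":
        # bool is a subclass of int in Python, so exclude it explicitly
        return isinstance(instance, (int, float)) and not isinstance(instance, bool)
    return True  # unknown keywords: permissive, like draft validators

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "number"}},
}
good = validates(json.loads('{"name": "John", "age": 30}'), schema)
bad = validates(json.loads('{"name": "John", "age": "30"}'), schema)
```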

Supported Providers

| Provider  | What You Get                                       |
| --------- | -------------------------------------------------- |
| OpenAI    | GPT-4, GPT-4o, GPT-3.5-turbo with native JSON mode |
| Anthropic | Claude 3.5 Sonnet and Claude 3 Opus                |
| Gemini    | Google’s Gemini Pro and Ultra models               |

All providers work with the same simple API—switch anytime without changing your code.
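Provider-agnostic APIs like this are typically built on the adapter pattern. The sketch below uses hypothetical class and method names (not Parsec's real ones) with fake providers standing in for real SDK calls, to show why caller code doesn't change when the backend does.

```python
from typing import Protocol

# Hypothetical sketch of the adapter pattern behind a provider-agnostic API.
# Class and method names here are illustrative, not Parsec's real ones.
class Adapter(Protocol):
    def complete(self, prompt: str) -> str: ...

class FakeOpenAI:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the OpenAI SDK here.
        return '{"name": "John", "age": 30}'

class FakeClaude:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the Anthropic SDK here.
        return '{"name": "John", "age": 30}'

def run(adapter: Adapter, prompt: str) -> str:
    # Caller code is identical regardless of which provider sits behind it.
    return adapter.complete(prompt)

same = run(FakeOpenAI(), "Extract a person") == run(FakeClaude(), "Extract a person")
```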

Community & Support

  • Documentation - You’re reading it! Explore the guides in the sidebar
  • Issues - Found a bug? Report it on GitHub 
  • Examples - Check the examples/ folder in the repo

Ready to build something reliable? Let’s get started →
