
Testing

Comprehensive guide to testing parsec components, including adapters, validators, and the enforcement engine.

Overview

The parsec project uses pytest for testing with async support, mocking, and coverage reporting. Tests are organized by component type and use mocking to avoid real API calls.
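The async tests below rely on the pytest-asyncio plugin (which provides `@pytest.mark.asyncio`) and the coverage commands on pytest-cov. A minimal configuration might look like the following sketch; the `testpaths` and `addopts` values are assumptions based on the layout and commands shown in this guide:

```ini
# pytest.ini -- sketch; adjust to your project layout
[pytest]
testpaths = tests
addopts = -ra
```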

Running Tests

Run All Tests

pytest

Run with Verbose Output

pytest -v

Run with Coverage

pytest --cov=src/parsec --cov-report=html

Run Specific Test File

pytest tests/unit/adapters/test_openai_adapter.py

Run Specific Test

pytest tests/unit/adapters/test_openai_adapter.py::TestOpenAIAdapter::test_generate

Test Structure

```
tests/
├── conftest.py                  # Shared fixtures and test configuration
├── unit/
│   ├── adapters/
│   │   ├── test_openai_adapter.py
│   │   ├── test_anthropic_adapter.py
│   │   └── test_gemini_adapter.py
│   ├── validators/
│   │   ├── test_json_validator.py
│   │   └── test_pydantic_validator.py
│   └── enforcement/
│       └── test_engine.py
└── integration/
    └── test_end_to_end.py
```

Testing Adapters

Why Mock Adapters?

  • Cost: Avoid API charges during testing
  • Speed: Tests run instantly without network calls
  • Reliability: No rate limits or API downtime
  • Isolation: Test your code, not external APIs

Basic Adapter Test Pattern

```python
import pytest
from unittest.mock import AsyncMock, MagicMock, patch

from parsec.models.adapters import OpenAIAdapter
from parsec.core import GenerationResponse


class TestOpenAIAdapter:
    @pytest.mark.asyncio
    @patch('parsec.models.adapters.openai_adapter.AsyncOpenAI')
    async def test_generate(self, mock_openai_class):
        # 1. Create mock client
        mock_client = AsyncMock()

        # 2. Define mock response matching API structure
        mock_response = MagicMock()
        mock_response.choices = [MagicMock(
            message=MagicMock(content='{"name": "John"}'),
            finish_reason="stop"
        )]
        mock_response.usage = MagicMock(
            prompt_tokens=10,
            completion_tokens=15,
            total_tokens=25
        )

        # 3. Connect mocks
        mock_client.chat.completions.create.return_value = mock_response
        mock_openai_class.return_value = mock_client

        # 4. Create adapter (gets mock client)
        adapter = OpenAIAdapter(api_key="test", model="gpt-4")

        # 5. Call method under test
        result = await adapter.generate("Hello")

        # 6. Verify behavior
        assert isinstance(result, GenerationResponse)
        assert result.output == '{"name": "John"}'
        assert result.tokens_used == 25

        # 7. Verify mock was called correctly
        mock_client.chat.completions.create.assert_called_once()
```

Testing with Schema

```python
@pytest.mark.asyncio
@patch('parsec.models.adapters.openai_adapter.AsyncOpenAI')
async def test_generate_with_schema(self, mock_openai_class):
    # Setup mocks (same as above)
    # ...

    schema = {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"]
    }
    result = await adapter.generate("Hello", schema=schema)

    # Verify response_format was set
    call_args = mock_client.chat.completions.create.call_args
    assert 'response_format' in call_args.kwargs
    assert call_args.kwargs['response_format']['type'] == 'json_object'
```

Testing Streaming

```python
@pytest.mark.asyncio
@patch('parsec.models.adapters.openai_adapter.AsyncOpenAI')
async def test_generate_stream(self, mock_openai_class):
    mock_client = AsyncMock()

    # Create async generator for streaming
    async def mock_stream():
        chunks = ['{"name":', ' "John"', '}']
        for chunk_text in chunks:
            chunk = MagicMock()
            chunk.choices = [MagicMock(
                delta=MagicMock(content=chunk_text)
            )]
            yield chunk

    mock_client.chat.completions.create.return_value = mock_stream()
    mock_openai_class.return_value = mock_client

    adapter = OpenAIAdapter(api_key="test", model="gpt-4")

    # Collect streamed chunks
    chunks = []
    async for chunk in adapter.generate_stream("Hello"):
        chunks.append(chunk)

    assert ''.join(chunks) == '{"name": "John"}'
```

Testing Error Handling

```python
@pytest.mark.asyncio
@patch('parsec.models.adapters.openai_adapter.AsyncOpenAI')
async def test_generate_api_error(self, mock_openai_class):
    mock_client = AsyncMock()

    # Make the API call raise an exception
    mock_client.chat.completions.create.side_effect = Exception("API Error")
    mock_openai_class.return_value = mock_client

    adapter = OpenAIAdapter(api_key="test", model="gpt-4")

    # Verify exception is raised
    with pytest.raises(Exception, match="API Error"):
        await adapter.generate("Hello")
```

Testing Validators

JSON Validator Tests

```python
from parsec.validators import JSONValidator
from parsec.validators.base_validator import ValidationStatus


def test_validate_valid_json():
    validator = JSONValidator()
    schema = {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"]
    }
    result = validator.validate('{"name": "John"}', schema)
    assert result.status == ValidationStatus.VALID
    assert result.parsed_output == {"name": "John"}
    assert result.errors == []


def test_validate_invalid_json():
    validator = JSONValidator()
    schema = {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"]
    }
    result = validator.validate('{"age": 30}', schema)
    assert result.status == ValidationStatus.INVALID
    assert len(result.errors) > 0
```

Testing Enforcement Engine

```python
@pytest.mark.asyncio
async def test_enforcement_engine_success():
    # Use real validator, mock adapter
    validator = JSONValidator()
    mock_adapter = AsyncMock()
    mock_response = MagicMock()
    mock_response.output = '{"name": "John"}'
    mock_adapter.generate.return_value = mock_response

    engine = EnforcementEngine(mock_adapter, validator, max_retries=3)
    schema = {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"]
    }
    result = await engine.enforce("Extract name", schema)

    assert result.status == ValidationStatus.VALID
    assert result.parsed_output == {"name": "John"}
```
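Retry behavior (an invalid output followed by a valid one) is usually driven with `side_effect`, which makes the mock return successive values on successive calls. The sketch below is self-contained and demonstrates only the mocking pattern; wiring it into an `EnforcementEngine` test follows the same shape as the success test above.

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

# First call yields invalid JSON, second call yields valid JSON -- an
# engine with max_retries >= 1 should recover on the retry. The response
# objects mirror the mock_response shape used elsewhere in this guide.
invalid = MagicMock(output='not json at all')
valid = MagicMock(output='{"name": "John"}')

mock_adapter = AsyncMock()
mock_adapter.generate.side_effect = [invalid, valid]

async def drive():
    # Stand-in for the engine's generate/validate/retry loop
    first = await mock_adapter.generate("Extract name")
    second = await mock_adapter.generate("Extract name")
    return first.output, second.output

outputs = asyncio.run(drive())
```

In the real test you would assert that the engine's final result is valid and that `mock_adapter.generate.call_count == 2`.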

Shared Fixtures

Define reusable fixtures in conftest.py:

```python
import pytest
from unittest.mock import AsyncMock, MagicMock


@pytest.fixture
def simple_schema():
    return {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"]
    }


@pytest.fixture
def mock_openai_response():
    response = MagicMock()
    response.choices = [MagicMock(
        message=MagicMock(content='{"name": "John"}')
    )]
    response.usage = MagicMock(total_tokens=25)
    return response
```

Coverage Goals

  • Unit Tests: 80%+ coverage for all modules
  • Integration Tests: Cover key end-to-end workflows
  • Adapter Tests: Test all methods (generate, generate_stream, health_check)
  • Validator Tests: Test valid, invalid, and edge cases
  • Engine Tests: Test retry logic, validation flow, error handling
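As a concrete illustration of validator edge cases, the self-contained sketch below uses only the stdlib `json` module; `parses_to_object` is a stand-in for the validator's parse step, not parsec's API. Each input should be rejected cleanly rather than crash the validator.

```python
import json

# Edge-case inputs worth covering for any JSON validator: empty input,
# whitespace, truncated JSON, a non-object scalar, and a top-level array.
edge_cases = ["", "   ", '{"name": "John"', "null", '["not", "an", "object"]']

def parses_to_object(text: str) -> bool:
    """Stand-in for the validator's parse step: True only for JSON objects."""
    try:
        return isinstance(json.loads(text), dict)
    except json.JSONDecodeError:
        return False

results = [parses_to_object(case) for case in edge_cases]
```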

Common Testing Patterns

1. Test Initialization

```python
def test_adapter_initialization():
    adapter = OpenAIAdapter(api_key="test", model="gpt-4")
    assert adapter.api_key == "test"
    assert adapter.model == "gpt-4"
    assert adapter.provider == ModelProviders.OPENAI
```

2. Test Property Methods

```python
def test_supports_native_structure_output():
    adapter = OpenAIAdapter(api_key="test", model="gpt-4")
    assert adapter.supports_native_structure_output() is True
```

3. Test Async Methods

```python
@pytest.mark.asyncio
async def test_async_method():
    # Must use @pytest.mark.asyncio for async tests
    result = await some_async_function()
    assert result is not None
```

4. Test Mock Call Arguments

```python
# Verify exact call
mock_func.assert_called_once_with(arg1="value1", arg2="value2")

# Verify call happened
mock_func.assert_called_once()

# Access call arguments
call_args = mock_func.call_args
assert call_args.kwargs['model'] == "gpt-4"
```

Best Practices

  1. Mock External APIs: Never call real APIs in unit tests
  2. Match API Structure: Ensure mocks match actual API response structure
  3. Test Edge Cases: Test error conditions, empty inputs, malformed data
  4. Use Fixtures: Share common test data and mocks
  5. Async/Await: Use @pytest.mark.asyncio for async tests
  6. Descriptive Names: Name tests clearly (e.g., test_generate_with_invalid_schema)
  7. Verify Logging: Check that errors are logged correctly
  8. Clean Up: Close clients and resources in teardown
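Best practice 2 ("Match API Structure") can be enforced mechanically with `spec=`: the mock then raises `AttributeError` for attributes the real class lacks, so drift between mock and API surfaces as a test failure instead of a silent pass. `Message` below is a stand-in class; in real tests you would spec against the SDK's own response types.

```python
from unittest.mock import MagicMock

# A stand-in for a real SDK response class with a fixed attribute set.
class Message:
    content = ""
    role = ""

msg = MagicMock(spec=Message)
msg.content = '{"name": "John"}'   # allowed: matches the spec

try:
    msg.finish_reason               # not on Message -> AttributeError
    spec_caught_typo = False
except AttributeError:
    spec_caught_typo = True
```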

Debugging Tests

Run with Print Statements

pytest -s # Shows print() output

Run Single Test with Debugging

pytest tests/unit/adapters/test_openai_adapter.py::test_generate -vv -s

Drop into the Debugger on Failure

pytest --pdb # Drop into debugger on failure

Continuous Integration

Tests run automatically on push via GitHub Actions:

```yaml
# .github/workflows/test.yml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
      - name: Install dependencies
        run: pip install -e ".[dev]"
      - name: Run tests
        run: pytest --cov=src/parsec
```

Next Steps

  • Add tests for your custom adapters
  • Increase coverage to 80%+
  • Add integration tests
  • Set up CI/CD pipeline