# Test Suite Documentation

This document describes the automated test suite for the FHIR to PADneXt converter.
## Overview

The test suite provides comprehensive coverage of the converter functionality:

- 70+ test cases covering all major components
- Unit tests for individual functions
- Integration tests for end-to-end workflows
- Edge case tests for error handling
- Performance tests for scalability
## Test Structure

```text
test_fhir_to_pad_converter.py
├── Fixtures (sample data for testing)
├── TestUtils - Tests for utils.py
├── TestValidation - Tests for validation.py
├── TestCodeTranslator - Tests for translator.py
├── TestFhirValidation - FHIR validation tests
├── TestGrouping - Resource grouping logic tests
├── TestClaimMapping - Claim-to-PAD mapping tests
├── TestPlaceholders - Placeholder and validation tests
├── TestXmlBuilding - PAD XML building tests
├── TestPadValidation - PAD XML validation tests
├── TestIntegration - End-to-end integration tests
├── TestEdgeCases - Edge cases and error conditions
└── TestPerformance - Basic performance tests
```
## Installation

### 1. Install Dependencies

```bash
# Install production dependencies
pip install -r requirements.txt

# Install development/testing dependencies
pip install -r requirements-dev.txt
```

### 2. Verify Installation

```bash
pytest --version
# Should output: pytest 7.4.3 or similar
```
## Running Tests

### Run All Tests

```bash
pytest test_fhir_to_pad_converter.py -v
```

### Run Specific Test Class

```bash
# Run only utility function tests
pytest test_fhir_to_pad_converter.py::TestUtils -v

# Run only integration tests
pytest test_fhir_to_pad_converter.py::TestIntegration -v
```

### Run Specific Test

```bash
pytest test_fhir_to_pad_converter.py::TestUtils::test_parse_iso_date_valid_with_z -v
```

### Run with Coverage Report

```bash
# Generate coverage report (requires pytest-cov)
pytest test_fhir_to_pad_converter.py -v --cov=. --cov-report=html

# View coverage report
open htmlcov/index.html
```

### Run Tests in Parallel

```bash
# Run tests using multiple CPU cores (requires pytest-xdist)
pytest test_fhir_to_pad_converter.py -n auto
```

### Run with Detailed Output

```bash
# Show print statements and detailed failures
pytest test_fhir_to_pad_converter.py -v -s --tb=long
```
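The flags above can also be set once in a `pytest.ini` at the project root, which pytest picks up automatically; a minimal sketch (the specific defaults chosen here are a suggestion, not part of the project):

```ini
[pytest]
# Default flags applied to every run, so they need not be retyped
addopts = -v --tb=short
# Limit collection to the converter's test module
testpaths = test_fhir_to_pad_converter.py
```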
## Test Coverage

The test suite covers:

### utils.py (100% coverage target)

- ✓ Date parsing (valid, invalid, edge cases)
- ✓ Date formatting
- ✓ Reference ID extraction
- ✓ XML text extraction
- ✓ Effective date collection

### validation.py (100% coverage target)

- ✓ Temporal consistency validation
- ✓ Code validation
- ✓ Validation runner

### translator.py (90% coverage target)

- ✓ Translator initialization
- ✓ Concept map parsing
- ✓ Code translation
- ✓ Missing code handling

### fhir_to_pad_converter.py (80% coverage target)

- ✓ FHIR validation (valid/invalid bundles)
- ✓ FHIR statistics computation
- ✓ Resource grouping (by encounter/claim)
- ✓ Claim item to position mapping
- ✓ Resource lookup by reference
- ✓ Claim to header extraction
- ✓ Ziffer validation and truncation
- ✓ Placeholder handling
- ✓ PAD XML building
- ✓ PAD XML validation
- ✓ PAD statistics computation
- ✓ End-to-end conversion workflows

### Integration Tests

- ✓ Full encounter-based conversion
- ✓ Full claim-based conversion
- ✓ Conversion with missing data
- ✓ Placeholder fallback behavior

### Edge Cases

- ✓ Empty bundles
- ✓ Null entries
- ✓ Missing subject references
- ✓ Empty claim items
- ✓ Malformed references
- ✓ Various date formats
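Edge-case checks like the date-format tests above are convenient to express with `pytest.mark.parametrize`, so each input becomes its own reported test case. A sketch, using a stand-in `parse_iso_date` defined locally for illustration (it is not the project's `utils` implementation):

```python
import pytest
from datetime import datetime


def parse_iso_date(value):
    """Stand-in parser: return a datetime, or None on bad input."""
    if not value:
        return None
    try:
        # Accept a trailing 'Z' by mapping it to an explicit UTC offset
        return datetime.fromisoformat(value.replace("Z", "+00:00"))
    except (ValueError, TypeError):
        return None


@pytest.mark.parametrize("raw, expect_none", [
    ("2024-01-15T10:30:00Z", False),  # timestamp with Z suffix
    ("2024-01-15", False),            # date only
    ("not-a-date", True),             # malformed input
    ("", True),                       # empty string
    (None, True),                     # missing value
])
def test_parse_iso_date_edge_cases(raw, expect_none):
    assert (parse_iso_date(raw) is None) == expect_none
```

Each tuple in the parametrize list runs as a separate test, so a failure report names the exact input that broke.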
## Test Results Interpretation

### Success Output

```text
test_fhir_to_pad_converter.py::TestUtils::test_parse_iso_date_valid_with_z PASSED [1%]
...
======================== 70 passed in 2.34s ========================
```

### Failure Output

```text
FAILED test_fhir_to_pad_converter.py::TestUtils::test_parse_iso_date_valid_with_z
AssertionError: assert None is not None
```

### Coverage Output

```text
Name                       Stmts   Miss  Cover
----------------------------------------------
fhir_to_pad_converter.py    1506    120    92%
utils.py                      47      0   100%
validation.py                 36      2    94%
translator.py                 45      3    93%
----------------------------------------------
TOTAL                       1634    125    92%
```
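To keep the test module itself out of the numbers above, or to turn the coverage target into a hard gate, coverage.py reads an optional `.coveragerc`; a minimal sketch (the patterns and threshold here are suggestions, not project settings):

```ini
[run]
# Exclude the test code itself from coverage measurement
omit = test_*.py

[report]
# Make `coverage report` fail if total coverage drops below this percentage
fail_under = 85
```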
## Continuous Integration

### GitHub Actions Example

Create `.github/workflows/test.yml`:

```yaml
name: Test Suite
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          pip install -r requirements-dev.txt
      - name: Run tests
        run: |
          pytest test_fhir_to_pad_converter.py -v --cov=. --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```
## Writing New Tests

### Template for New Test

```python
def test_new_feature(self):
    """Test description."""
    # Arrange - set up test data
    input_data = {...}

    # Act - execute the function
    result = function_to_test(input_data)

    # Assert - verify the result
    assert result == expected_value
```
### Best Practices

- Use descriptive test names: e.g. `test_parse_iso_date_with_timezone`
- Test one thing per test: focus each test on a single behavior
- Use fixtures for common data: reuse sample data across tests
- Test edge cases: empty inputs, `None` values, boundary conditions
- Test error paths: not just the happy path
- Keep tests fast: avoid slow operations like file I/O when possible
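The fixture practice above can be sketched as follows; `sample_bundle` and its contents are illustrative, not one of the suite's actual five fixtures:

```python
import pytest

# Minimal Bundle-shaped sample data (illustrative only)
SAMPLE_BUNDLE = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [
        {"resource": {"resourceType": "Patient", "id": "pat-1"}},
    ],
}


@pytest.fixture
def sample_bundle():
    """Hand each test its own copy so mutations don't leak between tests."""
    return dict(SAMPLE_BUNDLE)


def test_bundle_contains_patient(sample_bundle):
    # The fixture is injected by name; no manual setup in the test body
    types = [e["resource"]["resourceType"] for e in sample_bundle["entry"]]
    assert "Patient" in types
```

Any test that declares `sample_bundle` as a parameter receives the data automatically, keeping the arrange step out of each test body.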
## Troubleshooting

### Import Errors

If you get `ModuleNotFoundError`:

```bash
# Make sure you're in the project directory
cd /path/to/fhir2padnext

# Run tests from project root
pytest test_fhir_to_pad_converter.py
```
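If imports still fail from the project root, a common workaround (not project-specific setup) is an empty `conftest.py` at the root: in pytest's default import mode, the directory containing a `conftest.py` is prepended to `sys.path`, which makes sibling modules such as `utils.py` importable.

```shell
# Create an empty conftest.py so pytest adds the project root to sys.path
touch conftest.py
```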
### Missing Dependencies

If tests fail due to missing modules:

```bash
pip install -r requirements-dev.txt
```

### Skipped Tests

If you see skipped tests:

```bash
pytest test_fhir_to_pad_converter.py -v -rs
# -rs shows the reason for skipped tests
```
## Next Steps

- Run the tests: `pytest test_fhir_to_pad_converter.py -v`
- Check coverage: `pytest test_fhir_to_pad_converter.py --cov=. --cov-report=html`
- Fix any failures: address test failures before committing
- Add new tests: when adding features, add corresponding tests
- Set up CI: configure automated testing in your CI/CD pipeline
## Test Metrics

Current test suite metrics:

- Total test cases: 70+
- Test files: 1
- Lines of test code: ~1,200
- Fixtures: 5
- Test classes: 12
- Expected coverage: 85-95%
- Execution time: < 5 seconds
## Support

For questions or issues with the test suite:

- Check test output for specific error messages
- Review the test code for expected behavior
- Consult the main CLAUDE.md documentation
- Open an issue in the project repository