feat: Phase 3 - Pydantic models and predefined scenarios
Add Pydantic models for API validation and 5 project scenarios.

Pydantic Models:
- models/config.py: SimulationConfigModel with full validation
  - ServerConfig for each server in the network
  - Probability conservation validation
  - Conversion to the internal SimulationConfig
- models/results.py: complete results models for API responses
  - QueueStatisticsModel per queue
  - TimeSeriesDataModel for evolution tracking
  - HistogramDataModel for the processing-time distribution
  - SimulationResultsModel with all metrics

Predefined Scenarios:
- scenarios.py: 5 scenarios from the project requirements
  - Scenario 1: 1 fast server (120 ms) - instability test
  - Scenario 2: 1 fast + 1 slow server (120 ms / 240 ms)
  - Scenario 3: 3 slow servers (240 ms each)
  - Scenario 4: 1 fast + 1 medium server (120 ms / 190 ms) - compare with Scenario 3
  - Scenario 5: parameter sensitivity (vary λ and p)
- Theoretical utilization calculations for each scenario
- Scenario registry for easy access
- list_scenarios() function for the API

Testing:
- test_all_scenarios.py: comprehensive test of all scenarios
- Runs all 5 scenarios with variations
- Compares theoretical vs. simulation results
- Summary table for performance comparison

Results Analysis:
- All scenarios execute successfully
- Stable systems show ρ < 1, as expected
- Some scenarios show slight instability (ρ ≈ 1.0) under high load
- Parameter-sensitivity variations demonstrate the impact of λ and p

Phase 3 Complete ✓
Next: Phase 4 - Analytical module (Jackson's theorem)
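As an illustration of the validation described above, here is a minimal sketch of SimulationConfigModel with probability-conservation checking, assuming Pydantic v2. The field names (arrival_rate, service_time_ms, routing_probability) are assumptions, not the actual schema in models/config.py:

```python
from pydantic import BaseModel, Field, model_validator


class ServerConfig(BaseModel):
    """One server in the network (illustrative fields)."""
    name: str
    service_time_ms: float = Field(gt=0)          # mean service time
    routing_probability: float = Field(ge=0.0, le=1.0)


class SimulationConfigModel(BaseModel):
    """Top-level config; rejects inputs whose routing probabilities
    do not sum to 1 (probability conservation)."""
    arrival_rate: float = Field(gt=0, description="λ, arrivals per second")
    servers: list[ServerConfig] = Field(min_length=1)

    @model_validator(mode="after")
    def check_probability_conservation(self):
        total = sum(s.routing_probability for s in self.servers)
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"routing probabilities must sum to 1, got {total}")
        return self
```

A config whose probabilities sum to anything other than 1 raises a ValidationError at parse time, so invalid requests never reach the simulator.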
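The scenario registry and theoretical utilization can be sketched as follows. The structure, arrival rates, and routing splits here are illustrative assumptions; only the service times come from the scenario list. For a single-queue server, utilization is ρ = (p·λ)·E[S], where p·λ is the effective arrival rate and E[S] the mean service time:

```python
# Hypothetical registry keyed by scenario id (subset shown).
SCENARIOS = {
    "scenario_1": {"name": "1 fast server (instability test)",
                   "service_times_ms": [120], "routing": [1.0]},
    "scenario_2": {"name": "1 fast + 1 slow server",
                   "service_times_ms": [120, 240], "routing": [0.5, 0.5]},
    "scenario_3": {"name": "3 slow servers",
                   "service_times_ms": [240, 240, 240],
                   "routing": [1 / 3, 1 / 3, 1 / 3]},
}


def theoretical_utilization(arrival_rate: float, scenario: dict) -> list[float]:
    """rho_i = (p_i * lambda) * E[S_i]; the system is stable iff every rho_i < 1."""
    return [arrival_rate * p * (st / 1000.0)
            for p, st in zip(scenario["routing"], scenario["service_times_ms"])]


def list_scenarios() -> list[dict]:
    """Shape of the API-facing listing: id and human-readable name only."""
    return [{"id": sid, "name": s["name"]} for sid, s in SCENARIOS.items()]
```

For example, Scenario 1 with λ = 5 req/s gives ρ = 5 × 1.0 × 0.120 = 0.6 (stable); pushing λ toward 1/0.120 ≈ 8.33 req/s drives ρ toward 1, which is what the instability test probes.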