📊 Monitoring & Observability Guide¶
Author: Anderson Henrique da Silva | Location: Minas Gerais, Brazil | Last Updated: 2025-10-13 15:15:18 -0300
Overview¶
Cidadão.AI implements a comprehensive observability stack providing real-time insights into system health, performance, and business metrics.
🎯 Observability Pillars¶
1. Metrics (Prometheus)¶
- System performance indicators
- Business KPIs
- Custom application metrics
2. Logs (Structured JSON)¶
- Centralized logging
- Correlation IDs
- Contextual information
3. Traces (OpenTelemetry)¶
- Distributed request tracking
- Service dependency mapping
- Performance bottleneck identification
🏗️ Architecture¶
```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│   Application   │────▶│   Prometheus    │────▶│     Grafana     │
│                 │      │                 │      │                 │
│ - Metrics       │      │ - Storage       │      │ - Dashboards    │
│ - Health        │      │ - Alerting      │      │ - Alerts        │
│ - SLO/SLA       │      │ - Rules         │      │ - Reports       │
└─────────────────┘      └─────────────────┘      └─────────────────┘
```
📈 Metrics Implementation¶
Business Metrics¶
Location: src/infrastructure/observability/metrics.py
```python
from prometheus_client import Counter

# Agent task execution
agent_tasks_total = Counter(
    'cidadao_ai_agent_tasks_total',
    'Total agent tasks executed',
    ['agent_name', 'task_type', 'status']
)

# Investigation lifecycle
investigations_total = Counter(
    'cidadao_ai_investigations_total',
    'Total investigations',
    ['status', 'investigation_type']
)

# Anomaly detection
anomalies_detected_total = Counter(
    'cidadao_ai_anomalies_detected_total',
    'Total anomalies detected',
    ['anomaly_type', 'severity', 'agent']
)
```
System Metrics¶
```python
# API performance: the decorator times the request and counts outcomes
@observe_request(
    histogram=request_duration_histogram,
    counter=request_count_counter,
)
async def api_endpoint():
    ...  # handler body; metrics are collected automatically by the decorator
```
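The decorator itself lives in metrics.py and is not reproduced in this guide; below is a minimal sketch of how it can be built on prometheus_client. The metric names and the per-endpoint label set are assumptions, not copied from the codebase:

```python
import functools
import time

from prometheus_client import Counter, Histogram

# Metric names and label sets here are illustrative assumptions
request_duration_histogram = Histogram(
    'cidadao_ai_request_duration_seconds',
    'API request duration in seconds',
    ['endpoint'],
)
request_count_counter = Counter(
    'cidadao_ai_requests_total',
    'Total API requests',
    ['endpoint', 'status'],
)

def observe_request(histogram: Histogram, counter: Counter):
    """Time an async handler and count its outcomes."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "success"
            try:
                return await func(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                histogram.labels(endpoint=func.__name__).observe(
                    time.perf_counter() - start)
                counter.labels(endpoint=func.__name__, status=status).inc()
        return wrapper
    return decorator
```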
Metric Endpoints¶
- /health/metrics - Prometheus format
- /health/metrics/json - JSON format
- /api/v1/observability/metrics/custom - Custom metrics
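Serving the Prometheus format from FastAPI is a thin wrapper over prometheus_client; a minimal sketch (the route wiring here is an assumption, not copied from the codebase):

```python
from fastapi import FastAPI, Response
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest

app = FastAPI()

@app.get("/health/metrics")
async def prometheus_metrics() -> Response:
    # Render every metric in the default registry in Prometheus text format
    return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)
```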
🔍 Health Monitoring¶
Dependency Health Checks¶
Location: src/infrastructure/health/dependency_checker.py
Monitored Dependencies:

1. Database - connection pool, query performance
2. Redis - cache availability, latency
3. External APIs - Portal da Transparência, LLM services
4. File System - disk space, write permissions
Health Check Features:

- Parallel execution (sketched below)
- Configurable timeouts
- Retry logic
- Trend analysis
- Degradation detection
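A minimal sketch of the parallel-execution pattern with per-dependency timeouts, assuming a registry of async check callables (dependency_checker.py adds retries and trend analysis on top):

```python
import asyncio
from typing import Awaitable, Callable

# Hypothetical registry: one async check callable per dependency
CHECKS: dict[str, Callable[[], Awaitable[bool]]] = {}

async def run_health_checks(timeout_seconds: float = 5.0) -> dict[str, str]:
    """Run all dependency checks in parallel, each bounded by a timeout."""
    async def guarded(name: str, check: Callable[[], Awaitable[bool]]) -> tuple[str, str]:
        try:
            ok = await asyncio.wait_for(check(), timeout=timeout_seconds)
            return name, "healthy" if ok else "degraded"
        except asyncio.TimeoutError:
            return name, "timeout"
        except Exception:
            return name, "unhealthy"

    results = await asyncio.gather(
        *(guarded(name, check) for name, check in CHECKS.items())
    )
    return dict(results)
```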
Health Endpoints¶
```
GET  /health                      # Basic health (for load balancers)
GET  /health/detailed             # Comprehensive health report
GET  /health/dependencies/{name}  # Specific dependency health
POST /health/check                # Trigger manual health check
```
📊 SLA/SLO Monitoring¶
SLO Configuration¶
Location: src/infrastructure/monitoring/slo_monitor.py
Default SLOs:
```yaml
# API Availability
- Target: 99.9% uptime
- Time Window: 24 hours
- Warning: 98%
- Critical: 95%

# API Response Time
- Target: P95 < 2 seconds
- Time Window: 1 hour
- Warning: 90% compliance
- Critical: 80% compliance

# Investigation Success Rate
- Target: 95% success
- Time Window: 4 hours
- Warning: 92%
- Critical: 88%

# Agent Error Rate
- Target: < 1% errors
- Time Window: 1 hour
- Warning: 0.8%
- Critical: 1.5%
```
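Each SLO can be modeled as a small value object; a hedged sketch of what a definition might look like in slo_monitor.py (the field names are assumptions, not the actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SLODefinition:
    # Field names are illustrative, not taken from slo_monitor.py
    name: str
    target_percent: float    # e.g. 99.9 for availability
    window_hours: int        # rolling evaluation window
    warning_percent: float   # compliance level that raises a warning
    critical_percent: float  # compliance level that pages

api_availability = SLODefinition(
    name="api_availability",
    target_percent=99.9,
    window_hours=24,
    warning_percent=98.0,
    critical_percent=95.0,
)
```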
Error Budget Tracking¶
```python
# Error budget: the allowed failure fraction is (100 - target).
# Consumed share of that budget, and what remains, both in percent:
error_budget_consumed = 100 * (100 - current_compliance) / (100 - target)
error_budget_remaining = 100 - error_budget_consumed

# Alert on high budget consumption
if error_budget_consumed > 80:
    alert("High error budget consumption")
```
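For example, with a 99.9% availability target and 99.95% measured compliance, error_budget_consumed = 100 × 0.05 / 0.1 = 50%: half the budget is gone and half remains for the rest of the window.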
SLO Endpoints¶
```
GET  /api/v1/monitoring/slo                # All SLO status
GET  /api/v1/monitoring/slo/{name}         # Specific SLO
POST /api/v1/monitoring/slo                # Create SLO
GET  /api/v1/monitoring/error-budget       # Error budget report
GET  /api/v1/monitoring/alerts/violations  # SLO violations
```
📝 Structured Logging¶
Implementation¶
Location: src/infrastructure/observability/structured_logging.py
Log Format:
```json
{
  "timestamp": "2025-09-20T10:28:07.123Z",
  "level": "INFO",
  "correlation_id": "uuid-1234-5678",
  "service": "cidadao-ai",
  "component": "agent.zumbi",
  "message": "Anomaly detected",
  "context": {
    "investigation_id": "inv-123",
    "anomaly_type": "price_spike",
    "confidence": 0.95
  }
}
```
Features:

- JSON structured format (a minimal setup sketch follows this list)
- Correlation ID propagation
- Contextual enrichment
- Performance metrics inclusion
- Sensitive data masking
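A minimal sketch of JSON logging with correlation-ID propagation using only the standard library (structured_logging.py likely adds enrichment and masking on top of a pattern like this):

```python
import contextvars
import json
import logging

# The correlation ID travels with the request via a context variable
correlation_id: contextvars.ContextVar[str] = contextvars.ContextVar(
    "correlation_id", default="-")

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "correlation_id": correlation_id.get(),
            "service": "cidadao-ai",
            "component": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
```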
🔗 Distributed Tracing¶
OpenTelemetry Integration¶
Location: src/infrastructure/observability/tracing.py
Trace Context:
```python
@trace_operation("investigation.analyze")
async def analyze_contracts(contracts):
    # A child span is opened, made current, and closed automatically
    with tracer.start_as_current_span("data_validation"):
        ...
```
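A hedged sketch of how a trace_operation decorator can be built on the OpenTelemetry API (the version in tracing.py likely records extra attributes and events):

```python
import functools

from opentelemetry import trace

tracer = trace.get_tracer("cidadao-ai")

def trace_operation(span_name: str):
    """Wrap an async function in a named OpenTelemetry span."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            # Exceptions are recorded on the span and re-raised
            with tracer.start_as_current_span(span_name):
                return await func(*args, **kwargs)
        return wrapper
    return decorator
```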
Trace Propagation:

- B3 headers support
- W3C Trace Context
- Baggage propagation
- Custom attributes
Trace Visualization¶
- Jaeger UI integration
- Service dependency graphs
- Latency analysis
- Error tracking
🚨 Alerting System¶
Prometheus Alert Rules¶
Location: monitoring/prometheus/rules/cidadao-ai-alerts.yml
Alert Categories:
1. System Health¶
```yaml
- alert: SystemDown
  expr: up{job="cidadao-ai-backend"} == 0
  for: 30s
  labels:
    severity: critical

- alert: HighErrorRate
  expr: error_rate > 5
  for: 2m
  labels:
    severity: warning
```
2. Infrastructure¶
```yaml
- alert: DatabaseConnectionsCritical
  expr: db_connections_used / db_connections_total > 0.95
  for: 30s
  labels:
    severity: critical

- alert: CacheHitRateLow
  expr: cache_hit_rate < 70
  for: 5m
  labels:
    severity: warning
```
3. Agent Performance¶
```yaml
- alert: AgentTaskFailureHigh
  expr: agent_error_rate > 10
  for: 3m
  labels:
    severity: warning

- alert: AgentQualityScoreLow
  expr: agent_quality_score < 0.8
  for: 5m
  labels:
    severity: warning
```
4. Business Metrics¶
```yaml
- alert: InvestigationSuccessRateLow
  expr: investigation_success_rate < 90
  for: 10m
  labels:
    severity: warning

- alert: AnomalyDetectionAccuracyLow
  expr: anomaly_accuracy < 0.85
  for: 15m
  labels:
    severity: warning
```
📊 Grafana Dashboards¶
System Overview Dashboard¶
Location: monitoring/grafana/dashboards/cidadao-ai-overview.json
Panels:

1. System health status
2. Active investigations count
3. API response time P95
4. Anomalies detected (24h)
5. Request rate graph
6. Agent tasks performance
7. SLO compliance table
8. Error budget consumption
9. Database connection pool
10. Cache hit rate
11. External API health
12. Investigation success rate
13. Top anomaly types
14. Memory/CPU usage
15. Alert status
Agent Performance Dashboard¶
Location: monitoring/grafana/dashboards/cidadao-ai-agents.json
Panels:

1. Agent task success rate
2. Active agents count
3. Average task duration
4. Reflection iterations
5. Performance by agent type
6. Task duration percentiles
7. Agent status distribution
8. Top performing agents
9. Error distribution
10. Agent-specific metrics
11. Memory usage by agent
12. Communication matrix
13. Quality score trends
🔧 Monitoring Configuration¶
Prometheus Configuration¶
```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'cidadao-ai-backend'
    static_configs:
      - targets: ['localhost:8000']
    metrics_path: '/health/metrics'
```
Grafana Data Sources¶
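Grafana uses Prometheus as its primary data source, matching the pipeline in the architecture diagram (Application → Prometheus → Grafana); the dashboards above query that source.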
🎯 Key Performance Indicators¶
Technical KPIs¶
- Uptime: Target 99.95%
- API Latency P99: < 500ms
- Error Rate: < 0.1%
- Cache Hit Rate: > 90%
- Agent Success Rate: > 95%
Business KPIs¶
- Investigations/Day: Track growth
- Anomalies Detected: Measure effectiveness
- Report Generation Time: < 30s
- User Satisfaction: Via feedback metrics
🚀 APM Integration¶
Supported Platforms¶
Location: src/infrastructure/apm/
- New Relic
- Datadog
- Elastic APM
APM Features¶
- Performance tracking decorators
- Error reporting with context
- Custom business metrics
- Distributed trace correlation
🧪 Chaos Engineering¶
Chaos Experiments¶
Location: src/api/routes/chaos.py
Available Experiments:

1. Latency Injection
    - Configurable delays
    - Probability-based
    - Auto-expiration
2. Error Injection
    - HTTP error codes
    - Configurable rate
    - Multiple error types
3. Resource Pressure
    - Memory consumption
    - CPU load
    - Controlled intensity
Chaos Endpoints¶
```
POST /api/v1/chaos/inject/latency
POST /api/v1/chaos/inject/errors
POST /api/v1/chaos/experiments/memory-pressure
POST /api/v1/chaos/experiments/cpu-pressure
POST /api/v1/chaos/stop/{experiment}
GET  /api/v1/chaos/status
```
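A usage sketch with httpx; the request-body fields (delay_ms, probability, duration_seconds) are assumptions, not the actual schema in chaos.py:

```python
import asyncio

import httpx

async def inject_latency() -> None:
    # Field names below are illustrative; check chaos.py for the real schema
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        resp = await client.post(
            "/api/v1/chaos/inject/latency",
            json={"delay_ms": 500, "probability": 0.2, "duration_seconds": 60},
        )
        resp.raise_for_status()
        print(resp.json())

asyncio.run(inject_latency())
```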
📈 Best Practices¶
- Set Meaningful SLOs: Based on user expectations
- Monitor Business Metrics: Not just technical ones
- Use Correlation IDs: For request tracing
- Alert on Symptoms: Not causes
- Document Runbooks: For each alert
- Regular Reviews: Of metrics and thresholds
- Capacity Planning: Based on trends
🔍 Troubleshooting¶
Missing Metrics¶
- Check Prometheus scrape configuration
- Verify metrics endpoint accessibility
- Review metric registration code
Alert Fatigue¶
- Tune alert thresholds
- Implement alert grouping
- Use inhibition rules
Dashboard Performance¶
- Optimize query time ranges
- Use recording rules
- Implement caching
📚 Additional Resources¶
For monitoring questions or improvements, contact: Anderson Henrique da Silva